\section{Introduction}
The Internet has evolved from a platform for disseminating information into a platform for delivering services. Consequently, misuse and policy violations by attackers are now routine. Denning~\cite{den87} introduced the concept of detecting cyber threats by constantly monitoring network audit trails with an
Intrusion Detection System (IDS) to discover abnormal patterns or signatures of network or system usage. Recent advancements in IDSs involve machine learning and soft computing techniques, which have reduced the high false positive rates observed in earlier generations of IDSs~\cite{kim14,tsai09}. The statistical models underlying these data mining techniques provide excellent intrusion detection capability to the designers of existing IDSs, which has increased their popularity. However, inherent complications of IDSs, in terms of competence, accuracy, and usability, make them unsuitable for deployment in a live system carrying a high traffic volume. Further, the learning process of an IDS requires a large amount of training data, which may not always be available, as well as considerable computing power and time. Studies have revealed that existing IDSs struggle to handle high-speed network traffic due to their complex decision-making processes. Attackers can take advantage of this shortcoming to hide their exploits, and can overload an IDS with extraneous information while executing an attack. Therefore, building an efficient intrusion detection system is vital for securing a network and preventing an attack in the shortest possible time.
A traditional IDS may discover network threats by matching current network behavior patterns against those of known attacks. The underlying assumption is that the behavior pattern of each attack is inherently different from that of normal activity. Thus, with knowledge of normal behavior patterns alone, it may be possible to detect a new attack. However, the automatic generation of these patterns (or rules) is a challenging task, and most existing techniques require human intervention during pattern generation. Moreover, the lack of exhaustive prior knowledge (or labeled data) about attacks makes this problem more challenging. It is advantageous for any IDS to consider unlabeled examples along with the available (possibly few) labeled examples of the target class. This strategy helps improve
the accuracy of an IDS against new attacks.
An IDS which can use both labeled and unlabeled examples is known as a semi-supervised IDS. Another important aspect of any intrusion detection system is the time required to detect abnormal activity. Detection in real time or near real time is preferred as it can prevent substantial damage to the resources. Thus, the primary objective of this work is to develop a {\it semi-supervised intrusion detection system} for near real-time detection of cyber threats.
Numerous security breaches of computer networks have encouraged researchers and practitioners to design several intrusion detection systems; for a comprehensive review, we refer to \cite{lia13}. Researchers have adopted various approaches to design IDSs, and a majority
of them modeled the design problem as a classification problem. In \cite{amb16}, a feature selection method is used with a standard classifier like SVM, because conventional classifiers perform poorly in the presence of redundant or irrelevant features. The authors
of \cite{wan17} adopted a similar approach. Most of these designs share one common disadvantage, i.e., they follow a supervised learning approach. Recently, a new semi-supervised IDS was proposed in~\cite{rana17}; it outperforms the existing semi-supervised IDSs but suffers from low detection accuracy.
It is essential to understand the behavior patterns of known attacks, as well as those of normal activity, to discover and prevent attacks. Generation of patterns or signatures to model normal as well as abnormal activities is a tedious process, and it can be automated using LAD. Peter L. Hammer introduced the concept of logical analysis of data (or LAD) in
$1986$~\cite{ham86} and subsequently developed it as a technique to find useful rules and patterns from past observations in order to classify new observations~\cite{bor00,cra88}. Patterns (or rules) provide a very efficient way to solve problems in various application areas, e.g., classification, development of rule-based decision support systems, feature selection, medical diagnosis, network traffic analysis, etc. The initial versions of LAD~\cite{ale07,cra88,ham86} were designed to work with binary data carrying one of two labels, i.e., positive or negative; thus, the data or observations were part of a two-class system. A specific goal of LAD is to learn logical patterns which set apart the observations of one class from those of the rest.
LAD has been used to analyze problems involving medical data.
A typical dataset consists of two disjoint sets $\Omega^+, \Omega^-$ which represent a set of observations consisting of positive and negative examples, respectively. Here, each observation is a vector consisting of different attribute values. In the domain of medical data analysis, each vector represents the medical record of a patient, and the patients in $\Omega^+$ have a specific medical condition. On the other hand, $\Omega^-$ represents the medical records of the patients who do not have that condition. Subsequently, if a new vector / patient is given, one has to decide whether the new vector belongs to $\Omega^+$ or $\Omega^-$, i.e., one has to determine whether the patient has the particular medical condition or not. Thus, in this example, the medical diagnosis problem can be interpreted as a two-class classification problem. The central theme of LAD is the selection of such patterns (or rules) which can collectively classify all the known observations. LAD stands out in comparison with other classification methods since a pattern can explain the classification outcome to human experts using formal reasoning.
Conventional LAD requires labeled examples for pattern (or rule) generation. However, there exist several application domains (e.g., intrusion detection, fraud detection, document clustering, etc.) where labeled examples are rare or insufficient. To harness the strength of LAD in these domains, one needs to extend LAD to unsupervised and semi-supervised pattern generation~\cite{bruni15}. Here, we introduce a preprocessing methodology that extends LAD so that it can use unlabeled observations along with labeled observations for pattern generation. Consequently, it acts as a {\it semi-supervised} learning approach. The central idea is to use classical LAD to generate initial positive and negative patterns from the available labeled observations. Once the patterns are available, we measure the closeness of each unlabeled observation to the initial positive or negative patterns using the balance score. Observations with a high positive balance score are labeled as positive, and observations with a high negative balance score are labeled as negative. Once labels are generated, standard LAD can be used as it is. We have used this approach successfully in the design of a new {\it semi-supervised} and {\it lightweight} Intrusion Detection System (IDS) which outperforms the existing methods in terms of accuracy and required computational power.
Creation of signatures or patterns to model normal as well as abnormal network activities can be accomplished using the semi-supervised LAD (or S-LAD in short), and in this effort, we have used S-LAD to design a semi-supervised IDS. Here, S-LAD generates patterns which differentiate normal activities from malicious ones, and these patterns are later converted into rules for the classification of unknown network behavior. The proposed SSIDS has two phases. The offline phase is used to design a rule-based classifier; it uses historical observations, both labeled and unlabeled, to find the patterns or rules of classification, and requires a significant amount of processing power. Once the classification rules are generated, the online phase uses those rules to classify any new observation. The online phase requires much less processing power than the offline phase, and it can detect threats in near real-time. The accuracy of the proposed semi-supervised IDS is much better than that of any state-of-the-art semi-supervised IDS and comparable with supervised IDSs.
The main contributions of this paper are: (1) a new implementation of LAD with an extensively modified pattern generation algorithm; (2) a new strategy to extend LAD that is suitable for the design of semi-supervised classifiers; (3) a LAD-based design of a lightweight semi-supervised intrusion detection system that outperforms existing semi-supervised IDSs.
The rest of the paper is organized as follows. The next section gives a brief description of our modified implementation of LAD, and Section~\ref{slad} describes the proposed method to extend LAD to the semi-supervised LAD. Details of the proposed SSIDS are given in Section~\ref{sids}. Performance evaluation and comparative results are presented in Section~\ref{expr}, and we conclude the paper in Section~\ref{sec-con}.
\section{Proposed Implementation of LAD}
\label{lad}
LAD is a data analysis technique inspired by combinatorial optimization methods.
As pointed out earlier, the initial version of LAD was designed to work with binary data only. Let us first briefly describe the basic steps of LAD when applied to binary data. An observation having $n$ attributes may be represented as a binary vector of length $n+1$, where the last bit (a.k.a. the class label) indicates whether it is a member of $\Omega^+$ or $\Omega^-$. Thus, the set of binary observations $\Omega$ ($=\Omega^+ \cup \Omega^- \subseteq \{0,1\}^n$) can be represented by a partially defined Boolean function (pdBf in short) $\phi$, indicating a mapping $\Omega \rightarrow \{0,1\}$. The goal of LAD is to find an extension $f$ of the pdBf $\phi$ which can classify all the unknown vectors in the sample space.
However, this goal is clearly unachievable, and we instead try to find an approximate extension $f^\prime$ of $f$; $f^\prime$ should approximate $f$ as closely as possible based on several optimality criteria. Normally, the extension is represented in disjunctive normal form (DNF). In brief, LAD involves the following steps~\cite{ale07}.
{\small
\begin{enumerate}
\item {\it Binarization of Observations. We have used a slightly modified implementation of binarization here.}
\item {\it Elimination of Redundancy (or Support Sets Generation).}
\item {\it Pattern Generation. Our extensively modified pattern generation algorithm makes the 'Theory Formation' step redundant.}
\item {\it Theory Formation. We have omitted this step.}
\item {\it Classifier Design and Validation}.
\end{enumerate}
}
There are many application domains, from finance to medicine, where the naturally occurring data are not binary~\cite{ale07,bor97}. Thus, to apply LAD in those domains, a method to convert any data to binary is discussed in Subsection~\ref{binr}. Moreover, we have modified the original pattern generation algorithm in such a manner that the coverages of every pair of patterns have a very small intersection. Thus, the ``theory formation" step is
no longer required. Recently, a technique to produce internally orthogonal patterns (i.e., the coverages of every pair of patterns have empty intersection) was also reported in~\cite{burni18}.
\subsection{Binarization of Observations}
\label{binr}
A threshold (a.k.a. cut-point) based method is used to convert numerical data to binary.
Any numerical attribute $x$ is associated with two types
of Boolean variables, i.e., the {\it level variables} and the {\it interval variables}.
Level variables are tied to cut-points and indicate whether the original attribute value is at least a given cut-point $\beta$. For each cut-point $\beta$, we create a Boolean variable $b(x, \beta)$ such that
{\small
\begin{equation}
b(x, \beta)=\begin{cases}
1, & \text{if $x \ge \beta$}.\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
}
\noindent Similarly, interval variables are created for each pair of cut-points $\beta_1$ and $\beta_2$ and represented by Boolean
variable $b(x, \beta_1, \beta_2)$ such that
{\small
\begin{equation}
b(x, \beta_1, \beta_2)=\begin{cases}
1, & \text{if $ \beta_1 \le x < \beta_2$}.\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
}
It remains to discuss how the cut-points are determined. The cut-points should be chosen carefully so that the resultant pdBf has an extension in the class of all Boolean functions $\mathcal{C}_{ALL}$~\cite{bor97}. Consider a numerical attribute $x$ having $k+1$ distinct values in the observations, ordered such that $x_0>x_1>\ldots >x_k$. We introduce a cut-point between
$x_i$ and $x_{i+1}$ if they belong to different classes. If we create a cut-point for each such pair of values, the resulting pdBf is referred to as the {\it master} pdBf.
Note that, the resultant master pdBf
has extension in $\mathcal{C}_{ALL}$ if and only if $\Omega^+ \cap \Omega^- = \emptyset$.
The process for selection of cut-points is explained below using an example from~\cite{tkd19}.
The original dataset presented in Table~\ref{dset1} is converted to Table~\ref{dset2} by adding the class labels (or truth values of the pdBf). Observations that are members of $\Omega^+$ have $1$ as their class label, and the rest have $0$. Now, to convert the numeric attribute $A$ to binary, we form another dataset as represented in Table~\ref{dset3}. Next, we sort this dataset on attribute $A$ to get a new dataset $D$, presented in Table~\ref{dset4}. After that, we apply the following steps to get the cut-points.
\begin{enumerate}
\item Preprocessing of $D$: This step is a slight modification of the
usual technique used in \cite{bor97,bor00} and other related papers.
If two or more consecutive observations have the same attribute value $v_i$ but different class labels,
remove all but one of those observations. Then, change the class label of $v_i$ to a new, unique class label which does not appear in $D$, and add it to the set of class labels of $D$. Refer to Table~\ref{dset5}.
\item Now, if two consecutive observations $A_i$ and $A_{i+1}$ have different class labels, introduce a new
cut-point $\beta^A_j$ as
$$\beta^A_j = \frac{1}{2} (A_i + A_{i+1})$$
\end{enumerate}
\noindent If we follow the above mentioned steps, the obtained cut-points are $\beta^A_1 = 3.05$, $\beta^A_2 = 2.45$, $\beta^A_3 = 1.65$. Thus, we will have six Boolean variables consisting of three level variables and three interval variables corresponding to these cut-points.
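The cut-point selection just described can be rendered as a short sketch. The code below is illustrative (the function and variable names are ours, not part of the paper's implementation); run on the example attribute $A$, it reproduces the three cut-points above.

```python
def cut_points(values, labels):
    # Pair each attribute value with its class label and sort in
    # descending order of value (the sorted dataset D).
    pairs = sorted(zip(values, labels), reverse=True)

    # Preprocessing of D: collapse consecutive observations sharing the
    # same value; a value seen with conflicting labels receives a fresh,
    # unique class label.
    merged, fresh = [], max(labels) + 1
    for v, l in pairs:
        if merged and merged[-1][0] == v:
            if merged[-1][1] != l:
                merged[-1] = (v, fresh)
                fresh += 1
        else:
            merged.append((v, l))

    # A cut-point is placed midway between consecutive values whose
    # class labels differ.
    return [(v1 + v2) / 2
            for (v1, l1), (v2, l2) in zip(merged, merged[1:])
            if l1 != l2]
```

For the example values of attribute $A$ with labels $(1,1,1,0,0)$, this yields the cut-points $3.05$, $2.45$, and $1.65$, matching the values derived above.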
\noindent
\begin{table*}[!ht]
\resizebox{1.7\columnwidth}{!}{
\small
\begin{minipage}{0.32\textwidth}
\resizebox{1.0\textwidth}{!}{
\small
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Attributes&$A$&$B$&$C$\\
&&&\\
\hline
$\Omega^+$:positive&3.5&3.8&2.8\\
\cline{2-4}
examples & 2.6 & 1.6 & 5.2 \\
\cline{2-4}
& 1.0 & 2.1 & 3.8 \\
\hline
\hline
$\Omega^-$:negative & 3.5 & 1.6 & 3.8 \\
\cline{2-4}
examples & 2.3 & 2.1 & 1.0 \\
\hline
\multicolumn{4}{c}{}
\end{tabular}
}
\caption{}
\label{dset1}
\end{minipage}
\begin{minipage}{0.26\textwidth}
\resizebox{1.0\textwidth}{!}{
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$A$ & $B$ & $C$ & Class\\
& & & Labels\\
\hline
3.5 & 3.8 & 2.8 & 1 \\
\hline
2.6 & 1.6 & 5.2 & 1 \\
\hline
1.0 & 2.1 & 3.8 & 1\\
\hline
\hline
3.5 & 1.6 & 3.8 & 0 \\
\hline
2.3 & 2.1 & 1.0 & 0\\
\hline
\multicolumn{4}{c}{}
\end{tabular}
}
\caption{}
\label{dset2}
\end{minipage}
\begin{minipage}{0.155\textwidth}
\resizebox{\textwidth}{!}{
\centering
\begin{tabular}{|c|c|}
\hline
$A$ & Class\\
& Labels\\
\hline
3.5 & 1 \\
\hline
2.6 & 1 \\
\hline
1.0 & 1\\
\hline
\hline
3.5 & 0 \\
\hline
2.3 & 0\\
\hline
\multicolumn{2}{c}{}
\end{tabular}
}
\caption{}
\label{dset3}
\end{minipage}
\begin{minipage}{0.16\textwidth}
\resizebox{1.0\textwidth}{!}{
\centering
\begin{tabular}{|c|c|}
\hline
$A$ & Class \\
& Labels\\
\hline
3.5 & 1 \\
\hline
3.5 & 0 \\
\hline
2.6 & 1 \\
\hline
2.3 & 0\\
\hline
1.0 & 1\\
\hline
\multicolumn{2}{c}{}
\end{tabular}
}
\caption{}
\label{dset4}
\end{minipage}
\begin{minipage}{0.185\textwidth}
\resizebox{\textwidth}{!}{
\centering
\begin{tabular}{|c|c|}
\hline
$A$ & Class\\
& Labels\\
\hline
3.5 & 2 \\
\hline
2.6 & 1 \\
\hline
2.3 & 0\\
\hline
1.0 & 1\\
\hline
\multicolumn{2}{c}{}
\end{tabular}
}
\caption{}
\label{dset5}
\end{minipage}
}
\end{table*}
A ``nominal" or descriptive attribute $x$ can be converted into binary very easily by relating each possible value $v_i$ of $x$ with a Boolean variable $b(x,v_i)$ such that
\begin{equation}
b(x, v_i)=\begin{cases}
1, & \text{if $ x = v_i$}.\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
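Both conversions can be sketched compactly; the snippet below is a minimal illustration (names are ours), assuming the three cut-points derived for attribute $A$ above.

```python
from itertools import combinations

def binarize_numeric(x, cuts):
    """One level variable b(x, beta) per cut-point, plus one interval
    variable b(x, beta1, beta2) per pair of cut-points."""
    cuts = sorted(cuts, reverse=True)
    level = [1 if x >= b else 0 for b in cuts]
    # combinations over the descending list yields (hi, lo) pairs.
    interval = [1 if lo <= x < hi else 0
                for hi, lo in combinations(cuts, 2)]
    return level + interval

def binarize_nominal(x, domain):
    """One indicator b(x, v) per possible value v of a nominal attribute."""
    return [1 if x == v else 0 for v in domain]
```

For $x = 2.6$ with the cut-points $(3.05, 2.45, 1.65)$, the level variables are $(0,1,1)$ and the interval variables are $(1,1,0)$, six Boolean variables in total, as noted above.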
\subsection{Support sets generation}
A binary dataset obtained through binarization, or any other process, may contain redundant attributes. A set $S$ of binary attributes is termed a {\it support set} if the projections $\Omega^+_S$ and $\Omega^-_S$ of $\Omega^+$ and $\Omega^-$, respectively, satisfy $\Omega^+_S \cap \Omega^-_S = \emptyset$.
A support set is termed {\it minimal} if the elimination of any of its constituent attributes leads to
$\Omega^+_S \cap \Omega^-_S \ne \emptyset$.
Finding a minimal support set of a binary dataset, like Table~\ref{bintab} (see Appendix), is equivalent to solving a set covering problem. A detailed discussion of support sets, minimal support sets, and a few algorithms to solve the set covering problem can be found in~\cite{alm94,cra88,ham86}. Here, we have used the {\it ``Mutual-Information-Greedy" algorithm} proposed in~\cite{alm94} to solve the set covering problem in our implementation. Note that our implementation produces the set $S$ such that the constituent binary attributes are ordered according to their discriminating power, which helps us achieve the simplicity objective mentioned in the description of LAD. The following binary feature variables are selected when we apply the said algorithm: $S=\{b_{15},b_8,b_1,b_2\}$.
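For illustration, a plain greedy set-cover heuristic for support-set selection is sketched below. This is a simplified stand-in: the Mutual-Information-Greedy algorithm of~\cite{alm94} scores attributes by mutual information rather than by the raw pair counts used here, and the function names are ours.

```python
def greedy_support_set(pos, neg):
    """Greedy set cover: repeatedly pick the binary attribute that
    separates the largest number of still-unseparated
    (positive, negative) observation pairs."""
    n_attr = len(pos[0])
    # Every (positive, negative) pair must be "covered" by some chosen
    # attribute on which the two observations differ.
    uncovered = {(i, j) for i in range(len(pos)) for j in range(len(neg))}
    support = []
    while uncovered:
        best = max(range(n_attr),
                   key=lambda a: sum(pos[i][a] != neg[j][a]
                                     for i, j in uncovered))
        covered = {(i, j) for i, j in uncovered
                   if pos[i][best] != neg[j][best]}
        if not covered:
            break  # Omega+ and Omega- intersect: no support set exists
        support.append(best)
        uncovered -= covered
    return support
```

The attributes are appended in decreasing order of the number of pairs they separate, which mirrors the ordering by discriminating power mentioned above.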
\subsection{Modified pattern generation method}
Let us first recall a few common Boolean terminologies that we require to describe the pattern generation process.
A Boolean variable or its negation is known as a {\it literal}, and a conjunction of literals is called a {\it term}.
The number of literals present in a term $T$ is known as its {\it degree}. The {\it characteristic term} of a point
$p \in \{0,1\}^n$ is the unique term of degree $n$ such that $T(p)=1$. The term $T$ is said to {\it cover} the point
$p$ if $T(p)=1$. A term $T$ is called a {\it positive pattern} of a given dataset $(\Omega_S^+, \Omega_S^-)$ if
\begin{enumerate}
\item $T(p)=0$ for every point $p \in \Omega_S^-$.
\item $T(p)=1$ for at least one point $p \in \Omega_S^+$.
\end{enumerate}
Similarly, one can define the negative patterns. {Here, $T(\Omega_S)$ is defined as $T(\Omega_S)= \bigcup \limits_{p \in \Omega_S} T(p)$.}
Both the positive and the negative patterns play a significant role in any
LAD based classifier. A {\it positive pattern} is defined as a subcube of the unit cube that intersects $\Omega^+_S$ but is disjoint from
$\Omega^-_S$. A {\it negative pattern} is defined as a subcube of
the unit cube that intersects $\Omega^-_S$ but is disjoint from
$\Omega^+_S$. Consequently, we have a symmetric pattern generation procedure.
In this paper, we have used an extensively modified and optimized version of the pattern generation technique that has been proposed by Boros et al.~\cite{bor00}.
{\small
\center
\begin{algorithm}[!ht]
\centering
\begin{algorithmic}[1]
\STATEx{Input: \quad ${\Omega}_S^+$, ${\Omega}_S^- \subset \{0,1\}^n$ - Sets of positive and negative observations in binary.}
\STATEx{$\hat{d}$ \hspace{1mm}- Maximum degree of generated patterns.}
\STATEx{$k$ \hspace{1mm}- Minimum number of observations covered by a generated pattern.}
\STATEx{Output: \hspace{0mm} $\chi$ \hspace{1cm}- Set of prime patterns.}
\STATE{$\chi=\emptyset$.}
\STATE{$\mathcal{G}_0=\{\emptyset\}$.}
\FOR{ $d=1,\ldots,\hat{d}$}
\IF {$d<\hat{d}$}
\STATE {$\mathcal{G}_d=\emptyset$.} \COMMENT {$\mathcal{G}_{\hat{d}}$ is not required.}
\ENDIF
\FOR {$\tau \in \mathcal{G}_{d-1}$}
\STATE{ $p=$ maximum index of the literal in $\tau$.}
\FOR{ $s=p+1, \ldots, n$}
\FOR{$l_{new} \in \{l_s,\bar{l}_s\}$}
\STATE{$\tau^{\prime} = \tau \Vert l_{new}$.}
\FOR {$i=1$ to $d-1$}
\STATE {$\tau^{\prime \prime} = $ remove $i$\textsuperscript{th} literal from $\tau^{\prime}$.}
\IF {$\tau^{\prime \prime} \notin \mathcal{G}_{d-1}$}
\STATE {go to Step~\ref{dia}.}
\ENDIF
\ENDFOR
\IF {$k \leq \sum_{y \in {\Omega}^+_S}\tau^{\prime}(y)$} \COMMENT {$\tau^{\prime}$ covers at least $k$ many positive observations.}
\label{msup}
\IF {$1 \notin \tau^{\prime}({\Omega}^-_S)$} \COMMENT {$\tau^{\prime}$ covers no negative observation.}
\STATE{$\chi=\chi \cup \{\tau^{\prime}\}$.}
\STATE{ Remove the points (or observations) covered by $\tau^{\prime}$ from ${\Omega}^+_S$ .} \label{thfm}
\ELSIF {$d<\hat{d}$}
\STATE{$\mathcal{G}_d= \mathcal{G}_d \cup \{\tau^{\prime}\}$.}
\ENDIF
\ENDIF
\ENDFOR \label{dia}
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}
\caption{\small Positive prime pattern enumeration algorithm.}
\label{algo-pat}
\end{algorithm}
}
We have made two major changes in Algorithm~\ref{algo-pat} for pattern generation over the algorithm
proposed in~\cite{bor00}. Steps~\ref{msup} and~\ref{thfm} are different from the original algorithm and
Step~\ref{thfm} increases the probability that a point or observation is only covered by a single pattern instead of multiple patterns.
We expect that the majority of the observations will be covered by a unique pattern.
Thus, we no longer require the `theory formation' step to select the most suitable pattern to cover an observation.
In Step~\ref{msup}, we have ensured that a pattern is selected if and only if it covers
at least $k$ many positive observations.
This ensures that a selected pattern occurs frequently in the dataset.
One major drawback of this approach is that if $k>1$, some observations
in the dataset may not be covered by the selected set of patterns. However, a properly chosen value of $k$
ensures that more than $95\%$ of the observations are covered.
Note that, the negative prime patterns can also be generated in a similar fashion.
If we apply Algorithm~\ref{algo-pat} to the projection onto
$S=\{ b_{15},b_8,b_1,b_2\}$ of the binary dataset presented in Table~\ref{bintab} (see Appendix), the following positive patterns are generated using $k=1$:
(i) $b_2 b_8$, (ii) $b_2 \bar{b}_1$, (iii) $\bar{b}_2 b_{15}$; the corresponding negative patterns
are (i) $\bar{b}_2 \bar{b}_{15}$ and (ii) $b_2 b_{15}$.
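The defining conditions of a positive pattern, together with the minimum-coverage threshold $k$ of Algorithm~\ref{algo-pat}, can be checked directly. In the sketch below, a term is represented as a map from attribute index to the required literal value; this representation and the function names are ours, for illustration only.

```python
def covers(term, point):
    """A term covers a binary point if every literal it fixes agrees
    with the point, i.e. T(p) = 1."""
    return all(point[a] == v for a, v in term.items())

def is_positive_pattern(term, pos, neg, k=1):
    """A term is accepted as a positive pattern if it covers no
    negative observation and at least k positive observations."""
    if any(covers(term, q) for q in neg):
        return False
    return sum(covers(term, p) for p in pos) >= k
```

The symmetric check for negative patterns swaps the roles of the two observation sets, matching the symmetric pattern generation procedure described above.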
\subsection{Design of Classifier}
\label{dclfr}
The patterns generated by Algorithm~\ref{algo-pat} are transformed into rules, and these rules are later
used to build a classifier. The rule generation process is trivial, and it is explained using an example. Consider the first positive
pattern $b_2 b_8$. As evident from Table~\ref{bintab}, $b_2$ indicates whether $(A \ge 2.45)$ is true or false. Similarly,
$\bar{b}_2$ indicates whether $\neg(A \ge 2.45)$ is true or false. Consequently, the rule generated from the pattern $b_2 b_8$ is
$(A \ge 2.45) \land (B \ge 1.85$) $\implies$ $\mathcal L=1$. The corresponding pseudo-code is as follows.
\vspace{1mm}
\noindent
\newline
{\bf if} $(A \ge 2.45) \land (B \ge 1.85)$~{\bf then} \newline
\hspace*{5mm}Class label $\mathcal L = 1$ \newline
{\bf end if}
\vspace{1mm}
\noindent
\newline We can combine more than one positive rule into an {\it `if / else-if / else'} structure to design a classifier.
Similarly, one can build a classifier using the negative patterns. Hybrid classifiers use both the positive and the
negative rules. A simple classifier using the positive patterns is presented below.
{
\renewcommand{\thealgorithm}{}
\floatname{algorithm}{}
\begin{algorithm}[!ht]
\centering
\small
\begin{algorithmic}[1]
\STATEx{Input: Observation consisting of attribute $A,B,C$.}
\STATEx{Output: Class label $\mathcal L$.}
\IF {($A \ge 2.45) \land (B \ge 1.85$)}
\STATE{Class label $\mathcal L = 1$.}
\ELSIF {($A \ge 2.45) \land \neg(A \ge 3.05$)}
\STATE{Class label $\mathcal L = 1$.}
\ELSIF {$\neg (A \ge 2.45) \land ( 3.3 \leq C < 4.5)$}
\STATE{Class label $\mathcal L = 1$.}
\ELSE
\STATE{Class label $\mathcal L = 0$.}
\ENDIF
\end{algorithmic}
\caption{Simple Classifier.}
\label{clsfr}
\end{algorithm}
}
In general, a new observation $x$ is classified as {\it positive} if at least one positive pattern covers it and no negative pattern
covers it; a similar definition applies to {\it negative} observations.
However, in the `Simple Classifier', we have relaxed this criterion,
and we consider $x$ negative if it is not covered by any positive pattern. Another classification strategy that has worked well
in our experiments is based on the {\it balance score}~\cite{ale07}. The balance score is a linear combination of the positive ($P_l$) and
negative ($N_i$) pattern outputs, defined as:
\begin{equation}
\Delta(x) = \frac{1}{q}\sum_{l=1}^{q} P_l(x) - \frac{1}{r}\sum_{i=1}^{r} N_i(x)
\label{diseq}
\end{equation}
The classification $\eta(x)$ of the new observations $x$ is given by
\begin{equation}
\eta(x)=\begin{cases}
1, & \text{if $ \Delta(x) > 0$}.\\
0, & \text{if $ \Delta(x) < 0$}.\\
\epsilon, & \text{if $ \Delta(x) = 0$}. ~\text{Here, $\epsilon$ indicates unclassified.} \\
\end{cases}
\label{clseq}
\end{equation}
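The balance score and the resulting classifier admit a direct rendering; in the sketch below patterns are modeled as 0/1-valued functions, and the names are illustrative.

```python
def balance_score(x, pos_patterns, neg_patterns):
    """Delta(x): the fraction of positive patterns covering x minus
    the fraction of negative patterns covering x."""
    p = sum(P(x) for P in pos_patterns) / len(pos_patterns)
    n = sum(N(x) for N in neg_patterns) / len(neg_patterns)
    return p - n

def classify(x, pos_patterns, neg_patterns):
    """Eta(x): positive if Delta > 0, negative if Delta < 0,
    otherwise unclassified (epsilon)."""
    d = balance_score(x, pos_patterns, neg_patterns)
    if d > 0:
        return 1
    if d < 0:
        return 0
    return None  # epsilon: unclassified
```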
\section{Extension of LAD}
\label{slad}
The majority of LAD applications available in the existing literature~\cite{ale07} work with labeled data during the
classifier design phase. There
are many applications where a plethora of data is available that is unlabeled or partially labeled. These applications require
a semi-supervised or unsupervised pattern generation approach. One such application is the {\it intrusion detection system}, where
lightweight classification methods designed using LAD are desirable. However, the dearth of labeled observations makes
the development of a LAD based solution difficult. In this effort, we propose a pre-processing method which can label the available
unlabeled data. The proposed method requires that some labeled data be available during the design of classifiers; thus,
the method is akin to a semi-supervised learning approach~\cite{zhu05,zhu09}.
The process of class label generation is very simple: it uses a standard LAD based classifier~\cite{bor00} with the
{\it balance score}~\cite{ale07} as a discriminant function to classify an unlabeled observation. First, we design a
balance score based classifier using the set of available labeled observations D\textsubscript{L}.
Later, we classify each observation in the unlabeled dataset using this classifier. However, we replace the
classifier described in Equation~\ref{clseq} by Equation~\ref{clseq2}. Thus, we leave unlabeled those observations having a
very low balance score, and they are omitted from further processing. Essentially, we ensure that an observation
is classified/labeled during the labeling process only if it has a strong affinity towards the positive or the negative patterns.
\begin{equation}
\eta^\prime(x)=\begin{cases}
1, & \text{if $ \Delta(x) > \tau_1$}.\\
0, & \text{if $ \Delta(x) < \tau_0$}.\\
\epsilon, & \text{if $ \tau_0 \leq \Delta(x) \leq \tau_1$}.\\
\end{cases}
\label{clseq2}
\end{equation}
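The thresholded discriminant $\eta^\prime$ amounts to the following sketch; the default thresholds are the $\tau_0$ and $\tau_1$ values used in our experiments, and the function name is ours.

```python
def label_observation(delta, tau0=-0.021, tau1=0.24):
    """Eta'(x): assign a label only when the balance score Delta(x)
    shows a strong affinity towards one class."""
    if delta > tau1:
        return 1   # labeled positive
    if delta < tau0:
        return 0   # labeled negative
    return None    # epsilon: left unlabeled, dropped from training
```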
We have evaluated the performance of this strategy using the KDDTrain\_20percent dataset, which is part of the NSL-KDD dataset. The KDDTrain\_20percent dataset
consists of $25,192$ observations, and we have partitioned it into two parts. The first part D\textsubscript{L} consists of $5000$ randomly selected observations, and the second part D\textsubscript{UL} consists of the rest of the observations, from which we have removed the labels. Afterward, D\textsubscript{L} is used to design a classifier based on Equation~\ref{clseq2}. This classifier is later used for the classification of
D\textsubscript{UL}, and the output of the labeling process is a dataset D\textsubscript{L}\textsuperscript{$\prime$} which consists of all the labeled examples from D\textsubscript{UL}.
The results are summarized in Table~\ref{tab-kmns}. Clearly,
any error in the labeling process will have a cascading effect on the performance of Algorithm~\ref{algo-ids}. On the
other hand, the unlabeled samples (marked as $\epsilon$) have no such consequence for the performance of the proposed SSIDS.
Thus, while reporting the accuracy of the labeling process, we have considered the labeled samples only.
It is clear from Table~\ref{tab-kmns} that the number of observations that are now labeled is $17601 + 5000 = 22601$, and these observations are used for further processing. One important aspect
that remains to be discussed is the choice of $\tau_0$ and $\tau_1$. We have used {\boldmath $\tau_0 =-0.021$} and
{\boldmath$\tau_1 = 0.24$} in our experiments.
We arrived at these values after analyzing the outcome of the labeling process on the training dataset D\textsubscript{L}.
\begin{table}[ht]
\center
\begin{tabular}{|c|c|c|c|c|}
\hline
\#D\textsubscript{UL} & \multicolumn{3}{|c|}{Labeled} & \#Unlabeled$(\epsilon)$\\
\cline{2-4}
& \# Correctly & \#Wrongly & Accuracy &\\
\hline
20192 & 17333 & 268 &98.48\%&2591 \\
\hline
\multicolumn{5}{c}{}
\end{tabular}
\caption{Results related to the labeling of D\textsubscript{UL}.}
\label{tab-kmns}
\end{table}
Following the introduction of this pre-processing step, the steps of a semi-supervised LAD (or {\bf S-LAD}) are as follows.
{\small
\begin{enumerate}
\item {Class label (or truth value) generation.}
\item {Binarization.}
\item {Elimination of redundancy (or Support sets generation).}
\item {Pattern generation.}
\item {Classifier design and validation.}
\end{enumerate}
}
\section{Design of a Semi-Supervised IDS using S-LAD}
\label{sids}
Organizations and governments are increasingly using the Internet to deliver services, and attackers try to
gain unfair advantages by misusing network resources. Denning~\cite{den87} introduced the concept
of detecting cyber threats by constant monitoring of network audit trails using intrusion detection systems.
An intrusion can be defined as a set of actions that seek to undermine the availability, integrity, or confidentiality
of a network resource~\cite{den87,her09,yan15}. Traditional IDSs used to minimize such risks can be categorized into two types: (i) {\it anomaly based} and (ii) {\it misuse based} (a.k.a. signature based). Anomaly based IDSs build a model of normal activity, and any deviation from the model is considered an intrusion. In contrast, misuse based models generate signatures from past attacks to analyze current network activity. It has been observed that misuse based models are vulnerable to ``zero day'' attacks~\cite{mukk05}. Our proposed technique is unique in the sense that it can be used as either a misuse based or an anomaly based model; hybridization is also possible.
\subsection{Proposed Intrusion Detection System}
The proposed SSIDS is presented in Figure~\ref{fig-ids}. It consists of two major phases: an offline phase and an online phase.
The offline phase uses S-LAD to design a classifier, which the online phase then uses for real-time detection of abnormal activity from the data that describe the network traffic. Obviously, the offline phase must run at least once before the online phase can be used
to detect abnormal activity. The offline phase may also be scheduled to run at regular intervals to upgrade the classifier with new patterns or rules. The steps of the offline phase are summarized in Algorithm~\ref{algo-ids}. Note that
Step~\ref{bls} of Algorithm~\ref{algo-ids} implicitly uses Steps~\ref{sbin} to \ref{spgen} to build the classifier.
The online phase is very simple, as it merely applies the classifier generated in the offline phase to classify new observations.
\renewcommand{\thealgorithm}{2}
\begin{algorithm}[!ht]
\centering
\small
\begin{algorithmic}[1]
\STATEx {{Input}: Historical dataset consisting of labeled and unlabeled data.}
\STATEx {{Output}: Rule based classifier for the online phase.}
\STATE {Read the historical dataset D\textsubscript{L} and D\textsubscript{UL}.}
\STATE \label{bls}{Using D\textsubscript{L}, build a standard LAD Classifier based on the balance score (i.e., Equation~\ref{clseq2}).}
\STATE {Using the classifier from the previous step, label the dataset D\textsubscript{UL}
to generate D\textsubscript{L}\textsuperscript{$\prime$} and
$\Omega=$D\textsubscript{L} $\cup$ D\textsubscript{L}\textsuperscript{$\prime$}.}
\STATE \label{sbin}{Binarize $\Omega$ using the process described in Subsection~\ref{binr}.}
\STATE {Generate support set $S$ from the binary dataset.}
\STATE \label{spgen}{Generate positive and negative patterns (i.e., rules) using Algorithm~\ref{algo-pat}.}
\STATE \label{ana}{Design a classifier from the generated patterns following the example of the ``Simple Classifier'' from Subsection~\ref{dclfr}.}
\end{algorithmic}
\caption{Steps of Offline Phase of IDS}
\label{algo-ids}
\end{algorithm}
If only the positive rules are used to build the classifier in Step~\ref{ana} of Algorithm~\ref{algo-ids},
the resulting IDS can be termed anomaly based.
On the other hand, if only the negative rules are used, the design is similar to a signature based IDS.
\begin{figure}[ht]
\center
\includegraphics[width=8cm]{drawing2sv}
\caption{Block Diagram of the proposed SSIDS}
\label{fig-ids}
\end{figure}
\section{Performance Evaluations}
\label{expr}
The most widely used datasets for the validation of IDSs are NSL-KDD~\cite{nslkdd09} and KDDCUP'99~\cite{kddcup99}. NSL-KDD is a refined version of the KDDCUP'99 dataset, and we have used NSL-KDD in all our experiments. Both datasets consist of $41$ features along with a class label for each observation. The features fall into four categories: (i) basic features, (ii) content features, (iii) time-based traffic features, and (iv) host-based traffic features. The {\it basic features} are extracted from the TCP/IP connections without inspecting the packets, and there are nine such features in the NSL-KDD dataset. Features which are extracted after inspecting the payloads of a TCP/IP connection are known as {\it content features}; there are $13$ of them in the dataset. A detailed description of the features is available in Table~\ref{tab-inpf}. The dataset distinguishes several attack types, but we have merged them into a single class and consider them simply as ``attack''. Thus, we use two class labels in our experiments: ``normal'' and ``attack''. We have used the KDDTrain\_20percent subset of the NSL-KDD dataset to build the classifier in the offline phase. The KDDTest\textsuperscript{+} and the KDDTest\textsuperscript{21} datasets
have been used during the online phase for validation testing.
The details of the experimental setup are presented in Subsection~\ref{exp-setup}.
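Collapsing the fine-grained NSL-KDD attack types into the single ``attack'' class used here is a one-line mapping; a minimal sketch (the sample labels are real NSL-KDD attack names, the function name is ours):

```python
def collapse_label(label: str) -> str:
    """Map the fine-grained NSL-KDD class labels onto the two
    classes used in the experiments: only 'normal' is kept as-is,
    every attack type becomes simply 'attack'."""
    return "normal" if label == "normal" else "attack"

# Toy usage with a few labels that occur in NSL-KDD:
sample = ["normal", "neptune", "smurf", "normal", "teardrop"]
print([collapse_label(l) for l in sample])
# -> ['normal', 'attack', 'attack', 'normal', 'attack']
```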
\begin{table}[ht]
\center
\resizebox{1\columnwidth}{!}{
\small \bf
\begin{tabular}{|l|c|l|l||l|c|l|l|}
\hline
\rotatebox{90}{Feature~~}\rotatebox{90}{Type} &\rotatebox{90}{Col. No}& Input Feature &\rotatebox{90}{Data Type} & \rotatebox{90}{Feature~~}\rotatebox{90}{Type} &\rotatebox{90}{Col. No}& Input Feature&\rotatebox{90}{Data Type}\\
\hline
& 1 & duration & C & & 23 & Count & C\\
\cline{2-4} \cline{6-8}
& 2 & protocol\_type & S & & 24 & srv\_count & C\\
\cline{2-4} \cline{6-8}
& 3 & service & S & \multirow{8}{*}{\rotatebox{90}{Traffic}\rotatebox{90}{(Time Based)}} & 25 & serror\_rate & C\\
\cline{2-4} \cline{6-8}
\multirow{4}{*}{\rotatebox{90}{Basic}}& 4 & flag & S & & 26 & srv\_serror\_rate & C\\
\cline{2-4} \cline{6-8}
& 5 & src\_bytes & C & & 27 & rerror\_rate & C\\
\cline{2-4} \cline{6-8}
& 6 & dst\_bytes & C & & 28 & srv\_rerror\_rate & C\\
\cline{2-4} \cline{6-8}
& 7 & land & S & & 29 & same\_srv\_rate & C\\
\cline{2-4} \cline{6-8}
& 8 & wrong\_fragment & C & & 30 & diff\_srv\_rate & C\\
\cline{2-4} \cline{6-8}
& 9 & urgent & C & & 31 & srv\_diff\_host\_rate & C\\
\cline{2-4} \cline{5-8}
& 10 & hot & C & & 32 & dst\_host\_count & C \\
\cline{1-4} \cline{6-8}
& 11 & num\_failed\_logins & C & & 33 & dst\_host\_srv\_count & C \\
\cline{2-4} \cline{6-8}
& 12 & logged\_in & S &\multirow{8}{*}{\rotatebox{90}{Traffic}\rotatebox{90}{(Host Based)}} & 34 & dst\_host\_same\_srv\_rate & C \\
\cline{2-4} \cline{6-8}
& 13 & num\_compromised & C & & 35 & dst\_host\_diff\_srv\_rate & C \\
\cline{2-4} \cline{6-8}
& 14 & root\_shell & C & & 36 & dst\_host\_same\_src\_port\_rate & C \\
\cline{2-4} \cline{6-8}
\multirow{4}{*}{\rotatebox{90}{Contents}}& 15 & su\_attempted & C & & 37 & dst\_host\_srv\_diff\_host\_rate & C \\
\cline{2-4} \cline{6-8}
& 16 & num\_root & C & & 38 & dst\_host\_serror\_rate & C \\
\cline{2-4} \cline{6-8}
& 17 & num\_file\_creations & C & & 39 & dst\_host\_srv\_serror\_rate & C \\
\cline{2-4} \cline{6-8}
& 18 & num\_shells & C & & 40 & dst\_host\_rerror\_rate & C \\
\cline{2-4} \cline{6-8}
& 19 & num\_access\_files & C & & 41 & dst\_host\_srv\_rerror\_rate & C \\
\cline{2-4} \cline{5-8}
& 20 & num\_outbound\_cmds & C \\
\cline{2-4}
& 21 & is\_host\_login & S & \multicolumn{4}{c}{C means Continuous}\\
\cline{2-4}
& 22 & is\_guest\_login & S & \multicolumn{4}{c}{S means Symbolic}\\
\cline{1-4}
\multicolumn{4}{c}{}
\end{tabular}
}
\caption{Input features of the NSL-KDD dataset.}
\label{tab-inpf}
\end{table}
\subsection{Experimental setup}
\label{exp-setup}
The next step after label generation is binarization. Careful attention is needed to
track the number of binary variables produced during
this process. For {\it numeric} or {\it continuous} features, the number of binary variables generated depends directly on the number of cut-points:
a feature producing a large number of cut-points increases the number of binary variables quadratically. For example,
if the number of cut-points is $100$, the number of interval variables is $\binom{100}{2}=4950$, and after adding the level variables, the total number of binary variables is $4950+100=5050$. Consequently, the memory requirement can grow to
an unmanageable level. On the other hand, a large number of cut-points indicates that the feature may not have much influence on the
classification of observations. Our strategy is to ignore such features completely, and to partially ignore features
having a fairly large number of cut-points. Specifically, given a feature $x$, if the number of cut-points is greater than or equal to $175$, we ignore $x$ completely; if the number of cut-points is at least $75$ but less than $175$, we ignore $x$ partially by generating only the level variables.
We have arrived at these thresholds after empirical analysis using the training data.
The features that have been fully or partially ignored are listed in Table~\ref{tab-cutpt}.
{
\begin{table}[ht]
\center
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{|c|l|c|l|}
\hline
Col. Num. & Input feature & \#Cut-points & Ignored ?\\
\hline
1 & duration & 102 & Partially \\
\hline
5 & src\_bytes & 116 & Partially\\
\hline
23 & Count & 374 & {\bf Fully} \\
\hline
24 & srv\_count & 302 & {\bf Fully} \\
\hline
32 & dst\_host\_count & 254 & {\bf Fully} \\
\hline
33 & dst\_host\_srv\_count & 255 & {\bf Fully} \\
\hline
34 & dst\_host\_same\_srv\_rate & 100 & Partially \\
\hline
35 & dst\_host\_diff\_srv\_rate & 93 & Partially\\
\hline
36 & dst\_host\_same\_src\_port\_rate & 100 & Partially \\
\hline
38 & dst\_host\_serror\_rate & 98 & Partially\\
\hline
40 & dst\_host\_rerror\_rate & 100 & Partially\\
\hline
\multicolumn{4}{c}{}
\end{tabular}
}
\caption{Binarization: Ignored features of the NSL-KDD dataset.}
\label{tab-cutpt}
\end{table}
}
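The bookkeeping behind these thresholds is simple enough to write down: with $n$ cut-points a fully binarized feature yields $n$ level variables plus $\binom{n}{2}$ interval variables. A short sketch of the rule as stated above (the function name is ours):

```python
from math import comb

def binary_var_count(n_cutpoints: int) -> int:
    """Binarized variables produced for one continuous feature,
    following the thresholds used in the text:
      n >= 175 : feature fully ignored     -> 0 variables
      n >= 75  : partially ignored         -> n level variables only
      otherwise: level + interval variables -> n + C(n, 2)
    """
    n = n_cutpoints
    if n >= 175:
        return 0
    if n >= 75:
        return n
    return n + comb(n, 2)

# Worked example from the text: 100 cut-points would give
# C(100,2) + 100 = 5050 variables if kept in full, but the
# thresholds reduce it to the 100 level variables instead.
print(comb(100, 2) + 100)     # -> 5050
print(binary_var_count(100))  # -> 100
print(binary_var_count(374))  # -> 0  (e.g. the 'Count' feature)
```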
Another important aspect incorporated into our design is the support of a pattern. The support of a positive (negative) pattern is $k$ if it covers $k$ positive (negative) observations while covering no negative (positive) observation.
Thus, the value of $k$ in Step~\ref{msup} of Algorithm~\ref{algo-pat} is of immense importance. In a previous implementation~\cite{bor00} the value $k=1$
was used, but we observed during our experiments that such a low support generates a lot of patterns/rules with
little practical significance. Moreover, these patterns cause many false positives during testing. An empirical
analysis led us to fix the threshold at $k=100$. At this threshold, more than $95\%$ of the observations in the training
dataset are covered by the generated patterns of degree up to $4$.
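The support-based filtering can be sketched directly over the binary dataset; here a pattern is represented as a list of (bit index, required value) literals, and the function names are ours:

```python
def support(pattern, observations):
    """Number of observations covered by `pattern`, a pattern being
    a list of (bit_index, required_value) literals over binary data."""
    return sum(all(obs[i] == v for i, v in pattern) for obs in observations)

def keep_positive_pattern(pattern, positives, negatives, k=100):
    """Retain a positive pattern only if it covers at least k positive
    observations and not a single negative one (k = 100 in the text;
    negative patterns are filtered symmetrically)."""
    return support(pattern, negatives) == 0 and support(pattern, positives) >= k

# Toy usage: a one-literal pattern over 2-bit observations.
pos = [[1, 0], [1, 1], [1, 0]]
neg = [[0, 1]]
pat = [(0, 1)]                                    # "first bit must be 1"
print(keep_positive_pattern(pat, pos, neg, k=2))  # -> True
print(keep_positive_pattern(pat, pos, neg, k=5))  # -> False
```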
\vspace*{-3mm}
\algrenewcommand\alglinenumber[1]{\tiny #1:}
\renewcommand{\thealgorithm}{1}
\floatname{algorithm}{Classifier}
\begin{algorithm}[ht]
\centering
\begin{algorithmic}[1]
\tiny
\STATE {{Input}: Observation $obs$ having $41$ features.}
\STATE {{Output}: Class label $\mathcal L$.}
\IF {$\neg(obs(36) \ge 0.0050) \land (obs(37) \ge 0.0050 \land obs(37) < 0.1150)$}
\STATE {$\mathcal L=1$} \COMMENT{$\mathcal L=1$ indicates normal behavior.}
\ELSIF {$obs(5) \ge 28.50 \land \neg(obs(37) \ge 0.005 \land obs(37)< 0.915) \land strcmp(obs(3),'ftp\_data')$}
\STATE {$\mathcal L = 1$}
\ELSIF {$obs(5) \ge 28.5000 \land (obs(37) \ge 0.0050 \land obs(37)< 0.9150) \land obs(34) \ge 0.1950$}
\STATE {$\mathcal L=1$}
\ELSIF {$obs(5) \ge 28.500000 \land \neg(obs(5) \ge 333.500000) \land obs(37) \ge 0.005000 \land obs(37) < 0.085000$}
\STATE {$\mathcal L=1$}
\ELSIF {$obs(5) \ge 28.500000 \land \neg(obs(34) \ge 0.645000) \land strcmp(obs(3),'ftp\_data')$}
\STATE {$\mathcal L=1$}
\ELSIF {$obs(6) \ge 0.500000 \land obs(37) \ge 0.005000 \land obs(37) < 0.915000 \land \neg(obs(40) \ge 0.005000)$}
\STATE {$\mathcal L=1$}
\ELSIF {$obs(6) \ge 0.500000 \land \neg(obs(5) \ge 333.500000) \land obs(5) \ge 181.500000$}
\STATE {$\mathcal L=1$}
\ELSIF {$obs(5) \ge 333.500000 \land (obs(36) \ge 0.015000) \land obs(6) \ge 0.500000 \land obs(6) < 8303.000000$}
\STATE {$\mathcal L=1$}
\ELSIF {$obs(5) \ge 333.500000 \land obs(34) \ge 0.645000 \land obs(6) \ge 0.500000 \land obs(6)< 8303.000000$}
\STATE {$\mathcal L=1$}
\ELSIF {$\neg(obs(40) \ge 0.005000) \land obs(34) \ge 0.645000 \land \neg(obs(36) \ge 0.005000)$}
\STATE {$\mathcal L=1$}
\ELSIF {$\neg(obs(40) \ge 0.005000) \land obs(6) \ge 0.500000 \land obs(6) < 8303.000000 \land \neg(obs(34) \ge 0.045000)$}
\STATE {$\mathcal L=1$}
\ELSIF {$obs(5) \ge 28.50 \land obs(6) \ge 0.500 \land obs(36) \ge 0.015000 \land \neg(obs(10) \ge 0.500000 \land obs(10) < 29.000000)$}
\STATE {$\mathcal L=1$}
\ELSIF {$obs(5) \ge 28.50 \land obs(6) \ge 0.500 \land \neg(obs(40) \ge 0.005000) \land \neg(obs(10) \ge 0.500000 \land obs(10) < 29.000000)$}
\STATE {$\mathcal L=1$}
\ELSE
\STATE {$\mathcal L=0$} \COMMENT{$\mathcal L=0$ indicates attack.}
\ENDIF
\end{algorithmic}
\caption{\small Details of SSIDS}
\label{clsfier}
\end{algorithm}
\vspace*{-5mm}
\subsection{Experimental Results}
We have described all the steps required to design a classifier in the offline phase. Let us now summarize the outcome of the individual steps.
\newline
1.~{\it Labeling}: We have used $5000$ labeled observations for labeling $20192$ unlabeled observations as described in Section~\ref{slad}. This step produces $22601$ labeled observations which have been used in the following steps to design the classifier.
\newline
2.~{\it Binarization}: During this step, a total of $10306$ binary variables is produced, and a binary dataset along with its class labels, of size $22601 \times 10307$, is generated.
\newline
3.~{\it Support Set Generation}: We have selected $21$ binary features according to their discriminating power.
\newline
4.~{\it Pattern Generation}: During pattern generation, we found $13$ positive and $7$ negative patterns.
\newline
5.~{\it Classifier Design}: We have developed a rule-based IDS using the $13$ positive patterns generated in the previous step.
Thus, the SSIDS contains $13$ rules, the details
of which are given in Classifier~\ref{clsfier}. The NSL-KDD collection contains two test datasets: (i) KDDTest\textsuperscript{+} with $22,544$ observations, and (ii) KDDTest\textsuperscript{21} with $11,850$ observations. These two datasets are used to measure the accuracy of the
proposed SSIDS, and the corresponding results are presented in Table~\ref{tab-res}. These results compare favorably with
the state-of-the-art classifiers proposed in~\cite{rana17}; the comparative results are presented in Table~\ref{tab-comp}. It is evident that the proposed SSIDS outperforms the existing IDSs by a wide margin.
{
\begin{table}[ht]
\center
\resizebox{1.0\columnwidth}{!}{\bf
\begin{tabular}{|c|c|c|c|c|c|}
\hline
{\small Dataset} & {\small Accuracy} & {\small Precision} & {\small Sensitivity} & {\small F{1}-Score} & {\small Time in sec.}\\
\hline
KDDTest\textsuperscript{+} & 90.91\% &0.9458 &0.8915 & 0.9179& 0.000156 \\
\hline
KDDTest\textsuperscript{21} & 83.92\% &0.9417 & 0.8564&0.8971& 0.000173\\
\hline
\multicolumn{3}{c}{}
\end{tabular}
}
\caption{Results related to KDDTest\textsuperscript{+} and KDDTest\textsuperscript{21}.}
\label{tab-res}
\end{table}
}
\begin{table}[ht]
\center
\resizebox{0.80\columnwidth}{!}{
\begin{tabular}{|c|c|c|}
\hline
Classifiers\textsuperscript{\$} & \multicolumn{2}{c|}{Accuracy using Dataset(\%)}\\
\cline{2-3}
& KDDTest\textsuperscript{+} & KDDTest\textsuperscript{21} \\
\hline
J48\textsuperscript{*} & 81.05 & 63.97 \\
\hline
Naive Bayes\textsuperscript{*} & 76.56 & 55.77 \\
\hline
NB Tree\textsuperscript{*} & 82.02 & 66.16 \\
\hline
Random forests\textsuperscript{*} & 80.67 & 63.25 \\
\hline
Random Tree\textsuperscript{*} & 81.59 & 58.51 \\
\hline
Multi-layer perceptron\textsuperscript{*} & 77.41 & 57.34 \\
\hline
SVM\textsuperscript{*} & 69.52 & 42.29 \\
\hline
Experiment-1 of ~\cite{rana17} & 82.41 & 67.06 \\
\hline
Experiment-2 of ~\cite{rana17} & 84.12 & 68.82 \\
\hline
LAD\textsuperscript{@} & {87.42} & {79.09} \\
\hline
Proposed SSIDS & {\bf 90.91} & {\bf 83.92} \\
\hline
\multicolumn{3}{l}{* ~Results as reported in~\cite{rana17}.}\\
\multicolumn{3}{l}{@ Classifier designed using dataset D\textsubscript{L} only by}\\
\multicolumn{3}{l}{~~~~omitting the `labeling' process.}\\
\multicolumn{3}{l}{\$~~All the classifiers use the same training}\\
\multicolumn{3}{l}{~~~~dataset, i.e., KDDTrain\_20percent.}\\
\end{tabular}
}
\caption{Performance Comparison between different Classifiers, IDSs and the proposed SSIDS.}
\label{tab-comp}
\end{table}
\vspace*{-10mm}
\section{Conclusion}
\label{sec-con}
Intrusion detection systems (IDSs) are critical tools for detecting cyber attacks, and semi-supervised IDSs are gaining popularity
as they can enrich their knowledge-base from unlabeled observations as well. Discovering and understanding usage patterns from past observations
plays a significant role in the detection of network intrusions. Usage patterns typically establish a causal
relationship between observations and their class labels, and LAD is useful for exactly such problems, where we need to
automatically generate patterns that can predict the class labels of future observations. Thus, LAD
is ideally suited to the design of IDSs. However, the dearth of labeled observations makes it
difficult to use classical LAD in the design of IDSs, particularly semi-supervised IDSs, where unlabeled
examples must be considered along with the labeled ones. In this work, we have proposed a simple methodology that extends classical LAD to
incorporate unlabeled observations along with labeled observations. We have successfully employed the proposed technique
to design a new semi-supervised intrusion detection system which outperforms the existing semi-supervised IDSs by a wide margin, both in terms of accuracy and detection time.
\bibliographystyle{IEEEtranS}
\section{Introduction}
\label{intro}
Although Raman spectroscopy is nowadays used as a ``fingerprint'' technique for the identification of many molecules, its big disadvantage is still the low efficiency of the inelastic Raman scattering process itself. However, the scattering efficiency can be markedly enhanced for molecules adsorbed at noble metal surfaces, leading to Surface Enhanced Raman Scattering (SERS) \cite{Sers_1}. The major reason for this is the enhanced local electric field of the light wave at specific features like tips or grooves of rough metal surfaces. However, the density and shape of these features on noble metal surfaces prepared by electrochemical or other simple deposition techniques is rather arbitrary and often non-reproducible, leading to a random distribution of field hot spots with widely varying SERS efficiency. A common goal is therefore to fabricate SERS substrates which offer a high density of uniformly distributed hot spots of the same intensity, resulting in the same predictable SERS enhancement from all parts of the substrate area \cite{SERS-review}.\par
To specifically design substrates for increased SERS sensitivity, metal nanostructures are commonly fabricated by electron beam lithography (EBL). Das et al., for instance, fabricated periodic Au nanocube structures applying EBL \cite{EBL1}. Lin et al. used focused ion beam milling (FIB) to precisely fabricate different Au disc patterns exhibiting hexagon-like, shield-like, pentagon-like, and kite-like geometries \cite{EBL2}. Furthermore, Cin et al. studied the influence of the detailed geometry of the nanostructures on SERS, fabricating samples of concentric rings and arcs with EBL \cite{SERS_04}. The diversity of designs is endless when fabrication with EBL or FIB is considered, and SERS enhancements of up to $10^{11}$ from free-standing gold bowtie antennas have been reported \cite{SERS_03}. To obtain these huge SERS enhancements, the concentration of the electric dipolar near field in ultra-thin gaps between metal nanoparticle antennas is crucial, and a local plasmon resonance has to be excited. This was studied in detail for metal particle dimers \cite{nordlander-dimers}, \cite{rechberger-dimers} as well as for extended closely packed metal disc arrays \cite{bendana-arrays}. Another approach aims at increasing the Q-factor of the local plasmon resonances of the metal particles. This can be achieved by realising a strictly periodic order of the metal particles, resulting in sharp lattice resonances which appear at a wavelength just beyond the Wood's anomaly, when the first diffraction order becomes evanescent \cite{Barnes-1}. In this case the cancellation of radiation losses to orders higher than the zeroth reflection or transmission order results in the concentration of the light intensity within the array.
This has already been used to enhance the luminescence of emitters dispersed within a metal particle array \cite{Vecchi-1}, \cite{Vecchi-2}, and an impact on SERS has also been reported \cite{old-SERS-lattice}, although a large SERS enhancement due to the lattice resonance has not yet been observed \cite{Chapelle}.\par
The cited publications clearly demonstrate that the shape, size and arrangement of the nanoparticles, the material of the metal surface (like gold or silver) and the excitation wavelength strongly affect the strength of the SERS signal. Naturally, the samples fabricated by EBL or FIB only involve small areas of a few square microns, since the e-beam/FIB writing process is time-consuming and expensive. However, replication methods like nanoimprinting have been used to transfer the patterns to larger scales, paving the way for applying highly regular controlled nanostructures as general SERS substrates \cite{EBL3}.\par
Here we report on the increase of SERS signals from molecules attached to a one-dimensional periodic array of chains of closely spaced gold discs. The idea is to achieve especially high field enhancements (intense hot spots) by combining the increased near fields in small gaps between neighbouring metal discs within the same chain with a lattice resonance generated by the periodic arrangement of the chains.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{S500_chain-fin.png}
\caption{SEM-image of an investigated structure showing the periodic arrangement of chains of gold discs on glass \label{chain_500}}
\end{figure}
\section{Plasmonic Resonances}
To realize the intended enhancement we investigate chains of gold discs with an individual height of 60 nm and a diameter of 65 nm. The centre-to-centre distance between nearest discs within the same chain is 125 nm. The period between different chains was varied from 420 to 520 nm in 20 nm increments [Fig.\ref{chain_500}]. These disc arrays, with an overall dimension of $100\mu m $ x $100\mu m$, were fabricated on glass substrates applying electron beam lithography, gold evaporation (including an initially deposited 5 nm Cr layer for better adhesion) and a final lift-off process.\par
A microscope setup was used to collect transmission spectra of the described sample, employing a halogen lamp as white light source. A collimated beam of linearly polarised light was created with the help of a lens system and a polarizer. The transmitted light was collected by an infinity-corrected 20x objective, and a tube lens was used to create an intermediate image of the structure in the plane of an iris. The intermediate image was then imaged onto a CCD via a plano-convex lens. By adjusting the iris in the intermediate image plane and observing the image of the sample with the CCD, one could restrict the area of observation and ensure that only light transmitted through the disc chain arrays reaches the CCD. Instead of creating an image on the CCD, the transmitted light could be diverted by a flip mirror and coupled into a fibre bundle connected to an Acton Spectra Pro SP-2500 monochromator with an attached single-channel Si photodetector for spectral analysis.\par
For the formation of clear plasmonic grating resonances a homogeneous refractive index environment around the gold discs has to be created. We therefore applied immersion oil on the discs, which was index-matched to the glass substrate, resulting in a homogeneous refractive index of n=1.5 surrounding the discs.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{transmission_01.png}
\caption{Transmission against wavelength for different chain periods. The incident light is polarised parallel to the chains. The first order plasmonic grating resonance appears as a dip, which shifts to the red with increasing period between the chains. The 785nm line represents the wavelength of the Raman excitation laser used later for the SERS measurements \label{transmisson}}
\end{figure}
In Fig.\ref{transmisson} the transmission for different chain periods is plotted against the wavelength.
Every transmission spectrum shows the same characteristic shape: with increasing wavelength, we first note a sharper peak followed by a broader dip. Increasing the period of the chains from 420 nm to 520 nm leads to a red shift of the dip from 740 nm to 830 nm [Fig.\ref{transmisson}].\par
These dips indicate the excitation of the plasmonic grating resonances. For the chosen periods they appear as a dip on the long-wavelength side of the spectrally much broader local plasmon resonance of the single discs \cite{doktorarbeit}. The interaction between the single-disc resonance and the grating resonance leads to the observed Fano line shape of the described peak-dip feature. In general, the first-order plasmonic grating resonance appears when the incident wavelength fulfills the condition
\begin{equation}\label{eq:1}
\lambda = n \cdot a
\end{equation}
\noindent(where $n$ is the refractive index surrounding the discs and $a$ is the period between the chains in nm). In this case the fields scattered from the discs of the neighbouring chains arrive in phase with the primary incident light wave. The constructive superposition of these electric fields leads to a strongly enhanced electron oscillation in the discs, resulting in an increased near field as well as enhanced absorption \cite{doktorarbeit}-\cite{schilling}.
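Evaluating Eq.~(\ref{eq:1}) for the fabricated periods gives the positions of the Wood's anomaly (a simple numerical sketch); note that the measured transmission dips in Fig.\ref{transmisson} lie somewhat to the red of these values, as expected for a resonance appearing just beyond the Wood's anomaly and interacting with the single-disc resonance:

```python
# First-order grating condition lambda = n * a (Eq. (1)), evaluated
# for the fabricated chain periods in the n = 1.5 oil/glass environment.
n = 1.5
for a in range(420, 521, 20):  # chain period in nm
    print(f"a = {a} nm  ->  lambda = {n * a:.0f} nm")
# runs from a = 420 nm (lambda = 630 nm) to a = 520 nm (lambda = 780 nm)
```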
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{hot_spots_3.png}
\caption{Simulation of the local electric field around the gold discs for normal incidence at a wavelength of 785nm. a) The E-field of the incident light (blue arrow) is polarised along the chains ($0^\circ$ polarisation) and b) perpendicular to the chains ($90^\circ$ polarisation) \label{hotspot}}
\end{figure}
For SERS the enhancement of the local field at the metal surface is crucial, and a finite-element simulation using the commercial software COMSOL was performed to illustrate the enhancement and the areas of field concentration [Fig.\ref{hotspot}]. In the first case ($0^\circ$ polarisation) the electric field and the chains are parallel to each other [Fig.\ref{hotspot}a)]. In the second case ($90^\circ$ polarisation) the electric field is oriented perpendicular to the chains [Fig.\ref{hotspot}b)].\par
The brighter colours in these heat maps indicate areas of strong field enhancement. A concentration of the electric field between neighbouring discs of the same chain is expected when the incident light wave is polarised along the chain, forming the so-called ``hot spots'' [Fig.\ref{hotspot}a)]. The increased electron oscillation at the plasmonic grating resonance promises a further strong enhancement of these hot spots. In contrast, for a polarisation of the incident light perpendicular to the chains [Fig.\ref{hotspot}b)], the formation of hot spots is not observed and the overall level of the near field appears subdued.
These numerical results can be understood by considering the electric dipole fields of the discs induced by the incident light. When the incident light is polarised parallel to the disc chains, the induced oscillating plasmonic dipoles of the discs form a ``head-to-tail'' configuration [Fig.\ref{headtotail}], in which the dipoles are all aligned along the chain. Since the near field of the discs is strongest at the poles of the induced dipoles, the superposition of the dipolar near fields in this configuration leads to strong electric fields in the narrow gaps between neighbouring discs of the same chain, forming the already mentioned hot spots. For a polarisation perpendicular to the chains, the field in the gaps between the discs of the same chain is minimal, and the discs of the next chain are too far away to produce a sizeable overlap of the near fields, as the dipolar near field of each disc decays with $1/r^3$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{headtail_3.png}
\caption{For the case that the electric field $E_L$ of the light is polarised along the chain the dipoles form a "Head--to--Tail"--configuration. The electric field lines (dotted) of the dipoles superpose and a strong localised electric near-field is generated in the gap between the "head" and "tail" of closely spaced neighbouring dipoles resulting in field hot spots in these gaps. \label{headtotail}}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{raman_polarisation.png}
\caption{Influence of the polarisation of the excitation laser on the SERS measurements. The intensity of the Raman-signal at $1070cm^{-1}$ varies with the polarisation of the laser. $0^\circ$ and $180^\circ$ correspond to a polarisation of the laser along the chains, $90^\circ$ for a polarisation perpendicular to the chains \label{raman_2}. Measurements performed in air.}
\end{figure}
\section{SERS results}
The predicted field enhancement due to the plasmonic grating resonance is of special interest for SERS measurements. We therefore investigated its impact on the Raman spectra of 4-methylbenzenethiol (4-MBT).\par
To attach the 4-MBT molecules to the disc arrays, the sample was dipped into a 1 mM 4-MBT/methanol solution. The excess solution was then washed off with methanol and the sample was dried in air. Due to its thiol group, 4-MBT binds directly to the gold discs and forms a self-assembled monolayer on the gold surface. For the SERS measurements a Raman microscope (HORIBA Jobin Yvon LabRam HR Evolution) with a laser excitation wavelength of 785 nm and a power of ca. 7 mW was chosen. This relatively long wavelength was selected because gold exhibits lower losses in the near IR, improving the plasmonic response of the discs, which ultimately results in higher field enhancements.\par
A 50x objective was used to focus the laser onto the sample and to collect the backscattered light, and the Raman spectrum was detected by a CCD camera. To reduce radiation damage to individual molecules and to measure the average SERS enhancement of the disc array, the laser was scanned over four separate 30x30 $\mu m$ squares within the $100\mu m $ x $100\mu m$ disc arrays using the DUOSCAN mode of the Raman microscope. Finally, the average of the four scans was taken.
\par
Fig.\ref{raman_2} illustrates the intensity of the SERS--Signal for different polarisations of the excitation laser. To keep the initial complexity of the spectra to a minimum the measurements were recorded in air. The polarisation is defined as angle between the electric field vector of the incident Raman laser and the direction of the disc chains. We concentrate on the characteristic Raman--peak at $1070cm^{-1}$, which corresponds to a wavelength of 851 nm and is caused by a combination of the C--H in plane bending and the ring breathing vibrational modes of the 4-MBT-molecule \cite{MBT-ramanlines}. The SERS--signal reaches its maximum for polarisations $0^\circ$ and $180^\circ$. A rotation of the polarisation leads to a decrease of the signal until it vanishes at a polarisation of $90^\circ$. As the measured SERS intensity depends strongly on the electric field at the surface of the discs (where the molecules are attached) the observed polarisation dependence of the SERS-signal can be attributed to the polarisation dependence of the formation of the near-field hot spots [Fig.\ref{hotspot}]. The maximum SERS signal for parallel polarisation ($0^\circ$ polarisation) is caused by the enhanced near field in the narrow gaps in head-tail dipole configuration. Vice versa the minimum SERS signal at perpendicular polarisation ($90^\circ$ polarisation) is the result of the corresponding minimal near field in the narrow gaps between the discs.\par
In this way the observed polarisation-dependent SERS signal already implies that the SERS signal mainly stems from the described hot spots between the discs, which only form at parallel polarisation.\par
In a second experiment the impact of the grating resonances on the SERS signal was investigated for different periods between the disc chains [Fig.\ref{raman_1}]. For these measurements an immersion oil with a refractive index of $n=1.5$ was applied to the sample to create the homogeneous refractive index environment around the gold discs that is necessary for the formation of the distinct grating resonances. Here only the parallel polarisation was used. We concentrate again on the characteristic peak at $1070\,cm^{-1}$. The data show that it reaches its maximum intensity when the period is about 480 nm and decreases for smaller and larger periods.
This behaviour can be ascribed to the appearance of the plasmonic grating resonances. For a chain period of 480 nm the plasmonic grating resonance matches the laser wavelength of 785 nm (which is used to excite the Raman signal), so that a maximum field enhancement of the exciting laser field is achieved in the hot spots.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{raman_1.png}
\caption{Influence of the chain period on the SERS measurements. The intensity of the Raman signal at $1070\,cm^{-1}$ varies with the chain periodicity. For the measurements an immersion oil with a refractive index of $n=1.5$ was applied to the sample. \label{raman_1}}
\end{figure}
\section{Discussion}
The observation that the maximum SERS signal is obtained when the excitation wavelength matches the grating resonance is supported by a comparison of the spectral position of the plasmonic grating resonance and the intensity of the $1070\,cm^{-1}$ SERS peak for varying chain period [Fig.\ref{comparison}]. Here the SERS intensity (blue) and the spectral position of the plasmonic resonance (black) are plotted against the chain period. The plasmonic resonance for the different chain periods was read from Fig.\ref{transmisson} and shows the linear dependence on the chain period as described in (\ref{eq:1}). The wavelength of the laser is marked as a dotted horizontal line. When the plasmonic grating resonance approaches the Raman excitation laser wavelength of 785 nm, the SERS signal becomes maximal. This is the case for a period of 480 nm, as pointed out before. With this, Fig.\ref{comparison} clearly demonstrates the coincidence of the plasmonic grating resonance, the associated maximum field enhancement, and the resulting maximum SERS intensity. The maximum local electric field is achieved by a combination of near- and far-field enhancement in the disc array. For this to work, the different polarisation dependences of the dipolar near and far fields are important. While for $0^\circ$ polarisation the near field is strongest parallel to the dipole oscillation along the chains, the radiating far field (which is responsible for the formation of the plasmonic grating resonances) is strongest perpendicular to the chains. With these two coupling mechanisms in mind, the disc array parameters were chosen: narrow gaps between neighbouring discs along the chain for the enhancement of the near fields, and a larger period between the chains matching the wavelength requirement for the formation of the plasmonic grating resonance.\par
A similar strategy was already employed by Crozier et al., who investigated relatively sparse arrays of optical antennas interlaced with a gold strip grating \cite{Crozier-1}, \cite{Crozier-2}. There the near-field enhancement was realised within the narrow feed gap of the optical antennas, while the diffractive far-field coupling was achieved by the strip grating. The high SERS enhancement reported by Crozier et al. was additionally boosted by the excitation of surface plasmon standing waves on an underlying gold layer. The potential advantage of our chain arrays compared to the sparse antenna arrays of Crozier et al. lies in the higher density of near-field hot spots in our chains. The disc period within our chains is only 125 nm, while the spacing between the corresponding optical antennas in \cite{Crozier-1}, \cite{Crozier-2} is about six times larger (730 nm). However, to harvest this potential, the gaps between neighbouring discs in the same chain have to be reduced further from the current 60 nm down towards 20 nm or less, so that the near-field enhancement in the hot spots can be increased further.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{raman_plasmonik.png}
\caption{Comparison of the dependence of the plasmonic grating resonance (black) and the SERS intensity (blue) on the chain period. The maximum SERS signal is obtained when the grating resonance occurs at the Raman laser wavelength of 785 nm, for a 480 nm period. \label{comparison}}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{enhancement_simulation.png}
\caption{COMSOL simulation: averaged local enhancement factor (EF) as a function of the chain period $a$. The maximum EF is obtained at a chain period of 510 nm. The EF decreases strongly with small variations of the chain period.\label{comsol-simulation}}
\end{figure}
Moreover, based on more detailed finite element simulations we also suspect that the maximum possible SERS intensity enhancement for the existing structure type (with the still relatively large gaps of about 60 nm) might not have been reached experimentally yet. Fig. \ref{comsol-simulation} illustrates the dependence of the averaged Raman enhancement factor (EF) on the chain period using finer increments of the period. For this, the electric field distribution within the array was simulated for a plane wave at normal incidence at the laser wavelength (785 nm) and at the Raman wavelength (851 nm). The EF was then calculated by means of the formula \cite{comsol}
\begin{equation}
EF = \frac{1}{A_{Ges}}\int \frac{|E_{Lloc}|^2 \cdot |E_{Rloc}|^2 }{|E_{L0}|^2 \cdot |E_{R0}|^2} \mathrm{d}A
\end{equation}
where $A_{Ges}$ is the surface area of the gold discs, $E_{Lloc}$ is the local electric field at the incident laser wavelength at the gold disc surface and $E_{Rloc}$ is the local field at the Raman wavelength, while $E_{L0}$ and $E_{R0}$ are the fields of the incident light. The EF was evaluated for periods ranging from 420 nm to 520 nm, and the simulations predict a sharp maximum of the field enhancement at a chain period of 510 nm. At this period the simulation predicts an efficient excitation of the grating resonance by the laser light.\par
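As an illustration, the surface integral above can be discretized over sampled elements of the disc surface. The following Python sketch is our own rendering of the formula, not the COMSOL implementation; function and variable names are illustrative.

```python
import numpy as np

def enhancement_factor(E_Lloc, E_Rloc, dA, E_L0=1.0, E_R0=1.0):
    """Discretized EF = (1/A_Ges) * integral of
    |E_Lloc|^2 |E_Rloc|^2 / (|E_L0|^2 |E_R0|^2) dA
    over the gold disc surface, sampled on elements of area dA."""
    A_ges = np.sum(dA)  # total gold disc surface area A_Ges
    integrand = (np.abs(E_Lloc) ** 2 * np.abs(E_Rloc) ** 2) / (
        np.abs(E_L0) ** 2 * np.abs(E_R0) ** 2)
    return np.sum(integrand * dA) / A_ges

# Sanity check: a uniform field enhancement g at both wavelengths
# reproduces the familiar |E|^4 scaling of SERS, i.e. EF = g**4.
g = 3.0
ef = enhancement_factor(np.full(100, g), np.full(100, g), np.ones(100))
```

For a uniform enhancement $g$ at both the laser and Raman wavelengths this reduces to the well-known $|E|^4$ approximation, $EF = g^4$.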
These finite element results show that the local field enhancement is very sensitive to the exact period: a deviation of 10 nm already has a decisive impact. A finer experimental tuning of the chain period with a step size of 10 nm or 5 nm would therefore be necessary to reliably determine the ultimate maximum possible SERS enhancement in this structure. The slight difference between the theoretically predicted maximum SERS enhancement at a chain period of 510 nm and the experimentally observed maximum at 480 nm is attributed to slight deviations of the experimentally realised structure from the calculated one.\par
In conclusion, we have shown that the SERS signal in the investigated periodic gold disc structures can be strongly increased by the correct choice of the light polarisation and of the periodic distance between the chains. The results also demonstrate that the field enhancement in our nanostructured substrates can be engineered effectively by combining near-field superposition and far-field interference. This might pave the way to further intentionally designed SERS substrates employing a combination of different field enhancement effects.
\begin{acknowledgements}
The authors would like to acknowledge the funding under EFRE project ZS/2016/04/78121 and DFG-project RE3012/2.
\end{acknowledgements}
\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{T}{he} computational costs of convolutional neural networks (CNNs) have increased as CNNs get wider and deeper to perform better predictions for a variety of applications. For deep learning to have revolutionary impact on real-world applications, their computational costs must meet the timing, energy, monetary, and other design constraints of the deployed services. Many approaches have been studied to reduce the computational costs at all levels of software and hardware, from advances in network architectures \cite{szegedy2015going,chollet2017xception} down to electronics where even memory devices have been extensively researched \cite{sun2018fully, shim2016low}.
Although training requires more computations when compared to inference, it is still important to reduce the cost of inference as much as possible because it is the inference that is usually subject to more strict real-world design constraints.
Many hardware-based approaches have shown significant improvements for the computational costs of CNN inferences, but there are two limitations commonly found in these works.
Some techniques are computationally expensive in order to optimize their methods for each network model, or to retrain networks to compensate for the performance degradation from their methods \cite{kung2015power, zhang2015approxann}.
Also, many techniques such as \cite{rastegari2016xnor} are only effective for small networks and do not scale to deeper CNNs, reporting much worse performance when tested on deeper networks.
They leverage the fact that a small number of bits are sufficient for small CNNs, but more complex networks require more bits to properly represent the amount of information \cite{lai2017deep}.
One promising hardware-based approach is the application of approximate multiplication to CNN inference \cite{kim2018efficient}. It involves designing and applying multiplication circuits that have reduced hardware costs but produce results that are not exact. Unlike aggressive quantization that trades off numeric precision, the multipliers trade off arithmetic accuracy that is less dependent on the network models, making them better suited for deeper CNNs. The approach does not involve any optimization to a target network model or require additional processing of the network models, allowing easy adaptation into the ASIC and FPGA accelerators.
While optimizing CNN inference through approximate multiplication was demonstrated in several previous studies, there was limited understanding of why it worked well for CNNs. The promising results led to the general observation that CNNs were resilient against small arithmetic errors, but none of them identified the complete reason behind that resilience. Specifically, it was unclear how the CNN layers preserved their functionalities when all their multiplications have a certain amount of error. The lack of understanding made it challenging to identify the suitable approximate multiplier for each network model, leading to expensive search-based methodologies in some studies \cite{mrazek2016design}.
This paper investigates how the errors from approximate multiplication affect deep CNN inference. The work is motivated by hardware circuits but it focuses on the implications from the Deep Learning perspective.
The contributions are summarized as follows:
\begin{itemize}
\item Explaining how convolution and fully-connected (FC) layers maintain their intended functionalities despite approximate multiplications.
\item Demonstrating how batch normalization can prevent the buildup of error in deeper layers when its parameters are properly adjusted.
\item Discussing how these findings also explain why bfloat16 multiplication performs well on CNNs despite the reduction of precision.
\item Performing experiments to show that deep CNNs with approximate multiplication perform reasonably well.
\item Discussing the potential cost benefits of the methodology by briefly comparing the hardware costs against those of bfloat16 arithmetic.
\end{itemize}
\section{Preliminaries}
\label{sec:prelim}
The convolution layers in CNNs consist of a large number of multiply-accumulate (MAC) operations and they take up the majority of computations for CNN inferences \cite{qiu2016going}.
The MAC operations are ultimately performed in the hardware circuits, and it is important to minimize the cost of these circuits to perform more computations with the same amount of resources.
Within MAC operations, multiplications are more complex than additions and consume the most resources.
The proposed methodology consists of minimizing the cost of multiplication by replacing the conventional multipliers with approximate multipliers.
Approximate multipliers are significantly cheaper compared to the exact multipliers but they introduce errors in the results.
There are many different types of approximate multipliers with various costs and error characteristics.
Some designs use the electronic properties \cite{chippa2010scalable} and some approximate by intentionally flipping bits in the logic \cite{du2014leveraging}, while others use algorithms to approximate multiplication \cite{sarwar2016multiplier}.
This paper studies the effects of approximate multiplication with the approximate log multiplier presented in \cite{kim2018efficient} as well as a few other promising designs.
The approximate log multiplication is based on Mitchell's Algorithm \cite{mitchell1962computer}, which performs multiplications in the log domain.
Fig. \ref{fig:logmult} shows the difference between the conventional fixed-point multiplier and the log multiplier.
An important benefit of the algorithm-based approximation is the consistent error characteristics which allow for consistent observation of the effects across various CNN instances.
The other types of approximation have more inconsistent errors that make them ill-suited for the study.
For example, approximate multipliers based on electronic properties depend not only on the operands but also on Process, Voltage, and Temperature (PVT) variations, making it difficult to get consistent observations.
The findings of this study are not limited to log multiplication, however, and may help explain the viability of other approaches when they meet the conditions discussed in Section \ref{subsec:minimized}.
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{images/figure1.png}}
\caption{Difference between (a) the conventional fixed-point multiplication and (b) the approximate log multiplication. $k$ stands for characteristic and $m$ stands for mantissa of logarithm.}
\label{fig:logmult}
\end{center}
\vskip -0.2in
\end{figure}
\begin{figure}[t]
\captionsetup{justification=centering}
\begin{subfigure}[t]{0.97\linewidth}
\centering
\includegraphics[height=2.3in]{images/new_mitch_pattern.png}
\caption{Error pattern of the original Mitchell multiplier with exact sign handling, given two signed inputs.}
\label{fig:mitch_pattern}
\end{subfigure}
~
\qquad
\begin{subfigure}[t]{0.97\linewidth}
\centering
\includegraphics[height=2.3in]{images/new_mitchk_c1_patt.png}
\caption{Error pattern of Mitch-$w$6 with C1 approximated sign handling, given two signed inputs.}
\label{fig:mitchk_c1_pattern}
\end{subfigure}
~
\qquad
\begin{subfigure}[t]{0.97\linewidth}
\centering
\includegraphics[height=1.9in]{images/new_mitchk_c1_side.png}
\caption{Error pattern of Mitch-$w$6, viewed from the side.}
\label{fig:mitchk_c1_pattern_side}
\end{subfigure}
~
\caption{Error patterns of approximate log multipliers.}
\label{fig:error_patterns}
\end{figure}
\begin{figure*}[ht]
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[height=3.2in]{images/insidemnist_mitchnew.png}
\caption{Convolution by Log Mult.}
\label{fig:insidemnist_mitchconv}
\end{subfigure}
\qquad
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[height=3.2in]{images/insidemnist_floatnew.png}
\caption{Convolution by Float Mult.}
\label{fig:insidemnist_convfloat}
\end{subfigure}
\qquad
\hspace{-5mm}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[height=3.2in]{images/insidemnist_scorenew.png}
\caption{The final scores}
\label{fig:insidemnist_finalscore}
\end{subfigure}
\caption{Convolution outputs and the final raw scores of a sample inference from LeNet \cite{kim2018efficient}.}
\label{fig:insidemnist}
\end{figure*}
The errors from approximate log multiplication are deterministic and depend on the two input operands, similarly to the other algorithmic approximation methods. Fig. \ref{fig:error_patterns} shows the error patterns of the original Mitchell log multiplier \cite{mitchell1962computer} and Mitch-$w$6 \cite{kim2018efficient} with a million random input pairs. The relative error is defined as Equation \ref{eq:deferr} where $|Z|$ is the magnitude of the exact product and $|Z'|$ is the magnitude of the approximate product.
\begin{align} \label{eq:deferr}
\begin{split}
error_{relative} = \frac{|Z'|- |Z|}{|Z|}.
\end{split}
\end{align}
Approximate log multiplication requires separate sign handling and does not affect the signs of the products \cite{kim2018efficient}.
Compared to the original Mitchell log multiplier, Mitch-$w$6 has a small frequency of high relative errors caused by the 1's complement (C1) sign handling, but they are acceptable as CNNs consist of MAC operations \cite{kim2018efficient}.
It should be noted that the approximate log multipliers have reasonably even distributions of errors across the input ranges, but they can only have negative errors, which cause the products to have smaller magnitudes than the exact products.
The mean error of an approximate multiplier is measured by repeating many multiplications with random inputs; at 32 bits, the Mitchell multiplier has a biased mean error of $-3.9\%$ while Mitch-$w$6 has $-5.9\%$.
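As a sketch, Mitchell's approximation can be reproduced in a few lines of Python. The implementation below is our own minimal rendering for unsigned operands (sign handling omitted, names illustrative); it exhibits the always-negative error and an empirical mean error near $-3.9\%$ for uniformly random inputs.

```python
import random

def mitchell_multiply(a, b):
    """Approximate unsigned multiplication via Mitchell's algorithm:
    log2(x) is approximated as k + m with k = floor(log2(x)) and
    m = x / 2**k - 1; the two logs are added and the antilog is taken
    with the same linear approximation, 2**(k + m) ~= (1 + m) * 2**k."""
    def approx_log2(x):
        k = x.bit_length() - 1          # characteristic
        return k + x / (1 << k) - 1.0   # + linear mantissa
    def approx_antilog2(y):
        k = int(y)
        return (1.0 + (y - k)) * (1 << k)
    return approx_antilog2(approx_log2(a) + approx_log2(b))

# Empirical mean relative error over random operand pairs.
random.seed(0)
errs = []
for _ in range(100_000):
    a, b = random.randint(1, 1 << 16), random.randint(1, 1 << 16)
    z, z_approx = a * b, mitchell_multiply(a, b)
    errs.append((z_approx - z) / z)
mean_err = sum(errs) / len(errs)  # close to -3.9%, never positive
```

Note that the error vanishes when both operands are powers of two (zero mantissa), consistent with the error patterns in Fig. \ref{fig:error_patterns}.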
Besides the convolution layers, the FC layers also have MAC operations but they have fewer computations compared to convolution \cite{qiu2016going}.
Our methodology still applies approximate multiplication to the FC layers, for consistency with networks that use 1x1 convolution as classifiers.
The effect of approximating FC layers is minimal because of the reasons discussed in Section \ref{sec:errorconv}.
On the other hand, the operations in batch normalization are not approximated because they can be absorbed into neighboring layers during inferences \cite{lin2016fixed}.
It is important to understand the difference between the method of quantization and the approximate multiplication.
Quantization is the process of converting floating-point values in the CNN models to fixed-point for more cost-efficient inferences in the hardware \cite{lin2016fixed}.
The goal of quantization is to find the minimum number of fixed-point bits that can sufficiently represent the distribution of values; aggressive quantization with small numbers of fixed-point bits cannot match the range and precision of the floating-point format. The error from this approximation depends on the network model, as each has a different distribution of values \cite{judd2016stripes,lai2017deep}. This network dependency is the reason why more complex networks require more bits and why the benefits of aggressive quantization diminish. While many studies have successfully demonstrated the effectiveness of quantization, they usually report significant degradation of CNN prediction accuracies when using only 8 bits on deep CNNs \cite{jacob2018quantization}.
Approximate multiplication is less dependent on the networks because its source of error is from the approximation methods, not any lack of range and precision.
Given proper quantization, the approximate multiplication further minimizes the cost of multipliers for the given number of bits.
Approximate multiplication is an orthogonal approach to quantization as approximate multipliers may be designed for any number of bits, and it complements quantization to maximize the computational efficiency of CNN inferences.
\section{Accumulated Error in Convolution}
\label{sec:errorconv}
This section explains how the convolution and FC layers achieve their intended functionalities despite the errors from approximate multiplication.
\subsection{Understanding Convolution and FC Layers}
Explaining the effects of approximate multiplication must begin with understanding how the convolution and FC layers achieve their intended functionalities.
Fig. \ref{fig:insidemnist} is taken from \cite{kim2018efficient} and shown here to visualize the outputs of convolution and FC.
The CNN convolution layers achieve abstract feature detection by performing convolution between their input channels and kernels.
They produce feature maps, as shown in Fig. \ref{fig:insidemnist_mitchconv} and \ref{fig:insidemnist_convfloat}, where the locations that match the kernel are represented by high output values relative to other locations.
Unlike a sigmoid or step activation, the widely used ReLU activation function simply forces the negative output values to zero and does not have absolute thresholds with which the abstract features are identified.
That means the abstract features are not identified by their absolute values but by the relatively higher values within each feature map, and this claim is also supported by the fact that convolution is often followed by a pooling layer.
Similarly, when the FC layers classify an image based on the abstract features, the probabilities of classes are decided by the relative strengths and order among all FC outputs.
CNNs simply select the best score as the most probable prediction instead of setting a threshold with which a prediction is made.
Because the features are represented with relative values as opposed to absolute values, it is much more important to minimize the variance of error between the convolution outputs than minimizing the absolute mean of errors when applying approximate multiplication to convolution \cite{kim2018efficient}.
In other words, it is acceptable to have a certain amount of error in multiplications as long as the errors affect all outputs of convolution as equally as possible.
The FC layers behave in the same way so that it is important to minimize the variance of error between the nodes.
Fig. \ref{fig:insidemnist} demonstrates this principle and shows that the Mitchell log multiplier can produce a correct inference because all outputs are affected at the same time.
Fig. \ref{fig:insidemnist} also shows that the variances of accumulated errors in the convolution and FC layers are very small when the approximate log multiplier is applied, and the convolutions are still able to locate the abstract features albeit with smaller magnitudes.
The previous work \cite{kim2018efficient}, however, did not identify the reason why the variance of accumulated error was minimized when approximate multiplication was applied.
\subsection{Minimized Variance of Error}
\label{subsec:minimized}
This paper provides the analytical explanation for why the variance of accumulated error was minimized in the convolution and FC layers. These layers consist of large numbers of multiplications and accumulations that converge the accumulated errors to a mean value. The variance of the accumulated error is minimized and all outputs of the layers are equally affected because of this convergence, and then maintaining the relative magnitudes between the outputs preserves the functionality of abstract feature detection.
Equation \ref{eq:multiconv} shows the multi-channel convolution where feature $s$ at ($i$,$j$) is the accumulation of products between kernel $w$ and input $x$ across the kernel dimensions ($m$,$n$) and the input channels ($l$).
\begin{equation}
\label{eq:multiconv}
s_{i,j} = \sum_{l}\sum_{m}\sum_{n} w_{l,m,n} \cdot x_{l,i-m,j-n}~.
\end{equation}
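As a sketch (array shapes and names are ours), Equation \ref{eq:multiconv} translates directly into a triple sum over the input channels and the kernel dimensions:

```python
import numpy as np

def conv_feature(w, x, i, j):
    """One output feature s_{i,j} of a multi-channel convolution,
    s_{i,j} = sum_l sum_m sum_n w[l,m,n] * x[l, i-m, j-n].
    w has shape (L, M, N); x has shape (L, H, W); valid for
    i >= M - 1 and j >= N - 1."""
    L, M, N = w.shape
    s = 0.0
    for l in range(L):          # accumulate across input channels
        for m in range(M):      # and across the kernel dimensions
            for n in range(N):
                s += w[l, m, n] * x[l, i - m, j - n]
    return s
```

Each output thus accumulates $L \cdot M \cdot N$ products, and this accumulation count is what drives the error convergence discussed in this section.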
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[height=1.8in]{images/converge.png}}
\caption{Accumulation of many products with varying amount of error converges the combined errors to a mean value.}
\label{fig:converge}
\end{center}
\vskip -0.2in
\end{figure}
The distributions of weights and inputs are different for each CNN model and layer \cite{qiu2016going,lai2017deep,judd2016stripes}.
The input operands to multiplication, weights and input pixels, are numerous and practically unpredictable with pseudo-randomness, which in turn makes the error from approximate multiplication pseudo-random.
The approximate log multipliers have evenly distributed error patterns across the input ranges, as shown in Fig. \ref{fig:error_patterns}, and therefore the expected value of the error is close to the mean error of the approximate multiplier regardless of the different ranges of inputs from CNNs.
When each convolution output accumulates many products from approximate multiplication, the accumulated error statistically converges closer to the expected value, which is the mean error of the approximate multiplier.
This convergence reduces the variance of the accumulated error between the outputs and the values scale by roughly the same amount, minimizing the effect of varying error on feature detection.
Fig. \ref{fig:converge} shows the abstraction of this mechanism and Fig. \ref{fig:insidemnist} shows an example.
Equation \ref{eq:converr} describes the feature $s'_{i,j}$ when multiplications are associated with the mean error of $e$.
\begin{gather}
\label{eq:converr}
s'_{i,j} = \sum_{l}\sum_{m}\sum_{n} w_{l,m,n} \cdot x_{l,i-m,j-n} \cdot (1 + e)~, \\
\label{eq:converrfinal}
s'_{i,j} = (1+e) \cdot s_{i,j}~.
\end{gather}
Therefore, the features are simply scaled by the mean error of the approximate multiplication when a large number of products are accumulated.
The above observations hold only for approximate multiplications with errors that are symmetric between positive and negative products, so that Equations \ref{eq:converr} and \ref{eq:converrfinal} hold.
The approximate multipliers studied in this paper satisfy this condition because all of them handle the signs separately from magnitudes.
Although we primarily used the Mitch-$w$ multiplier to develop this hypothesis, the hypothesis does not depend on the inner workings of the log multiplier but only on the output error characteristics. Therefore, the theory can be similarly applied to any approximate multiplier that meets the assumptions made in this section, namely the evenly distributed error and the symmetric errors between positive and negative products. Having only negative errors like Mitch-$w$ is not a requirement. It should be noted that the assumption of an evenly distributed error is used to accommodate different ranges of inputs, and may be relaxed when an approximate multiplier can produce a consistent expected value of error for particular input distributions. In this paper, we also used DRUM6 \cite{hashemi2015drum} and the truncated iterative log multiplier \cite{kim2019cost} for the experiments in Section \ref{sec:experiments} to show that the hypothesis may be applied to other approximate multipliers.
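The convergence argument can be illustrated numerically. In the sketch below, all distributions are illustrative assumptions (positive products with uniformly distributed per-product errors around a Mitchell-like mean of $-3.9\%$); the standard deviation of the accumulated relative error shrinks as the number of accumulated products grows.

```python
import numpy as np

rng = np.random.default_rng(42)
E_MEAN = -0.039  # assumed mean relative error of the multiplier

def accumulated_error(n_products, n_trials=2000):
    """Relative error of sum(p * (1 + e)) versus sum(p), where each of
    n_products exact products p receives a pseudo-random relative
    error e with mean E_MEAN (uniform spread of +/-4% around it)."""
    errs = np.empty(n_trials)
    for t in range(n_trials):
        p = rng.uniform(0.1, 1.0, n_products)               # exact products
        e = rng.uniform(E_MEAN - 0.04, E_MEAN + 0.04, n_products)
        errs[t] = np.sum(p * (1.0 + e)) / np.sum(p) - 1.0   # accumulated error
    return errs.mean(), errs.std()

m_few, s_few = accumulated_error(9)      # few accumulations per output
m_many, s_many = accumulated_error(576)  # many accumulations per output
```

With many accumulations the output is scaled by almost exactly $1 + e$, matching Equation \ref{eq:converrfinal}, whereas with only a few accumulations the scaling varies noticeably between outputs.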
\subsection{Impact on Convolution and FC}
\label{subsec:impact_on_conv}
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[height=2.4in]{images/depthwise.png}}
\caption{Depthwise Convolution has a reduced number of accumulations and convergence of error per output.}
\label{fig:diffconv}
\end{center}
\vskip -0.2in
\end{figure}
The number of accumulations in convolution is finite so the convergence does not completely nullify the variance of accumulated error.
The small amount of error variance from approximate multiplication is acceptable, however, because CNNs are designed to be general and robust against small variations by nature.
The techniques of regularization, such as pooling and dropout, intentionally lose some information to suppress overfitting and increase the generality of CNN predictions. Some studies have observed that small arithmetic errors have similarly positive effects \cite{ansari2019improving,kim2018efficient,wang2019bfloat16}.
For example, an eye needs to be recognized as an eye even when it is a little different from the training samples. CNNs are designed to overlook such small differences, and some computational inaccuracies are not only tolerable but often beneficial in providing such generality.
Deep CNNs typically start with smaller numbers of convolution channels to obtain general features, and the number of channels increases in the deeper layers where features become more specific.
Approximate multiplication on such CNNs exhibits the desired trend of having smaller effects in the wide and deep layers as required.
The larger variance of accumulated error in the shallow layers is tolerable because the feature detection needs to account for the small variations in the input images.
In fact, some previous works, such as \cite{sarwar2016multiplier, kim2017power}, had claimed that earlier layers can be approximated more in neural networks.
This hypothesis implies the importance of exact additions in CNNs because the multiplication errors will not converge properly with inexact accumulations.
This agrees with the work in \cite{du2014leveraging} where approximating the additions had a larger impact on the CNN accuracies.
As multipliers in fixed-point arithmetic are much more expensive than adders, approximating only the multipliers gains the most benefit with minimal degradation in CNN inferences.
Approximate multiplication also benefits from the fact that the convolution outputs receive inputs from the same set of input channels.
For each convolution output, there are two types of accumulations.
One type occurs within each input channel across the kernel dimensions while the other occurs across the input channels to produce the final output.
The intra-channel accumulation combines the products from the same input channel and kernel, and therefore each channel has a specific range of values within which features are located.
The inter-channel accumulation may have more varying ranges of products because each input channel has its own kernel and input values.
Different input ranges may trigger different error characteristics on the approximate multiplier, but every convolution output accumulates from all input channels so that it does not affect the variance of accumulated error between the outputs.
An implication of this observation is that approximate multiplication does not work as well when every output does not accumulate from the same set of data, as in the cases of grouped convolution and branches in CNN architectures.
The FC layers are also resilient against the effects of approximate multiplication as the same factors help converge errors in the outputs.
There is usually a large number of accumulations per each output and all outputs share the same set of inputs.
Thus, CNN accuracies show minimal differences when the FC layers have approximate multiplications as demonstrated in Section \ref{sec:experiments}.
\subsection{Grouped and Depthwise Convolutions}
\label{subsec:grouped}
\begin{figure*}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[height=1.6in]{images/batchnorm.png}}
\caption{Abstract overview of batch normalization.}
\label{fig:batchnorm}
\end{center}
\vskip -0.2in
\end{figure*}
The benefits of approximate multiplication with conventional convolution are best understood and verified by comparing against grouped and depthwise separable convolution.
Depthwise separable convolution consists of depthwise convolution followed by pointwise convolution \cite{chollet2017xception}.
Depthwise convolution is a special case of grouped convolution that eliminates the accumulation across input channels, and the reduced number of accumulations leads to an increase in the variance of accumulated error in the outputs.
Fig. \ref{fig:diffconv} shows the comparison of the accumulation pattern between conventional convolution and depthwise convolution.
Also, each output channel receives inputs from only one input channel, so the errors differ between output channels; these differences are subject to another approximate multiplication and additional error variance before the inter-channel accumulations occur in the following pointwise convolution.
More accurate approximate multipliers are required for CNNs that use depthwise separable convolution because errors from approximate multiplication do not converge well.
A sufficiently accurate approximate multiplier can still perform reasonably well, as demonstrated in Section \ref{sec:experiments}.
Another technique that reduces the number of accumulations is 1x1 convolution, but it is found to be compatible with approximate multipliers.
1x1 convolution does not have any intra-channel accumulation but accumulates the products across input channels.
Because deep CNNs require large numbers of channels appropriate for their deep structures, inputs to 1x1 convolutions usually consist of many input channels and therefore provide enough accumulations for the error convergence.
Each output of 1x1 convolution also receives inputs from all input channels, which provides more consistent accumulation of error between the outputs.
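The difference in accumulation counts can be made concrete with a small helper (a hypothetical illustration of ours; the channel count of 256 is an arbitrary example):

```python
def accumulations_per_output(c_in, kernel, groups=1):
    # Number of multiply-accumulate terms feeding one output element:
    # each output sees c_in/groups input channels through a
    # kernel x kernel window.
    return (c_in // groups) * kernel * kernel

conv3x3 = accumulations_per_output(256, 3)                 # 2304 terms
depthwise = accumulations_per_output(256, 3, groups=256)   # 9 terms
pointwise = accumulations_per_output(256, 1)               # 256 terms
```

Depthwise convolution leaves only the small intra-channel window to accumulate over, while 1x1 convolution retains the full inter-channel accumulation, which is why the former weakens error convergence and the latter does not.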
\section{Effect of Batch Normalization}
\label{sec:batch}
The approximate log multiplication with Mitchell's Algorithm generates negative error in the results, meaning that the product has less magnitude compared to the exact multiplication \cite{mitchell1962computer}.
It is evident from Equation \ref{eq:converrfinal} that the features have smaller magnitudes with the log multiplication in each convolution layer.
There are many convolution layers that repeatedly cause this reduction, and previous work reported that it became a problem for deeper layers \cite{kim2018efficient}.
Its adverse effect on network performance was observable in AlexNet with only 8 convolution and FC layers, and it was unclear how the mean error accumulation would behave in much deeper networks.
Having tens or hundreds of convolution layers significantly reduces the magnitudes of the features so that the deeper layers receive input distributions that are difficult to distinguish.
On the other hand, if an approximate multiplier has a positively biased mean error, it is possible to amplify the values beyond the range set by quantization, resulting in the arithmetic overflow.
These adverse effects are under the best-case scenario of ReLU activation, and the other types such as a sigmoid function may suffer additional errors in activations.
The ReLU function simply forces negative values to zero and does not change the magnitudes of positive inputs, but the same is not true for other activation functions, where the magnitudes of positive inputs also change the activation values.
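For reference, a minimal software sketch of Mitchell's approximate log multiplication (an emulation we write for illustration, not the hardware design evaluated in the paper) makes the negative bias visible: the linear approximation underestimates both the logarithm and the antilogarithm, so the approximate product never exceeds the exact one:

```python
import numpy as np

def mitchell_multiply(a, b):
    # Mitchell's Algorithm: with x = 2^k * (1 + m), m in [0, 1),
    # approximate log2(x) by k + m, add the two approximate logs,
    # and apply the same linear approximation in reverse as antilog.
    def approx_log2(x):
        k = np.floor(np.log2(x))
        m = x / 2.0 ** k - 1.0
        return k + m
    s = approx_log2(a) + approx_log2(b)
    k = np.floor(s)
    return 2.0 ** k * (1.0 + (s - k))
```

For positive operands the relative error of this scheme stays between roughly -11.1\% and zero, which is exactly the negatively biased mean error discussed above.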
Batch normalization \cite{ioffe2015batch}, the popular technique used in most deep CNNs, can alleviate this problem and help approximate multiplication go deeper into the networks.
A critical function of batch normalization is to redistribute the output feature maps to have more consistent input distributions for deeper layers.
While the training process necessitates this function, the inferences on the resulting models still need to go through the normalization with the stored global parameters of expected distributions.
These global parameters can be appropriately adjusted to account for the changes in the distributions due to approximate multiplication, and this can prevent the accumulation of mean error across the layers.
The abstract overview of batch normalization is shown in Fig. \ref{fig:batchnorm}.
During training, each batch normalization layer calculates and stores the mean and variance values of the input distributions.
These mean and variance values are used to normalize the input distributions to generate the normalized distributions with the mean value of zero and the variance of one.
Then, batch normalization uses learnable parameters to scale and shift the normalized distribution to restore the representation power of the network \cite{ioffe2015batch}.
In essence, batch normalization redistributes the feature maps before or after the activation function so that the next layer may receive consistent distributions of inputs.
All these parameters are learned during training and stored as numerical values in CNN models, and they can be easily modified if necessary.
CNN inferences use these stored parameters to perform normalization assuming they represent the same input distributions during inferences.
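The inference-time computation described above can be sketched as follows (a generic batch-normalization formula; the parameter names are conventional rather than taken from a specific framework):

```python
import numpy as np

def batchnorm_inference(x, running_mean, running_var, gamma, beta, eps=1e-5):
    # Normalize with the statistics stored during training, then scale
    # and shift with the learned parameters gamma and beta.
    x_hat = (x - running_mean) / np.sqrt(running_var + eps)
    return gamma * x_hat + beta
```

When the stored statistics match the actual input distribution, the normalized features have zero mean and unit variance before the learned scale and shift are applied.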
The mean and variance parameters are a source of error for approximate multiplication without proper adjustments because the distribution of convolution outputs changes as the result of approximate multiplication.
Equations \ref{eq:meanfeat3} and \ref{eq:varifeat3} show the mean ($\mu'$) and variance ($(\sigma')^2$) of the convolution output distribution, when the features $s'_{i,j}$ have the mean error $e$ from Equation \ref{eq:converrfinal}.
\begin{gather}
\label{eq:meanfeat1}
\mu' = \frac{1}{m} \sum_{i,j} s'_{i,j}~,\\
\label{eq:meanfeat2}
\mu' = \frac{1}{m} \sum_{i,j} (1+e) \cdot s_{i,j}~,\\
\label{eq:meanfeat3}
\mu' = (1 + e) \mu~.\\
\label{eq:varifeat1}
(\sigma')^2 = \frac{1}{m} \sum_{i,j} (s'_{i,j} - \mu')^2~,\\
\label{eq:varifeat2}
(\sigma')^2 = \frac{1}{m} \sum_{i,j} (1 + e)^2 (s_{i,j} - \mu)^2~,\\
\label{eq:varifeat3}
(\sigma')^2 = (1 + e)^2 \cdot \sigma^2~.
\end{gather}
Therefore, the stored mean values for batch normalization must be scaled by $(1 + e)$, while the variance values are scaled by $(1+e)^2$.
With the adjusted parameters, the batch normalization layers correctly normalize the convolution outputs and scale them back to the desired distributions.
In the process, the mean and variance of the outputs match those of exact multiplication and the effect of mean error accumulation disappears.
Failing to adjust these parameters results in incorrect redistribution of feature maps, and worse CNN accuracies.
The proposal only requires the scaling of the stored parameters and significantly improves the performance of approximate multipliers on deep neural networks.
It does not introduce any new operations and does not prevent the ability of batch normalization to fold into neighboring layers.
Designing an approximate multiplier with an unbiased mean error near zero is another effective solution, but it is much harder to make changes to hardware designs.
Unbiased designs usually retain a small residual mean error because it is difficult to create a perfectly unbiased design, so the problem is merely deferred to deeper networks.
Also, depending on the approximation method, it may take additional hardware resources to make a design unbiased.
The networks that do not use batch normalization have no choice but to use the unbiased multipliers, but otherwise the proposed adjustment is simpler, less costly, and more flexible to accommodate different approximation methods with biased mean errors.
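A minimal sketch of the proposed adjustment, assuming the stored statistics are available as arrays (the function name is ours):

```python
import numpy as np

def adjust_batchnorm(running_mean, running_var, e):
    # e is the mean relative error of the approximate multiplier;
    # e = -0.059 corresponds to the Mitch-w6 multiplier in this paper.
    # Scale the stored means by (1 + e) and variances by (1 + e)^2,
    # following Equations (meanfeat3) and (varifeat3).
    return running_mean * (1.0 + e), running_var * (1.0 + e) ** 2
```

With e = -0.059, the mean parameters are scaled by 0.941 and the variance parameters by about 0.885, matching the values used in Section \ref{subsec:expbat}; no new operations are introduced at inference time.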
\section{Arithmetic Reason for Bfloat16 Success}
\label{sec:bfloat16}
The discoveries in Sections \ref{sec:errorconv} and \ref{sec:batch} are not limited to the error of approximate multiplication but apply to all sources of arithmetic error.
They also provide deeper understanding of why bfloat16 \cite{wang2019bfloat16} has been widely successful at accelerating CNNs despite its reduced precision.
The bfloat16 format is an approximation of the FP32 floating-point format that simply truncates the 16 least significant bits from the 23 fractional bits.
By truncating the less significant fractional bits, converting an FP32 value to bfloat16 generates a small negative error from 0\% to -0.78\% relative to the original FP32 value. The factors discussed in Section \ref{sec:errorconv} also minimize the adverse effects of this varying error and they explain why using the full FP32 accumulator after bfloat16 multiplication produces the best results \cite{henry2019leveraging}, in agreement with the observation that the accumulations need to be exact.
The accumulation of mean error discussed in Section \ref{sec:batch} should also be present, but the mean error of bfloat16 is too small to cause any problems for the studied CNNs.
The successful application of bfloat16 to CNNs has been explained by the high-level interpretation that the small amount of error helps the regularization of a CNN model. The interpretation is still valid and also applies to approximate multiplication, and the findings of this paper provide deeper understanding with the arithmetic explanation. They also explain why the bfloat16 format has slightly degraded performances with the networks that use grouped convolution as presented in Section \ref{subsec:expvari}.
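The truncation behavior described above can be emulated in software as follows (a sketch of ours assuming the IEEE 754 single-precision bit layout; it truncates rather than rounds, which is the conversion described above):

```python
import numpy as np

def to_bfloat16(x):
    # Reinterpret the FP32 bits and zero out the 16 least-significant
    # mantissa bits, keeping the sign, exponent, and top 7 mantissa bits.
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)
```

Because truncation can only remove magnitude, the relative error for positive values lies between 0 and about -0.78\%, the small negative bias quoted above.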
\section{Experiments}
\label{sec:experiments}
\subsection{Setup}
\label{subsec:setup}
The experiments are performed in the Caffe framework to evaluate the impact of approximate multipliers on deep CNN models \cite{yangqing2014caffe}.
Caffe has limited features compared to contemporary tools but its lack of encapsulation allows easy modification of underlying matrix multiplication, making it suitable for the study.
The code that performs floating-point matrix multiplication in GPU is replaced by the CUDA C++ functions that emulate the behavior of the target approximate multipliers.
These functions are verified against RTL simulations of the HDL code of the multipliers.
The Mitch-$w$6 multiplier with the C1 sign handling is chosen because the comparison against the other multipliers showed that it was cost-efficient while performing well on AlexNet \cite{kim2018efficient}.
Mitch-$w$ multipliers consume significantly less resources compared to the Mitchell log multiplier.
DRUM6 multiplier \cite{hashemi2015drum} is also added to the experiments because it performed very well on AlexNet while being more costly than Mitch-$w$6 \cite{kim2018efficient}.
The truncated iterative log multiplier in \cite{kim2019cost} has higher accuracy than these multipliers and is tested for networks that have depthwise separable convolution. Unlike Mitch-$w$, DRUM6 and the truncated iterative log multiplier have the unbiased mean errors close to zero.
The FP32 floating-point results are included for comparison, and the bfloat16 results provide additional data points (see Section \ref{sec:bfloat16}).
\begin{table}
\centering
\scriptsize
\caption{Pre-trained CNN models used for the experiments}
\vskip 0.1in
\renewcommand\tabcolsep{3pt}
\renewcommand\arraystretch{1.5}
\begin{tabular}{llcc@{\hspace{7.0pt}}ccc}
\toprule[0.8pt]
\multicolumn{1}{l}{\textbf{Network}} &
\multicolumn{1}{c}{\textbf{Model Source}} &
\multicolumn{1}{c}{\textbf{BatchNorm}} &
\multicolumn{1}{c}{\textbf{Grouped Conv.}} &
\\
\cmidrule[0.5pt](l{2pt}r{2pt}){1-4}
\textbf{VGG16} &
\cite{yangqing2014caffe} &
&
&
\\
\textbf{GoogLeNet} &
\cite{yangqing2014caffe} &
&
&
\\
\textbf{ResNet-50} &
\cite{he2016deep} &
$\surd$ &
&
\\
\textbf{ResNet-101} &
\cite{he2016deep} &
$\surd$ &
&
\\
\textbf{ResNet-152} &
\cite{he2016deep} &
$\surd$ &
&
\\
\textbf{Inception-v4} &
\cite{liu2019enhancing} &
$\surd$ &
&
\\
\textbf{Inception-ResNet-v2} &
\cite{silberman2017tf} &
$\surd$ &
&
\\
\textbf{ResNeXt-50-32x4d} &
\cite{Xie2016} &
$\surd$ &
$\surd$ &
\\
\textbf{Xception} &
\cite{liu2019enhancing} &
$\surd$ &
$\surd$ &
\\
\textbf{MobileNetV2} &
\cite{sandler2018mobilenetv2} &
$\surd$ &
$\surd$ &
\\
\bottomrule[0.8pt]
\end{tabular}%
\label{tab:list_cnn}%
\vskip -0.1in
\end{table}%
\begin{figure*}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[height=2.2in]{images/top5.png}}
\caption{Comparison of Top-5 errors between the FP32 reference and the approximate multipliers.}
\label{fig:top5}
\end{center}
\vskip -0.2in
\end{figure*}
\begin{figure*}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[height=2.2in]{images/top1.png}}
\caption{Comparison of Top-1 errors between the FP32 reference and the approximate multipliers.}
\label{fig:top1}
\end{center}
\vskip -0.2in
\end{figure*}
The target application is object classification with the ImageNet ILSVRC2012 validation dataset of 50,000 images.
Only single crops are used for the experiments because the C++ emulation of the approximate multipliers is very time-consuming compared to multiplication performed in actual hardware, so the presented CNN accuracies may differ from the original literature that uses 10-crop evaluation.
Table \ref{tab:list_cnn} shows the list of CNN models used for the experiments, and the networks that use batch normalization and grouped convolutions are marked for comparative discussion.
The pre-trained CNN models for the experiments are publicly available from online repositories, and the source is indicated with each model.
Any training or retraining of a network model is purposefully avoided to achieve reproducibility and to show that the proposed methodology works with many network models with only minor scaling of batch normalization parameters.
The experiments assume quantization to 32 fixed-point bits without rounding (statically assigned to 16 integer and 16 fractional bits) as it is sufficient for all the tested network models.
As discussed in Section \ref{sec:prelim}, approximate multiplication is an orthogonal approach to quantization and we used generous quantization to minimize the quantization errors and study the effects of approximate multiplication in isolation, in order to clearly evaluate the hypothesis presented in this paper.
This paper focuses on establishing approximate multiplication as a viable approach, and combining various quantization methods with approximate multiplication is beyond the scope of this paper.
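The assumed quantization can be sketched as follows (our illustrative emulation; truncation toward zero with saturation is one plausible reading of the static 16.16 assignment without rounding):

```python
import numpy as np

def quantize_q16_16(x):
    # 32-bit fixed point with 16 integer and 16 fractional bits,
    # truncated toward zero (no rounding), saturating at the range limits.
    scaled = np.clip(np.trunc(np.asarray(x, dtype=np.float64) * 2 ** 16),
                     -2 ** 31, 2 ** 31 - 1)
    return scaled / 2 ** 16
```

The generous 16 fractional bits keep the quantization error far below the error of the approximate multipliers, so the two effects can be studied in isolation.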
\subsection{Impact of Approximate Multiplication on CNNs}
\label{subsec:expvari}
Fig. \ref{fig:top5} and \ref{fig:top1} show the Top-5 and Top-1 errors when the approximate multipliers are applied to the CNNs, compared against the FP32 reference values.
For the networks with conventional convolution, the studied approximate multipliers produce predictions that are nearly as accurate as the exact FP32 floating-point as they show Top-5 errors within 0.2\% compared to the reference values, except for Mitch-$w$6 on Inception-ResNet-v2 (0.5\%) and the networks without batch normalization.
On the contrary, the CNNs with grouped convolution suffer degraded accuracies when there are errors in multiplications, from approximate multiplication as well as bfloat16.
The difference of CNN accuracies between different convolution types supports the hypothesis presented in Section \ref{sec:errorconv}.
In order to demonstrate the increased variance of error for grouped and depthwise convolution, all convolution outputs are extracted for the first 100 sample images of the ILSVRC2012 validation set with FP32 and Mitch-$w$6 multiplications. The errors from approximate multiplication are measured by comparing the results. The variance of accumulated error within each channel is measured as well as the variance between the convolution outputs.
The geometric means are taken across all channels as channels had wildly varying ranges of values.
Table \ref{tab:varierr} shows the measured values for various CNNs and it demonstrates the increased variance of accumulated error for grouped and depthwise convolutions as discussed in Section \ref{subsec:grouped}.
The conventional convolution results also provide the evidence that the accumulated errors have much less variance compared to the distribution of outputs, and therefore have less impact on the functionality of feature detection.
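The per-channel measurement can be sketched as follows (our illustrative reconstruction of the procedure, not the actual experiment code):

```python
import numpy as np

def error_variance_stats(exact, approx):
    # exact and approx are convolution outputs shaped (channels, H, W),
    # obtained with exact and approximate multiplication respectively.
    c = exact.shape[0]
    err_var = (approx - exact).reshape(c, -1).var(axis=1)
    out_var = exact.reshape(c, -1).var(axis=1)
    def geomean(v):
        # geometric mean across channels, as channels have
        # wildly varying ranges of values
        return float(np.exp(np.log(v + 1e-20).mean()))
    return geomean(err_var), geomean(out_var)
```

The ratio of the two returned values corresponds to the percentage column of Table \ref{tab:varierr}.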
\begin{table}
\centering
\scriptsize
\caption{Measured error variance with Mitch-$w$6}
\vskip 0.1in
\renewcommand\tabcolsep{3pt}
\renewcommand\arraystretch{1.5}
\begin{tabular}{l@{\hspace{7.0pt}}lcc@{\hspace{7.0pt}}ccc}
\toprule[0.8pt]
\textbf{Conv. Type} &
\multicolumn{1}{l}{\textbf{Network}} &
\multicolumn{1}{c}{\textbf{Error Vari.}} &
\multicolumn{1}{c}{\textbf{Output Vari.}} &
\textbf{Pct.}
\\
\cmidrule[0.5pt](l{2pt}r{2pt}){1-5}
\textbf{Conventional} &
\textbf{ResNet-50} &
2.31E-3 &
6.13E-2 &
3.8\%
\\
&
\textbf{ResNet-101} &
1.69E-3 &
3.52E-2 &
4.8\%
\\
&
\textbf{ResNet-152} &
1.50E-3 &
2.72E-2 &
5.5\%
\\
&
\textbf{Inception-v4} &
6.79E-3 &
1.22E-1 &
5.6\%
\\
&
\textbf{Inception-ResNet-v2} &
1.18E-3 &
1.85E-2 &
6.3\%
\\
\midrule[0.4pt]
\textbf{Grouped} &
\textbf{ResNeXt-50-32x4d} &
1.50E-4 &
1.35E-3 &
11.2\%
\\
\midrule[0.4pt]
\textbf{Depthwise} &
\textbf{Xception} &
1.81E-2 &
8.91E-2 &
20.4\%
\\
&
\textbf{MobileNetV2} &
2.00E-2 &
1.34E-1 &
14.9\%
\\
\bottomrule[0.8pt]
\end{tabular}%
\label{tab:varierr}%
\vskip -0.1in
\end{table}%
While the 100 images may seem like a small number of samples, the geometric means are actually taken across millions of convolution feature maps produced from the images.
The samples include sufficient numbers of data points to demonstrate the point.
It is extremely difficult to process the entire dataset because of the large amount of internal data generated by CNNs.
Changing the sample size had little effect on the observation and the samples likely represent the behavior of the entire set for these models.
The measured variances in Table \ref{tab:varierr} do not directly correlate to the performance of Mitch-$w$6 in Fig. \ref{fig:top5} and \ref{fig:top1} because Table \ref{tab:varierr} only shows the error variance within each channel and does not account for the error variance across channels. The approximate multiplication in ResNeXt-50-32x4d causes more degradation in the prediction accuracy because ResNeXt networks have many branches in their architectures where different amounts of error accumulate. The Inception networks have relatively shorter branches and show slightly more degradation compared to the ResNet models that have none. The theoretical principle discussed in Section \ref{subsec:impact_on_conv} agrees with this analysis, though Table \ref{tab:varierr} could not capture these differences.
When the convergence of errors diminishes for grouped and depthwise convolutions, the outcomes become statistically uncertain and each CNN model may favor different approximate multipliers depending on their error patterns. DRUM6 has a different error pattern compared to Mitch-$w$6 and it performs worse than Mitch-$w$6 on the ResNeXt50 model despite the fact that it generally produces smaller errors, as shown in Fig. \ref{fig:top5} and \ref{fig:top1}. On the contrary, DRUM6 performs very well on the Xception model and it is conjectured that the errors from DRUM6 work well with this particular pre-trained model.
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{images/withwithoutfc.png}}
\caption{Low impact on CNN accuracies when FC layers do not use approximate multiplication. The experiments are performed with Mitch-$w$6.}
\label{fig:withwithoutfc}
\end{center}
\vskip -0.2in
\end{figure}
For CNNs with grouped convolutions, a sufficiently accurate approximate multiplier can still be used to perform accurate inferences, as demonstrated with the truncated iterative log multiplier in Fig. \ref{fig:top5} and \ref{fig:top1}.
When the converging effect of accumulation is reduced, the variance of accumulated error may be reduced by producing a smaller range of errors at the cost of more hardware resources.
Fig. \ref{fig:withwithoutfc} shows the effects on CNN accuracies when the FC layers perform exact multiplication instead of approximate multiplication.
Despite the fact that approximating later layers in CNNs has more influence on the outputs than approximating earlier layers \cite{sarwar2016multiplier,kim2017power}, Fig. \ref{fig:withwithoutfc} demonstrates that approximating the FC layers at the end of CNNs has minimal impact on CNN accuracies.
The FC layers have a large number of accumulations for each output, and the higher convergence of error preserves the relative order of the final outputs.
This is the desirable property of approximate multiplication for CNN inferences as discussed in Section \ref{subsec:impact_on_conv}.
\subsection{Effect of Batch Normalization}
\label{subsec:expbat}
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[height=1.6in]{images/vgg16.png}}
\caption{Accumulation of mean error on VGG16.}
\label{fig:accum}
\end{center}
\vskip -0.2in
\end{figure}
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[height=1.6in]{images/resnet50.png}}
\caption{Effect of batch normalization on ResNet-50.}
\label{fig:resnet50}
\end{center}
\vskip -0.2in
\end{figure}
Fig. \ref{fig:accum} demonstrates the accumulation of mean error in VGG16 with Mitch-$w$6, averaged over the 100 sample images.
Because the network lacks batch normalization, the deeper layers receive the inputs that are repeatedly scaled down when the errors in multiplication are biased.
It explains the poor performance of Mitch-$w$6 on VGG16 and GoogLeNet in Fig. \ref{fig:top5}, while the unbiased DRUM6 performs well.
The last three layers that disrupt the trend are the FC layers where the added bias values become more significant when the inputs have reduced magnitudes.
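A back-of-envelope computation, under the assumption that the mean error compounds multiplicatively from one convolution layer to the next, shows how quickly feature magnitudes shrink without batch normalization:

```python
def magnitude_after_layers(e, layers):
    # Assumes the mean error e compounds multiplicatively per layer,
    # so the feature magnitude scales by (1 + e)^layers.
    return (1.0 + e) ** layers

# With e = -0.059 (Mitch-w6), features shrink below 5 percent of
# their original magnitude after 50 convolution layers.
```

This simple model is consistent with the steady downward trend observed for VGG16 in Fig. \ref{fig:accum}.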
Fig. \ref{fig:resnet50} shows the effect of batch normalization with properly adjusted parameters, on ResNet-50 with Mitch-$w$6 averaged over the 100 sample images.
For Mitch-$w$6 with a mean error of -5.9\%, the mean and variance parameters in batch normalization are scaled by 0.941 and 0.885 respectively.
With the proper adjustments, batch normalization eliminates the accumulation of mean error across layers and helps approximate multiplication work with deep CNNs.
Fig. \ref{fig:resnet50} shows that the mean error per layer hovers around the mean error of Mitch-$w$6, which supports the convergence of accumulated error as well as the effectiveness of the adjusted batch normalization.
Failing to adjust the parameters not only accumulates error in deeper layers but also becomes an additional source of error with incorrect redistribution of feature maps, resulting in an unstable pattern of accumulated error.
Table \ref{tab:resnet50} shows the impact on the Top-1 and Top-5 errors of the ResNet models.
Incorrect batch normalization results in performance degradation while the corrected batch normalization layers help approximate multiplication perform well for deep ResNet models.
\begin{table}
\centering
\scriptsize
\caption{Impact of batch normalization adjustment with Mitch-$w$6 on ResNet models}
\vskip 0.1in
\renewcommand\tabcolsep{10pt}
\renewcommand\arraystretch{1.4}
\begin{tabular}{lcccccc}
\toprule[0.8pt]
\multicolumn{1}{r@{\hspace{8pt}}}{} &
\multicolumn{2}{c}{\textbf{Top-1 Error}} &
\multicolumn{2}{c}{\textbf{Top-5 Error}}
\\
&
\multicolumn{1}{c}{\textbf{Original}} &
\multicolumn{1}{c}{\textbf{Adjusted}} &
\multicolumn{1}{c}{\textbf{Original}} &
\multicolumn{1}{c}{\textbf{Adjusted}}
\\
\cmidrule[0.5pt](l{2pt}r{2pt}){2-3}
\cmidrule[0.5pt](l{2pt}r{2pt}){4-5}
\textbf{ResNet-50} &
31.7\% &
27.2\% &
10.5\% &
9.0\%
\\
\textbf{ResNet-101} &
31.8\% &
26.0\% &
12.0\% &
8.2\%
\\
\textbf{ResNet-152} &
31.2\% &
25.2\% &
11.5\% &
7.7\%
\\
\bottomrule[0.8pt]
\end{tabular}%
\label{tab:resnet50}%
\vskip -0.1in
\end{table}%
\section{Comparison of Costs}
\label{sec:costs}
Using the bfloat16 format significantly reduces the hardware costs compared to the FP32 floating-point format and has been widely adopted in Machine Learning hardware accelerators.
While its ease of use and the ability to perform training as well as inference are undeniably advantageous, its arithmetic units are slower and consume more energy compared to the discussed multipliers based on the fixed-point format.
It is plausible to have a use-case scenario where embedded systems perform only inferences under strict design constraints, while communicating to datacenters where training occurs.
This section presents a brief comparison of the hardware costs against a bfloat16 MAC unit to give an idea of the potential benefits of the approximate log multiplication.
Table \ref{tab:costcomp} compares the costs among the MAC units of FP32, bfloat16 and the Mitch-$w$, as synthesized with a 32nm standard library from Synopsys.
The Mitch-$w$6 HDL code is available in \cite{log_source}, the FP32 MAC design is from \cite{del2014ultra}, and we modified the FP32 design to create the bfloat16 MAC.
Synopsys Design Compiler automatically synthesized the fixed-point MAC, and Mitch-$w$6 is followed by an exact fixed-point adder.
The 32-bit Mitch-$w$6 design represents the circuit used for the experiments while the 16-bit design represents what is potentially achievable with the proper quantization such as \cite{jacob2018quantization}.
It is clear from Table \ref{tab:costcomp} that applying approximate multiplication to CNNs can save a significant amount of resources for inferences.
The presented figures do not consider the potential benefits when adopting multiple log multipliers, where additional optimization for resource sharing can be performed depending on the design of the hardware accelerator.
Oliveira et al. \cite{oliveira2019design} proposed that certain parts of the log multiplier can be removed or shared between multiple instances of MAC units depending on the accelerator design.
\begin{table}
\centering
\scriptsize
\caption{Hardware costs of FP32, bfloat16, fixed-point and Mitch-$w$6 MAC units}
\vskip 0.1in
\renewcommand\tabcolsep{3pt}
\renewcommand\arraystretch{1.4}
\begin{tabular}{lcccccc}
\toprule[0.8pt]
&
\multicolumn{3}{c}{\textbf{N=16}} &
\multicolumn{3}{c}{\textbf{N=32}}
\\
&
\multicolumn{1}{c}{\textbf{bfloat16}} &
\multicolumn{1}{c}{\textbf{Fixed}} &
\multicolumn{1}{c}{\textbf{Mitch-$w$6}} &
\multicolumn{1}{c}{\textbf{FP32}} &
\multicolumn{1}{c}{\textbf{Fixed}} &
\multicolumn{1}{c}{\textbf{Mitch-$w$6}}
\\
\cmidrule[0.5pt](l{2pt}r{2pt}){2-4}
\cmidrule[0.5pt](l{2pt}r{2pt}){5-7}
\textbf{Delay (ns)} &
4.77 &
2.07 &
2.74 &
7.52 &
4.29 &
4.39
\\
\textbf{Power (mW)} &
1.47 &
1.17 &
0.50 &
5.80 &
4.36 &
0.98
\\
\textbf{Energy (pJ)} &
7.01 &
2.42 &
1.37 &
43.62 &
18.70 &
4.30
\\
\textbf{Energy vs. bfloat16} &
100\% &
35\% &
20\% &
622\% &
267\% &
61\%
\\
\bottomrule[0.8pt]
\end{tabular}%
\label{tab:costcomp}%
\vskip -0.1in
\end{table}%
\section{Related Works}
\label{sec:related}
There have been a number of previous works that applied approximate multipliers to CNN inferences.
This paper explains the underlying reason why some of these methods perform well despite the error and how to extend the methodologies to deep CNNs with batch normalization.
To the best of our knowledge, this is the first work to demonstrate that one approximate multiplier design can perform successful inferences on the various ResNet and Inception network models without retraining.
One study in \cite{hammad2018impact} applied various approximate multipliers with varying accuracies to the VGG network, and it provided more evidence that approximate multiplication was compatible with CNN inferences.
Their work included interesting experimental results that support our hypothesis.
They found that approximating the convolution layers with higher numbers of channels resulted in less degradation of CNN accuracy, and this agrees with our finding that variance of accumulated error decreases with more inter-channel accumulations.
The works presented in \cite{du2014leveraging,mrazek2016design,ansari2019improving, mrazek2019alwann,de2018designing} had used logic minimization to create the optimal approximate multipliers for each network model.
Logic minimization intentionally flips bits in the logic to reduce the size of the operators, and these techniques use heuristics to find the optimal targets. While these studies demonstrate promising results for improving the efficiency of CNN inferences, the heuristics involve the costly exploration of a large design space and do not ensure that the optimal multipliers for one situation would be optimal for another.
The Alphabet Set Multiplier proposed in \cite{sarwar2016multiplier} stores multiples of each multiplier value as alphabets and combines these alphabets to produce the products. Because the stored multiples require memory accesses, the authors eventually proposed the design with a single alphabet that had performed reasonably well for the simple datasets. However, the design was too inaccurate to handle the more complex dataset of ImageNet \cite{kim2018efficient}.
Approximate log multiplication from Mitchell's Algorithm had been applied to small CNN models in \cite{kim2018low, kim2018efficient, ansari2020improved}. The iterative log multipliers that increase accuracy by iterating log multiplication had been also studied \cite{lotrivc2012applicability, kim2019cost, kung2015power}.
They were mostly effective at performing CNN inferences, but the reason for the good performance largely remained unexplained.
This paper provides deeper understanding of the effects of approximate multiplication on CNNs.
The log multipliers should be distinguished from the log quantization presented in \cite{lee2017lognet, miyashita2016convolutional}.
The log quantization performs all operations in the log domain and suffers from inaccurate additions, which may explain why the performances drop for more complex networks.
Mitchell's Algorithm still performs exact additions in the fixed-point format, which helps maintain the CNN performance, as discussed in Section \ref{sec:errorconv}.
There are many other ways of approximating multiplication that had not been applied to deep CNNs, such as
\cite{liu2018design, salamat2018rnsnet, imani2018canna} among countless others. While we believe that the studied multiplier designs are the most promising, there are most likely other related opportunities for improving CNNs.
\section{Conclusion}
\label{sec:conclusion}
This paper provides a detailed explanation of why CNNs are resilient against the errors in multiplication.
Approximate multiplication favors the wide convolution layers with many input channels and batch normalization can be adjusted for deeper networks, making it a promising approach as the networks become wider and deeper to handle various real-world applications.
The proposed approximate multipliers show promising results for CNN architectures, and the arithmetic explanations provide a new and effective way for designing hardware accelerators.
They also help explain some of the phenomena observed in the related works while providing guidelines for extending to deeper CNNs with batch normalization.
The most widely applicable insight of this paper is that the multiplications in CNNs can be approximated while the additions have to be accurate.
The implications are far-reaching and may help analyze and justify a variety of other approximation techniques that were previously only supported by empirical evidence.
In this paper, we provide the arithmetic reason behind the success of bfloat16 approximation \cite{wang2019bfloat16} and also conjecture that log quantization \cite{miyashita2016convolutional} loses CNN accuracy because of inaccurate additions.
For quantization, the convergence theory can justify the reduced number of bits used for weights while accumulations are done with a higher number of bits.
The findings may help justify the analog processing of neural networks where the multiplication resistors may have some process variation \cite{shim2016low}.
These are only a few examples, and new approximation techniques may be evaluated in a similar fashion in terms of the variance of accumulated error.
Various studies on approximation of CNN inferences have relied only on the end results as the inner workings of CNNs are often treated as black boxes.
This paper seeks to contribute towards a more analytical understanding of CNN approximation based on arithmetic.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work has been partially supported by the CPCC at UCI, the Community of Madrid under grant S2018/TCS-4423, the EU (FEDER) and the Spanish MINECO under grant RTI2018-093684-B-I00, and the NRF of South Korea funded by the Ministry of Education (2017R1D1A1B03030348).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Keyword spotting (KWS) is a frequently used technique in spoken data processing whose goal is to detect selected words or phrases in speech.
It can be applied off-line for fast search in recorded utterances (e.g. telephone calls analysed by police~\cite{p1}), large spoken corpora (like broadcast archives~\cite{p2}), or data collected by call-centres~\cite{p3}.
There are also on-line applications, namely for instant alerting, used in media monitoring~\cite{p4} or in keyword activated mobile services~\cite{p5}.
The performance of a KWS system is evaluated from two viewpoints.
The primary one is detection reliability, which aims at missing as few keywords occurring in the audio signal as possible, i.e., achieving a low miss-detection (MD) rate, while keeping the number of false alarms (FA) as low as possible.
The second criterion is speed, as most applications require either instant reactions or are aimed at huge data volumes (thousands of hours), where it is appreciated if the search takes only a small fraction of their duration.
The latter aspect is often expressed as a real-time (RT) factor, which should be significantly smaller than 1.
There are several approaches to solve the KWS task~\cite{p6}.
The simplest and often the fastest one, usually denoted as an \textit{acoustic approach}, utilizes a strategy similar to continuous speech recognition but with a limited vocabulary made of the keywords only.
The sounds corresponding to other speech and noise are modelled and captured by filler units~\cite{p7}.
An \textit{LVCSR approach} employs a large-vocabulary continuous speech recognition (LVCSR) system that first transcribes the audio and then searches for the keywords in its text output or in its internal decoder hypotheses arranged in \textit{word lattices}~\cite{p8}.
This strategy takes into account both words from a large representative lexicon as well as inter-word context captured by a language model (LM).
However, it is always slower and fails if the keywords are not in the lexicon and/or in the LM.
A \textit{phoneme lattice approach} operates on a similar principle but with phonemes (usually represented by triphones) as the basic units.
The keywords are searched within the phoneme lattices~\cite{p9}.
The crucial part of all three major approaches consists in assigning a \textit{confidence score} to keyword candidates and setting thresholds for their acceptance or rejection. The basic strategies can be combined to get the best properties of each, as shown e.g. in~\cite{p10,p11}; in general, such combinations adopt a two-pass scheme.
The introduction of deep neural networks (DNN) into the speech processing domain has resulted in a significant improvement of acoustic models and therefore also in the accuracy of the LVCSR and phoneme based KWS systems.
Various architectures have been proposed and tested, such as feedforward DNNs~\cite{p12}, convolutional (CNN)~\cite{p13} and recurrent ones (RNN)~\cite{p14}.
A combination of the Long Short-Term Memory (LSTM) version of the latter together with the Connectionist Temporal Classification (CTC) method, which is an alternative to the classic hidden Markov model (HMM) approach, has become popular, too.
The CTC provides the location and scoring measure for any arbitrary phone sequence as presented e.g. in~\cite{p15}.
Moreover, modern machine learning strategies, such as training data augmentation or transfer learning, have enabled training KWS systems also for various signal conditions~\cite{p16} and for languages with low data resources~\cite{p17}.
The KWS system presented here is a combination of several aforementioned approaches and techniques.
It allows for searching any arbitrary keyword(s) using an HMM word-and-filler decoder that accepts acoustic models based on various types of DNNs, including feedforward sequential memory networks that are an efficient alternative to RNNs~\cite{p20}.
An audio signal is processed and searched within a single pass in a frame synchronous manner, which means that no intermediate data (such as lattices) need to be precomputed and stored.
This allows for very short processing time (under 0.01 RT) in an off-line mode.
Moreover, the execution time can be further reduced if the same signal is searched repeatedly with a different keyword list.
The system can operate also in an on-line mode, where keyword alerts are produced with a small latency.
In the following text, we will focus mainly on the speed optimization of the algorithms, which is the main and original contribution of this paper.
\section{Brief Description of Presented Keyword Spotting System}
The system models acoustic events in an audio signal by HMMs.
Their smallest units are states.
Phonemes and noises are modelled as 3-state sequences and the keywords as concatenations of the corresponding phoneme models.
All different 3-state models (i.e. physical triphones in a tied-state triphone model) also serve as the fillers.
Hence any audio signal can be modelled either as a sequence of the fillers, or - in presence of any of the keywords – as a sequence of the fillers and the keyword models.
During data processing, the most probable sequences are continuously built by the Viterbi decoder and if they contain keywords, these are located and further managed.
The complete KWS system is composed of three basic modules.
All run in a frame synchronous manner.
The first one – a \textit{signal processing} module - takes a frame of the signal and computes log-likelihoods for all the HMM states.
The second one – a \textit{state processing} module – controls Viterbi recombinations for all active keywords and filler states.
The third one – a \textit{spot managing} module – focuses on the last states of the keyword/filler models, computes the differences between the accumulated scores of the keywords and those of the best filler sequences, evaluates their confidence scores, and passes those exceeding a threshold on for further processing.
This scheme assures that the data is processed almost entirely in the forward direction with minimum need for look-back and storage of already processed data.
\section{KWS Speed and Memory Optimizations}
\label{sec:approach}
The presented work significantly extends the scheme proposed in~\cite{p18}.
Therefore, we will use a similar notation here when explaining the optimizations in the three modules.
The core of the system is a Viterbi decoder that handles keywords $w$ and fillers $v$ in the same way, i.e. as generalized units $u$.
\subsection{Signal Processing Module}
It computes likelihoods for each state (senone) using a trained neural network.
This is a standard operation which can be implemented either on a CPU, or on a GPU.
In the latter case, the computation may be more than 1000 times faster.
Yet, we propose another option that yields a significant reduction in the KWS execution time.
The speed of the decoder depends on the number of units that must be processed in each frame.
We cannot change the keyword number but let us see what can be done with the fillers.
Usually, their list is made of all different physical triphones, which means several thousand items.
If monophones were used instead, the number of fillers would equal the number of monophones, i.e., it would be smaller by two orders of magnitude and the decoder would run much faster, but obviously with worse performance.
We propose an optional alternative solution that takes advantage of both approaches.
We model the words and fillers by what we call quasi-monophones, which can be thought of as triphone states mapped to a monophone structure.
In each frame, every quasi-monophone state is assigned the highest likelihood among the triphone states mapped to it.
This simple triphone-to-monophone conversion can be easily implemented as an additional layer of the neural network that just takes max values from the mapped nodes in the previous layer.
The benefit is that the decoder handles a much smaller number of different states and namely fillers.
In the experimental section, we demonstrate the impact of this arrangement on KWS system’s speed and performance.
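The triphone-to-quasi-monophone max mapping described above can be sketched in a few lines. This is our own illustration, not the authors' C implementation; the function name, state indices, and values are arbitrary.

```python
# Sketch of the triphone-to-quasi-monophone conversion: every
# quasi-monophone state receives the maximum log-likelihood over the
# triphone states mapped to it (the "additional max layer" of the text).

def quasi_mono_likelihoods(triphone_loglik, state_map, n_quasi_states):
    """Collapse per-frame triphone log-likelihoods to quasi-monophone ones.

    triphone_loglik -- log-likelihoods, one per triphone state
    state_map       -- state_map[s] = index of the quasi-monophone state
                       that triphone state s is mapped to
    n_quasi_states  -- number of quasi-monophone states
    """
    out = [float("-inf")] * n_quasi_states
    for s, ll in enumerate(triphone_loglik):
        q = state_map[s]
        if ll > out[q]:
            out[q] = ll
    return out
```

In an actual network this max can be fused into the final layer, so the decoder only ever sees the small quasi-monophone state set.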
\subsection{State Processing Module}
The decoder controls a propagation of accumulated scores between adjacent states. At each frame $t$, new score $d$ is computed for each state $s$ of unit $u$ by adding log likelihood $L$ (provided by the previous module) to the higher of the scores in the predecessor states:
\begin{equation}
d(u,s,t) = L(s,t)+\max_{i=0,1}[d(u,s-i,t-1)]
\end{equation}
Let us denote the score in the unit’s end state $s_E$ as
\begin{equation}
D(u,t) = d(u,s_E,t)
\end{equation}
and $T(u,t)$ be the frame where this unit’s instance started.
Further, we denote two values $d_{best}$ and $D_{best}$:
\begin{equation}
d_{best}(t) = \max_{u,s}[d(u,s,t)]
\end{equation}
\begin{equation}
D_{best}(t) = \max_{u}[D(u,t)]
\end{equation}
The former value serves primarily for pruning, the latter is propagated to initial states $s_1$ of all units in the next frame:
\begin{equation}
d(u,s_{1},t+1) = L(s_1,t+1)+\max[D_{best}(t),d(u,s_{1},t)]
\end{equation}
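The per-frame recombination in the equations above can be illustrated for a single left-to-right unit as follows. This is a hedged sketch of ours; variable names and score values are illustrative, not taken from the system.

```python
def viterbi_step(prev_scores, logliks, d_best_entry):
    """One frame of left-to-right Viterbi recombination for one unit.

    prev_scores  -- accumulated scores d(u, s, t-1) for states s_1..s_E
    logliks      -- log-likelihoods L(s, t) for the same states
    d_best_entry -- D_best(t-1), propagated into the initial state
    """
    n = len(prev_scores)
    new = [0.0] * n
    # initial state: best of the self-loop and (re)entry from D_best(t-1)
    new[0] = logliks[0] + max(d_best_entry, prev_scores[0])
    # remaining states: max of self-loop and transition from predecessor
    for s in range(1, n):
        new[s] = logliks[s] + max(prev_scores[s], prev_scores[s - 1])
    return new
```

D(u,t) is then simply the last element of the returned list, and D_best(t) the maximum of those last elements over all units.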
\subsection{Spot Managing Module}
This module computes acoustic scores $S$ for all words $w$ that reached their last states.
This is done by subtracting these two accumulated scores:
\begin{equation}
\label{eq6}
S(w,t) = D(w,t) - D_{best}(T(w,t)-1)
\end{equation}
The word score $S(w,t)$ needs to be compared with score $S(v_{string},t)$ that would be achieved by the best filler string $v_{string}$ starting in frame $T(w,t)$ and ending in frame $t$.
\begin{equation}
\label{eq7}
R(w,t) = S(v_{string},t) - S(w,t)
\end{equation}
In~\cite{p18}, the first term in eq.~\ref{eq7} is computed by applying the Viterbi algorithm within the given frame span to the fillers only.
Here, we propose to approximate its value by this simple difference:
\begin{equation}
\label{eq8}
S(v_{string},t) \cong D_{best}(t) - D_{best}(T(w,t)-1)
\end{equation}
The left side of eq.~\ref{eq8} equals exactly the right one if the Viterbi backtracking path passes through frame $T(w,t)$, which can be quickly checked.
A large experimental evaluation showed that this happens in more than 90\,\% of cases.
In the remaining ones, the difference is so small that it has a negligible impact on further steps.
Hence, by substituting from eq.~\ref{eq6} and eq.~\ref{eq8} into eq.~\ref{eq7} we get:
\begin{equation}
\label{eq9}
R(w,t) = D_{best}(t) - D(w,t)
\end{equation}
The value of $R(w,t)$ is related to the confidence of word $w$ being detected in the given frame span.
We just need to normalize it and convert it to a human-understandable scale where number 100 means the highest possible confidence.
We do it in the following way:
\begin{equation}
\label{eq10}
C(w,t) = 100 - k\frac{R(w,t)}{(t-T(w,t))N_S(w)}
\end{equation}
The $R$ value is divided by the word duration (in frames) and by the number of HMM states $N_S(w)$ of the word, and the result is multiplied by constant $k$ before being subtracted from 100.
The constant influences the range of the confidence values.
We set it so that the values are easily interpretable by KWS system users (see section~\ref{sec:evalres}).
The previous analysis shows that the spot managing module can be made very simple and fast.
In each frame, it just computes eq.~\ref{eq9} and~\ref{eq10} and the candidates with the confidence scores higher than a set threshold are registered in a time-sliding buffer (10 to 20 frames long).
A simple filter running over the buffer content detects the keyword instance with the highest score and sends it to the output.
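Taken together, eqs.~\ref{eq9} and~\ref{eq10} reduce the per-candidate computation to a handful of operations. The sketch below is our own illustration; the constant $k$ and all example values are chosen arbitrarily, not taken from the system.

```python
def confidence(d_best_t, d_word_t, t, t_start, n_states, k=1.0):
    """Confidence C(w, t) of a keyword candidate ending at frame t.

    d_best_t -- D_best(t), best accumulated score over all units
    d_word_t -- D(w, t), accumulated score in the word's end state
    t_start  -- T(w, t), frame where this word instance started
    n_states -- N_S(w), number of HMM states of the word
    k        -- scaling constant (illustrative value)
    """
    r = d_best_t - d_word_t                        # eq. (9)
    duration = t - t_start                         # word duration in frames
    return 100.0 - k * r / (duration * n_states)   # eq. (10)
```

Candidates with a confidence above the threshold would then be pushed into the short time-sliding buffer described next.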
\subsection{Optimized Repeated Run}
In many practical applications, the same audio data is searched repeatedly, usually with different keyword lists (e.g. during police investigations).
In this case, the KWS system can run significantly faster if we store all likelihoods and two additional values ($d_{best}$ and $D_{best}$) per frame.
In the repeated run, the signal processing part is skipped over and the decoder can process only the keywords because all information needed for optimal pruning and confidence calculation is covered by the 2 above mentioned values.
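To give a rough idea of the storage cost of this caching (our own estimate, assuming a 10~ms frame shift, i.e., 100 frames per second, and 4-byte floats; neither is stated explicitly here): for the quasi-monophone model, the $48 \times 3 + 2 = 146$ values per frame reported later translate to roughly 210~MB per hour of audio.

```python
def storage_mb_per_hour(values_per_frame, fps=100, bytes_per_value=4):
    """Disk space (MB) needed to cache per-frame data for repeated runs.

    The frame rate and float width are assumptions for illustration.
    """
    return values_per_frame * bytes_per_value * fps * 3600 / 1e6

# quasi-monophone model: 48 fillers x 3 states + d_best + D_best per frame
print(storage_mb_per_hour(48 * 3 + 2))
```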
\section{System and Data for Evaluation}
\subsection{KWS System}
The KWS system used in the experiments is written in C language and runs on a PC (Intel Core i7-9700K).
In some tasks we employ also a GPU (GeForce RTX 2070 SUPER) for likelihood computation.
We tested 2 types of acoustic models (AM) based on neural networks.
Both accept 16 kHz audio signals, segmented into 25ms long frames and preprocessed to 40 filter bank coefficients.
The first uses a 5-layer feedforward DNN trained on some 1000 hours of Czech data (a mix of read and broadcast speech).
The second AM utilizes a bidirectional feedforward sequential memory network (BFSMN) similar to that described in~\cite{p20}.
We have been using it as an effective alternative to RNNs.
In our case, it has 11 layers, each covering 4 left and 4 right temporal contexts.
This AM was trained on the same source data augmented by about 400 hours of (originally) clean speech that passed through different codecs~\cite{p21}.
For both types of NNs we trained triphone AMs; for the second type, we also trained a monophone and a quasi-monophone version.
\subsection{Dataset for Evaluation}
\label{sec:dataset}
Three large datasets have been prepared for the evaluation experiments, each covering a different type of speech (see also Table~\ref{tab:dataset}).
The Interview dataset contains 10 complete Czech TV shows with two persons talking in a studio.
The Stream dataset is made of 30 shows from Internet TV Stream.
We selected shows with heavy background noise, e.g., Hudebni Masakry (Music Massacres in English).
The Call dataset covers 53 telephone communications with call-centers (in separated channels) and it is a mix of spontaneous (client) and mainly controlled (operator) speech.
All recordings have been carefully annotated with time information (10 ms resolution) added to each word.
\begin{table}[ht]
\centering
\caption{Datasets for evaluation and their main parameters.}\label{tab:dataset}
\begin{tabular}{|l|c|c|c|c|}
\hline
\bfseries Dataset & \bfseries Speech type & \bfseries Signal type & \bfseries Total duration [min] & \bfseries \# keywords\\
\hline
\hline
Interview & planned & studio & 272 & 3524 \\
\hline
Stream & informal & heavy noise & 157 & 1454 \\
\hline
Call & often spontaneous & telephone & 613 & 2935 \\
\hline
\end{tabular}
\end{table}
\section{Experimental Evaluation}
\label{sec:eval}
\subsection{Keyword List}
Our goal was to test the system under realistic conditions and, at the same time, to get statistically conclusive results.
A keyword list of 156 word lemmas with 555 derived forms was prepared for the experiments.
For example, in case of keyword ``David'' we included its derived forms ``David'', ``Davida'', ``Davidem'', ``Davidovi'', etc., in order to avoid false alarms caused by words being substrings of others.
The list was made by combining the 80 most frequent words occurring in each of the datasets; some were common to all sets and some appeared only in one.
The searched word forms had to be at least 4 phonemes long.
The mean length of the listed word forms was 6.9 phonemes.
The phonetic transcriptions were automatically extracted from a 500k-word lexicon used in our LVCSR system.
\subsection{Filler Lists}
The list of fillers was created automatically for each acoustic model.
The triphone DNN model generated 9210 fillers and the triphone BFSMN produced 10455 of them.
In contrast to these large numbers, the monophone and quasi-monophone BFSMN model had only 48 fillers (representing 40 phonemes + 8 noises).
\subsection{Evaluation conditions and metrics}
A word was considered correctly detected if the spotted word form belonged to the same lemma as the word occurring in the transcription at the same instant, with a tolerance of $\pm$0.5~s.
Otherwise it was counted as a false alarm.
For each experiment we computed Missed Detection (MD) and False Alarm (FA) rates as functions of the acceptance threshold value, and drew a Detection Error Tradeoff (DET) diagram with the Equal Error Rate (EER) point marked.
\subsection{Evaluation results}
\label{sec:evalres}
The Interview dataset was used as a development data, on which we experimented with various models, system arrangements and also user preferences.
In accordance with these, the internal constant $k$ in eq.~\ref{eq10} was set so that a confidence score of 75 lies close to the EER point.
The first part of the experiments focused on the accuracy of the created acoustic models.
We tested the triphone DNN and 3 versions of the BFSMN one.
Their performance is illustrated by DET curves in Fig.~\ref{fig1}, where also the EER values are displayed.
It is evident that the BFSMN-tri model performs significantly better than the DNN one, which is mainly due to its wider context span.
This is also a reason why even its monophone version has performance comparable to the DNN-tri one.
The proposed quasi-monophone BFSMN model shows the second best performance but the gap between it and the best one is not that crucial, especially if we take into account its additional benefits that will be discussed later.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.52]{tsd1046a.eps}
\caption{KWS results for the Interview dataset in form of DET curves drawn for 4 investigated neural network structures.} \label{fig1}
\end{figure}
Similar trends can be seen also in Fig.~\ref{fig2} and Fig.~\ref{fig3} where we compare the same models (excl. the monophone BFSMN) on the Stream and Call datasets.
In both cases, the performance of all the models was worse (compared to that on the Interview set), as can be seen from the positions of the curves and the EER values. This is due to the character of the speech and the signal quality, as explained in section~\ref{sec:dataset}.
Yet, we can notice the positive effect of the training of the BFSMN models on the augmented data (with various codecs), especially on the Call dataset.
Again, the gap between the best triphone and the proposed quasi-monophone version seems to be not that critical.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.52]{tsd1046b.eps}
\caption{DET curves compared for 3 models on the Stream dataset} \label{fig2}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.52]{tsd1046c.eps}
\caption{DET curves compared for 3 models on the Call dataset} \label{fig3}
\end{figure}
Now, we shall focus on the execution time of the proposed scheme.
As explained in section~\ref{sec:approach}, the three modules of the KWS system can be split into 2 parts: the first with the signal processing module, the second with the remaining two.
Both can run together on a PC (in a single thread), or if extremely fast execution is required, the former can be implemented on a GPU.
We tested both approaches and measured their RT factors.
Similar measurements (across all three datasets) were performed also for the second part, for all the proposed variants and operation modes (see Table~\ref{tab:times} for results).
The total RT factor is obtained by adding the values for selected options in each of the two parts.
\begin{table}[ht!]
\centering
\caption{Execution times for proposed KWS variants expressed as RT factors.}\label{tab:times}
\begin{tabular}{|l|c|}
\hline
\bfseries System part, variant, mode & \bfseries Real-Time factor\\
\hline
\hline
\multicolumn{2}{|c|}{Part 1 (signal proc. module)} \\
\hline
on CPU & 0.12 \\
\hline
on GPU & 0.0005 \\
\hline
\multicolumn{2}{|c|}{Part 2 (rest of KWS system)} \\
\hline
triphone BFSMN & 0.012 \\
\hline
quasi-mono BFSMN & 0.002 \\
\hline
triphone BFSMN, repeated & 0.009 \\
\hline
quasi-mono BFSMN, repeated & 0.001 \\
\hline
\end{tabular}
\end{table}
Let us recall that the proposed quasi-monophone model performs slightly worse but offers two practical benefits: a) a speed that can get close to 0.001 RT (if a GPU is used for likelihood computation) and b) small disk memory consumption in case of repeated runs (with different keywords), because only $48 \times 3 + 2 = 146$ float values per frame need to be stored.
Moreover, the speed of the proposed KWS system is only slightly influenced by the number of keywords.
A test made with 10,000 keywords (instead of the 555 used in the main experiments) showed only a twofold slowdown.
\section{Conclusion}
In this contribution we focus mainly on the speed aspect of a modern KWS system, but at the same time we aim at the best performance that is available thanks to the advances in deep neural networks.
The used BFSMN architecture has several benefits for practical usage.
In contrast to more popular RNNs, it can be trained efficiently and quickly on large amounts (several thousand hours) of audio while yielding performance comparable to more complex RNNs and LSTMs, as shown in~\cite{p20}.
Its phoneme accuracy is high (due to its large internal context), so it fits both acoustic KWS systems and standard speech-to-text LVCSR systems.
The latter means that it is well suited for a tandem KWS scheme where a user requires that the sections with detected keywords are immediately transcribed by a LVCSR system.
In our arrangement this can be done very effectively by reusing some of the precomputed data.
(Let us recall that if we use the quasi-monophones, their values are just max values from the original triphone neural network and hence both acoustic models can be implemented by the same network with an additional layer.)
The results presented in section~\ref{sec:eval} allow for designing an optimal configuration that takes into account the three main factors: accuracy, speed and cost.
If the main priority is accuracy rather than speed, the KWS system can run on a standard PC and process data with an RT factor of about 0.1.
When very large amounts of recordings must be processed within a very short time, the addition of a GPU and the adoption of the proposed quasi-monophone approach allow for completing the job in a time up to three orders of magnitude shorter than the audio duration.
We evaluated the performance on Czech datasets, as these were available with precise, human-checked transcriptions.
Obviously, the proposed architecture is language independent and we plan to utilize it for other languages investigated in our project.
\subsubsection*{Acknowledgments.}
This work was supported by the Technology Agency of the Czech Republic (Project No. TH03010018).
\subsection*{1. Proof of Theorem $1$}
For any three different
nodes $p_i, p_j, p_k$ in $\mathbb{R}^3$, the condition $\theta_i+\theta_j+\theta_k = \pi$ must hold.
The angle constraints can be rewritten as
\begin{align}\label{para1}
& w_{ik}d_{ik}d_{ij}\cos \theta_i + w_{ki}d_{ik}d_{jk}\cos \theta_k =0, \\
\label{para2}
& w_{ij}d_{ik}d_{ij}\cos \theta_i + w_{ji}d_{ij}d_{jk}\cos \theta_j =0, \\
\label{para3}
& w_{jk}d_{jk}d_{ij}\cos \theta_j + w_{kj}d_{ik}d_{jk}\cos \theta_k =0,
\end{align}
with $w_{ik}^2+w_{ki}^2 \neq 0$, $w_{ij}^2+w_{ji}^2 \neq 0$, and $w_{jk}^2+w_{kj}^2 \neq 0$.
First, we introduce \textit{Lemma 7} and \textit{Lemma 8} below for proving \text{Theorem 1}.
\textit{Lemma 7.} $p_i, p_j, p_k$ are non-colinear if the parameters in (1)-(3) satisfy $
w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} = 0$.
\begin{proof}
When $w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} = 0$, suppose without loss of generality that $w_{ik}=0$. Since $w_{ik}^2+w_{ki}^2 \neq 0$, we have $\theta_k = \frac{\pi}{2}$ and $\theta_i + \theta_j= \frac{\pi}{2}$ from \eqref{para1}. Hence, $p_i, p_j, p_k$ are non-colinear. Similarly, we can prove that $p_i, p_j, p_k$ are non-colinear if the parameter $w_{ij}=0$, or $w_{jk}=0$, or $w_{ki}=0$, or $w_{ji}=0$, or $w_{kj}=0$.
\end{proof}
If $
w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} \neq 0$, \eqref{para1}-\eqref{para3} can be rewritten as
\begin{align}\label{para21}
& \frac{\cos \theta_i}{\cos \theta_k} = - \frac{w_{ki}d_{jk}}{w_{ik}d_{ij}}, \\
\label{para22}
& \frac{\cos \theta_i}{\cos \theta_j} = - \frac{w_{ji}d_{jk}}{w_{ij}d_{ik}}, \\
\label{para23}
& \frac{\cos \theta_j}{\cos \theta_k} = - \frac{w_{kj}d_{ik}}{w_{jk}d_{ij}}.
\end{align}
From \eqref{para21} and \eqref{para22}, we have
\begin{align}\label{di}
& d_{ij} = -\frac{\cos \theta_k}{\cos \theta_i}\frac{w_{ki}}{w_{ik}}d_{jk},\\
\label{dj}
& d_{ik} = -\frac{\cos \theta_j}{\cos \theta_i}\frac{w_{ji}}{w_{ij}}d_{jk}.
\end{align}
Note that $\cos \theta_i = \frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{2d_{ij}d_{ik}}$, $\cos \theta_j = \frac{d_{ij}^2+d_{jk}^2-d_{ik}^2}{2d_{ij}d_{jk}}$, $\cos \theta_k = \frac{d_{ik}^2+d_{jk}^2-d_{ij}^2}{2d_{ik}d_{jk}}$. Combining \eqref{di} and \eqref{dj}, it yields
\begin{equation}\label{dd1}
\frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{d_{ik}^2+d_{jk}^2-d_{ij}^2} + \frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{d_{ij}^2+d_{jk}^2-d_{ik}^2} = -(\frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}}).
\end{equation}
\textit{Lemma 8.} When the parameters in (1)-(3) satisfy $
w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} \neq 0$,
$p_i, p_j, p_k $ are colinear if and only if
\begin{equation}
\frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}} =1, \text{or} \ \frac{w_{ij}}{w_{ji}} + \frac{w_{kj}}{w_{jk}} =1, \text{or} \ \frac{w_{ik}}{w_{ki}} +\frac{w_{jk}}{w_{kj}} =1.
\end{equation}
\begin{proof}
(Necessity) If $p_i, p_j, p_k $ are colinear, there are three cases: $\text{(\romannumeral1)}$ $\theta_i=\pi$, $\theta_j, \theta_k =0$; $\text{(\romannumeral2)}$ $\theta_j=\pi$, $\theta_i, \theta_k =0$; $\text{(\romannumeral3)}$ $\theta_k=\pi$, $\theta_i, \theta_j =0$.
For the case $\text{(\romannumeral1)}$ that $\theta_i=\pi$, $\theta_j, \theta_k =0$, we have $d_{ij}+d_{ik}= d_{jk}$. Substituting $d_{ij}+d_{ik}= d_{jk}$ into \eqref{dd1}, we get
\begin{equation}
\frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}}=1.
\end{equation}
Similarly, the conditions can be derived for the other two cases $\text{(\romannumeral2)}$-$\text{(\romannumeral3)}$.
(Sufficiency) If $\frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}}=1$, \eqref{dd1} becomes
\begin{equation}\label{dd2}
\frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{d_{ik}^2+d_{jk}^2-d_{ij}^2} + \frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{d_{ij}^2+d_{jk}^2-d_{ik}^2} = -1.
\end{equation}
Then, \eqref{dd2} can be rewritten as
\begin{equation}\label{dd3}
(d_{ij}^2+d_{ik}^2-d_{jk}^2)^2 = 4d_{ik}^2d_{ij}^2.
\end{equation}
Since $\cos \theta_i = \frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{2d_{ij}d_{ik}}$, \eqref{dd3} becomes
\begin{equation}
4d_{ij}^2d_{ik}^2\cos^2 \theta_i = 4d_{ij}^2d_{ik}^2, \rightarrow \cos^2 \theta_i = 1.
\end{equation}
Hence, $\theta_i = 0$ or $\pi$, i.e., $p_i, p_j, p_k $ must be colinear. Similarly, we can prove that $p_i, p_j, p_k $ must be colinear for the other two cases $\frac{w_{ij}}{w_{ji}} + \frac{w_{kj}}{w_{jk}} =1 \ \text{and} \ \frac{w_{ik}}{w_{ki}} +\frac{w_{jk}}{w_{kj}} =1$.
\end{proof}
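A quick numeric check of the sufficiency argument (our own illustration, with distances chosen arbitrarily): for a colinear configuration with $d_{ij}+d_{ik}=d_{jk}$, both sides of \eqref{dd3} agree and $\theta_i = \pi$ follows.

```python
import math

# Colinear configuration: p_i lies between p_j and p_k on a line,
# so d_ij + d_ik = d_jk and theta_i should come out as pi.
d_ij, d_ik = 2.0, 3.0
d_jk = d_ij + d_ik

lhs = (d_ij**2 + d_ik**2 - d_jk**2) ** 2   # left side of the identity
rhs = 4 * d_ik**2 * d_ij**2                # right side of the identity
cos_i = (d_ij**2 + d_ik**2 - d_jk**2) / (2 * d_ij * d_ik)
theta_i = math.acos(cos_i)
```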
Next, we will prove that the angles $\theta_i, \theta_j, \theta_k \in [0, \pi] $ are determined uniquely by the parameters $w_{ik}, w_{ki}, w_{ij}, w_{ji}, $ $ w_{jk}, w_{kj}$ in (1)-(3). From \textit{Lemma 7} and \textit{Lemma 8}, we can know that
there are only three cases for (1)-(3):
\begin{enumerate}[(i)]
\item $
w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} = 0$;
\item $
w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} \neq 0$, and $\frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}} =1$, \text{or} \ $\frac{w_{ij}}{w_{ji}} + \frac{w_{kj}}{w_{jk}} =1$, \text{or} \ $\frac{w_{ik}}{w_{ki}} +\frac{w_{jk}}{w_{kj}} =1$;
\item $
w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} \neq 0$, and $ \frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}}, \frac{w_{ij}}{w_{ji}} + \frac{w_{kj}}{w_{jk}}, \frac{w_{ik}}{w_{ki}} +\frac{w_{jk}}{w_{kj}} \neq 1$.
\end{enumerate}
The above three cases $(\text{\romannumeral1})\!-\!(\text{\romannumeral3})$ are analyzed below.
For the case $(\text{\romannumeral1})$,
from \textit{Lemma 7}, we can know that $p_i, p_j, p_k$ are non-colinear and form a triangle $\bigtriangleup_{ijk}(p)$.
Suppose without loss of generality that $w_{ik}=0$. Since $w_{ik}^2+w_{ki}^2 \neq 0$, we have $\theta_k = \frac{\pi}{2}$ and $\theta_i + \theta_j= \frac{\pi}{2}$ from \eqref{para1}.
Since $\theta_i + \theta_j = \frac{\pi}{2}$, we have
$w_{ij}\cdot w_{ji} < 0$ from \eqref{para22}. According to the sine rule,
$\frac{d_{jk}}{d_{ik}}= \frac{\sin \theta_i}{\sin \theta_j}$. Then, \eqref{para22} becomes
\begin{equation}\label{tan}
\frac{\tan \theta_i}{\tan \theta_j} = - \frac{w_{ij}}{w_{ji}}.
\end{equation}
Since $\theta_j= \frac{\pi}{2}-\theta_i$, from \eqref{tan}, we have
\begin{equation}
\tan \theta_i = \sqrt{- \frac{w_{ij}}{w_{ji}}}, \rightarrow \theta_i = \arctan \sqrt{- \frac{w_{ij}}{w_{ji}}}.
\end{equation}
Similarly, we can prove that $\theta_i, \theta_j, \theta_k$ can be determined uniquely if the parameter $w_{ij}, w_{jk}, w_{ki}, w_{ji},$ or $w_{kj}$ equals $0$.
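As an illustrative check of the arctan expression above (the triangle is our own choice): in a 30-60-90 triangle, \eqref{tan} gives $-w_{ij}/w_{ji} = \tan\theta_i/\tan\theta_j = 1/3$, and the formula recovers $\theta_i = 30^\circ$.

```python
import math

# 30-60-90 triangle: theta_k = 90 deg, so theta_i + theta_j = 90 deg.
theta_i, theta_j = math.radians(30), math.radians(60)

ratio = math.tan(theta_i) / math.tan(theta_j)  # equals -w_ij / w_ji
recovered = math.atan(math.sqrt(ratio))        # arctan sqrt(-w_ij / w_ji)
```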
For the case $(\text{\romannumeral2})$, from \textit{Lemma 8}, we can know that $p_i, p_j, p_k$ are colinear. Two of $\theta_i, \theta_j, \theta_k$ must be $0$. If $w_{kj}w_{jk}<0$, i.e., $\frac{\cos \theta_j}{\cos \theta_k}>0$ from \eqref{para23},
we have
$\theta_i=\pi$, $\theta_j, \theta_k =0$. Similarly, we have $\theta_j=\pi$, $\theta_i, \theta_k =0$ if $w_{ki}w_{ik}<0$, and
$\theta_k=\pi$, $\theta_i, \theta_j =0$ if $w_{ji}w_{ij}<0$.
For the case $(\text{\romannumeral3})$, from \textit{Lemma 8}, we can know that $p_i, p_j, p_k$ are non-colinear and form a triangle $\bigtriangleup_{ijk}(p)$. For this triangle $\bigtriangleup_{ijk}(p)$,
at most one of $\theta_i, \theta_j, \theta_k$ is an obtuse angle. Hence,
there are only four possible cases: $(\text{a})$ $w_{ki}w_{ik}, w_{ji}w_{ij}, w_{kj}w_{jk} <0$; $(\text{b})$ $w_{ki}w_{ik}, w_{ji}w_{ij}>0, w_{kj}w_{jk} <0$; $(\text{c})$ $w_{ki}w_{ik}, w_{kj}w_{jk}>0, w_{ji}w_{ij}<0$; $(\text{d})$ $w_{ji}w_{ij}, w_{kj}w_{jk} >0, w_{ki}w_{ik}<0$.
For the case $(\text{a})$, we have $\theta_i, \theta_j, \theta_k < \frac{\pi}{2}$. From \eqref{para21} and \eqref{para22}, we have
\begin{equation}\label{trans}
\tan \theta_k = -\frac{w_{ki} }{w_{ik}} \tan \theta_i, \ \ \tan \theta_j = -\frac{w_{ji} }{w_{ij}} \tan \theta_i.
\end{equation}
Note that $\tan \theta_i = \tan (\pi- \theta_j - \theta_k)= \frac{\tan \theta_j +\tan \theta_k}{\tan \theta_j\tan \theta_k -1}$. Based on \eqref{trans},
we have
\begin{equation}
\tan \theta_i = \sqrt{\frac{1-\frac{w_{ki}}{w_{ik}}- \frac{w_{ji}}{w_{ij}}}{\frac{w_{ki}w_{ji}}{w_{ik}w_{ij}}}}.
\end{equation}
Then, we can obtain the angle $\theta_i$ by
\begin{equation}
\theta_i = \arctan \sqrt{\frac{1-\frac{w_{ki}}{w_{ik}}- \frac{w_{ji}}{w_{ij}}}{\frac{w_{ki}w_{ji}}{w_{ik}w_{ij}}}}.
\end{equation}
Similarly, the angles $\theta_j$ and $\theta_k$ can also be obtained. In the same way, we can prove that $\theta_i, \theta_j, \theta_k$ are determined uniquely by the parameters $w_{ik}, w_{ki}, w_{ij}, w_{ji}, w_{jk}, w_{kj}$ for the cases $(\text{b})$-$(\text{d})$.
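For instance (a numeric check of our own), in an equilateral triangle \eqref{trans} gives $w_{ki}/w_{ik} = w_{ji}/w_{ij} = -1$, and the closed-form expression indeed returns $\theta_i = 60^\circ$:

```python
import math

theta = math.radians(60)                    # equilateral triangle
r_ki = -math.tan(theta) / math.tan(theta)   # w_ki / w_ik from the tan relations
r_ji = -math.tan(theta) / math.tan(theta)   # w_ji / w_ij from the tan relations

# closed-form expression for theta_i derived in the text
theta_i = math.atan(math.sqrt((1 - r_ki - r_ji) / (r_ki * r_ji)))
```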
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{fbearing.pdf}
\caption{3-D local-relative-bearing-based network. }
\label{moti1}
\end{figure}
\subsection*{2. Proof of \textit{Lemma} $2$}
\begin{proof}
Since $ \mu_{ij}e_{ij}\!+\! \mu_{ik}e_{ik}\!+\! \mu_{ih}e_{ih}\!+\! \mu_{il}e_{il}=\mathbf{0}$ and $w_{ik}e_{ik}^Te_{ij}+w_{ki}e_{ki}^Te_{kj}=0$,
for the scaling space $S_s$, it is straightforward that $\eta_d^Tp =\mathbf{0}$ and $\eta_r^Tp =0$. For the translation space $S_t$, we have $\eta_d^T( \mathbf{1}_n\otimes {I}_3)=\mathbf{0}$ and $\eta_r^T( \mathbf{1}_n\otimes {I}_3)=\mathbf{0}$. For the rotation space $S_r= \{(I_n \otimes A)p, A+A^T =\mathbf{0}, A \in \mathbb{R}^{3 \times 3} \}$, it follows that $\eta_d^T (I_n \otimes A) p = A (\mu_{ij}e_{ij}\!+\! \mu_{ik}e_{ik}\!+\! \mu_{ih}e_{ih}\!+\! \mu_{il}e_{il})=\mathbf{0}$ and
\begin{equation}
\begin{array}{ll}
\eta_r^T (I_n \otimes A) p\\
= w_{ik}p_i^T(A+A^T)p_i \!+\! (w_{ki}\!-\!w_{ik})p_j^T(A\!+\!A^T)p_i
\\
-(w_{ik}\!+\!w_{ki})p_k^T(A\!+\!A^T)p_i + (w_{ik}\!-\!w_{ki})p_k^T(A\!+\!A^T)p_j \\
+w_{ki}p_k^T(A\!+\!A^T)p_k =0.
\end{array}
\end{equation}
Then, the conclusion follows.
\end{proof}
\subsection*{3. Local-relative-bearing-based Displacement Constraint}
The unit vector $g_{ij} = \frac{e_{ij}}{d_{ij}} \in \mathbb{R}^3$ is the relative bearing of $p_j$ with respect to $p_i$ in $\Sigma_g$. For the node $i$ and its neighbors $j, k, h, l$ in $\mathbb{R}^3$,
the matrix $g_i=(g_{ij}, g_{ik}, g_{ih}, g_{il}) \in \mathbb{R}^{3 \times 4}$ is a wide matrix, so it has a nontrivial null space. Hence, there must be a non-zero vector $ \bar \mu_i=(\bar \mu_{ij}, \bar \mu_{ik}, \bar \mu_{ih}, \bar \mu_{il})^T \in \mathbb{R}^4$ such that $g_i\bar \mu_i= \mathbf{0}$, i.e.,
\begin{equation}\label{root1}
\bar \mu_{ij}g_{ij}+\bar \mu_{ik}g_{ik}+\bar \mu_{ih}g_{ih} + \bar \mu_{il}g_{il}= \mathbf{0},
\end{equation}
where $\bar \mu_{ij}^2+\bar \mu_{ik}^2+\bar \mu_{ih}^2+\bar \mu_{il}^2 \neq 0$.
The equation $g_i\bar \mu_i= \mathbf{0}$ is a bearing constraint, from which a displacement constraint can be obtained as follows. The non-zero vector $(\bar \mu_{ij}, \bar \mu_{ik}, \bar \mu_{ih}, \bar \mu_{il})^T$ can be calculated from the local relative bearing measurements $g_{ij}^{i}, g_{ik}^{i}, g_{ih}^{i}, g_{il}^{i}$ by solving the following equation
\begin{equation}\label{wmi1}
\left[ \!
\begin{array}{c c c c}
g_{ij}^{i} & g_{ik}^{i} & g_{ih}^{i} & g_{il}^{i} \\
\end{array}
\right] \left[ \!
\begin{array}{c}
\bar \mu_{ij} \\
\bar \mu_{ik} \\
\bar \mu_{ih} \\
\bar \mu_{il}
\end{array}
\right] = \mathbf{0}.
\end{equation}
Note that \eqref{root1} can be rewritten as
\begin{equation}\label{loca1}
\bar \mu_{ij}\frac{e_{ij}}{d_{ij}}+\bar \mu_{ik}\frac{e_{ik}}{d_{ik}}+\bar \mu_{ih}\frac{e_{ih}}{d_{ih}} + \bar \mu_{il}\frac{e_{il}}{d_{il}} = \mathbf{0}.
\end{equation}
\begin{assumption}\label{ad3}
No two nodes are collocated in $\mathbb{R}^3$. Each anchor node has at least two neighboring anchor nodes, and
each free node has at least four neighboring nodes. The free node and its neighbors are non-colinear.
\end{assumption}
Under \text{Assumption \ref{ad3}}, without loss of generality, suppose node $l$ is not colinear with nodes $i,j,k,h$, as shown in Fig. \ref{moti1}.
The angles among the nodes $p_i, p_j, p_k, p_h, p_l$ are denoted by $\xi_{ilj} \!=\! \angle p_ip_lp_j, \xi_{ilk} \!=\! \angle p_ip_lp_k, \xi_{ilh} \!=\! \angle p_ip_lp_h, \xi_{ijl} \!=\! \angle p_ip_jp_l, \xi_{ikl} \!=\! \angle p_ip_kp_l, \xi_{ihl} \!=\! \angle p_ip_hp_l$. Note that these angles can be obtained by only using the local relative bearing measurements. For example, $\cos \xi_{ilj} = g_{li}^Tg_{lj}= {g^{l}_{li}}^TQ_l^TQ_lg_{lj}^{l} ={g_{li}^{l}}^Tg_{lj}^l$.
According to the sine rule,
$\frac{d_{il}}{d_{ij}}= \frac{\sin \xi_{ijl}}{\sin \xi_{ilj}}, \frac{d_{il}}{d_{ik}}= \frac{\sin \xi_{ikl}}{\sin \xi_{ilk}}, \frac{d_{ih}}{d_{il}}= \frac{\sin \xi_{ilh}}{\sin \xi_{ihl}}$. Then, based on \eqref{loca1},
we can obtain
a displacement constraint using only the local relative bearing measurements:
\begin{equation}\label{bead}
\mu_{ij}e_{ij}+ \mu_{ik}e_{ik}+ \mu_{ih}e_{ih} + \mu_{il}e_{il}= \mathbf{0},
\end{equation}
where
\begin{equation}
\begin{array}{ll}
& \mu_{ij} = \bar \mu_{ij}\frac{\sin \xi_{ijl}}{\sin \xi_{ilj}}, \ \ \mu_{ik} = \bar \mu_{ik}\frac{\sin \xi_{ikl}}{\sin \xi_{ilk}}, \\
& \mu_{ih} = \bar \mu_{ih}\frac{\sin \xi_{ihl}}{\sin \xi_{ilh}}, \ \ \mu_{il} = \bar \mu_{il}.
\end{array}
\end{equation}
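The whole pipeline — null vector of the bearing matrix, angles from local bearings, and the sine-rule rescaling — can be verified on a random configuration. This is a sketch with our own variable names, assuming only the constructions above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-D configuration: node i and its four neighbors j, k, h, l
# (generic position, as required by Assumption 3).
p = {n: rng.standard_normal(3) for n in "ijkhl"}

def bearing(a, b):
    e = p[b] - p[a]
    return e / np.linalg.norm(e)

# Bearing constraint: null vector of the 3x4 matrix g_i = (g_ij g_ik g_ih g_il).
g_i = np.column_stack([bearing("i", n) for n in "jkhl"])
mu_bar = np.linalg.svd(g_i)[2][-1]        # right-singular vector of the zero singular value

def angle(a, b, c):
    # angle at vertex b between rays b->a and b->c, computed from bearings only
    return np.arccos(np.clip(bearing(b, a) @ bearing(b, c), -1.0, 1.0))

# The sine rule converts bearing coefficients into displacement coefficients.
mu = mu_bar * np.array([
    np.sin(angle("i", "j", "l")) / np.sin(angle("i", "l", "j")),  # mu_ij
    np.sin(angle("i", "k", "l")) / np.sin(angle("i", "l", "k")),  # mu_ik
    np.sin(angle("i", "h", "l")) / np.sin(angle("i", "l", "h")),  # mu_ih
    1.0,                                                          # mu_il
])

residual = sum(m * (p[n] - p["i"]) for m, n in zip(mu, "jkhl"))
print(np.linalg.norm(residual))           # close to zero
```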
In a local-relative-bearing-based network in $\mathbb{R}^3$ under \text{Assumption \ref{ad3}},
let
$\mathcal{X}_{\mathcal{G}}= \{ ( i, j, k, h, l) \in \mathcal{V}^{5} : (i,j), (i,k), $ $ (i,h), (i,l), (j,k), (j,h), (j,l) \in \mathcal{E}, j \!<\! k \!<\! h \!<\! l\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct a local-relative-bearing-based displacement constraint.
\subsection*{4. Distance-based Displacement Constraint}
Since the displacement constraints
are invariant to translations and rotations, any network congruent to the subnetwork consisting of a node and its neighbors
satisfies the same displacement constraint.
Each displacement constraint can therefore be computed from such a subnetwork:
multi-dimensional scaling recovers a congruent configuration, from which the
displacement constraint is obtained, as shown in Algorithm \ref{disa} \cite{han2017barycentric}.
\subsection*{5. Ratio-of-distance-based Displacement Constraint}
For the free node $i$ and its neighbors $j,k,h,l$, under Assumption $1$, we can obtain the ratio-of-distance matrix $M_r$ \eqref{ratio} by the ratio-of-distance measurements.
\begin{equation}\label{ratio}
M_r = \frac{1}{d_{ij}^2}\left[ \!
\begin{array}{c c c c c}
0 & d_{ij}^2 & d_{ik}^2 & d_{ih}^2 & d_{il}^2 \\
d_{ji}^2 & 0 & d_{jk}^2 & d_{jh}^2 & d_{jl}^2 \\
d_{ki}^2 & d_{kj}^2 & 0 & d_{kh}^2 & d_{kl}^2 \\
d_{hi}^2 & d_{hj}^2 & d_{hk}^2 & 0 & d_{hl}^2 \\
d_{li}^2 & d_{lj}^2 & d_{lk}^2 & d_{lh}^2 & 0
\end{array}
\right].
\end{equation}
Note that the displacement constraints
are not only invariant to translations and rotations, but also to scalings. Hence, a network with ratio-of-distance measurements $\frac{1}{d_{ij}}\{d_{ij}, \cdots, d_{hl}, \cdots \}$ has the same displacement constraints as the network with distance measurements $\{d_{ij}, \cdots, d_{hl}, \cdots \}$; that is, the displacement constraint $\mu_{ij}e_{ij}+ \mu_{ik}e_{ik}+ \mu_{ih}e_{ih} + \mu_{il}e_{il}= \mathbf{0}$ can also be obtained by Algorithm $1$, where the distance matrix $M$ \eqref{dim} is replaced by the ratio-of-distance matrix $M_r$ \eqref{ratio}.
\begin{algorithm}
\caption{Distance-based displacement constraint}
\label{disa}
\begin{algorithmic}[1]
\State Available information: Distance measurements among the nodes $p_i,p_j,p_k,p_h,p_l$. Denote $(\mathcal{\bar G}, \bar p)$ as a subnetwork with $\bar p=(p_i^T,p_j^T,p_k^T,p_h^T,p_l^T)^T$.
\State Construct the distance matrix $M \in \mathbb{R}^{5 \times 5}$ shown as
\begin{equation}\label{dim}
M = \left[ \!
\begin{array}{c c c c c}
0 & d_{ij}^2 & d_{ik}^2 & d_{ih}^2 & d_{il}^2 \\
d_{ji}^2 & 0 & d_{jk}^2 & d_{jh}^2 & d_{jl}^2 \\
d_{ki}^2 & d_{kj}^2 & 0 & d_{kh}^2 & d_{kl}^2 \\
d_{hi}^2 & d_{hj}^2 & d_{hk}^2 & 0 & d_{hl}^2 \\
d_{li}^2 & d_{lj}^2 & d_{lk}^2 & d_{lh}^2 & 0
\end{array}
\right].
\end{equation}
\State Compute the centering matrix $J=I-\frac{1}{5}\mathbf{1}_5\mathbf{1}_5^T$;
\State Compute the matrix $X=-\frac{1}{2}JMJ$;
\State Perform singular value decomposition on $X$ as
\begin{equation}
X = V \Lambda V^T,
\end{equation}
where $V=(v_1,v_2,v_3,v_4,v_5) \in \mathbb{R}^{5 \times 5}$ is a unitary matrix, and $\Lambda = \text{diag}(\lambda_1,\lambda_2,\lambda_3, \lambda_4, \lambda_5)$ is a diagonal matrix whose diagonal elements $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \lambda_4 \ge \lambda_5$ are singular values. Since $\text{Rank}(X) \le 3$, we have $\lambda_4 = \lambda_5=0$. Denote by $V_*=(v_1,v_2,v_3)$ and $\Lambda_*=\text{diag}(\lambda_1,\lambda_2,\lambda_3)$;
\State
Obtain a congruent network $(\mathcal{\bar G}, \bar q) \cong (\mathcal{\bar G}, \bar p)$ with $\bar q=(q_i^T,q_j^T,q_k^T,q_h^T,q_l^T)^T $, where $(q_i, q_j, q_k, $ $ q_h, q_l ) = \Lambda_* ^{\frac{1}{2}}V_*^T$;
\State Based on the congruent network $\bar q=(q_i^T,q_j^T,q_k^T,q_h^T,q_l^T)^T$ of the subnetwork $\bar p=(p_i^T,p_j^T,p_k^T,p_h^T,p_l^T)^T$, the parameters $\mu_{ij}, \mu_{ik}, \mu_{ih}, \mu_{il}$ in $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih} + \mu_{il}e_{il}= \mathbf{0}$ can be obtained by solving the following matrix equation
\begin{equation}\label{wmi}
\left[ \!
\begin{array}{c c c c}
q_j-q_i & q_k-q_i & q_h-q_i & q_l-q_i \\
\end{array}
\right] \left[ \!
\begin{array}{c}
\mu_{ij} \\
\mu_{ik} \\
\mu_{ih} \\
\mu_{il}
\end{array}
\right] = \mathbf{0}.
\end{equation}
\end{algorithmic}
\end{algorithm}
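A minimal numerical sketch of Algorithm \ref{disa} (our own implementation; names and the random configuration are illustrative) recovers a congruent configuration by classical multi-dimensional scaling and extracts the coefficients $\mu$:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.standard_normal((5, 3))          # p_i, p_j, p_k, p_h, p_l (assumed)

# Step 2: squared-distance matrix M.
diff = pts[:, None, :] - pts[None, :, :]
M = np.sum(diff**2, axis=2)

# Steps 3-5: classical multi-dimensional scaling.
J = np.eye(5) - np.ones((5, 5)) / 5        # centering matrix
X = -0.5 * J @ M @ J                       # symmetric PSD, rank <= 3
lam, V = np.linalg.eigh(X)
order = np.argsort(lam)[::-1]
lam, V = lam[order[:3]], V[:, order[:3]]
q = np.sqrt(np.maximum(lam, 0))[:, None] * V.T   # 3x5 congruent configuration

# Step 7: null vector of (q_j - q_i, q_k - q_i, q_h - q_i, q_l - q_i).
E_q = q[:, 1:] - q[:, :1]
mu = np.linalg.svd(E_q)[2][-1]

# The same mu annihilates the edges of the original configuration, because
# the constraint is invariant to translations, rotations and reflections.
E_p = (pts[1:] - pts[0]).T
print(np.linalg.norm(E_p @ mu))            # close to zero
```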
\subsection*{6. Angle-based Displacement Constraint}
For a triangle $\bigtriangleup_{ijk}(p)$, according to the sine rule,
the ratios of distance
can be calculated by the angle measurements $\theta_i, \theta_j, \theta_k$ shown as
\begin{equation}\label{sin}
\frac{d_{ij}}{d_{ik}}=\frac{\sin \theta_k}{\sin \theta_j}, \frac{d_{ij}}{d_{jk}}=\frac{\sin \theta_k}{\sin \theta_i}.
\end{equation}
Under \text{Assumption \ref{ad3}}, the ratios of distance of all the edges among the nodes $i, j, k, h, l$ can be calculated from the angle measurements through the sine rule \eqref{sin}, i.e., the ratio-of-distance matrix $M_r$ \eqref{ratio} is available. Then, the displacement constraint $\mu_{ij}e_{ij}+ \mu_{ik}e_{ik}+ \mu_{ih}e_{ih} + \mu_{il}e_{il}= \mathbf{0}$ can be obtained by Algorithm $1$, where the distance matrix $M$ \eqref{dim} is replaced by the ratio-of-distance matrix $M_r$.
In an angle-based network in $\mathbb{R}^3$ under \text{Assumption \ref{ad3}},
let
$\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h, l) \in \mathcal{V}^{5} : (i,j), (i,k), $ $ (i,h), (i,l), (j,k), (j,h), (j,l), (k,h), (k,l), (h,l) \in \mathcal{E}, j \!<\! k \!<\! h \!<\! l\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct an angle-based displacement constraint.
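The sine-rule conversion \eqref{sin} underlying this construction can be checked on a random triangle (a hypothetical sketch; point names are our own):

```python
import numpy as np

rng = np.random.default_rng(2)
pi_, pj, pk = rng.standard_normal((3, 3))  # a generic triangle in R^3

def ang(a, b, c):
    # interior angle at vertex a, between rays a->b and a->c
    u, v = b - a, c - a
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

ti, tj, tk = ang(pi_, pj, pk), ang(pj, pi_, pk), ang(pk, pi_, pj)
dij = np.linalg.norm(pi_ - pj)
dik = np.linalg.norm(pi_ - pk)
djk = np.linalg.norm(pj - pk)

# d_ij/d_ik = sin(theta_k)/sin(theta_j) and d_ij/d_jk = sin(theta_k)/sin(theta_i)
print(dij / dik - np.sin(tk) / np.sin(tj))   # close to zero
print(dij / djk - np.sin(tk) / np.sin(ti))   # close to zero
```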
\subsection*{7. Relaxed Assumptions for Constructing Local-relative-position-based, Distance-based, Ratio-of-distance-based, Local-relative-bearing-based, and Angle-based Displacement Constraints in a Coplanar Network}
\begin{assumption}\label{as31}
No two nodes are collocated in $\mathbb{R}^3$. Each anchor node has at least two neighboring anchor nodes, and
each free node has at least three neighboring nodes.
\end{assumption}
\begin{assumption}\label{ad32}
No two nodes are collocated in $\mathbb{R}^3$. Each anchor node has at least two neighboring anchor nodes, and
each free node has at least three neighboring nodes. The free node and its neighbors are non-colinear.
\end{assumption}
\begin{enumerate}
\item
In a local-relative-position-based coplanar network in $\mathbb{R}^3$ with \text{Assumption \ref{as31}},
let
$\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h) \in \mathcal{V}^{4} : (i,j), (i,k), $ $ (i,h) \in \mathcal{E}, j \!<\! k \!<\! h \}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct a local-relative-position-based displacement constraint $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih}= \mathbf{0}$.
\item In a distance-based coplanar network in $\mathbb{R}^3$ with \text{Assumption \ref{as31}},
let
$\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h) \in \mathcal{V}^{4} : (i,j), (i,k), $ $ (i,h), (j,k), (j,h), (k,h) \in \mathcal{E}, j \!<\! k \!<\! h\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct a distance-based displacement constraint $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih}= \mathbf{0}$.
\item In a ratio-of-distance-based coplanar network in $\mathbb{R}^3$ with \text{Assumption \ref{as31}},
let
$\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h) \in \mathcal{V}^{4} : (i,j), (i,k), $ $ (i,h), (j,k), (j,h), (k,h) \in \mathcal{E}, j \!<\! k \!<\! h\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct a ratio-of-distance-based displacement constraint $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih}= \mathbf{0}$.
\item In a local-relative-bearing-based coplanar network in $\mathbb{R}^3$ with \text{Assumption \ref{ad32}},
let
$\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h) $ $ \in \mathcal{V}^{4} : (i,j), (i,k), (i,h), (j,k), (j,h) \in \mathcal{E}, j \!<\! k \!<\! h\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct a local-relative-bearing-based displacement constraint $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih}= \mathbf{0}$.
\item In an angle-based coplanar network in $\mathbb{R}^3$ with \text{Assumption \ref{ad32}},
let
$\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h) \in \mathcal{V}^{4} : (i,j), (i,k), $ $ (i,h), (j,k), (j,h), (k,h) \in \mathcal{E}, j \!<\! k \!<\! h\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct an angle-based displacement constraint $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih}= \mathbf{0}$.
\end{enumerate}
\bibliographystyle{IEEEtran}
\section{Introduction}\label{s1}
\vspace{20pt}
Let $M$ be a $C^\infty$ connected, closed Riemannian manifold. A function $H\in C^k(T^*M,\mathbb{R}), k\geq 2$ is called a {\sf Tonelli Hamiltonian} if for all $x\in M$,
\begin{enumerate}
\item[\bf (H1)] {\sf (Positive Definite)} $H_{pp}$ is positive definite everywhere on $T_x^*M$;
\item[\bf (H2)] {\sf (Superlinear)} $\lim_{|p|_{x}\rightarrow+\infty}H(x,p)/|p|_{x}=+\infty$, where $|\cdot|_{x}$ is the norm on $T^{\ast}_{x}M$ induced by the Riemannian metric.
\end{enumerate}
For a fixed constant $\lambda>0$, we consider the ODE system on $T^{\ast}M$ associated with $H$, which can be expressed in coordinates as
\begin{eqnarray}\label{eq:ode0}
\left\{
\begin{aligned}
\dot{x}&=H_p(x,p),\\
\dot p&=-H_x(x,p)-\lambda p.
\end{aligned}
\right.
\end{eqnarray}
Physically, this system describes the mechanical motion of masses with friction proportional to the velocity. System \eqref{eq:ode0} can also be
found in other subjects, e.g. astronomy \cite{CC}, transportation \cite{WL} and economics \cite{B}. Remarkably, the earliest research on system \eqref{eq:ode0} can be traced back to Duffing's work on explosion engines \cite{D}. The nonlinear oscillations he studied inspired the qualitative theory of dynamical systems in the following decades \cite{MH}.
\medskip
{Due to (H1)-(H2)}, the {\sf local phase flow} $\Phi_{H,\lambda}^t$ of \eqref{eq:ode0} is forward complete, namely, it is well defined for all $t\in\mathbb{R}_+$. Besides, a direct computation shows that $\Phi_{H,\lambda}^t$ transports the standard symplectic form $\Omega=dp\wedge dx$ into a multiple of itself:
\begin{eqnarray}\label{cs}
(\Phi^{t}_{H,\lambda})^{*}\Omega= e^{-\lambda t}\Omega,\quad t\in\mathbb{R}_+.
\end{eqnarray}
That is why system \eqref{eq:ode0} is also called {\sf conformally symplectic} \cite{WL} or {\sf dissipative} \cite{LC} in the literature. The attracting invariant sets of twist maps satisfying \eqref{cs} ($t\in\mathbb{Z}$) have been revealed by Le Calvez \cite{LC} and Casdagli \cite{Ca}. Besides, the existence of KAM tori for system \eqref{eq:ode0} was investigated in \cite{CCD}.\\
\subsection{Viscosity subsolutions of discounted H-J equations}
Following the ideas of Aubry-Mather theory \cite{Ma} and weak KAM theory \cite{F}, the authors of \cite{DFIZ,MS} initiated the investigation of variational methods associated with \eqref{eq:ode0}. They seek a {\sf viscosity solution} of the {\sf discounted Hamilton-Jacobi equation}
\begin{align}\label{eq:hj}
\lambda u+H(x,du)=0,\quad\tag{D}\quad x\in M
\end{align}
which is related to the variational minimal curves starting from each $x\in M$.
As is well known to specialists in PDE \cite{Ba,DFIZ}, such a viscosity solution is unique but usually not $C^{1}$. To understand such a solution, we introduce the following variational principle:
let us define the {\sf Legendre transformation} by
\[
\mathcal{L}_H:T^{\ast}M\rightarrow TM;(x,p)\mapsto(x,H_p(x,p))
\]
which is a diffeomorphism due to (H1)-(H2). Accordingly, the {\sf Tonelli Lagrangian} $L\in C^{k}(TM,\mathbb{R})$
\begin{eqnarray}\label{eq:led}
L(x,v):=\max_{p\in T_{x}^*M}\big\{\langle p, v\rangle-H(x,p)\big\},
\end{eqnarray}
is well defined and the maximum is attained at $\bar p\in T_x^*M $ such that $\bar p=L_v(x, v)$.
\begin{defn}\label{defn:subsol}
A function $u\in C(M,\mathbb{R})$ is called {\sf $\lambda-$dominated by $L$} and denoted by $u\prec_\lambda L$,
if for any absolutely continuous $\gamma:[a,b]\rightarrow M$, there holds
\begin{eqnarray}\label{eq:dom}
e^{\lambda b}u(\gamma(b))-e^{\lambda a}u(\gamma(a))\leq \int_a^be^{\lambda t}L(\gamma(t),\dot\gamma(t))\ dt.
\end{eqnarray}
We denote by $\mathcal{S}^-$ the set of all $\lambda-$dominated functions of $L$.
\end{defn}
\begin{rmk}
Recall that any $\lambda-$dominated function $u$ has to be Lipschitz (Proposition 6.3 of \cite{DFIZ}). Therefore, we can prove
\[
\lambda u(x)+H(x,du(x))\leq 0,\quad a.e. \ x\in M,
\]
which implies $u$ is an {\sf almost everywhere subsolution} of (\ref{eq:hj}) (see Lemma \ref{sub-dom}). On the other side, the equivalence between almost everywhere subsolutions and {\sf viscosity subsolutions} was proved in a number of references, e.g. \cite{BC,Ba,DFIZ,F2,Si}. So we get the equivalence among the three:
\[
\text{a.e. subsolution} \Longleftrightarrow \text{viscosity subsolution} \Longleftrightarrow \lambda-\text{dominated function}
\]
\end{rmk}
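As a minimal illustration of the equivalence above (our own example, not taken from the cited references), consider constant functions: since $du\equiv 0$, the almost-everywhere-subsolution criterion reduces to a bound on the constant,
\begin{equation*}
u\equiv c\in\mathcal{S}^-\ \Longleftrightarrow\ \lambda c+H(x,0)\leq 0\ \ \forall x\in M\ \Longleftrightarrow\ c\leq-\frac{1}{\lambda}\max_{x\in M}H(x,0).
\end{equation*}
The same mechanism shows that for any $w\in C^{1}(M,\mathbb{R})$, the function $w-c$ is a subsolution of \eqref{eq:hj} once $c$ is large enough.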
\smallskip
\begin{defn}\cite{MS}\label{def:Aubry}
$\gamma\in C^{ac}(\mathbb{R},M)$ is called {{\sf globally calibrated}} by $u\in\mathcal{S}^-$, if for any $a<b\in\mathbb{R}$,
\[
e^{\lambda b}u(\gamma(b))-e^{\lambda a} u(\gamma(a))=\int_a^be^{\lambda t}L(\gamma(t),\dot\gamma(t))dt.
\]
The {\sf Aubry set} $\widetilde \mathcal{A}$ is an $\Phi_{L,\lambda}^t-$invariant set defined by
\begin{eqnarray}\label{defn:aub}
\widetilde \mathcal{A}=\bigcup_{u\in\mathcal{S}^-}\bigcup_{\gamma}\,\,\{(\gamma, \dot\gamma)|\gamma \text{ is globally calibrated by }u\}\subset TM
\end{eqnarray}
and the {\sf projected Aubry set} can be defined by $\mathcal{A}=\pi\widetilde \mathcal{A}\subset M$, where $\pi:TM\rightarrow M$ is the canonical projection.
\end{defn}
\begin{rmk}
Here the definition of the Aubry set is equivalent to the definition in \cite{MS}, see Appendix \ref{a3} for the proof. Therefore, $\pi^{-1}:\mathcal{A}\rightarrow \widetilde \mathcal{A}\subset TM$ is a Lipschitz graph, as described in Theorem (ii) of \cite{MS}.
\end{rmk}
The relation between $u^-$ and $\mathcal{S}^-$ can be revealed by the following conclusion:
\begin{thm}[proved in Appendix \ref{a2}]\label{thm:0}
The viscosity solution of (\ref{eq:hj}) is the pointwise supremum of all smooth, i.e., $C^{\infty}$, viscosity subsolutions, namely we have
\begin{eqnarray}
u^-(x)=\sup_{u\in \mathcal{S}^-}u(x)=\sup_{u\in C^{\infty}\cap\mathcal{S}^-}u(x).
\end{eqnarray}
\end{thm}
\medskip
\subsection{Constrained subsolutions \& Main results}
Notice that for any $w\in C^{1}(M,\mathbb{R})$, there always exists a constant $c>0$ large enough such that $w-c$ is a subsolution of \eqref{eq:hj}. Therefore, subsolutions are abundant, but by themselves they carry no information about $\widetilde \mathcal{A}$.
So we need a further selection in $\mathcal{S}^-$. \smallskip
Let us denote by
$\mathfrak M_\lambda$ the set of all $\Phi_{L,\lambda}^t$-invariant measures (w.r.t. the (E-L) flow, see Sec. \ref{s2}). It is non-empty, since
there exists at least one invariant probability measure $\mu$ supported on $\widetilde \mathcal{A}$, due to the $\Phi_{L,\lambda}^t-$invariance of $\widetilde \mathcal{A}$ and the Krylov-Bogolyubov theorem (see Remark 10 of \cite{MS}). Then we can give the following definition.
\begin{defn}\label{con-sol}
$u\in\mathcal{S}^-$ is called a {\sf constrained subsolution} of \eqref{eq:hj}, if
\begin{eqnarray}\label{eq:con}
\inf_{\mu\in\mathfrak M_\lambda}\int L-\lambda u\,\,d\mu=0.
\end{eqnarray}
We denote by $\mathcal{S}_c^-$ the set of constrained subsolutions.
\end{defn}
The first conclusion shows the fine properties of the constrained subsolutions:
\begin{thmA}\label{prop:cs}
\
\begin{enumerate}
\item $u^-\in\mathcal{S}_c^-$, which implies $\mathcal{S}_c^-\neq\emptyset$.
\item For each $u\in\mathcal{S}_c^-$, there exists an $\Phi_{L,\lambda}^t-$invariant subset $\widetilde \mathcal{A}(u)\subset\widetilde \mathcal{A}$, such that $u=u^-$ on $\pi\widetilde \mathcal{A}(u)$.
\end{enumerate}
\end{thmA}
Due to the intrinsic properties of the Lax-Oleinik semigroup, we can find smooth constrained subsolutions:
\begin{thmB}\label{thm:b}
There exists a $u\in C^{1,1}(M,\mathbb{R})\cap\mathcal{S}_c^-$ which is a solution on $\mathcal{A}$.
\end{thmB}
The smoothness of constrained subsolutions can be further improved, if additional hyperbolicity of $\widetilde \mathcal{A}$ is supplied:
\begin{thmC}\label{thm:c}
Assume $\widetilde \mathcal{A}$ consists of finitely many hyperbolic equilibria or periodic orbits, then there exists a sequence $\{u_i\in C^k(M,\mathbb{R})\}_{i\in\mathbb{N}}\subset\mathcal{S}_c^-$ converging to $u^-$ as $i\rightarrow+\infty$ w.r.t. the $C^0-$norm, such that each $u_i$ equals $u^-$ on $\mathcal{A}$ and satisfies $\lambda u_i(x)+H(x,du_i(x))<0$ for $x\notin\mathcal{A}$.\\
\end{thmC}
\subsection{Applications of constrained subsolutions}
As the first application of constrained subsolutions, we show how to locate the {\sf maximal global attractor} of (\ref{eq:ode0}) by using elements in $\mathcal{S}_c^-\cap C^{1,1}$. We will see that the smoothness plays a crucial role.
\begin{defn}\cite[page 5]{MS}
A compact $\Phi^t_{H,\lambda}$-invariant set $\Omega\subset T^*M$ is called a {\sf global attractor} of $\Phi^t_{H,\lambda}$, if for any point $(x,p)\in T^*M$ and any open neighborhood $\mathcal{U}$ of $\Omega$, there exists $T(x,p,\mathcal{U})>0$ such that for all $t\geq T$, $\Phi_{H,\lambda}^t(x,p)\in \mathcal{U}$. Moreover, if $\Omega$ is not contained in any larger global attractor, then it is called a {\sf maximal global attractor}.
\end{defn}
\begin{thmD}\label{thm:dthmD}
For any initial point $(x,p)\in T^*M$, the flow $\Phi_{H,\lambda}^t(x,p)$ tends to a maximal global attractor $\mathcal{K}$ as $t\rightarrow+\infty$. Moreover, $\mathcal{K}$ can be identified as the forward intersectional set of the following region:
\[
\Sigma_c^-:=\bigcap_{u\in\mathcal{S}^-_c\cap C^1(M,\mathbb{R})}\{(x, p)\in T^*M \,|\, {\lambda u(x)+H(x,p)}\leq 0\},
\]
i.e.
\[
\mathcal{K}=\bigcap_{t\geq 0}\Phi_{H,\lambda}^t(\Sigma_c^-).
\]
\end{thmD}
As another application of the constrained subsolutions, we show how $\mathcal{S}_c^-$ can be used to control the convergence speed of the Lax-Oleinik semigroup, under hyperbolicity assumptions:
\begin{thmE}
Assume $\widetilde \mathcal{A}$ consists of a unique hyperbolic equilibrium $(x_0,0)\in TM$ with $\mu>0$ being the minimal positive eigenvalue, then there exists $K>0$ which guarantees
\begin{eqnarray}
\|\mathcal{T}_t^-0(x)-u^-(x)+e^{-\lambda t}\alpha\|\leq K\exp\Big(-(\mu+\lambda)t\Big),\quad\forall t\geq0.
\end{eqnarray}
where $\mathcal{T}_t^-$ is the Lax-Oleinik semigroup operator (see (\ref{eq:evo}) for the definition) and
\[
\alpha=\int_{-\infty}^0e^{\lambda t}L(x_0,0)dt=u^-(x_0)
\]
is a definite constant.
\end{thmE}
\begin{corF}
Assume $\widetilde \mathcal{A}$ consists of a unique hyperbolic periodic orbit, then there exists a constant $K>0$ and a constant $\mu>0$ being the minimal Lyapunov exponent of the hyperbolic periodic orbit, such that
\[
\liminf_{t\rightarrow+\infty}\dfrac{\|\mathcal{T}_t^-0(x)-u^-(x)+e^{-\lambda t}\alpha\|}{\exp \big(-(\mu+\lambda)t\big)}\leq K
\]
with $\alpha=u^-(x_0)$ for a fixed $x_0\in\mathcal{A}$.
\end{corF}
\begin{rmk}
For any $\psi\in C^0(M,\mathbb{R})$, the convergence rate
\begin{eqnarray}\label{eq:c0-con}
\|\mathcal{T}_t^-\psi(x)-u^-(x)\|\sim O(e^{-\lambda t}),\quad t\geq 0
\end{eqnarray}
has been proved in a number of references, e.g. \cite{DFIZ,MS}. However, as $\lambda\rightarrow 0_+$, this estimate becomes ineffective in constraining the convergence speed of the Lax-Oleinik semigroup.
On the other side, for the case $\lambda=0$, \cite{IM, M, WY1} have shown the exponential convergence of the Lax-Oleinik semigroup, under the assumption that the Aubry set consists of finitely many hyperbolic equilibria. So we have the chance to generalize the idea in \cite{IM, M} to $0<\lambda\ll1$, and then prove Theorem E and Corollary F. \smallskip
Notice that due to Theorem D of \cite{Man}, for generic $H(x,p)$ we can guarantee that the hyperbolic equilibrium (resp. periodic orbit with fixed homology class) of (\ref{eq:ode0}) with $\lambda=0$ is unique, so the uniqueness of the equilibrium (or periodic orbit) is not artificial. Nonetheless, we can still
generalize Theorem E (resp. Corollary F) to several hyperbolic equilibria (resp. periodic orbits), by replacing the constant $\alpha$ with a piecewise function
\[
\alpha (x)=\int_{-\infty}^0e^{\lambda t}L(z_x,0)dt,\quad x\in M.
\]
where $z_x$ is an arbitrary point in the $\alpha-$limit set of the backward calibrated curve $\gamma_x^-$ ending at $x$. This $\alpha(x)$ is determined a posteriori by the equilibria (or periodic orbits) which dominate the asymptotic behavior of the backward calibrated curves.
\end{rmk}
\vspace{20pt}
\subsection{Organization of the article}
The paper is organized as follows: In Sec \ref{s2}, we give a brief review of weak KAM theory for equation \eqref{eq:hj}, then in Sec. \ref{s2+} we give the proof of Theorem A, B and C. In Sec \ref{s3}, we discuss the global attractor and prove Theorem D. In Sec \ref{s4}, we discuss the convergent speed of the Lax-Oleinik semigroup and prove Theorem E and Corollary F. For the consistency of the proof, some technical conclusions are moved to the Appendix.
\vspace{20pt}
\noindent{\bf Acknowledgements:} The first author is supported by National Natural Science Foundation of China (Grant No.11571166, No.11901293) and by Startup Foundation of Nanjing University of Science and Technology (No. AE89991/114). The second author is supported by the National Natural Science Foundation of China (Grant No. 11901560), and the Special Foundation of the Academy of Science (No. E0290103).
\vspace{20pt}
\section{Weak KAM theory of discounted systems}\label{s2}
\vspace{20pt}
In this section, we shall discuss some details about the weak KAM theory for the discounted Hamilton-Jacobi equation \eqref{eq:hj} and its relationship with viscosity solutions.
\begin{lem}\label{sub-dom}
$u\prec_\lambda L$ if and only if $u$ is a viscosity subsolution of \eqref{eq:hj}.
\end{lem}
\begin{proof}
Assume that $u:M\rightarrow\mathbb{R}$ is a viscosity subsolution of \eqref{eq:hj}; then it is Lipschitz (Proposition 2.4 of \cite{DFIZ}) and therefore differentiable almost everywhere. Suppose $x\in M$ is a differentiable point of $u$; for any $C^1$ curve $\gamma:[a,b]\rightarrow M$ with $\gamma(a)=x$, we can take the directional derivative by
\begin{eqnarray}
\frac{d}{dt}\big(e^{\lambda t}u(\gamma(t))\big)\Big|_{t\rightarrow a^+}&=&e^{\lambda a}\langle du(x),\dot\gamma(a)\rangle+\lambda e^{\lambda a}u(x)\nonumber\\
&\leq& e^{\lambda a}[L(x,\dot\gamma(a))+H(x,du(x))+\lambda u(x)]\nonumber\\
&\leq& e^{\lambda a} L(x,\dot\gamma(a)) \nonumber
\end{eqnarray}
which implies $u\prec_\lambda L$ by using \cite[Proposition 4.2.3]{F}.
\medskip
Conversely, $u\prec_\lambda L$ implies that $u\in Lip(M,\mathbb{R})$, so for any differentiable point $x\in M$ {of $u$ and $C^1$ curve $\gamma:[a,b]\rightarrow M$ with $\gamma(a)=x$},
\[
\lim_{b\rightarrow a^+}\frac{1}{b-a}[e^{\lambda b}u(\gamma(b))-e^{\lambda a}u(\gamma(a))]\leq\lim_{b\rightarrow a^+}\frac{1}{b-a}\int_a^be^{\lambda s} L(\gamma(s),\dot\gamma(s)) ds
\]
which leads to
\[
e^{\lambda a}\langle du(x),\dot\gamma(a)\rangle+\lambda e^{\lambda a}u(x)\leq e^{\lambda a}L(x,\dot\gamma(a)).
\]
\medskip
By taking $\dot\gamma(a)=H_p(x,du(x))$, we get
\[
L(x,\dot\gamma(a))+H(x,du(x))=\langle du(x),\dot\gamma(a)\rangle
\]
which implies
\[
\lambda u(x)+H(x,du(x))\leq 0,
\]
then $u:M\rightarrow\mathbb{R}$ is an almost everywhere subsolution, and hence a viscosity subsolution.
\end{proof}
Let's define the action function by
\begin{eqnarray}\label{eq:act}
h_\lambda^t(x,y):=\inf_{\substack{\gamma\in C^{ac}([0,t],M)\\\gamma(0)=x,\ \gamma(t)=y}}\int_0^t e^{\lambda s}L(\gamma(s),\dot\gamma(s))ds,\quad t\geq 0
\end{eqnarray}
where the infimum is always attained by a $C^k-$smooth minimizer $\gamma_{\min}:[0,t]\rightarrow M$ (by the Weierstrass Theorem of \cite{MS}). Moreover, $\gamma_{\min}$ has to be a solution of the {\sf Euler-Lagrange equation}
\[
\frac d{dt}L_v(\gamma,\dot\gamma)+\lambda L_v(\gamma,\dot\gamma)=L_x(\gamma,\dot\gamma).\tag{E-L}
\]
For any point $(x,v)\in TM$, we denote by $\Phi_{L,\lambda}^t(x,v)$ the {\sf Euler-Lagrange flow}, which satisfies $\Phi_{L,\lambda}^t\circ\mathcal{L}_H=\mathcal{L}_H\circ\Phi_{H,\lambda}^t$ in the valid time domain of $\Phi_{H,\lambda}^t$. \\
The {\sf backward Lax-Oleinik semigroup} operator $\mathcal{T}_{t}^-: C^0(M,\mathbb{R})\rightarrow C^0(M,\mathbb{R})$ can be expressed by
\begin{eqnarray}\label{eq:evo}
\mathcal{T}_{t}^-\psi(x):=e^{-\lambda t } \min_{y\in M } \{ \psi(y)+h^t_\lambda(y,x) \}
\end{eqnarray}
which works as a viscosity solution of the following {\sf evolutionary equation}:
\begin{eqnarray}\label{eq:evo-hj}
\left\{
\begin{aligned}
&\partial_tu(x,t)+H(x,\partial_x u)+\lambda u=0,\\
&u(\cdot,0)=\psi(\cdot).
\end{aligned}
\right.
\end{eqnarray}
As $t\rightarrow+\infty$, $\mathcal{T}_t^-\psi(x)$ converges to a unique limit function
\begin{eqnarray}\label{eq:sta}
u^-(x):=\lim_{t\rightarrow+\infty}\mathcal{T}_t^-\psi(x)=\inf_{\substack{\gamma\in C^{ac}((-\infty,0],M)\\\gamma(0)=x}}\int_{-\infty}^0e^{\lambda\tau}L(\gamma,\dot\gamma)d\tau.
\end{eqnarray}
which is exactly the viscosity solution of (\ref{eq:hj}). Similarly, we can define the {\sf forward Lax-Oleinik semigroup} operator $\mathcal{T}_{t}^+: C^0(M,\mathbb{R})\rightarrow C^0(M,\mathbb{R})$ by
\begin{eqnarray}
\mathcal{T}_t^+\psi(x):=\max_{y\in M} \{e^{\lambda t} \psi(y)-h^t_\lambda(x,y) \}
\end{eqnarray}
for later use.
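To make the backward semigroup concrete, the following sketch iterates a crude discretization of $\mathcal{T}_{t}^-$ for the model Hamiltonian $H(x,p)=p^2/2+\cos x$ on the circle, so that $L(x,v)=v^2/2-\cos x$. The grid, velocity set and time step are our own illustrative choices, not from the text:

```python
import numpy as np

# Finite-difference sketch of the backward Lax-Oleinik iteration for
# H(x,p) = p^2/2 + cos(x) on the circle, with L(x,v) = v^2/2 - cos(x).
lam, dt = 0.5, 0.05
x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
vs = np.linspace(-3.0, 3.0, 121)

def step(u):
    # one short-time application of T_dt^-: minimize over candidate velocities
    best = np.full_like(u, np.inf)
    for v in vs:
        u_back = np.interp((x - v * dt) % (2.0 * np.pi), x, u,
                           period=2.0 * np.pi)
        cand = np.exp(-lam * dt) * u_back + dt * (0.5 * v * v - np.cos(x))
        best = np.minimum(best, cand)
    return best

u = np.zeros_like(x)                      # initial datum psi = 0
for _ in range(2000):
    u_next = step(u)
    if np.max(np.abs(u_next - u)) < 1e-12:
        break
    u = u_next

# The fixed point approximates the viscosity solution u^- of
# lam*u + H(x,du) = 0; since -1 <= L(x,0) <= 1, u^- lies in [-1/lam, 1/lam].
print(u.min(), u.max())
```

Each application of `step` is a contraction with factor $e^{-\lambda\,\mathrm{d}t}$ in the sup norm, which is exactly the $O(e^{-\lambda t})$ convergence mechanism of \eqref{eq:c0-con}.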
\begin{defn}
A continuous function $f:\mathcal{U} \subset \mathbb{R}^n \rightarrow\mathbb{R}$ is called {\sf semiconcave with linear modulus } if there exists $\mathcal{C}>0$ such that
\begin{eqnarray}\label{eq:scl-defn}
f(x+h)+f(x-h)-2f(x)\leq \mathcal{C}|h|^2
\end{eqnarray}
for all $x\in \mathcal{U},\ h\in\mathbb{R}^n$. Here { $\mathcal{C}$} is called a {\sf semiconcavity constant} of $f$. Similarly we can define the {\sf semiconvex functions with linear modulus} if we change `$\leq$' to `$\geq$' in (\ref{eq:scl-defn}).
\end{defn}
\begin{defn}
Assume $u\in C(M,\mathbb{R})$, for any $x \in M$, the closed convex set
$$
D^- u(x)=\Big\{p\in T^*M : \liminf_{y\rightarrow x} \frac{u(y)-u(x)-\langle p,y-x \rangle }{|y-x|} \geq 0 \Big\}
$$
$$
\Big( \ \text{resp.} \ D^+ u(x)=\Big\{p\in T^*M : \limsup_{y\rightarrow x} \frac{u(y)-u(x)-\langle p,y-x \rangle }{|y-x|} \leq 0 \Big\} \Big)
$$
is called the {\sf sub-differential} (resp. {\sf super-differential}) set of $u$ at $x$.
\end{defn}
\begin{lem}\cite[Theorem.3.1.5]{CS}\label{D^+ convex}
$f:\mathcal{U}\subset\mathbb{R}^d\rightarrow \mathbb{R}$ is semiconcave (resp. semiconvex), then $D^+f(x)$ (resp. $D^-f(x)$) is a nonempty compact convex set for any $x\in\mathcal{U}$.
\end{lem}
\begin{prop}\label{prop:domi-fun}
\
\begin{enumerate}
\item $\mathcal{T}_{t+s}^\pm=\mathcal{T}_t^\pm\circ\mathcal{T}_s^\pm$;
\item if $u\leq v$, $\mathcal{T}_t^\pm u\leq\mathcal{T}_t^\pm v$;
\item $u\prec_\lambda L $ if and only if $u \leq \mathcal{T}^-_{t}u $, then $\mathcal{T}_t^-u\prec_\lambda L$.
\item $u\prec_\lambda L$ if and only if $\mathcal{T}^+_tu\leq u $, then $\mathcal{T}_t^+u\prec_\lambda L $.
\item For any $\psi\in C^0(M,\mathbb{R})$, $\mathcal{T}_t^-\psi$ is semiconcave for $t>0$. Similarly, $\mathcal{T}_t^+\psi$ is semiconvex for $t>0$.
\end{enumerate}
\end{prop}
\begin{proof}
The idea is borrowed from \cite{Ber}, with necessary adaptations.
(1) For any $t, s>0$, we have
\begin{align*}
\mathcal{T}^-_{t+s}\psi (x)=&\, e^{-\lambda(t+s)} \min_{y\in M} \{ \psi(y)+ h_\lambda^{t+s}(y,x)\} \\
=&\, e^{-\lambda(t+s)} \min_{y\in M} \{ \psi(y)+ \min_{z\in M} \{ h_\lambda^{s}(y,z)+e^{\lambda s} h_\lambda^{t}(z,x) \}\} \\
=&\, e^{-\lambda t} \min_{y\in M}\min_{z\in M} \{ e^{-\lambda s }\psi(y)+ e^{-\lambda s } h_\lambda^{s}(y,z)+ h_\lambda^{t}(z,x) \} \\
=&\, e^{-\lambda t} \min_{z\in M} \{ e^{-\lambda s } \min_{y\in M}\{\psi(y)+ h_\lambda^{s}(y,z) \}+ h_\lambda^{t}(z,x) \} \\
=&\, e^{-\lambda t} \min_{z\in M} \{ \mathcal{T}_{s}^-\psi(z) + h_\lambda^{t}(z,x) \} \\
=&\, \mathcal{T}^-_{t} \circ \mathcal{T}^-_{s}\psi(x)
\end{align*}
It is similar for $\mathcal{T}^+_{s+t}=\mathcal{T}^+_t \circ \mathcal{T}^+_s $.
(2) It is an immediate consequence of the definition of $\mathcal{T}_t^-$ and $\mathcal{T}^+_t $, i.e.
$$
\mathcal{T}_t^- u=e^{-\lambda t } \min_{y\in M } \{ u(y)+h^t_\lambda(y,x) \} \leqslant e^{-\lambda t } \min_{y\in M } \{ v(y)+h^t_\lambda(y,x) \}=\mathcal{T}_t^- v.
$$
(3)
On one hand, if $u \leq \mathcal{T}^-_{t}u $, according to the definition of $\mathcal{T}_t^-$, we have that
\begin{align*}
u(x)\leq \mathcal{T}_t^-u(x) = \inf_{y\in M} \big\{e^{-\lambda t } u(y)+e^{-\lambda t} h^t_\lambda(y,x)\big\}
\end{align*}
which means that $e^{\lambda t } u(x)-u(y)\leq h^t_\lambda(y,x) $ for any $
x,y \in M$. Therefore, $u \prec_\lambda L $.
On the other hand, if $u \prec_\lambda L $, we have that $u(x) \leq e^{-\lambda t } u(y)+e^{-\lambda t} h^t_\lambda(y,x) $ for any $
x, y \in M$, which implies that $u \leq \mathcal{T}^-_{t}u $ by taking the infimum over $y$. In summary, for every $t'\geq 0$, one obtains
$$
\mathcal{T}^-_{t}u\leq \mathcal{T}^-_{t}[ \mathcal{T}^-_{t'}u] = \mathcal{T}^-_{t+t'} u =\mathcal{T}^-_{t'} [\mathcal{T}^-_{t}u ]
$$
which implies that $\mathcal{T}^-_{t}u \prec_\lambda L $.
(4) Similarly to the above, $\mathcal{T}^+_tu\leq u $ if and only if $ u\prec_\lambda L $. Hence, for every $t'\geq 0$,
$$
\mathcal{T}^+_{t}u \geq \mathcal{T}^+_{t}[ \mathcal{T}^+_{t'}u] = \mathcal{T}^+_{t+t'} u =\mathcal{T}^+_{t'} [\mathcal{T}^+_{t}u ]
$$
which implies that $\mathcal{T}^+_{t}u \prec_\lambda L $.
(5)
As shown in \cite[Proposition 6.2.1]{F} or \cite{CS}, $h^t_\lambda(x,y)$ is semiconcave w.r.t. $x$ (resp. $y$), since $M$ is compact. For any fixed $t>0$ and $y\in M$, $h^t_\lambda(y,\cdot)$ and $h^t_\lambda(\cdot,y)$ are both semiconcave, so $\psi(y)+h^t_\lambda(y,\cdot) $ is semiconcave and $e^{\lambda t} \psi(y)-h^t_\lambda(\cdot,y) $ is semiconvex. Due to \cite[Proposition 2.1.5]{CS}, taking $\min_{y\in M } \big(\psi(y)+h^t_\lambda(y,x)\big)$ preserves semiconcavity, so $\mathcal{T}^-_t \psi(x) $ is also semiconcave. A similar argument shows that $\mathcal{T}^+_t \psi(x)$ is semiconvex.
\end{proof}
Due to (\ref{eq:sta}), the following properties of $u^-$ can be easily proved:
\begin{prop}\cite[Propositions 5 and 7]{MS}\label{lem:ms}
\
\begin{itemize}
\item $u^-$ is Lipschitz on $M$, with the Lipschitz constant depending only on $L$.
\item $u^-\prec_\lambda L$.
\item For every $x\in M$, there exists a {\sf backward calibrated curve} $\gamma_x^-:(-\infty,0]\rightarrow M$ which achieves the minimum of (\ref{eq:sta}).
\item For any $t<0$,
\[
u^-(x)=e^{\lambda t}u^-(\gamma_x^-(t))+\int_t^0e^{\lambda s}L(\gamma_x^-(s),\dot\gamma_x^-(s))ds,
\]
and there is a uniform upper bound $K$ depending only on $L$ such that $|\dot\gamma_x^-|\leq K$.
\item For every $t<0$, $u^-$ is differentiable at $\gamma_x^-(t)$ and
\[
\lambda u^-(\gamma_x^-(t))+H(\gamma_x^-(t),du^-(\gamma_x^-(t)))=0.
\]
Any continuous function simultaneously satisfying the second and third items above is called a {\sf backward weak KAM solution} of (\ref{eq:hj}).
\end{itemize}
\end{prop}
\vspace{20pt}
\section{Proof of Theorems A, B and C}\label{s2+}
\vspace{20pt}
\subsection{Proof of Theorem A}
(1).
For any $\nu\in\mathfrak M_\lambda$ and any $u\in\mathcal{S}^-$,
\begin{eqnarray}\label{eq:mes-cal}
\int_{TM} \lambda ud\nu\leq\int_{TM} \lambda u^- d\nu&=&\lambda \int_{TM}\int_{-\infty}^0 e^{\lambda s} L(\Phi_{L,\lambda}^s(x,v)) ds d\nu\nonumber\\
&=&\lambda \int_{-\infty}^0 e^{\lambda s}\Big(\int_{TM}L(\Phi_{L,\lambda}^s(x,v))d\nu\Big) ds\\
&=&\lambda \int_{-\infty}^0 e^{\lambda s}\Big(\int_{TM}L(x,v)d\nu\Big) ds\nonumber\\
&=&\lambda \int_{-\infty}^0 e^{\lambda s} ds\cdot \int_{TM}L(x,v)d\nu=\int_{TM}L(x,v)d\nu,\nonumber
\end{eqnarray}
which implies
\[
\int_{TM}\big(L(x,v)-\lambda u\big) \ d\nu\geq 0,\quad\forall \nu\in\mathfrak M_\lambda.
\]
Moreover, for any ergodic measure $\mu\in\mathfrak M_\lambda$ supported in $\widetilde \mathcal{A}$ and every $(x,v)\in \text{supp}\,\mu$,
\begin{eqnarray*}
\int_{TM} \lambda u^- d\mu&=&\lambda \int_{TM}\int_{-\infty}^0 e^{\lambda s} L(\Phi_{L,\lambda}^s(x,v)) \ ds d\mu\\
&=&\lambda \int_{-\infty}^0 e^{\lambda s} \int_{TM}L(\Phi_{L,\lambda}^s(x,v)) \ d\mu ds\\
&=&\lambda \int_{-\infty}^0 e^{\lambda s} \int_{TM}L(x,v) \ d\mu ds\\
&=&\lambda \int_{-\infty}^0 e^{\lambda s} ds\cdot \int_{TM}L(x,v)d\mu=\int_{TM}L(x,v)d\mu.
\end{eqnarray*}
So $u^-\in\mathcal{S}^-_c$.\\
(2). For $u\in\mathcal{S}_c^-$, if there is a $\mu\in\mathfrak M_\lambda$ such that each step in (\ref{eq:mes-cal}) becomes an equality, then $u(x)=u^-(x)$ for $\mu-$a.e. $(x,v)\in TM$.
Therefore, for $\mu-$a.e. $(x,v)\in \text{supp}(\mu)$,
\begin{eqnarray*}
u(x)=u^-(x)=& \inf_{\gamma(0)=x}\int_{-\infty}^0 e^{\lambda s} L(\gamma,\dot\gamma) ds\\
=& \int_{-\infty}^0 e^{\lambda s} L(\Phi_{L,\lambda}^s(x,v)) ds
\end{eqnarray*}
thus $\pi\Phi_{L,\lambda}^t(x,v)$ is a $u^--$calibrated curve for $t\in(-\infty,0]$. Since $\mu$ is invariant, $\pi\Phi_{L,\lambda}^t(x,v)$ is globally calibrated, i.e. $\Phi_{L,\lambda}^t(x,v)\in\widetilde \mathcal{A}$. So $\mathcal{A}(u):=\text{supp}(\mu)\subset\widetilde \mathcal{A}$.
\qed
\vspace{10pt}
\subsection{Proof of Theorem B} Due to Proposition \ref{prop:domi-fun}, as long as $t>0$ and $s>0$ are sufficiently small, $\mathcal{T}_s^-\mathcal{T}_t^+u^-(x)$ is a subsolution of (\ref{eq:hj}). Moreover, $u^-(x)= \mathcal{T}_t^+u^-(x)=\mathcal{T}_t^-u^-(x)=\mathcal{T}_s^-\mathcal{T}_t^+u^-(x) $ for any $x\in\mathcal{A} $. Indeed, since $u^-\in \mathcal{S}_c^-$, Lemma \ref{sub-dom} shows that $u^- \prec_\lambda L $, which implies $ \mathcal{T}_t^- u^- \geqslant u^- $ and $ \mathcal{T}_t^+ u^- \leqslant u^- $ due to Proposition \ref{prop:domi-fun}. For $x\in\mathcal{A} $, by Definition \ref{def:Aubry}, there exists a curve $\gamma_x^-:[0,t]\to M$ with $\gamma_x^-(0)=x$ such that $e^{\lambda t}u^-(\gamma_x^-(t))- u^-(\gamma_x^-(0))=\int_0^t e^{\lambda s}L(\gamma_x^-(s),\dot\gamma_x^-(s)) \ ds$. Hence,
\begin{align*}
u^-(x) \geqslant \mathcal{T}_t^+ u^-(x)=&\, \sup_{\substack{\gamma\in C^{ac}([0,t],M)\\\gamma(0)=x}}e^{\lambda t}u^-(\gamma(t))-\int_0^te^{\lambda \tau}L(\gamma(\tau),\dot\gamma(\tau))d\tau \\
\geqslant &\, e^{\lambda t}u^-(\gamma_x^-(t))-\int_0^te^{\lambda \tau }L(\gamma_x^-(\tau),\dot\gamma_x^-(\tau))d\tau \\
=&\, u^-(\gamma_x^-(0))=u^-(x),
\end{align*}
which implies that $u^-(x)=\mathcal{T}_t^+u^-(x) $ for any $x\in\mathcal{A} $. Similarly, one can prove $u^-(x)=\mathcal{T}_t^-u^-(x) $. So $\mathcal{T}_s^-\mathcal{T}_t^+u^-$ is indeed a solution on $\mathcal{A} $. Recall that $\mathcal{T}_t^+u^-(x)$ is always semiconvex, and for sufficiently small $s>0$, it is proven in the following Lemma \ref{lem:convex} that $\mathcal{T}_s^-\psi$ preserves the semiconvexity of any semiconvex function $\psi(x)$. Then $\mathcal{T}_s^-\mathcal{T}_t^+u^-(x)$ is both semiconcave and semiconvex, thus has to be $C^{1,1}$. This finishes the proof.\qed
\begin{lem}\label{lem:convex}
Assume $H\in C^k(T^{\ast}M,\mathbb{R})$ is a Tonelli Hamiltonian, for each semiconvex function $\psi:M\rightarrow\mathbb{R}$ with a linear modulus, there is $t_{0}>0$ such that $\mathcal{T}_t^-\psi$ is semiconvex for $t\in[0,t_{0}]$.
\end{lem}
\begin{proof}
We follow the proof of \cite[Lemma 4]{Ber} to show that there exists $t_{0}>0$ such that, for $t\in[0,t_0]$, $\mathcal{T}_t^-\psi$ is the supremum of a family of $C^{2}$ functions with a uniform $C^{2}$-bound. The semiconvexity of $\mathcal{T}_t^-\psi$ is a direct corollary of that.
\medskip
Since $\psi$ is semiconvex with a linear modulus, by \cite[Proposition 10]{Ber} or \cite[Theorem 3.4.2]{CS}, there exists a bounded subset $\Psi\subset C^{2}(M,\mathbb{R})$ such that
\begin{enumerate}
\item $\psi=\max_{\varphi\in\Psi}\varphi$,
\item for each $x\in M$ and $p\in D^{-}\psi(x)$, there exists a function $\varphi\in\Psi$ satisfying $(\varphi(x),d\varphi(x))=(\psi(x), p)$.
\end{enumerate}
By the definition of $\mathcal{T}^{-}_{t}$ and (1), we have
\begin{equation}\label{eq:sup}
\mathcal{T}^{-}_{t}\psi\geq\sup_{\varphi\in\Psi}\mathcal{T}^{-}_{t}\varphi.
\end{equation}
On the other hand, for the family $\Psi$, there exists $t_{0}>0$ such that, for each $t\in[0,t_0]$, the image $\mathcal{T}^-_t(\Psi)$ is also a bounded subset of $C^2(M,\mathbb{R})$ and for all $\varphi\in\Psi$ and $x\in M$,
\[
\mathcal{T}^-_t\varphi(\gamma(t))=e^{-\lambda t}\varphi(x)+\int^{t}_0 e^{\lambda(\tau-t)}L(\gamma(\tau),\dot{\gamma}(\tau))d\tau
\]
where $\gamma(\tau)=\pi\circ\Phi^\tau_{H,\lambda}(x,d\varphi(x))$.
Let $(\gamma(\tau),p(\tau)):[0,t]\rightarrow T^{\ast}M$ be a trajectory of \eqref{eq:ode0} which is optimal for $h_\lambda^t(y,x)$, i.e., $\gamma(0)=y, \gamma(t)=x$ and
\[
h_\lambda^t(y,x)=\int^t_0 e^{\lambda\tau}L(\gamma(\tau),\dot{\gamma}(\tau))d\tau.
\]
It is not difficult to see that $p(0)$ is a super-differential of the function $z\mapsto h_\lambda^t(z,x)$ at $y$. Since the function $z\mapsto e^{-\lambda t}[\psi(z)+h_\lambda^t(z,x)]$ is minimal at $y$, then $p(0)\in D^{-}\psi(y)$.
\medskip
We consider a function $\varphi\in\Psi$ such that $(\varphi(y), d\varphi(y))=(\psi(y),p(0))$, then we have $(\gamma(t),p(t))=\Phi_{H,\lambda}^t(y,d\varphi(y))$ and
\[
\mathcal{T}^-_t\varphi(x)=e^{-\lambda t}\varphi(y)+\int^t_0 e^{\lambda(\tau-t)}L(\gamma(\tau),\dot{\gamma}(\tau))d\tau=e^{-\lambda t}[\psi(y)+h^t_\lambda(y,x)]=\mathcal{T}^-_t\psi(x).
\]
Thus for each $x\in M$, there exists a function $\varphi\in\Psi$ such that $\mathcal{T}^-_t\varphi(x)=\mathcal{T}^-_t\psi(x)$, therefore
\[
\mathcal{T}^{-}_{t}\psi \leqslant \sup_{\varphi\in\Psi}\mathcal{T}^{-}_{t}\varphi.
\]
Combined with \eqref{eq:sup}, this gives $\mathcal{T}^{-}_{t}\psi = \sup_{\varphi\in\Psi}\mathcal{T}^{-}_{t}\varphi$, a supremum of a family of $C^2$ functions with a uniform $C^2$-bound, which completes the proof.
\end{proof}
\begin{rmk}
Actually, obtaining $C^{1,1}-$functions via the forward Lax-Oleinik semigroup and backward semigroup is a typical application of the Lasry-Lions regularization method. For the discounted Hamilton-Jacobi equation, the readers can refer to \cite{CCZ} for more applications of this method.
\end{rmk}
\vspace{20pt}
\subsection{ Proof of Theorem C}
To prove Theorem C, the following Lemma is needed:
\begin{lem}[$C^k$ graph]\label{lem:ck-gra}
Assume $\widetilde \mathcal{A}$ consists of finitely many hyperbolic equilibria or periodic orbits; then there exists a neighborhood $V\supset\mathcal{A}$ such that, for all $x\in V$, $(x,d u^-(x))$ lies exactly on the local unstable manifold $W^u_{loc}(\widetilde \mathcal{A})$ (which is actually $C^{k-1}-$graphic).
\end{lem}
\begin{proof}
We claim that:\smallskip
{\tt For any neighborhood $V$ of $\mathcal{A}$, there always exists an open neighborhood $U\subset V$ containing $\mathcal{A}$, such that for any $x\in U$, the associated backward calibrated curve $\gamma_x^-:(-\infty,0]\rightarrow M$ would lie in $V$ for all $t\in(-\infty,0]$.} \smallskip
Otherwise, there exist a neighborhood $V_*$ of $\mathcal{A}$ and a sequence $\{x_n\in V_*\}_{n\in\mathbb{N}}$ converging to some point $z\in\mathcal{A}$, such that the associated backward calibrated curve $\gamma_n^-$ ending at $x_n$ leaves $V_*$, for all $n\in\mathbb{N}$. Namely, we can find a sequence $\{T_n\geq 0\}_{n\in\mathbb{N}}$ such that $\gamma_n^-(-T_n)\in \partial V_*$. Due to item 2 of Proposition \ref{lem:ms}, any accumulating curve $\gamma_\infty^-$ of the sequence $\{\gamma_n^-\}_{n\in\mathbb{N}}$ is also a calibrated curve. There are two cases:
Case 1: the accumulating value $T_\infty$ associated with $\gamma_\infty^-$ is finite, which implies that $\gamma_\infty^-:[- T_\infty,0]\rightarrow M$ connects $z$ and $\partial V_*$. Since $z\in\mathcal{A}$ and $\widetilde \mathcal{A}$ is $\Phi_{L,\lambda}^t-$invariant, $\gamma_\infty^-$ extends to a curve $\gamma_\infty^-:\mathbb{R}\rightarrow M$ contained in $\mathcal{A}$ as well. This is a contradiction.
Case 2: the accumulating value $T_\infty$ associated with $\gamma_\infty^-$ is infinite; then $\eta_n^-(t):=\gamma_n^-(t-T_n):(-\infty,T_n]\rightarrow M$ accumulates to a curve $\eta_\infty^-:\mathbb{R}\rightarrow M$ which is globally calibrated by $u^-$. Due to the definition of $\widetilde \mathcal{A}$, $\eta_\infty^-$ has to be contained in $\widetilde \mathcal{A}$. This is a contradiction.
Consequently, the claim holds. Since $W^u_{loc}(\widetilde \mathcal{A})$ has to be $C^{k-1}-$graphic in a suitable neighborhood of $\widetilde \mathcal{A}$ (Proposition B of \cite{CI}), our claim indicates that there exists a suitable $V\supset \mathcal{A}$, such that for all $x\in V$, the backward calibrated curve $\gamma_x^-:(-\infty,0]\rightarrow M$ is unique and $(x, du^-(x))\in W^u_{loc}(\widetilde \mathcal{A})$.
\end{proof}
Due to Lemma \ref{lem:ck-gra},
$u^-$ has to be $C^k-$smooth in a small neighborhood $\mathcal{U}$ of $\mathcal{A}(H)$, with $(x, du^-(x))\in W^u(\widetilde \mathcal{A}(H))$ for all $x\in\mathcal{U}$.
Notice that there exists a nonnegative, $C^\infty-$smooth function $V:M\rightarrow\mathbb{R}$ which is zero on $\mathcal{A}(H)$ and strictly positive outside $\mathcal{A}(H)$. Moreover, $\|V\|_{C^k}$ can be taken sufficiently small, so that for the new Hamiltonian
\[
\widetilde H(x,p):=H(x,p)+V(x),
\]
the hyperbolicity of $\widetilde \mathcal{A}(\widetilde H)$ persists and $\widetilde \mathcal{A}(\widetilde H)=\widetilde \mathcal{A}(H)$ due to the upper semicontinuity of the Aubry set (see Lemma \ref{lem:u-semi}). So if we denote by $\widetilde u^-$ the viscosity solution associated with $\widetilde H$, then $\widetilde u^-$ is also $C^k$ on $\mathcal{U}$, and one easily sees that $\widetilde u^-$ is a strict subsolution of \eqref{eq:hj} in $\mathcal{U} \backslash \mathcal{A}( H) $. Outside $\mathcal{U}$, we can convolve $\widetilde u^-$ with a $C^\infty$ mollifier while keeping $\widetilde u^-$ unchanged on $\mathcal{U}$. Let us denote by $\widehat u^-$ the modified function;
then for any $x\notin \mathcal{U}$ at which $\widetilde u^-$ is differentiable, we have
\begin{eqnarray}
\lambda \widehat u^-(x)+ H(x,d \widehat u^-(x))&=&\lambda \widehat u^-(x)+\widetilde H(x,d \widehat u^-(x))-V(x)\nonumber\\
&\leq& \lambda \widetilde u^-(x)+\widetilde H(x,d \widetilde u^-(x))-V(x)+\lambda |\widehat u^-(x)-\widetilde u^-(x)|\nonumber\\
& &+\max_{\theta\in[0,1]}\Big|H_p\Big(x,\theta d\widetilde u^-(x)+(1-\theta)d\widehat u^-(x)\Big)\Big|\cdot\big|d \widetilde u^-(x)-d\widehat u^-(x)\big|\nonumber\\
&\leq&-V(x)+ C\cdot\big[|\widehat u^-(x)-\widetilde u^-(x)|+|d\widehat u^-(x)-d\widetilde u^-(x)|\big]\nonumber\\
&\leq&-V(x)/2<0, \nonumber
\end{eqnarray}
since $|\widehat u^-(x)-\widetilde u^-(x)|$ and $|d\widehat u^-(x)-d\widetilde u^-(x)|$ can be made sufficiently small. Recall that $\widehat u^-\big|_\mathcal{U}=\widetilde u^-\big|_{\mathcal{U}}$, so $\widehat u^-$ is a $C^k$ smooth constrained subsolution of \eqref{eq:hj}, which is a solution on $\mathcal{A}(H)$ and a strict subsolution outside.\qed
\vspace{20pt}
\section{Global attractors and the proof of Theorem D}\label{s3}
\vspace{20pt}
Another use of $\mathcal{S}_c^-$ is to identify the {\sf maximal global attractor}. Due to Theorem B, $\mathcal{S}_c^-\cap C^1(M,\mathbb{R})$
is nonempty. Moreover, by sending $s,t\rightarrow 0_+$ in the Lax-Oleinik operators $\mathcal{T}_s^-$ and $\mathcal{T}_t^+$ as in the proof of Theorem B, we can find a sequence of $C^1$ constrained subsolutions converging to $u^-$ w.r.t. the $C^0-$norm.\\
\noindent{\it Proof of Theorem D:}
Due to Theorem \ref{thm:0}, for any $u\in \mathcal{S}_c^-\cap C^1(M,\mathbb{R})$, we have $u\leq u^-$. Therefore,
\[
\{(x, p)\in T^*M| \lambda u(x)+H(x,p)\leq 0\}\supset \{(x, p)\in T^*M| \lambda u^-(x)+H(x,p)\leq 0\}.
\]
This accordingly indicates
\[
\Sigma_c^-=\{(x, p)\in T^*M| \lambda u^-(x)+H(x,p)\leq 0\}.
\]
On the other hand, let us denote
\[
F_u(x,p):=\lambda u(x)+H(x,p),
\]
then we can prove that
\begin{eqnarray*}
\frac d{dt}\Big|_{t=0} F_u(\Phi_{H,\lambda}^t(x,p))&=&\lambda \langle du(x),\dot x \rangle + H_x(x,p) \dot x +H_p(x,p)\dot p \\
&=& \lambda \langle du(x),\dot x \rangle+ H_x(x,p) H_p(x,p)+H_p(x,p) (-H_x(x,p)-\lambda p) \\
&=& \lambda \langle du(x),\dot x \rangle -\lambda \langle \dot x,p \rangle \\
&\leq& \lambda (H(x,du(x))+L(x,\dot x) ) -\lambda \langle \dot x,p \rangle \\
&=& \lambda [ H(x,du(x))+ \langle \dot x,p \rangle-H(x,p) ] - \lambda \langle \dot x,p \rangle \\
&=& -\lambda [ H(x,p) - H(x,du(x)) ] \\
&\leq& -\lambda [\lambda u(x )+ H(x,p)] \\
&=& -\lambda F_u(x,p)
\end{eqnarray*}
where the second equality uses equation \eqref{eq:ode0}, the first inequality follows from the Fenchel inequality, the subsequent equality uses the Fenchel identity $L(x,\dot x)=\langle \dot x,p\rangle-H(x,p)$ (valid since $\dot x=H_p(x,p)$ along the flow), and the last inequality holds because $u$ is a $C^1$ subsolution, i.e. $\lambda u(x)+H(x,du(x))\leq 0$.
This implies that
every trajectory of (\ref{eq:ode0}) tends to $\Sigma_c^-(u)$ as $t\rightarrow+\infty$, with
\[
\Sigma_c^-(u):=\{(x, p)\in T^*M| F_u(x,p) \leq 0\}.
\]
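To make the attracting property explicit, the differential inequality above can be integrated (a standard Gronwall-type argument, which we sketch here for completeness):
\[
\frac{d}{dt}\Big(e^{\lambda t}F_u\big(\Phi_{H,\lambda}^t(x,p)\big)\Big)\leq 0
\quad\Longrightarrow\quad
F_u\big(\Phi_{H,\lambda}^t(x,p)\big)\leq e^{-\lambda t}F_u(x,p),\qquad \forall\, t\geq 0,
\]
so any initially positive value of $F_u$ decays exponentially along the flow, and the trajectory eventually enters every neighborhood of $\Sigma_c^-(u)$.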
So
\[
\bigcap_{u\in\mathcal{S}^-_c\cap C^1(M,\mathbb{R})}\Sigma_c^-(u)=\Sigma_c^-
\]
is a global attracting set. Define
\[
\mathcal{K}=\bigcap_{t\geq 0}\Big(\bigcap_{u\in\mathcal{S}_c^-\cap C^1(M,\mathbb{R})}\Phi_{H,\lambda}^t(\Sigma_c^-(u))\Big),
\]
we can easily see that $\mathcal{K}$ contains all the $\omega-$limit sets in the phase space; due to its definition, $\mathcal{K}$ has to be the maximal one.\qed
\begin{rmk}
A similar conclusion to Theorem D was first proved by Maro and Sorrentino in \cite{MS}, where a rather involved real-analysis argument was used to handle the low regularity of $u^-$. For each $u$ contained in $\mathcal{S}_c^-\cap C^1(M,\mathbb{R})$, we can take the derivative of $u$ directly and avoid this difficulty.
\end{rmk}
\section{Exponential convergence of the Lax-Oleinik semigroup}\label{s4}
\vspace{20pt}
\noindent{\it Proof of Theorem E:} Recall that
\[
\mathcal{T}_t^-0(x)=\inf_{\substack{\gamma:[-t,0]\rightarrow M\\\gamma(0)=x}}\int_{-t}^0e^{\lambda s}L(\gamma,\dot\gamma)ds,
\]
then we have
\begin{eqnarray}\label{eq:leq}
u^-(x)-\mathcal{T}_t^-0(x)\geq \int_{-\infty}^{-t} e^{\lambda s}L(\gamma_x^-,\dot\gamma_x^-)ds
\end{eqnarray}
where $\gamma_x^-:(-\infty,0]\rightarrow M$ is the backward calibrated curve of $u^-$ ending at $x$. On the other hand, suppose $\widehat \gamma:[-t,0]\rightarrow M$ is the curve achieving the infimum in $\mathcal{T}_t^-0(x)$; then
\begin{eqnarray}\label{eq:geq}
u^-(x)-\mathcal{T}_t^-0(x)&\leq& \int_{-\infty}^{-t}e^{\lambda s}L(\eta,\dot\eta)ds+\int_{-t}^0e^{\lambda s}L(\widehat \gamma,\dot{\widehat \gamma})ds-\mathcal{T}_t^-0(x)\nonumber\\
&=&\int_{-\infty}^{-t}e^{\lambda s}L(\eta,\dot\eta)ds
\end{eqnarray}
where $\eta:(-\infty,-t]\rightarrow M$ is the backward calibrated curve of $u^-$ ending at $\widehat \gamma(-t)$. Recall that $\widetilde \mathcal{A}$ consists of a unique hyperbolic equilibrium; without loss of generality, we assume $\widetilde \mathcal{A}=\{z=(x_0,0)\}$. Combining (\ref{eq:leq}) and (\ref{eq:geq}), we get
\begin{eqnarray*}
& &|u^-(x)-\mathcal{T}_t^-0(x)-e^{-\lambda t}\alpha|\\
&\leq& \max\bigg\{\bigg|\int_{-\infty}^{-t}e^{\lambda s}L(\eta,\dot\eta)ds-e^{-\lambda t}\alpha\bigg|, \bigg|\int_{-\infty}^{-t} e^{\lambda s}L(\gamma_x^-,\dot\gamma_x^-)ds-e^{-\lambda t}\alpha\bigg|\bigg\}
\end{eqnarray*}
with
\[
\alpha:=\int_{-\infty}^0e^{\lambda t}L(x_0,0)dt=u^-(x_0).
\]
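Since the integrand is constant along the equilibrium, $\alpha$ can be computed explicitly:
\[
\alpha=L(x_0,0)\int_{-\infty}^0 e^{\lambda t}\,dt=\frac{L(x_0,0)}{\lambda}.
\]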
On the other hand, we claim the following:\smallskip
{\tt Claim: For any neighborhood $V\supset \mathcal{A}$, there exists a uniform time $T_V>0$, such that for any $x\in M$, the associated backward calibrated curve $\gamma_x^-:(-\infty,0]$
$\rightarrow M$ will not stay outside $V$ for a time longer than $T_V$.}\smallskip
Otherwise, there must be a neighborhood $V_*\supset\mathcal{A}$ and a sequence $\{x_n\}_{n\in\mathbb{N}}\subset M$, such that the associated backward calibrated curves $\gamma_n^-$ stay outside of $V_*$ for a time $T_n$, with $T_n\rightarrow +\infty$ as $n\rightarrow +\infty$. With almost the same analysis as in Lemma \ref{lem:ck-gra}, we can show that any accumulating curve of $\{\gamma_n^-\}$ would lie outside $V_*$ for an infinitely long time, which implies $V_*^c\cap\widetilde \mathcal{A}\neq\emptyset$. This contradiction proves the claim.\smallskip
Now we choose a suitably small neighborhood $\widetilde \mathcal{U}\supset\widetilde \mathcal{A}$, such that the Hartman--Grobman Theorem applies in $\widetilde \mathcal{U}$ and $W^u(\widetilde \mathcal{A})\cap \widetilde \mathcal{U}$ is $C^{k-1}-$graphic. Due to Lemma \ref{lem:ck-gra}, there exists a constant $K_1>0$, such that for any $x\in \mathcal{U}:=\pi\widetilde \mathcal{U}$, the associated backward calibrated curve $\gamma_x^-:(-\infty,0]\rightarrow M$ satisfies the following estimate:
\[
\|\gamma_x^-(-t)-z\|\leq K_1 \exp(-\mu t),\quad\forall \ t\geq 0
\]
where $\mu>0$ is such that $-\mu$ is the largest negative Lyapunov exponent of the hyperbolic equilibrium. Due to our claim and Lemma \ref{lem:ck-gra}, there exists a constant $K_2\geq K_1$, such that for any $x\in M$, the associated backward calibrated curve $\gamma_x^-:(-\infty,0]\rightarrow M$ satisfies
\begin{eqnarray}\label{eq:dist}
\|\gamma_x^-(-t)-z\|\leq K_2 \exp(-\mu t),\quad\forall \ t\geq 0.
\end{eqnarray}
Due to Theorem C, we can find a sequence of $C^k$ subsolutions $\{u_n\in \mathcal{S}_c^-\}_{n\in\mathbb{N}}$ approaching $u^-$ in the $C^0-$norm. Then for any $x\in M$ and the associated backward calibrated curve (with a time shift) $\eta_x^-:(-\infty,-t]\rightarrow M$ ending at it, we have
\begin{eqnarray*}
\bigg|\int_{-\infty}^{-t}e^{\lambda s}L(\eta_x^-,\dot\eta_x^-)ds-e^{-\lambda t}\alpha\bigg|&=&\lim_{n\rightarrow +\infty}\bigg|\int_{-\infty}^{-t}\frac{d}{ds}\Big(e^{\lambda s}u_n\big(\eta_x^-(s)\big)-e^{\lambda s}u_n(x_0)\Big)ds\bigg|\\
&=&\lim_{n\rightarrow +\infty}\big|e^{-\lambda t}\big(u_n(\eta_x^-(-t))-u_n(x_0)\big)\big|\\
&=&\lim_{n\rightarrow +\infty}e^{-\lambda t}\Big|u_n\big(\eta_x^-(-t)\big)-u_n(x_0)\Big|\\
&\leq&\lim_{n\rightarrow +\infty}e^{-\lambda t} \|du_n\|\cdot \|\eta_x^-(-t)-x_0\|\\
&\leq& \lim_{n\rightarrow +\infty}e^{-\lambda t} \|du_n\|\cdot K_2 \exp(-\mu t)\\
&\leq & C\cdot K_2\cdot \exp(-(\mu+\lambda) t)
\end{eqnarray*}
where the uniform bound $\|du_n\|\leq C$ follows from the uniform semiconcavity of $\{u_n\}_{n\in\mathbb{N}}$. Applying this inequality to both (\ref{eq:leq}) and (\ref{eq:geq}) proves the Theorem.
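Explicitly, the resulting estimate reads (a sketch, with $C$ the uniform bound on $\|du_n\|$ appearing above):
\[
\big|u^-(x)-\mathcal{T}_t^-0(x)-e^{-\lambda t}\alpha\big|\leq C\,K_2\, e^{-(\lambda+\mu)t},\qquad\forall\, x\in M,\ t\geq 0.
\]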
\qed
\vspace{20pt}
\noindent{\it Proof of Corollary F:} The previous analysis carries over, with only the following adaptation: since $\widetilde \mathcal{A}$ is now a periodic orbit $\{(\gamma_p(t),\dot\gamma_p(t))|t\in[0,T_p]\}$, which is no longer an equilibrium, we can only assume $z\in\mathcal{A}$ is a point such that $u^-(z)=0$. Therefore, we can only guarantee the existence of a constant $K_3>0$, such that for any $x\in M$, the associated backward calibrated curve $\gamma_x^-:(-\infty,0]\rightarrow M$ satisfies
\[
\liminf_{t\rightarrow+\infty}\frac{\|\gamma_x^-(-t)-z\|}{\exp(-\mu t)}\leq K_3.
\]
This differs from (\ref{eq:dist}), but the rest of the analysis goes through.
\qed
\vspace{40pt}
\section{Introduction}
Ultra-diffuse galaxies (UDGs) are a class of low surface brightness (LSB) galaxies that have luminosities characteristic of dwarf galaxies, but sizes more typical of giants. Although the existence of such galaxies was established in the 1980s \citep{Rea83,Bin85,Imp88}, interest in them surged following the discovery of 47 faint, diffuse galaxies in the Coma Cluster by \citet{vanD15}, who introduced the UDG terminology. Many UDG candidates have now been discovered in both cluster (e.g., \citealp{Mih15,Mun15,vanB16,Lee17,Mih17,Jan17,Ven17,Wit17}), and group or field environments (e.g., \citealp{Mar16,Mer16,Rom17,Tru17,Gre18,Pro19a}).
The very existence of UDGs in high-density regions, such as rich clusters, prompts the question: how can such faint and diffuse galaxies survive in these hostile environments? While the physical properties and overall numbers of these galaxies remain uncertain, several scenarios have been proposed to explain their origin, with no clear consensus having yet emerged. On one hand, estimates of their total (gravitating) mass based on their globular cluster (GC) systems seem to suggest that at least some UDGs are embedded in massive dark matter halos ($10^{11} M_{\odot} \lesssim M_{DM} \lesssim 10^{12} M_{\odot}$; e.g., \citealp{Bea16,Pen16,vanD17,Tol18}) that allow them to survive in dynamically hostile environments. If this inference is correct, then one could consider UDGs to be ``failed'' galaxies \citep{Yoz15}: i.e., dark matter dominated systems that were, for some reason, inefficient in assembling (or retaining) the stellar components typical of most galaxies with such massive dark matter halos. On the other hand, the discovery of some UDGs in low-density environments suggests that such systems may be more akin to ``normal" LSB dwarfs, but with unusually large sizes due perhaps to unusual initial conditions, such as high angular momentum content (e.g., \citealp{Amo16}) or a particularly bursty star formation history \citep{Chan18,DiC17}. The study of satellite galaxies in galaxy clusters also suggests that tidal stripping may contribute to the formation of at least some UDG-like dwarfs \citep{Car19, Sal20}.
Our recent analysis of the GC systems for UDGs in the Coma cluster revealed large object-to-object variations in GC specific frequency, suggesting that objects belonging to this somewhat loosely defined class may not share a single formation channel \citep{Lim18}. Velocity dispersion measurements and stellar population studies similarly suggest that UDGs may have formed via multiple processes \cite[see, e.g.,][]{Zar17,Fer18,Tol18}. There is some evidence too that environment might play a role in their formation: i.e., a roughly linear relationship exists between host cluster mass and the total number of UDGs \citep{vanB17}, although the slope is still under debate \citep{Man18}.
The many questions surrounding these puzzling objects --- which include even the appropriateness of the claim that they make up a unique and distinct galaxy class (see \citealp{Con18,Dan19}) --- stem, in large part, from the incomplete and heterogeneous datasets that have been used to find and study them. Ideally, deep, wide-field imaging that is sensitive to both LSB and ``normal" galaxies --- across a range of luminosity and environments --- is required to detect and study these objects, and to explore the mechanisms by which they formed and their relationship to ``normal" galaxies.
As the rich cluster of galaxies nearest to the Milky Way, the Virgo Cluster is an obvious target for a systematic study of all types of stellar systems, including UDGs. The {\it Next Generation Virgo Cluster Survey} \cite[NGVS;][]{Fer12} is a powerful resource for discovering, studying and inter-comparing UDGs and normal galaxies in this benchmark cluster. Indeed, the NGVS has already been used to study a wide range of galaxy types in this environment. Previous papers in this series have examined the properties of structurally extreme galaxies in Virgo, including both compact \citep{Zha15,Gue15,Liu15,Liu16,Zha18} and extended systems (i.e., UDGs). For the latter population, previous NGVS papers have reported the initial discovery of such galaxies in Virgo \citep{Mih15}, their kinematics and dark matter content \citep{Tol18}, and their photometric and structural properties \citep{Cot20}. Other NGVS papers have examined the detailed properties of Virgo galaxies \citep{Fer20}, including distances \citep{Can18}, intrinsic shapes \citep{San16}, nuclei and nucleation rates \citep{Spe17,San19}, color-magnitude relations \citep{Roe17}, luminosity functions \citep{Fer16} and abundance matching analyses \citep{Gro15}.
This paper is structured as follows. In \S\ref{data}, we present an overview of the NGVS survey: its design, imaging materials and data products, including the catalog of Virgo Cluster member galaxies that forms the basis of our study. In \S\ref{results}, we describe our selection criteria for UDGs as well as their observed and derived properties, such as the luminosity function, structural parameters, spatial distribution, and globular cluster content. In \S\ref{discussion}, we discuss our findings in the context of UDG formation scenarios. We summarize our findings and outline directions for future work in \S\ref{summary}. In an Appendix, we present notes on the individual galaxies that satisfy some, or all, of our UDG selection criteria.
\section{Observations and Data Products}
\label{data}
\subsection{Galaxy Detection}
The NGVS is a deep imaging survey, in the $u^*g'r'i'z'$ bands, carried out with MegaCam on the Canada-France-Hawaii Telescope (CFHT) over six consecutive observing seasons from 2008 to 2013. The survey spans an area of 104~deg$^2$ (covered in 117 distinct NGVS pointings) contained within the virial radii of the Virgo A and Virgo B subclusters, which are centered on M87 and M49, respectively. Full details on the survey, including observing and data processing strategies, are available in \cite{Fer12}. A complete description of the data reduction and analysis procedures, including the galaxy catalog upon which this study is based, can be found in \cite{Fer20}.
Briefly, an automated identification pipeline for candidate Virgo Cluster member galaxies --- optimized for the detection of low-mass, low surface brightness (LSB) systems that dominate the cluster population by numbers --- was developed using a training set of 404 visually selected and quantitatively vetted cluster members in the Virgo core region. Galaxies, identified over the entire cluster using this custom pipeline, were assigned a membership probability based on several pieces of information, including location within the surface brightness vs. isophotal area plane; photometric redshifts from LePhare \citep{Arn99,Ilb06}; goodness-of-fit statistics and model residuals; and the probability of a spurious detection arising from blends or image artifacts. Following a final series of visual checks, candidates were assigned membership classifications ranging from 0 to 5. In this analysis, we restrict ourselves to the 3689 galaxy candidates classified as types 0, 1 or 2: i.e., ``certain", ``likely" or ``possible" Virgo cluster members. Note that our sample of UDGs includes no Class 2 objects, and roughly equal numbers of Class 0 and Class 1 types. For reference, the mean membership probabilities for these classes are $84\pm23\%$ (class 0) and $77\pm21\%$ (class 1; see Figure~12 of \citealt{Fer20}).
\subsection{Estimation of Photometric and Structural Parameters}
Photometric and structural parameters were measured for each galaxy using up to three different techniques, depending on the galaxy magnitude and surface brightness. First, for most galaxies brighter than $g' \simeq 17$, an isophotal analysis was carried out using a custom software package, {\tt Typhon} \citep{Fer20}, that is built around the {\tt IRAF ELLIPSE} and related tasks, followed by parametric fitting of the one-dimensional surface brightness profile using one- or two-component S\'ersic models (with the possible inclusion of a central point source for nucleated galaxies). Second, basic global parameters (e.g., integrated magnitudes, mean and effective surface brightnesses, half-light radii) were measured non-parametrically using a curve-of-growth analysis, including an iterative background estimation. Finally, galaxies fainter than $g' \sim 16$ were also fitted in the image plane with GALFIT \citep{Pen02}, with assumed S\'ersic profiles (with, or without, point source nuclei). In our study, which relies entirely on global parameters for UDGs and normal galaxies, we use parameters from the {\tt Typhon} analysis whenever available; otherwise, we rely on GALFIT parameters derived from two-dimensional fitting of the images. In \citet{Fer20}, it is shown that these techniques yield parameters that are in good statistical agreement for the majority of galaxies. Nevertheless, one should bear in mind that the subjects of this paper, UDGs, are among the faintest and most diffuse galaxies in the NGVS catalog, and thus present special challenges for the measurement of photometric and structural parameters.
To gauge survey completeness, an extensive series of simulations were carried out in which artificial galaxies --- convolved with the appropriate point spread function and with appropriate amounts of noise added --- were injected into the core region frames, and then recovered using the same procedure employed to build the galaxy catalog. In all, 182,500 simulated galaxies were generated, equally divided among the $u{^*}g'r'i'z'$ filters, under the assumption that their intrinsic light distributions follow S\'ersic profiles. Simulated galaxies were randomly generated so as to populate (and extend beyond) the scaling relations expected for Virgo members, with S\'ersic indices, effective radii and total magnitudes in the range $0.4 \le n \le 2.5$, $0\farcs9 < R_e < 53\arcsec~$ ($75 \le R_e \le 4200$~pc), and $16.0 \le g' \le 25.2$ mag. Completeness contours for each scaling relation were then derived, with a 50\% completeness limit in magnitude of $g' \simeq 21.5$~mag ($M_g \simeq -9.6$~mag). For a thorough discussion of catalog completeness and the reliability of our photometric and structural parameters, the reader is referred to \cite{Fer20}, who describe the completeness tests in detail, and \cite{Cot20}, who compare the measured NGVS photometric and structural parameters to those in the literature for previously cataloged Virgo galaxies.
\begin{figure*}
\epsscale{1.0}
\plotone{def_udg.png}
\caption{Scaling relations (in the $g'$ band) between luminosity and effective surface brightness, mean effective surface brightness, and effective radius for 3689 galaxies in the Virgo cluster (small black points). The filled red and blue circles show our UDGs (i.e., the {\tt primary} and {\tt secondary} samples, respectively; see \S\ref{selection}). Eight bright spiral galaxies that were initially selected as LSB outliers in our secondary sample, but discarded from our analysis, are also shown (open blue circles). The dotted and dashed curves show the mean scaling relations and their $2.5\sigma$ confidence limits, respectively. The gray solid curve in each panel shows the UDG definitions adopted by \citet{vanD15}.
\label{udgsel}}
\end{figure*}
\section{Results}
\label{results}
\subsection{Identification of UDG Candidates}
\label{selection}
Large and LSB dwarf galaxies have been known for some time (see, e.g., Table XIV and Appendix C-4 of \citealt{Bin85}), but the notion that UDGs comprise a new and distinct galaxy class was introduced by \citet{vanD15}. In their Dragonfly survey of the Coma Cluster, these authors defined UDGs as those galaxies with central $g'$-band surface brightnesses below $\mu_0 = 24$ mag~arcsec$^{-2}$ and effective radii larger than $R_e = 1.5$ kpc. It was subsequently noted that this size criterion is close to the limit set by the resolution of Dragonfly at the distance of Coma: i.e., Full Width at Half Maximum (FWHM) = 6\arcsec~ $\sim$ 3~kpc. Other studies (e.g., \citealt{Mih15,Kod15,Yag16,vanB16,vanB17}) have used different criteria (e.g., different size cuts and/or different $\mu_0$ or $\langle\mu(R,R_e)\rangle$ limits), which has led to some confusion about the properties of UDGs as a class. It is clear that {\it we require new classification criteria that are based on the physical properties of galaxies and largely independent of the observational material used for classification.}
\begin{figure}
\epsscale{1.2}
\plotone{dSBe.png}
\caption{Deviations from the mean relation between luminosity and effective surface brightness, $\Sigma_e$. The vertical lines show 2.5$\sigma$ deviations from the mean, which we use in our UDG definition (the positions of the dashed lines in Figure~\ref{udgsel}), while the dotted and dashed curves show Gaussian fits to the distributions of galaxies in the core region and whole cluster, respectively. Note that the dashed curve in the upper panel and the dotted curve in the lower panel have been renormalized. The solid histogram in the upper panel shows galaxies in the core region, while the solid histogram in the lower panel shows all cluster member galaxies from the NGVS. The dashed red histogram in the lower panel shows the UDG {\tt primary} sample; results for the combined {\tt primary and secondary} samples are shown in purple.
\label{dSBe}}
\end{figure}
Our selection criteria for UDGs are based on scaling relations for 404 galaxies in Virgo's core region, which is a $\simeq$ 4 deg$^2$ region roughly centered on M87, the central galaxy in Virgo's A sub-cluster \citep{Cot20}. This study used NGVS data to examine the relationships between a variety of photometric and structural parameters, including size, surface brightness, luminosity, color, axial ratio and concentration. In Figure \ref{udgsel}, we show scaling relations, plotted as a function of $g$-band magnitude, for the full NGVS catalog of 3689 certain, likely or possible Virgo Cluster members \citep{Fer20}. From left to right, the three panels of this figure show: (1) surface brightness measured at the effective radius, $\Sigma_{e}$; (2) mean surface brightness measured within the effective radius, $ \langle \Sigma \rangle_{e}$; and (3) effective radius, $R_e$, which is defined as the radius containing half of the total luminosity. The dotted curve in each panel of Figure~\ref{udgsel} shows a fourth order polynomial, as recorded in Table~8 of \cite{Cot20}, which was obtained by maximum likelihood fitting to the observed scaling relations for galaxies in the Virgo core region. These polynomials were fitted over the luminosity range $10^5 \le {\cal L}_g/{\cal L}_{g,\odot} \le 10^{11}$ ($-22.4 \lesssim M_g \lesssim -8$~mag) and account for incompleteness by modifying the likelihood function with weights that are inversely proportional to the completeness functions derived from the galaxy simulations described in \S\ref{data} (\citealt{Fer20}).
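Schematically, the completeness correction enters the fit as an inverse-completeness weighting of each galaxy's contribution to the likelihood (a sketch of the approach; see \citealt{Fer20} for the exact formulation):
\begin{equation}
\ln {\cal L}(\theta) \propto \sum_{i} \frac{1}{C_i}\, \ln P(y_i \,|\, {\cal L}_{g,i};\, \theta),
\end{equation}
where $C_i$ is the recovery fraction, from the simulations described in \S\ref{data}, evaluated at the $i$-th galaxy's magnitude and surface brightness, and $P$ is the model probability of observing parameter $y_i$ (e.g., $\Sigma_e$ or $R_e$) at that galaxy's luminosity.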
\begin{figure}
\epsscale{1.2}
\plotone{dmSBe.png}
\caption{Same as Figure \ref{dSBe}, but showing the deviations from the mean relation between luminosity and mean effective surface brightness, $\langle\Sigma\rangle_e$.
\label{dmSBe}}
\end{figure}
\begin{figure}
\epsscale{1.2}
\plotone{dRe.png}
\caption{Same as Figure \ref{dSBe}, but showing the deviations from the mean relation between luminosity and effective radius, $R_e$.
\label{dRe}}
\end{figure}
We have opted to base our selection criteria on galaxies in the cluster core because this region has the full set of information needed to establish cluster membership and determine scaling relations. The relations themselves are quite similar to those found for galaxies over the whole cluster and, indeed, Figures~\ref{udgsel}--\ref{dRe} demonstrate that these relations provide an adequate representation of the scaling relations for the full sample of galaxies. To identify UDG candidates in a systematic way, we wish to select galaxies that have unusually large size, and/or unusually low surface brightness, at fixed luminosity. We thus use the scatter about the fitted polynomials in the Virgo core region to identify outliers in these diagrams. The dashed curves in each panel show the $\pm2.5\sigma$ band that brackets each polynomial. The standard deviation, $\sigma$, in each case has been determined using the sample of 404 central galaxies (i.e., given the limited sample size, we make no attempt to quantify any variation in $\sigma$ with luminosity).
We are now in a position to select UDG candidates using these relations. For galaxies in the core region, the distributions of deviations from the mean relations are shown in the upper panels of Figures \ref{dSBe}, \ref{dmSBe} and \ref{dRe}. In these distributions, particularly those involving effective surface brightness, there may be a gap close to $2.5 \sigma$ from the mean relation. We henceforth use this condition for identifying LSB outliers, defining our {\tt primary sample} of UDGs to be those systems that deviate --- towards large size or low surface brightness --- by more than $2.5 \sigma$ in {\it each of the three scaling relations}.
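In compact form, writing $\Delta_j$ for a galaxy's deviation (measured toward low surface brightness or large size) from mean relation $j$, the {\tt primary} sample satisfies
\begin{equation}
\Delta_j > 2.5\,\sigma_j \quad {\rm for~all}~~ j \in \{\Sigma_e,\ \langle\Sigma\rangle_e,\ R_e\},
\end{equation}
with each $\sigma_j$ measured from the 404 core-region galaxies as described above.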
While these requirements should produce a robust sample of UDGs, it is likely that some bona fide LSB systems may be missed by these stringent selection criteria. We therefore consider an augmented sample of galaxies that satisfy only two, or one, of these criteria. Thus, in the analysis that follows, we rely on two UDG samples:
\begin{itemize}
\item[1.] {\tt Primary (26 UDGs):} This sample is made up of the 26 galaxies that deviate by at least $2.5 \sigma$ in each of the three defining scaling relations: i.e., $L$-$R_e$, $L$-$\mu_e$ and $L$-$\langle\mu\rangle_e$. This sample has the benefit of {\it high purity} but may exclude some LSB objects that do not satisfy all selection criteria.
\item[2.] {\tt Secondary (18 UDGs):} This sample is defined by starting with the 26 galaxies that deviate by at least $2.5 \sigma$ in only one, or two, of the scaling relations. Beginning with this sample, we have excluded eight bright (and face-on) spiral galaxies with $g$-band luminosities greater than $\sim$10$^{9.25}$ L$_{g,\odot}$, leaving us with 18 additional UDGs. The combined {\tt primary} and {\tt secondary} sample (26 + 18 = 44 objects) has the advantage of {\it high completeness}.
\end{itemize}
In an appendix, we present more information on these samples, including tabulated parameters, color images, and detailed notes on individual objects. Meanwhile, Figure~\ref{udgsel} shows the distribution of our UDGs in each of the three scaling relations. Objects that belong to our {\tt primary sample} are shown as filled red circles in each panel while the 18 UDGs from our {\tt secondary sample} are shown as filled blue circles. The eight bright spiral galaxies that were initially selected for, but discarded from, our secondary sample are shown as open blue circles.
Table \ref{tbl:udgcat} lists information on the 26 UDGs that belong to our {\tt primary} sample (see the appendix for information on the {\tt secondary} sample). From left to right, the columns record the object name, right ascension and declination, magnitude, effective radius, effective surface brightness and mean effective surface brightness (all measured in the $g'$ band). The final two columns report previous names, if applicable, and the official object identification from the NGVS.
Exactly half (13/26 = 50\%) of the galaxies in Table~\ref{tbl:udgcat} were previously cataloged, mostly in the Virgo Cluster Catalog (VCC) of \citet[but see also \citealt{Rea83}]{Bin85}. This is also true of the combined {\tt primary} and {\tt secondary} samples, where 21/44 $\simeq$ 48\% of the UDGs are previously cataloged galaxies (though not necessarily identified in the past as extreme LSB systems). In Figure \ref{ngvsvcc}, we show distributions for the effective radius and mean surface brightness for our sample (upper and lower panels, respectively). For reference, we highlight the subsets of UDGs that were previously cataloged by \citet{Bin85}, the most comprehensive catalog of Virgo galaxies prior to the NGVS. As noted above, roughly half of these UDGs were previously cataloged galaxies, although the NGVS clearly excels in the detection of the smallest, faintest and lowest surface brightness UDGs.
Figure \ref{udgthumb_primary} shows thumbnail images for UDGs belonging to our {\tt primary sample}. Several objects have elongated shapes that may point to tidal effects but others have a more circular appearance (see Appendix B for detailed notes on individual galaxies and \S\ref{morphology} for a more complete discussion of UDG morphologies and their implications).
\begin{figure}
\epsscale{1.18}
\plotone{ngvsvcc.png}
\caption{Effective radii and surface brightness distributions of UDGs. {\it Panel (a)}. Distribution of UDG effective radii. The solid and dashed black histograms show distributions for the {\tt primary} and combined {\tt primary} and {\tt secondary} UDG samples, respectively. Hatched versions of these histograms show the subset of UDGs that appear in the VCC catalog of \citet{Bin85}. {\it Panel (b)}. The distribution of mean surface brightness within effective radius for UDGs. All histograms are the same as in the upper panel.
\label{ngvsvcc}}
\end{figure}
\begin{figure*}
\epsscale{1.2}
\plotone{image_udg.png}
\caption{Thumbnail images of UDGs from our {\tt primary} sample. The field of view is $4\farcm4 \times 5\farcm3$. North is up and East is left. The color is constructed from $u^*gi$ images. To enhance LSB features, all images have been smoothed using a Gaussian kernel with $\sigma = 2.5$~pixels (${\rm FWHM} = 1\farcs1$). The ID of each UDG is noted at the top of its image. \label{udgthumb_primary}}
\end{figure*}
We can now compare our selection methodology to that of \citet{vanD15}. The solid gray curves in Figure~\ref{udgsel} show their criteria. Note that these relations exhibit a break just below $L_g \sim 10^8$ $L_{g,\odot}$. This is a consequence of the fact that the original criteria were specified in terms of effective radius and {\it central} surface brightness, for an assumed exponential profile.\footnote{For the NGVS, galaxies are fitted using a more generalized S\'ersic model, which is flexible enough to capture the change in galaxy concentration with luminosity and mass.} For the most part, the selection criteria are in reasonable agreement although it is clear that the \citet{vanD15} definition does include some ``normal" galaxies around luminosities of $L_g \sim 10^8 L_{g,\odot}$. Our objective method initially selects some luminous late-type galaxies in the {\tt secondary} sample, but these objects are not present in the {\tt primary} sample. In our UDG selection criteria, there is room for faint UDGs to have effective radii smaller than $R_e = 1.5$ kpc, but we find only a single UDG in the {\tt primary} sample (NGVSUDG-08 = NGVSJ12:27:15.75+13:26:56.1) to have an effective radius less than $R_e=1.5$ kpc. The small number of outliers among faint galaxies is possibly due to the onset of incompleteness in the survey.
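The location of the break can be understood from the profile conversion. For an exponential ($n=1$) S\'ersic profile, the standard relations between central, effective, and mean effective surface brightness give
\begin{equation}
\mu_e = \mu_0 + \frac{2.5\,b_1}{\ln 10} \simeq \mu_0 + 1.82, \qquad
\langle\mu\rangle_e = \mu_0 + 5\log b_1 \simeq \mu_0 + 1.12,
\end{equation}
where $b_1 \simeq 1.678$. The $\mu_0 = 24$~mag~arcsec$^{-2}$ criterion of \citet{vanD15} thus maps to $\langle\mu\rangle_e \simeq 25.1$~mag~arcsec$^{-2}$ for an exponential profile; combined with the fixed $R_e = 1.5$~kpc cut, this produces the break in the gray curves of Figure~\ref{udgsel}.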
Returning to Figures~\ref{dSBe}, \ref{dmSBe}, and \ref{dRe}, the bottom panels of these figures show the deviations from the mean scaling relations presented in Figure~\ref{udgsel}. For the residual surface brightness distributions, the fitted Gaussians have means and standard deviations that are consistent, within their respective errors, between the core region and entire cluster. The residual effective radius distributions are slightly broader for the full sample of galaxies than those in the core region, evidence that low-mass galaxies in low-density environments have larger sizes.
It should be noted that we also find a number of compact galaxies located on the {\it opposite} side of UDGs in the scaling relation sequences. These relatively rare objects (i.e., cEs, BCDs) are distinct from ultra-compact dwarfs \citep[see][]{Liu15}, but are found throughout the cluster and span a wide range in luminosity and color. We will examine these objects in detail in a future paper.
\subsection{Luminosity Function}
Figure \ref{udglf} shows luminosity functions for our UDG samples. As explained in \S\ref{selection}, we exclude from this analysis the eight, bright, late-type systems whose luminosities range from $10^{9.3}$ to $10^{10.3}$ $L_{g,\odot}$. The brightest of the 26 UDGs in the {\tt primary} sample has a luminosity of $\sim\!10^{8.7}\,L_{g,\odot}$ --- comparable to that of the brightest of the 18 UDGs in the {\tt secondary} sample, which has $\sim\!10^{9.1}\,L_{g,\odot}$. The faintest objects in either sample have $\sim\!10^{6.2}\,L_{g,\odot}$, which is slightly brighter than the detection limit of the survey. The luminosities of our faintest UDGs are well below those of the UDGs discovered in the Coma cluster by Dragonfly \citep{vanD15}, a reflection of the depth and spatial resolution afforded by NGVS imaging.
Broadly speaking, the luminosity distribution of the combined {\tt primary} and {\tt secondary} sample is fairly similar to that of ``normal" Virgo Cluster galaxies \cite[see][]{Fer16}. The luminosity function of the {\tt primary} sample alone appears flatter than that of ``normal" galaxies although the relatively small number of galaxies (26) limits our ability to draw firm conclusions. We caution that, for either UDG sample, selection effects can be significant given the faint and diffuse nature of these galaxies (see \citealt{Fer20}).
Figure~\ref{udgmSBe} shows the distribution of effective surface brightness, $\langle \Sigma \rangle _{e}$, for our UDGs. The UDGs belonging to our {\tt primary} sample span a range of $10^{-1.0} \lesssim \langle\Sigma\rangle_e \lesssim 10^{1.0} L_{\odot}~{\rm pc^{-2}}$. Overall, the number of UDGs increases with decreasing surface brightness until $\langle\Sigma\rangle_{e} \sim 10^{-0.6}~L_{g,\odot}\,{\rm pc^{-2}}$ (which is equivalent to $\langle\mu\rangle_e \sim 28~ {\rm mag~arcsec^{-2}}$ in the $g$ band). Naturally, about half of the Virgo UDGs are fainter than the mean surface brightness of $\langle\Sigma\rangle_e \sim 10^{-0.3}~L_{g,\odot}\,{\rm pc^{-2}}$ (or $\langle\mu\rangle_e \sim 27.5 \, {\rm mag \, arcsec^{-2}}$ in the $g$ band). This corresponds to the surface brightness of the faintest Coma Dragonfly UDGs \citep{vanD15}, and is significantly fainter than most other UDG surveys (e.g., \citealp{vanB16,Rom17,Man18}). If Virgo is representative of these other environments, then this would suggest that other clusters may contain significantly more very low surface brightness UDGs than currently cataloged.
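For reference, the magnitude-based values quoted here follow from the usual conversion between physical and magnitude surface brightness units, assuming a solar absolute magnitude of $M_{g,\odot} \simeq 5.05$~mag:
\begin{equation}
\langle\mu\rangle_e = M_{g,\odot} + 21.572 - 2.5\log\left(\frac{\langle\Sigma\rangle_e}{L_{g,\odot}\,{\rm pc^{-2}}}\right),
\end{equation}
so that, e.g., $\langle\Sigma\rangle_e = 10^{-0.6}~L_{g,\odot}\,{\rm pc^{-2}}$ corresponds to $\langle\mu\rangle_e \simeq 28.1$~mag~arcsec$^{-2}$.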
Although we found a significant population of very low surface brightness UDGs, the number of UDGs in the Virgo cluster is consistent with the expected number from the \citet{vanB16} relation between halo mass and number of UDGs when we use a survey limit similar to that in previous studies.
\begin{figure}
\epsscale{1.15}
\plotone{lf_udg.png}
\caption{The $g$-band luminosity function for Virgo UDGs. The red dashed histogram shows the luminosity function for our {\tt primary} sample of 26 UDGs; results for the combined {\tt primary} and {\tt secondary} samples (44 objects) are shown as the purple dotted histogram. The luminosity function for galaxies in the Virgo core region is shown as the solid histogram (after renormalizing). For comparison, the luminosity function of Coma UDGs from \citet{vanD15} is shown by the gray histogram.
\label{udglf}}
\end{figure}
\begin{figure}
\epsscale{1.15}
\plotone{mSBe.png}
\caption{The distribution of mean effective surface brightness for UDGs. Symbols and notations are the same as those in Figure \ref{udglf}.
\label{udgmSBe}}
\end{figure}
\subsection{Spatial Distribution}
\begin{figure}
\epsscale{1.15}
\plotone{spa_udg.png}
\caption{The spatial distribution of Virgo UDGs within the NGVS footprint. The grayscale map shows the surface number density of the 3689 Virgo cluster member galaxies from the NGVS. Red and blue symbols show UDGs from the {\tt primary} and {\tt secondary} samples, respectively. The orange and light blue crosses show the center of the Virgo cluster and the peak of the galaxy density distribution, respectively.
\label{udgspa}}
\end{figure}
\begin{figure}
\epsscale{1.15}
\plotone{cum_udg.png}
\caption{The cumulative distribution of cluster-centric radii for Virgo Cluster UDGs. The upper panel shows the distribution of distances from the Virgo cluster center (i.e., the centre of M87). The lower panel shows the distribution of distances from the location of the galaxy peak density (see Figure~\ref{udgspa}). In both panels, the red dashed and purple dotted curves show the distributions for the {\tt primary}, and combined {\tt primary} and {\tt secondary} samples, respectively. Results from KS tests are summarized in each panel. {\it D} is the maximum deviation between the two distributions, and {\it prob} is the significance level of the KS statistic. A small {\it prob} value indicates a more significant difference between the two distributions.
\label{cumudg}}
\end{figure}
The distribution of UDGs within the cluster may hold clues to their origin, and we begin by noting that the cluster core appears to be overabundant in UDGs. In fact, 10 of the 44 galaxies (23\%) in our combined {\tt primary} and {\tt secondary} samples are found in the central 4~deg$^2$. While we cannot rule out the possibility that some of these candidates are being seen in projection against the cluster center, the enhancement is likely real as this region represents $\lesssim 4\%$ of the survey area. We shall return to this issue in \S\ref{spatial}.
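For a rough sense of scale, a uniform areal distribution over the survey footprint would predict
\begin{equation}
N_{\rm exp} \simeq 44 \times 0.04 \approx 2
\end{equation}
UDGs in the central 4~deg$^2$, compared to the 10 observed; this simple estimate neglects projection effects and the survey geometry, but illustrates the size of the central excess.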
Figure~\ref{udgspa} shows the distribution of UDGs within the Virgo Cluster. Objects belonging to the {\tt primary} and {\tt secondary} samples are shown separately. The underlying grayscale map shows the surface density of the 3689 certain or probable member galaxies from the NGVS. The orange cross shows the location of M87 --- the center of the Virgo A sub-cluster and the galaxy that has traditionally been taken to mark the center of Virgo. This figure shows that the UDGs are distributed over the entirety of the cluster, yet concentrated toward the cluster center. Additionally, it appears the UDGs are offset from both M87 and from the centroid of the galaxy population (marked as a light blue cross in this figure), although they are more closely associated with the latter. We note that the offset of the galaxy density centroid from M87 is in the direction of the infalling M86 group, and the spatial distribution of UDGs is also offset in this direction.
To examine the concentration of the UDG population in more detail, we compare the cumulative distributions of UDGs and other galaxies in Figure~\ref{cumudg}. The upper and lower panels of this figure show the cumulative distribution of distances from M87 and the galaxy centroid, respectively. Whichever center is used, the UDGs --- from both the {\tt primary} and {\tt secondary} samples --- are found to be more centrally concentrated than other cluster galaxies. This is also true if we use the definition of UDGs from \citet{vanD15}. The results of our Kolmogorov-Smirnov (KS) tests show that the radial distributions (centered on the galaxy density peak) for all galaxies and the {\tt primary} sample differ at the $95\%$ confidence level. This finding is notable, as it differs from some previous studies that found UDGs in other clusters to be less centrally concentrated than normal galaxies (e.g., \citealp{vanB16,Tru17,Man18}).
\begin{figure}
\epsscale{1.15}
\plotone{dis_mSBe.png}
\caption{Mean effective surface brightness of UDGs, in four different luminosity bins, plotted as a function of distance from the Virgo Cluster center. The black dotted curves show the mean trends for normal galaxies. The red and blue circles show UDGs from our {\tt primary} and {\tt secondary} samples, respectively.
\label{dismSBe}}
\end{figure}
Figure~\ref{dismSBe} shows how the effective surface brightness of UDGs varies with distance from the cluster center. The four panels show the trends for galaxies divided into four bins of luminosity that decrease from the top to bottom panels. The symbols are the same as in previous figures, and each panel includes a dotted curve that indicates the mean behavior of other cluster members. As expected, the UDGs fall well below the mean surface brightness of other galaxies at all radii. Interestingly, there is no apparent dependence of $\langle\Sigma\rangle_{e}$ on distance within any luminosity bin. However, we find few UDGs with $\langle\Sigma\rangle_e \gtrsim 1$ $L_{g,\odot}{\rm pc}^{-2}$ in the inner regions of the cluster although such objects should be detectable, if present. Indeed, the UDGs we do find in the central region are often fainter than the surface brightness limits of previous surveys (e.g., \citealp{vanB16,Tru17,Man18}). Although the Virgo B sub-cluster complicates the use of clustercentric distance as a proxy for environment, the effect is minimal because most UDGs in the Virgo B sub-cluster are found outside the dense sub-cluster core (see Figure \ref{udgspa}).
\subsection{Globular Cluster Systems}
At first, the diffuse nature of UDGs seems at odds with the formation of massive star clusters, as the latter require environments with a high density of star formation to form \citep[cf.,][]{Kru14,Kru15}. However, many UDGs harbor significant populations of GCs \citep[e.g.,][]{Pen16,vanD17,Amo18,Lim18}.
We have used the NGVS data to examine the GC content of UDGs belonging to our {\tt primary} and {\tt secondary} samples, selecting high probability GC candidates on the basis of their $u^*g'i'$ colors and concentration indices \citep[see][]{Mun14,Dur14}.
GCs at the distance of Virgo are nearly point sources in our survey data, so we chose point-like sources based on concentration indices ($\Delta i_{4-8}$): i.e., the difference between four- and eight-pixel diameter aperture-corrected $i$-band magnitudes (the median NGVS $i$-band seeing is $0\farcs54$; \citealt{Fer12}). We selected objects with $-0.05 \leq \Delta i_{4-8} \leq 0.6$, a range slightly wider than in \citet{Dur14}, to allow for the existence of larger GCs in UDGs \citep{vanD18,vanD19}. Among point-like sources, we then selected GCs using their $u^*g'i'$ colors, with the GC selection region in $u^*g'i'$ color space determined from spectroscopically confirmed GCs in M87 (see \citealt{Mun14} and especially \citealt{Lim17} for details).
We must make two assumptions to compute the total number of GCs associated with a galaxy: (1) the effective radius of the GC system; and (2) the shape of the GC luminosity function (GCLF). We take the effective radius of the GC system to be $R_{e,GCS} = 1.5 R_{e,gal}$, where the effective radius of the galaxy is derived from NGVS imaging.
In practice, this assumption means that half of the GCs associated with a given galaxy are found within 1.5 times the galaxy effective radius (e.g., \citealp{Lim18}). Under this assumption, the number of GCs within this aperture was counted (after applying a background correction) and then doubled to arrive at an estimate of the total number of observable GCs associated with each galaxy. In some cases, no GC candidate is detected within $1.5 R_{e,gal}$, so we expanded the aperture (up to $5R_e$) to have at least one GC candidate within the aperture. To correct for the spatial coverage of these enlarged apertures, we assume the GC number density profile to be a \citet{Ser63} function with $R_{e,GCS} = 1.5 R_{e,gal}$ and S\'ersic index $n=1$. Cases where this was necessary are noted in Tables~\ref{tbl:gcsn} and \ref{tbl:gcsn2}. Although some other studies have fitted GC spatial distributions directly to estimate the total number of GCs, the technique used in this paper provides a homogeneous estimate of GC numbers in a diverse set of galaxies, from those with large GC populations to those containing few or no GCs. Note that the total numbers of GCs in Coma UDGs estimated with this method are consistent with those measured directly from GC profile fitting \citep{Lim18}. We discuss the shape of the GCLF in Section~\ref{sec:gclf}.
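For these enlarged apertures, the correction can be written in terms of the enclosed-number fraction of the assumed $n=1$ profile,
\begin{equation}
f(<\!R) = 1 - \left(1 + \frac{b_1 R}{R_{e,GCS}}\right) e^{-b_1 R / R_{e,GCS}}, \qquad
N_{\rm GC} = \frac{N(<\!R)}{f(<\!R)},
\end{equation}
where $b_1 \simeq 1.678$; note that $f(<\!R_{e,GCS}) = 0.5$, which recovers the factor-of-two correction applied for the default $1.5\,R_{e,gal}$ aperture.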
In practice, our GC selection includes contaminants such as foreground stars, background galaxies, intracluster GCs, and GCs belonging to neighboring galaxies. We therefore estimated a local background by choosing eight local background regions, with each box-shaped region having an area of three square arcminutes. The mean and standard deviation in the number density of GC candidates in these background regions was then used to estimate the total numbers of GCs and their uncertainties in each UDG.
\subsubsection{GC Luminosity Function} \label{sec:gclf}
\begin{figure}
\epsscale{1.2}
\plotone{gclf.png}
\caption{Composite globular cluster luminosity function for Virgo UDGs. The dashed and solid histograms show GC candidates in the {\tt primary} and the {\tt primary+secondary} sample, respectively. Both samples are background subtracted. Light gray filled histograms represent stacked GC luminosity functions for NGC1052-DF2 and NGC1052-DF4. Both NGC~1052 group galaxies appear to have anomalously large and bright GC populations \citep{vanD18,vanD19}. Gaussian functions, fitted to the distributions of the primary and combined samples, are shown as the purple dotted and red long-dashed curves, respectively. For these curves, we fix the Gaussian mean to be $\mu_{g,TO}=-7.2$~mag, and find the best-fit $\sigma_g=1.0$~mag.\label{gclf}}
\end{figure}
GC luminosity functions are typically well described by a Gaussian function. The mean (turn-over) magnitude of the GCLF has a nearly universal value ($\mu_{g,TO}=-7.2$~mag) with little dependence on host galaxy properties, while its dispersion, $\sigma_g$, correlates with the host galaxy luminosity \citep{Jor07,Vil10}.
Recent studies of two UDGs (NGC1052-DF2 \& -DF4; \citealp{vanD18,vanD19}), however, suggest that these galaxies have GCLFs with mean magnitudes about $1.5$ magnitudes brighter than a standard GCLF (although there has been some debate about their distances; \citealt{Tru19}). It would be scientifically interesting if the Virgo UDGs have a significantly different GCLF from other galaxies, but it would also introduce an additional source of uncertainty when we try to estimate the total number of GCs.
To test whether the form of the GCLF in Virgo UDGs is, on the whole, different from a standard GCLF, we have constructed a ``stacked'' GC luminosity function using the {\tt primary} and {\tt primary+secondary} samples. Figure \ref{gclf} shows the composite, background-subtracted GC luminosity functions. These luminosity functions are well fit with Gaussian functions down to our selection limit, and their mean magnitudes are consistent with the universal GC luminosity function. Overall, we do not find any significant excess at the peak luminosity of NGC1052-DF2 and -DF4's GCs. Additionally, although the numbers are small, we do not find any individual GC systems with an obvious LF peak around $M_g\approx-9$~mag. This result suggests that the form of the GCLF in Virgo UDGs is likely similar to those in other low-mass early-type galaxies \citep{Jor07,Mil07,Vil10}.
Ultimately, we adopted a Gaussian GC luminosity function with parameters $\mu_g=-7.2$~mag and $\sigma_g=1.0$~mag, the latter of which was estimated from the stacked {\tt primary} sample GCLF with $\mu_g$ fixed. Our GC selection has a limiting magnitude of $g'\simeq24.5$~mag (at which we are 95\% complete), which is slightly deeper than the turn-over magnitude of the GCLF at the distance of Virgo ($\mu_{g,TO}=23.9$~mag), so we should detect $\sim\!73\%$ of the GCs in a Gaussian distribution. To estimate the full number, we extrapolate the remainder of the GCLF using our assumed Gaussian LF. We note that $\sigma_g=1.0$~mag is consistent with what is seen in low-mass dwarfs \citep[$0.5\lesssim\sigma_g\lesssim1.3$~mag;][]{Jor07,Mil07}.
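The quoted detection fraction follows directly from integrating the Gaussian GCLF to the selection limit:
\begin{equation}
f_{\rm det} = \Phi\!\left(\frac{g'_{\rm lim} - \mu_{g,TO}}{\sigma_g}\right)
            = \Phi\!\left(\frac{24.5 - 23.9}{1.0}\right) \simeq 0.73,
\end{equation}
where $\Phi$ is the standard normal cumulative distribution function; the total GC number is then obtained as $N_{\rm GC} = N(g' \le 24.5)/0.73$.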
\subsubsection{GC Specific Frequencies}
\begin{figure*}[th]
\epsscale{1.2}
\plotone{mvsn.png}
\caption{Globular cluster specific frequencies, $S_N$, for Virgo cluster UDGs and comparison galaxies plotted as a function of absolute $V$-band magnitude. On the top axis, we show the corresponding stellar mass assuming $M/L_V=1.5$.
The red filled circles show the {\tt primary} UDG sample. The red solid line and purple dashed line display the mean values of specific frequencies for the {\tt primary} sample and combined {\tt primary+secondary} sample, respectively. The red shaded region shows the uncertainty in the running mean for the {\tt primary} sample. The magenta squares and dotted magenta line show individual and mean $S_N$ values for Coma UDGs from \citet{Lim18}.
The black triangles, diamonds, and crosses show ``normal" early-type galaxies from \citet{Pen08}, \citet{Mil07}, and \citet{Geo10}, respectively. The solid black curve shows the predicted trend for $S_N$ assuming that the number of GCs scales with the host galaxy's inferred halo mass following \citet{Har17}, which assumes the stellar-to-halo mass relation (SHMR) of \citet{Hud15}. (Note that $S_N$ can formally be negative due to background subtraction.)\label{mvsn}}
\end{figure*}
With the total number of GCs in hand, we can then compute the GC specific frequency, $S_N$ \citep{Har81}. To estimate the $V$-band magnitude of the galaxies, we use our $g'$-band magnitudes with an assumed $(g'-V)=0.1$~mag color for all galaxies. GC specific frequencies for the {\tt primary} sample of UDGs are compiled in Table~\ref{tbl:gcsn}, and for the {\tt secondary} sample of UDGs are compiled in Table~\ref{tbl:gcsn2}.
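For reference, $S_N$ is the luminosity-normalized GC count of \citet{Har81},
\begin{equation}
S_N = N_{\rm GC}\,10^{0.4\,(M_V + 15)},
\end{equation}
i.e., the number of GCs scaled to a host luminosity of $M_V = -15$~mag. A UDG with $M_V = -11$~mag hosting two GCs, for example, has $S_N = 2 \times 10^{1.6} \approx 80$.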
Figure~\ref{mvsn} compares the specific frequencies for Virgo UDGs in the {\tt primary} sample (red symbols) to those found in other types of galaxies and environments (i.e., Fornax, Coma and nearby dwarfs). In this plot, specific frequencies for high- and intermediate-luminosity early-type galaxies from the ACS Virgo Cluster Survey \citep[ACSVCS;][]{Cot04} are shown as triangles \citep{Pen08}, lower-mass early-type dwarfs from \citet{Mil07} and \citet{Geo10} as diamonds and crosses, and Coma cluster UDGs from \citet{Lim18} as magenta squares. Although the uncertainties in $S_N$ at such low stellar masses are large for any one galaxy, the smoothed running mean (red line) does show a steady rise toward low masses, with $\langle S_N\rangle\sim70$ at $M_V=-11$~mag ($M_\star\sim3\times10^6 M_\odot$). We also show the running mean for the combined {\tt primary+secondary} sample (purple dashed line).
The Virgo cluster UDGs on average have higher $S_N$ than classical dwarf galaxies in Virgo and Fornax, but lower than Coma cluster UDGs at comparable luminosities. The combined sample has a lower mean $S_N$ at low masses, suggesting that the {\tt secondary} sample galaxies are more like classical dwarfs. Fornax cluster UDGs have shown a similar trend \citep{Pro19b}. \citet{Lim18} also found that Coma cluster UDGs have systematically higher $S_N$ than classical dwarf counterparts at fixed stellar mass. In all cases, however, the scatter is large, with some UDGs having no GCs, and some having extremely high $S_N$. A direct comparison between the Virgo and Coma UDG populations is challenging given that many of the Virgo UDGs are fainter than those of Coma UDGs, and the extreme faintness of the Virgo systems means that the measurement of their effective radii is more difficult; as a result, the specific frequencies for Virgo UDGs have larger uncertainties than their Coma counterparts.
Previous observational and theoretical studies \citep{Pen08,Mis16,Lim18} have shown that low mass galaxies in denser environments can have higher $S_N$. It is possible that similar processes may explain the difference in $S_N$ between the Coma UDGs and the ones in the Virgo and Fornax clusters.
There is increasing evidence that the number of GCs (or the total mass of the GC system) correlates better with total galaxy halo mass than with stellar mass \citep[e.g.,][]{Bla97,Pen08,Har17}, although the reason why this might be true is still under debate \citep{Boy17,ElB19,Cho19}. Very few galaxies have both GC numbers and directly measured halo masses. What is typically done is to assume a stellar-to-halo mass relation (SHMR), and then estimate the total mass fraction of GCs. We show the implications of this assumed relation with the black curve in Figure~\ref{mvsn}. This curve is calculated in the same way as in the study of \citet{Har17}, using the SHMR from \citet{Hud15} evolved to redshift zero. We then extrapolate the SHMR to $10^6 M_\odot$ and assume $\eta = 2.9\times10^{-5}$, where $\eta$ is the number of GCs per solar mass of halo mass \citep{Bla97}.
We note, however, that the SHMR of \citet{Hud15} is not calibrated below $\sim\!\!2\times10^{10} M_\odot$ for quenched galaxies, so our use of this relation is an extrapolation over four orders of magnitude. Additionally, this and most other SHMRs are for centrals, while most of the data used to calibrate $\eta$ are from satellites. Assuming different SHMRs, like those for satellites in Virgo \citep{Gro15}, may be informative, but we would then need to re-estimate $\eta$ using the appropriate data. We leave a more involved discussion of this subject for a future paper, and simply note that the mean trend of $S_N$ with $M_V$ for UDGs is generally consistent with $\eta = 2.9\times10^{-5}$ down to low mass when the \citet{Hud15} SHMR is extrapolated to low masses. However, given the scatter in $S_N$ for the UDGs, one should be careful in trying to invert this relation to estimate individual UDG halo masses from GC numbers or masses.
\subsubsection{GC Color Distributions}
\begin{figure}
\epsscale{1.2}
\plotone{gccol_udg.png}
\caption{Composite globular cluster $(g-i)_0$ color distribution for Virgo UDGs. The solid and dashed-dot histograms show GC candidates in the {\tt primary+secondary} sample and GC candidates in the adjacent backgrounds, respectively. The solid histograms are background subtracted. {\it Panel (a)} and {\it Panel (b)} show the bright UDGs ($-18 < M_g \leq -13$) and faint UDGs ($-13 < M_g \leq -10$), respectively. Two Gaussian functions, fitted simultaneously to the background-subtracted distribution, are shown as the blue and red dotted curves, but {\it Panel (b)} only has a blue curve.
\label{gccol}}
\end{figure}
The colors of GCs, as rough proxies for their metallicities or ages, have been characterized across a wide range of galaxies \citep[e.g.,][]{Lar01,Pen06}, and can provide insights into the formation and evolution of their host galaxies. GC metallicities, represented in a crude way by their broadband colors under the assumption of old ages, have been observed to have a wide spread in massive galaxies, with both metal-poor (blue, possibly accreted) and metal-rich (red, possibly formed {\it in situ}) populations. The mean colors and relative fractions of both of these populations have been shown to be correlated with the stellar mass of the host, although the slope is much steeper for the metal-rich GCs, especially when GC color gradients are taken into account \citep{Liu11}. The color distributions of GCs in UDGs, both their mean colors and the relative fractions of blue and red GCs, have the potential to tell us about their chemical enrichment history, although the exact translation of colors to metallicity in old GCs is still a subject of debate \citep[e.g.,][]{Pen06,Yoo06,Bla10,Villaume19,Fah20}.
Unfortunately, due to the small number of GCs typically associated with any individual UDG --- and the lack of multi-color imaging in a number of previous surveys --- few UDGs have had their GC color distributions studied in any detail, with DF17 \citep{Bea16b} and DF44 \citep{vanD17} in the Coma Cluster being the exceptions. In DF17, the mean GC color of $\langle(g-z)\rangle = 1.04$, roughly equivalent to $\langle(g'-i')\rangle \approx 0.79$, is a bit redder than expected for a galaxy of its stellar mass, and similar to those of GCs in galaxies with virial masses of $\sim\!10^{11}M_{\odot}$.
Although some of the Virgo UDGs studied here have large GC populations relative to their stellar luminosity, their low absolute numbers make difficult a case-by-case study of their color distributions. We have therefore constructed combined GC color distributions using a sample of 36 UDGs from the combined {\tt primary+secondary} sample that have GC detections within $1.5R_e$. The only galaxy with GC candidates that is excluded is NGVSUDG-A09 (VCC~1249), because its close proximity to M49 makes its GC color distribution extremely uncertain. We further divide this sample into two bins of luminosity ($-18 < M_g \leq -13$ and $-13 < M_g \leq -10$, the dividing point roughly corresponding to $M_\star\sim2\times10^7 M_\odot$), with 19 and 17 galaxies in the ``bright'' and ``faint'' samples, respectively. Figure \ref{gccol} presents the resulting composite background-subtracted $(g'-i')_0$ GC color distributions.
The ``bright'' UDG sample shows an apparent bimodality in GC colors similar to what is seen in more massive galaxies. We fitted this distribution with two Gaussian functions using the Gaussian Mixture Modeling (GMM) code \citep{Mur10}. GMM provides best-fit Gaussian parameters as well as a $D$ value that indicates the separation of the two peaks relative to their widths. Fitting a pair of Gaussians is statistically justified when $D>2$, and the $D$ value for our composite GC color distribution is $3.9\pm1.2$. The two Gaussian peaks are located at colors of $(g'-i')_0=0.72\pm0.01$~mag and $(g'-i')_0=1.10\pm0.01$~mag. In terms of total numbers, we find 87\% and 13\% of the GCs belonging to the blue and red populations, respectively. The blue peak is consistent with the peak of the GC color distribution of the Fornax UDGs \citep{Pro19b}. The ``faint'' UDG sample has only a single peak of blue GCs whose mean color is $(g'-i')_0 = 0.66\pm0.01$~mag.
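In outline, such a two-Gaussian decomposition and the $D$ statistic can be sketched as below. This is a minimal expectation-maximization illustration of the method, not the \citet{Mur10} code itself (which also bootstraps parameter uncertainties and supports a homoscedastic mode):

```python
import math

def fit_gmm2(xs, n_iter=300):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization."""
    xs = sorted(xs)
    n = len(xs)
    mu = [xs[n // 10], xs[(9 * n) // 10]]   # crude percentile initialization
    m = sum(xs) / n
    v0 = sum((x - m) ** 2 for x in xs) / n  # start both components at the sample variance
    var, w = [v0, v0], [0.5, 0.5]
    for _ in range(n_iter):
        resp = []
        for x in xs:  # E-step: responsibility of each component for each point
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1] + 1e-300        # guard against underflow
            resp.append([p[0] / s, p[1] / s])
        for k in (0, 1):  # M-step: update weights, means, variances
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-8)
    return mu, var, w

def d_statistic(mu, var):
    """Peak separation D = |mu1 - mu2| / sqrt((s1^2 + s2^2)/2); D > 2 favors bimodality."""
    return abs(mu[0] - mu[1]) / math.sqrt((var[0] + var[1]) / 2)
```

On synthetic data with well-separated blue and red peaks and an 87/13 split, this recovers the input means and yields $D$ comfortably above the $D>2$ threshold.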
There are two interesting results here. First, the data suggest the existence of a significant (if small) population of red GCs in the ``bright'' UDG sample. Upon closer inspection, a majority of the red GCs are in the brightest UDGs in the sample, like NGVSUDG-05, -09, -26, and -A10. A couple of these (NGVSUDG-09 and -A10) show disturbed isophotes or shells indicating a possible interaction or post-merger state.
Second, we can compare the mean colors of the different populations to each other and to those seen in normal early-type galaxies. Comparing the mean colors of the blue peaks shows a clear difference, where the fainter galaxies have a blue GC population that is bluer by $\Delta (g'-i')_0 = 0.06$~mag. To compare with the relations in \citet{Liu11}, we transform to the HST/ACS filter system using GCs that are in both the NGVS and ACS Virgo observations:
\begin{equation}
(g'-z')_{0,ACS} = 1.65\times(g'-i')_{0,NGVS} - 0.27
\end{equation}
For the ``bright'' sample, the blue and red peaks thus have mean colors of $(g'-z')_0=0.92\pm0.02$~mag and $(g'-z')_0=1.54\pm0.16$~mag, respectively, which we can compare to Figure~6 in \citet{Liu11}. Despite having a low red GC fraction that is consistent with what we would expect for galaxies at this mass \citep{Pen08}, we find that both the blue and red GCs in the ``bright'' UDG sample have mean colors that are much redder than expected for the stellar mass of their hosts. These UDGs are $100\times$ less massive than the ACSVCS galaxies that host GCs with similar colors. None are obviously near massive galaxies whose more metal-rich GC systems could be sources of contamination. The ``faint'' sample has a single peak at $(g'-z')_0=0.82\pm0.02$~mag. In contrast to the ``bright'' sample, it has very blue GCs, with a mean color bluer than those in the least massive ACSVCS galaxies, and consistent with being an extension of the previously established relationship between blue GC mean color and galaxy luminosity.
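As a quick consistency check, applying the transformation above to the fitted $(g'-i')_0$ peak colors reproduces the quoted ACS-system values to within rounding:

```python
def ngvs_to_acs(g_minus_i):
    # (g'-z')_ACS = 1.65 * (g'-i')_NGVS - 0.27, the transformation given above.
    return 1.65 * g_minus_i - 0.27

# Fitted NGVS peak colors from the text:
print(ngvs_to_acs(0.72))  # blue peak, bright sample -> 0.918 ~ 0.92
print(ngvs_to_acs(1.10))  # red peak,  bright sample -> 1.545 ~ 1.54
print(ngvs_to_acs(0.66))  # blue peak, faint sample  -> 0.819 ~ 0.82
```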
We have inspected the red GC candidates in the individual UDGs. Typically, just one or two extreme objects, with $(g'-i')_0\gtrsim1.0$, are found in any individual UDG, so this population is not free from uncertainties due to small number statistics and imperfect background subtraction. Moreover, a number of red GCs are located far from their galaxy centers (i.e., 11 of the 15 red GCs in {\tt primary} UDGs are found beyond the effective radius of their host galaxy). We suspect some of these objects may be due to residual contamination by background objects. Radial velocity confirmation of membership and spectroscopic ages and metallicities for these objects will be needed to establish their true nature.
\subsubsection{Nuclear Star Clusters}
\citet{San19} studied the fraction of nucleated galaxies in the Virgo core region, and showed that it varies from $f_{nuc}\approx0$ to $f_{nuc}\approx90\%$ depending on the stellar mass of the host galaxy. We have examined the evidence for stellar nuclei in the NGVS isophotal models and through visual inspection, and find that 3-4 of the 26 UDGs in our {\tt primary} sample appear to be nucleated\footnote{These are NGVSUDG-01, NGVSUDG-06, NGVSUDG-26 and, possibly, NGVSUDG-04. Likewise, 3-5 galaxies in our {\tt secondary} sample may also be nucleated: i.e., NGVSUDG-A08, NGVSUDG-A11, NGVSUDG-A15 and, possibly, NGVSUDG-A03 and NGVSUDG-A10.}. Throughout the entire cluster, the overall UDG nucleation fraction is therefore $f_{nuc,UDG}= (6-9)/44 \simeq 14-20\%$. The nucleation fraction in the core is similar: i.e., 2 of the 10 UDGs (20\%) belonging to the combined {\tt primary} and {\tt secondary} sample appear to be nucleated. For comparison, the nucleation fraction of ``normal" galaxies, with luminosities similar to the UDGs, ranges from $20-60\%$ \citep{San19}. Thus, the UDGs may have a slightly lower nucleation fraction than other Virgo galaxies, consistent with our recent findings in Coma \citep{Lim18}.
\section{Discussion}
\label{discussion}
\subsection{The Uniqueness of UDGs as a Population}
\label{unique}
Based on the residuals from the mean scaling relations observed in the core region (see Figures~\ref{dSBe}, \ref{dmSBe} and \ref{dRe}), the 10 UDGs in the central $\sim4$ deg$^{2}$ ({\tt primary} and {\tt secondary} samples combined) seem to be marginally distinct, slightly separated from the population of $\sim$400 ``normal" galaxies. However, a different picture emerges when one considers the full sample of $\sim$3700 galaxies that are distributed throughout the cluster. With an order-of-magnitude larger sample size, the gaps in effective radius and surface brightness are no longer apparent, and the UDG candidates (26 or 44 galaxies, depending on which sample is used) seem to occupy the tails of Gaussian-like distributions in structural parameters. While it is entirely possible that scaling relations of normal and diffuse galaxies depend on environment --- and perhaps behave differently for the two populations --- it is also possible that the gaps seen in the core region are an artifact of the smaller sample. Our provisional conclusion is that, when one considers the cluster in its entirety, UDGs are simply galaxies that occupy the LSB tail of the full population. Of course, this interpretation does not rule out the possibility that the galaxies that populate this LSB tail do so because they have been particularly susceptible to physical processes, such as tidal heating and disruption (e.g. \citealp{Car19}), that may give rise to at least some UDGs.
\subsection{The Spatial Distribution of UDGs as a Clue to their Formation}
\label{spatial}
Our study differs from most previous UDG surveys in that we target a single environment with high and nearly uniform photometric and spatial completeness: i.e., roughly speaking, the NGVS reaches detection limits of $g\sim25.9$~mag and $\mu_g \sim 29$ mag~arcsec$^{-2}$, over the entire $\sim\!\!100$ deg$^2$ region contained within the cluster virial radius. Thus, it is possible with the NGVS to explore the spatial distribution of UDGs within the cluster, compare it to that of normal galaxies, and use this information to critically assess formation scenarios.
Figure~\ref{cumudg} shows one of the principal findings of this paper: {\it the Virgo UDG candidates are more spatially concentrated toward the central region than other cluster members}. This is true for both the {\tt primary} and the combined {\tt primary} and {\tt secondary} samples. This finding is noteworthy because previous studies --- often relying on incomplete or heterogeneous data --- have reached conflicting conclusions on whether or not UDGs favor the dense central regions of rich clusters (e.g. \citealp{vanB16,Man18}). It is worth bearing in mind that the UDG candidates in Virgo extend to significantly lower luminosities and surface brightness levels than those uncovered in previous surveys (e.g., fully half of the Virgo UDGs are fainter than $\langle\mu\rangle_{e}$ = 26 mag arcsec$^{-2}$). Deeper imaging and/or expanded spatial coverage of other clusters, with a consistent definition of the UDG class, will be required to know if Virgo is unique in this sense.
Unlike previous UDG studies, the selection criteria used in this study rely on the {\it empirical trends} between luminosity and structural parameters ($R_e$, $\Sigma_e$, $\langle\Sigma\rangle_e$) defined by a nearly complete sample of cluster members in the core region, as well as the {\it observed scatter} about these mean relations. Interestingly, we find the core to be overabundant in UDG candidates relative to the rest of the cluster. For example, 10/44 (23\%) of the galaxies in our combined {\tt primary} and {\tt secondary} samples are found in the central $\sim 4$~deg$^2$. Although we cannot rule out the possibility that some of these candidates are being seen in projection against the cluster core, there are reasons to believe the overall enhancement is real. While it is difficult to assess the importance of projection effects without {\it a priori} knowledge of the three-dimensional (volume) density distribution, we can nevertheless test the possibility that the observed excess is due to random chance. To illustrate, a total of 2101 cluster galaxies have luminosities in the range defined by the combined {\tt primary} and {\tt secondary} samples. In 5000 random selections of 44 of these galaxies, 10 or more fall in the central 4~deg$^2$ in only $1.5\%$ of the cases. This suggests that the observed central enhancement is real, and not purely the result of projection effects.
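This random-selection test is straightforward to reproduce in outline. The number of the 2101 luminosity-matched galaxies that actually lie in the central 4~deg$^2$ is not quoted above, so {\tt n\_core} below is a hypothetical placeholder and the printed fraction is illustrative rather than the quoted $1.5\%$:

```python
import random

def excess_probability(n_total, n_core, n_select, threshold, n_trials=5000, seed=0):
    """Fraction of random draws of n_select galaxies (out of n_total, of which
    n_core lie in the core region) that place at least `threshold` in the core."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        sample = rng.sample(range(n_total), n_select)
        # by convention, indices 0 .. n_core-1 are the core galaxies
        hits += sum(1 for i in sample if i < n_core) >= threshold
    return hits / n_trials

# n_core = 240 is a hypothetical placeholder, not a measured value
print(excess_probability(n_total=2101, n_core=240, n_select=44, threshold=10))
```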
This central concentration of UDGs in the Virgo core region is puzzling. \citet{Ron17} investigated a possible dwarf origin for UDGs using the Millennium II cosmological simulation and Phoenix simulations of rich clusters. Comparing to a variety of observations, they concluded that a dwarf origin for UDGs is feasible since the predicted objects match the observations in a number of cases, including their spatial distribution and apparent absence in the central regions of clusters like Coma. Furthermore, tidal disruption modeling within the hydrodynamical simulations IllustrisTNG showed that UDGs might have a dual origin: a sub-population of dwarf-like halos with late infall, in agreement with \citet{Ron17}, and another sub-population resulting from tidal disruption of more massive galaxies with remnants consistent with the properties of UDGs \citep{Sal20}. It would be worthwhile to revisit these theoretical results in light of our Virgo observations. For example, \citet{Sal20} predict that a tidal origin might be confirmed by UDGs showing a combination of low velocity dispersion and an enhanced stellar metallicity. Encouragingly, the distribution of UDGs within these Virgo-like clusters in IllustrisTNG is also peaked towards the cluster centers, in good agreement with our findings in Virgo.
We note that the apparent lack of tidal features in some UDG samples may not rule out tidal disruption as an important formation process. For instance, our previous study of the kinematics of GCs in VLSB-D shows clear evidence for on-going disruption \citep{Tol18} that could have been missed from shallower surveys or lack of kinematics information for the GCs (and see \S\ref{morphology} and Appendix~B for some specific evidence for this UDG and others). It is important to bear in mind that the mere {\it detection} of UDGs can be challenging given their low surface brightness, and faint tidal features even more so.
\subsection{Clues from Morphologies}
\label{morphology}
We now pause to consider the question of UDG morphologies, and what clues they may hold for formation models.
Thumbnail images for our sample of UDGs can be found in Figure~\ref{udgthumb_primary} and \ref{udgthumb_secondary}. Although the majority of these galaxies are, by their nature, faint and diffuse objects, a careful inspection of the NGVS images, combined with an analysis of the best-fit models from ELLIPSE/BMODEL and/or GALFIT (see \S\ref{data}), offers some clues to the origin of UDGs, and their (non)uniformity as a class. For instance, the 44 galaxies belonging to our {\tt primary} and {\tt secondary} samples exhibit a wide range in axial ratio: i.e., a number of objects are highly flattened but many others have a nearly circular appearance. This na\"ively suggests that a tidal origin, which may be a viable explanation for some UDGs, is unlikely to account for all members of this class.
Nevertheless, a tidal origin seems likely, if not certain, for some UDGs. As noted in the Appendix, we see evidence for tidal streams associated with at least four galaxies: NGVSUDG-01 (VCC197), NGVSUDG-A07 (VCC987), NGVSUDG-A08 and NGVSUDG-A09 (VCC1249). Within this small subsample, there is one object that belongs to an infalling group located on the cluster periphery (VCC197; \citealt{Pau13}), two galaxies that are deep within the cluster core (VCC987 and NGVSUDG-A08), and one low-mass, star-forming galaxy (VCC1249; \citealt{Arr12}) that is tidally interacting with M49, the brightest member of the cluster.
A number of other UDGs clearly have disturbed morphologies --- such as twisted or irregular isophotes, shells and ripples --- that are indicative of post-merger, or post-interaction, galaxies: i.e., NGVSUDG-02 (VCC360), NGVSUDG-09 (VCC1017), NGVSUDG-10 (VCC1052) and NGVSUDG-A10 (VCC1448). Additionally, a handful of UDGs --- most notably NGVSUDG-08 and NGVSUDG-A14 --- may be members of LSB pairs, while at least one object --- NGVSUDG-A11 --- shows clear evidence for a faint spiral pattern at large radius, despite its previous classification as a dE0,N galaxy \citep{Bin85}.
In short, their morphologies demonstrate that at least some of the objects found in the LSB tail of the ``normal" galaxy population probably owe their diffuse nature to physical processes --- such as tidal interactions or low-mass mergers --- that are at play within the cluster environment. Likewise, the diversity in their morphologies provides {\it prima facie} evidence that no single process has given rise to all objects within the UDG class. It will be valuable to investigate UDG morphologies more closely for a subset of the Virgo objects, ideally with deep, high-resolution images that can be used to map ultra-LSB features using individual RGB stars.
\section{Summary}
\label{summary}
As part of the Next Generation Virgo Cluster Survey \citep{Fer12}, we have identified and characterized UDGs in the nearby Virgo Cluster. Employing a new, quantitative definition for UDGs based on the structural parameters of galaxies in the Virgo Cluster core (i.e., luminosity, effective radius, effective surface brightness and mean effective surface brightness; \citealt{Cot20}), we have identified candidate UDGs throughout the cluster, from the core to the periphery. In our analysis, we define two UDG samples: (1) a {\tt primary} sample of 26 candidates selected as LSB outliers in each of three scaling relations, which ensures {\it high purity}; and (2) a combined {\tt primary} and {\tt secondary} sample of 44 UDGs which was assembled to ensure {\it high completeness}. Roughly half of these objects (21/44) are previously-cataloged galaxies, including eight galaxies previously identified as dwarfs of very large size and low surface brightness by \citet{Bin85}.
Our principal conclusions are:
\begin{enumerate}
\item[{$\bullet$}] In a 4 deg$^{2}$ region in the Virgo core, which was used to establish our UDG selection criteria, we find 10 UDG candidates among a sample of 404 galaxies (an occurrence rate of $\sim$2.5\%). These candidates appear marginally distinct in their structural properties: i.e., separated by small gaps in effective radius and surface brightness from the population of ``normal" galaxies. However, when one considers the full sample of 3689 member galaxies distributed throughout the cluster, this separation vanishes.
\item[{$\bullet$}] We compare the spatial distribution of our UDG candidates to ``normal" Virgo galaxies, and find the UDGs to be more centrally concentrated than the latter population, contrary to some findings in other clusters (e.g., \citealp{vanB16,Man18}). A significant number of UDGs reside in the core region, including some of the faintest candidates. Using the combined sample of 44 {\tt primary} and {\tt secondary} UDGs, 10 objects, or 23\% of the entire UDG population, are found in the core region (which represents less than 4\% of the cluster by area). Although we cannot rule out the possibility that some of these objects are seen in projection against the cluster core, the central enhancement is likely real and may be related to strong tidal forces in this region, or perhaps to the earlier infall expected of objects in this region.
\item[{$\bullet$}] Many of the UDG candidates in Virgo are exceptionally faint, and they expand the parameter space known to be occupied by UDGs. The faintest candidates have mean effective surface brightnesses of $\langle\mu\rangle_e \sim$ 29 mag arcsec$^{-2}$ in the $g$-band. Previous imaging surveys targeting UDGs in other environments have typically been limited to candidates brighter than $\langle\mu\rangle_e =$ 27.5 mag arcsec$^{-2}$ in the $g$-band. More than half of our Virgo UDG candidates are fainter than this.
\item[{$\bullet$}] We have carried out a first characterization of the GC systems of these galaxies. Although a direct comparison between the Virgo UDGs and those in other environments is complicated by the fact that the samples differ in luminosity and surface brightness, we find the Virgo UDGs to have GC specific frequencies that are slightly lower than those in Coma UDGs at comparable luminosities, yet somewhat elevated compared to ``normal" early-type dwarf galaxies. Consistent with recent findings in the Coma Cluster, the Virgo UDGs appear to show a wide range in their GC content. The mean $S_N$ of Virgo UDGs increases with decreasing stellar mass, roughly consistent with the expectation from a constant scaling between $N_{GC}$ and halo mass.
\item[{$\bullet$}] The GCs in these UDGs are predominantly blue. UDGs fainter than $M_g=-13$ have entirely blue, metal-poor GC populations, while UDGs brighter than $M_g=-13$ have $\sim\!13\%$ of their clusters in a red, metal-rich population. Moreover, the mean colors of both the blue and red GCs in the bright sample are significantly redder than in galaxies of comparable luminosity. The mean color of the blue GCs in the faint sample is consistent with an extrapolation of known scaling relations. The number of red GC candidates is small, and spectroscopy will be needed to confirm membership and establish their true nature.
\item[$\bullet$] In terms of morphology, there is clear diversity within the UDG class, with some objects showing evidence of a tidal origin while others appear to be post-merger or post-interaction systems. This suggests that no single process has given rise to all objects within the UDG class.
\item[$\bullet$] Weighing the available evidence --- and especially the apparent continuities in the size and surface brightness distributions (at fixed luminosity) when the cluster is considered in its entirety --- we suggest that UDGs may simply be those systems that occupy the extended tails of the galaxy size and surface brightness distributions. The physical mechanisms that shape the low (and high) surface brightness tails of the galaxy distributions remain interesting topics for future study.
\end{enumerate}
Some obvious extensions of this work present themselves. Radial velocity measurements for GCs in these UDGs will make it possible to measure dynamical masses and dark matter content. Deep imaging from the Hubble Space Telescope (and/or future space-based imaging telescopes) will allow the detection of individual RGB stars in these galaxies, and allow some key parameters to be measured, such as distance, chemical abundance, mean age and surface density profiles. Such images would additionally enable a search for LSB tidal features, and perhaps provide insight into the role of tidal forces in the formation of these extreme galaxies.
\acknowledgments
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1A6A3A03011821).
EWP acknowledges support from the National Natural Science Foundation of China (11573002), and from the Strategic Priority Research Program, ``The Emergence of Cosmological Structures,'' of the Chinese Academy of Sciences (XDB09000105). C.L. acknowledges support from the National Natural Science Foundation of China (NSFC, Grant No. 11673017, 11833005, 11933003, 11203017).
\section{Discrepancy with respect to Arbitrary Convex Bodies}
Our main result of this section is the following theorem.
\Banaszczyk*
\subsection{Potential Function and Algorithm}
As in the previous section, it is without loss of generality to assume that $\mathsf{p}$ is $\kappa$-dyadic, where $\kappa = 8 \lceil \log(nT)\rceil$. For any $k \in [\kappa]$, recall that $\Pi_k$ denotes the projection matrix onto the eigenspace of $\mathbf{\Sigma}$ corresponding to the eigenvalue $2^{-k}$ and $\Pi = \sum_{k=1}^{\kappa} \Pi_k$. Further, let us also recall that $\Pi_\mathsf{err}$ is the projection matrix onto the subspace spanned by eigenvectors corresponding to eigenvalues of $\mathbf{\Sigma}$ that are at most $2^{-\kappa}$. We also note that $\dim(\mathrm{im}(\Pi_k)) \le \min\{2^{k},n\}$ since $\mathrm{Tr}(\mathbf{\Sigma}) \le 1$.
Our algorithm to bound the discrepancy with respect to an arbitrary symmetric convex body $K \subseteq \mathbb{R}^n$ with $\gamma_n(K) \ge 1/2$ will use a greedy strategy with a similar potential function as in \S\ref{sec:arbitTestVectors}.
Let $\mathsf{p}_z$ be a distribution on \emph{test vectors} in $\mathbb{R}^n$ that will be specified later. Define the noisy distribution $\mathsf{p}_x = \mathsf{p}/2 + \mathsf{p}_z/2$, \emph{i.e.}, a random sample from $\mathsf{p}_x$ is drawn from $\mathsf{p}$ or $\mathsf{p}_z$ with probability $1/2$ each.
At any time step $t$, let $d_{t} = \chi_1 v_1 + \ldots + \chi_t v_t$ denote the current discrepancy vector after the signs $\chi_1, \ldots, \chi_t \in \{\pm1\}$ have been chosen. Set $\lambda^{-1} = 100 {\kappa}\log(nT)$, and define the potential
\[ \Phi_t = \Phi(d_t) := \sum_{k\in [\kappa]} \mathbb{E}_{x\sim \mathsf{p}_x}\left[\exp\left(\lambda ~d_{t}^\top \Pi_k x \right)\right].\]
When the vector $v_t$ arrives, the algorithm chooses the sign $\chi_t$ that minimizes the increase $\Phi_t - \Phi_{t-1}$.
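In outline, the greedy step can be implemented as follows. This is purely an illustrative sketch: each projection $\Pi_k$ is passed as an explicit symmetric matrix, and the expectation over $\mathsf{p}_x$ is replaced by an average over a finite sample of test vectors:

```python
import math

def mat_vec(M, v):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def potential(d, projections, test_vectors, lam):
    # Phi(d) = sum_k E_x[exp(lam * d^T Pi_k x)]. Since Pi_k is symmetric,
    # d^T Pi_k x = (Pi_k d) . x, so we project d once per subspace.
    total = 0.0
    for P in projections:
        Pd = mat_vec(P, d)
        total += sum(math.exp(lam * sum(a * b for a, b in zip(Pd, x)))
                     for x in test_vectors) / len(test_vectors)
    return total

def greedy_sign(d, v, projections, test_vectors, lam):
    # Choose chi_t in {+1, -1} minimizing the increase Phi(d +/- v) - Phi(d).
    plus = [a + b for a, b in zip(d, v)]
    minus = [a - b for a, b in zip(d, v)]
    return (+1 if potential(plus, projections, test_vectors, lam)
            <= potential(minus, projections, test_vectors, lam) else -1)
```

For instance, with a single identity projection and symmetric test vectors, the chosen sign moves the discrepancy vector back toward the origin, as the exponential potential intends.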
\paragraph{Test Distribution.} To complete the description of the algorithm, we need to choose a suitable distribution $\mathsf{p}_z$ on test vectors to give us control on the norm $\|\cdot\|_K = \sup_{y \in K^\circ} \ip{\cdot}{y}$. For this, we will use generic chaining.
First let us denote by $H_k = \mathrm{im}(\Pi_k)$ the linear subspace that is the image of the projection matrix $\Pi_k$ where the subspaces $\{H_k\}_{k\in [\kappa]}$ are orthogonal and span $\mathbb{R}^n$. Moreover, recall that $\dim(H_k) \le \min\{2^k,n\}$.
Let us denote by $K_k = K \cap H_k$ the slice of the convex body $K$ with the subspace $H_k$. \pref{prop:slicemeasure} implies that $\gamma_{H_k}(K) \ge 1/2$ for each $k\in [\kappa]$ and combined with \pref{prop:width} this implies that $K^\circ_k := (K_k)^\circ = \Pi_k (K^\circ)$ satisfies $\mathsf{diam}(K^\circ_k) \le 4$ and $w_{H_k}(K^\circ_k) \le 3/2$ for every $k$.
Consider $\epsilon$-nets of the polar bodies $K^\circ_k$ at geometrically decreasing dyadic scales. Let
\[ \eps_\mathsf{min}(k) = 2^{-\left\lceil \log_2\left(\frac{1}{10\lambda}\sqrt{\dim(H_k)} \right)\right\rceil} \text{ and } \eps_\mathsf{max}(k) = 2^{-\log_2 \lceil 1/\mathsf{diam}(K_k^\circ) \rceil},\] be the finest and the coarsest scales for a fixed $k$, and for integers $\ell \in [\log_2(1/\eps_\mathsf{max}(k)), \log_2(1/\eps_\mathsf{min}(k))]$, define the scale $\epsilon(\ell,k) = 2^{-\ell}$. We call these \emph{admissible} scales for any fixed $k$.
Note that for a fixed $k\in [\kappa]$, the number of admissible scales is at most $2\log_2(nT)$ since $\mathsf{diam}(K^\circ_k) \le 4$. The smallest scale is chosen because, with high probability, we can always control the Euclidean norm of the discrepancy vector in the subspace $H_k$ to be at most $\lambda^{-1}\log(nT) \sqrt{\dim(H_k)}$ using a test distribution like the one used in the Koml\'os setting.
Let $\mathcal{T}(\ell,k)$ be an optimal $\epsilon(\ell,k)$-net of $K^\circ_k$.
For each $k$, define the following directed layered graph $\mathcal{G}_k$ (recall \figref{fig:chaining}) where the vertices in layer $\ell$ are the elements of $\mathcal{T}(\ell,k)$. Note that the first layer indexed by $\log_2(1/\eps_\mathsf{max}(k))$ consists of a single vertex, the origin. We add a directed edge from $u \in \mathcal{T}(\ell,k)$ to $v \in \mathcal{T}(\ell+1,k)$ if $\|v-u\|_2 \le \epsilon(\ell,k)$. We identify an edge $(u,v)$ with the vector $v-u$ and define its length as $\|v-u\|_2$. Let $\mathscr{E}(\ell,k)$ denote the set of edges between layer $\ell$ and $\ell+1$. Note that any edge $(u,v) \in \mathscr{E}(\ell,k)$ has length at most $\epsilon(\ell,k)$ and since $w_{H_k}(K^\circ_k) \le 3/2$, \pref{prop:sudakov} implies that,
\begin{equation}\label{eqn:edges}
\ |\mathscr{E}(\ell,k)| ~~\le~~ |\mathcal{T}({\ell+1},k)|^2 ~~\le~~ 2^{16/\epsilon(\ell,k)^2}.
\end{equation}
Pick the final test distribution as $\mathsf{p}_z = \mathsf{p}_\mathbf{\Sigma}/2 + \mathsf{p}_y/2$ where $\mathsf{p}_\mathbf{\Sigma}$ and $\mathsf{p}_y$ denote the distributions given in \figref{fig:test}.
\begin{figure}[!h]
\begin{tabular}{|l|}
\hline
\begin{minipage}{\textwidth}
\vspace{1ex}
\begin{enumerate}[label=({\alph*})]
\item $\mathsf{p}_\mathbf{\Sigma}$ is uniform over the eigenvectors $u_1, \ldots, u_n$ of the covariance matrix $\mathbf{\Sigma}$.
\item $\mathsf{p}_y$ samples a random vector as follows: pick an integer $k$ uniformly from $[\kappa]$ and an admissible scale $\epsilon(\ell,k)$ with probability $\dfrac{2^{-2/\epsilon(\ell,k)^2}}{\sum_{\ell} 2^{-2/\epsilon(\ell,k)^2}}$. Choose a uniform vector from $r(\ell,k)^2 \cdot \mathscr{E}(\ell,k)$, where the scaling factor $r(\ell,k) := 1/\epsilon(\ell,k)$.
\end{enumerate}
\vspace{0.1ex}
\end{minipage}\\
\hline
\end{tabular}
\caption{Test distributions $\mathsf{p}_\mathbf{\Sigma}$ and $\mathsf{p}_y$}
\label{fig:test}
\end{figure}
The above test distribution completes the description of the algorithm. Note that adding the eigenvectors will allow us to control the Euclidean length of the discrepancy vectors in the subspaces $H_k$ as they form an orthonormal basis for these subspaces. Also observe that, as opposed to the previous section, the test vectors chosen above may have large Euclidean length as we scaled them. For future reference, we note that the entire probability mass assigned to length $r$ vectors in the support of $\mathsf{p}_y$ is at most $2^{-2r^2}$ where $r \ge 1/4$.
\subsection{Potential Implies Low Discrepancy}
The test distribution $\mathsf{p}_z$ is useful because of the following lemma.
In particular, a $\mathrm{poly}(n,T)$ upper bound on the potential function implies a polylogarithmic discrepancy upper bound on $\|d_t\|_K$.
\begin{lemma} \label{lemma:chaining}
At any time $t$, we have that
\[ \|\Pi_k d_t\|_2 \le \lambda^{-1} \log (4 n \Phi_t)\sqrt{\dim(H_k)} ~~\text{ and }~~ \|d_t\|_K \le O({\kappa} \cdot \lambda^{-1} \cdot \log(nT) \cdot \log(\Phi_t)).\]
\end{lemma}
\begin{proof}
To derive a bound on the Euclidean length of $\Pi_kd_t$, we note that a random sample from $\mathsf{p}_x$ is drawn from the uniform distribution over $\{u_i\}_{i\le n}$ with probability $1/4$, so $\exp\left(\lambda |d_t^\top \Pi_k u_i|\right) \le 4n \Phi_t$
for every $k \in [\kappa]$ and every $i \in [n]$. Since $\{u_i\}_{i \le n}$ also form an eigenbasis for $\Pi$, we get that $|d_t^\top \Pi_k u_i| \le \lambda^{-1} \log (4 n \Phi_t)$ which implies that $\|\Pi_k d_t\|_2 \le \lambda^{-1} \log (4 n \Phi_t)\sqrt{\dim(H_k)}$.
To see the bound on $\|d_t\|_K$, we note that
\begin{equation}\label{eqn:chaining}
\ \|d_t\|_K = \sup_{y \in K^\circ} \ip{d_t}{y} ~\le~ {\sum_{k \in [\kappa]} ~\sup_{y \in K^\circ_k} \ip{\Pi_kd_t}{y}} ~\le~ {\sum_{k \in [\kappa]} \left(\sup_{z \in \mathcal{T}(\ell,k)} |d_t^\top \Pi_k z| + \eps_\mathsf{min}(k)\|\Pi_kd_t\|_2\right)},\\
\end{equation}
where the last inequality holds since $\mathcal{T}(\ell,k)$ is an $\eps_\mathsf{min}(k)$-net of $K^\circ_k$.
By our choice of $\eps_\mathsf{min}(k)$ and the bound on $\|\Pi_k d_t\|_2$ from the first part of the Lemma, we have that $\eps_\mathsf{min}(k)\|\Pi_k d_t\|_2 \le 10 \log(4n \Phi_t)$.\\
To upper bound $\sup_{z \in \mathcal{T}(\ell,k)} \ip{\Pi_kd_t}{z}$, we pick an arbitrary $z \in \mathcal{T}(\ell,k)$ and consider any path from the origin to $z$ in the graph $\mathcal{G}_k$. Let $(u_\ell,u_{\ell+1})$ be the edges of this path for $\ell \in [\log_2(1/\eps_\mathsf{max}(k)),\log_2(1/\eps_\mathsf{min}(k))]$, where $u_\ell = 0$ for $\ell=\log_2(1/\eps_\mathsf{max}(k))$ and $u_\ell=z$ for $\ell = \log_2(1/\eps_\mathsf{min}(k))$. Then $z = \sum_{\ell} w_\ell$ where $w_\ell = (u_{\ell+1}-u_\ell)$. By our choice of the test distribution, the bound on the potential implies the following for any edge $w \in \mathscr{E}(\ell,k)$,
\[\exp\left( \lambda \cdot r(\ell,k)^2 \cdot |d_t^\top\Pi_k w|\right) ~\le~ 2^{2/\epsilon(\ell,k)^2}\cdot |\mathscr{E}(\ell,k)| \cdot 4 \Phi_t ~\le~ 2^{18/\epsilon(\ell,k)^2} \cdot 4 \Phi_t,\]
where the second inequality follows from $|\mathscr{E}(\ell,k)| \le 2^{16/\epsilon(\ell,k)^2}$ in \eqref{eqn:edges}. This implies that for any edge $w \in \mathscr{E}(\ell,k)$,
\[ |d_t^\top \Pi_k w| ~\le~ \lambda^{-1}\log(4 \Phi_t).
\]
Since $z = \sum_{\ell} w_\ell$ and there are at most $2\log_2(nT)$ admissible scales $\ell$, we get that $|d_t^\top \Pi_k z| \le 2\lambda^{-1} \cdot \log_2(nT) \cdot \log (4 \Phi_t)$. Since $z$ was arbitrary in $\mathcal{T}(\ell,k)$, plugging the above bound in \eqref{eqn:chaining} completes the proof.
\end{proof}
The next lemma shows that the expected increase (or drift) in the potential is small on average.
\begin{lemma}[Bounded Positive Drift]\label{lemma:gen-drift-ban} Let $\mathsf{p}$ be supported on the unit Euclidean ball in $\mathbb{R}^n$ and have a sub-exponential tail. There exists an absolute constant $C > 0$ such that if $ \Phi_{t-1} \le T^5$ at any time $t$, then $\mathbb{E}_{v_t \sim \mathsf{p}}[\Phi_t] - \Phi_{t-1} \le C$.
\end{lemma}
Analogous to the proof of \thmref{thm:gen-disc}, \lref{lemma:gen-drift-ban} implies that w.h.p. the potential $\Phi_t \le T^5$ for every $t \in [T]$. Combined with \lref{lemma:chaining}, and recalling that $\kappa = O(\log nT)$ and $\lambda^{-1}=O({\kappa} \log(nT))$, this proves \thmref{thm:gen-disc-ban}. To finish the proof, we prove \lref{lemma:gen-drift-ban} in the next section.
\subsection{Drift Analysis: Proof of \lref{lemma:gen-drift-ban}}
The proof is quite similar to the analysis for Komlós's setting. In particular, we have the following tail bound analogous to \lref{lemma:tail}.
Let $\mathcal G_t$ denote the set of {\em good} vectors $v$ in the support of $\mathsf{p}$ that satisfy $\lambda|d_t^\top \Pi v| \le {\kappa} \cdot \log (4 \Phi_t/\delta)$.
\begin{lemma}\label{lemma:gen-tail-ban}
For any $\delta > 0$ and any time $t$, we have $\mathbb{P}_{v \sim \mathsf{p}}(v \notin \mathcal G_t) \le \delta$.
\end{lemma}
We omit the proof of the above lemma as it is the same as that of \lref{lemma:tail}.
\begin{proof}[Proof of \lref{lemma:gen-drift-ban}]
Recall that our potential function is defined to be
\[ \Phi_t := \sum_{k\in [\kappa]} \mathbb{E}_{x\sim \mathsf{p}_x}\left[\exp\left(\lambda ~d_{t}^\top \Pi_k x \right)\right],
\]
where $\mathsf{p}_x = \mathsf{p}/2 + \mathsf{p}_{\mathbf{\Sigma}}/4 + \mathsf{p}_y / 4$ is a combination of the input distribution $\mathsf{p}$ and test distributions $\mathsf{p}_{\mathbf{\Sigma}}$ and $\mathsf{p}_y$, each constituting a constant mass.
Let us fix a time $t$. To simplify the notation, we denote $\Phi = \Phi_{t-1}$ and $\Delta\Phi = \Phi_t - \Phi$, and denote $d = d_{t-1}$ and $v = v_t$.
To bound the potential change $\Delta \Phi$, we use the following inequality, which follows from a modification of the Taylor series expansion of $\cosh(r)$ and holds for any $a,b \in \mathbb{R}$,
\begin{align}\label{eqn:taylor_exp}
\ \cosh(\lambda a)-\cosh(\lambda b) & \le \lambda \sinh(\lambda b) \cdot (a - b) + \frac{\lambda^2}{2} \cosh(\lambda b) \cdot e^{|a-b|}(a - b)^2.
\end{align}
Note that when $|a-b| \ll 1$, then $e^{|a-b|} \le 2$, so one gets the first two terms of the Taylor expansion as an upper bound, but here we will also need it when $|a-b|\gg 1$.
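For completeness, \eqref{eqn:taylor_exp} follows from the Taylor expansion of $\cosh$ around $\lambda b$: every derivative of $\cosh$ is $\pm\sinh$ or $\pm\cosh$, and hence bounded by $\cosh(\lambda b)$ in absolute value, so for $\lambda \le 1$,
\begin{align*}
\cosh(\lambda a)-\cosh(\lambda b) &\le \lambda \sinh(\lambda b)\cdot(a-b) + \cosh(\lambda b) \sum_{j\ge 2} \frac{\lambda^j |a-b|^j}{j!}\\
&\le \lambda \sinh(\lambda b)\cdot(a-b) + \frac{\lambda^2}{2}\,\cosh(\lambda b)\,(a-b)^2 \sum_{m \ge 0} \frac{|a-b|^m}{m!},
\end{align*}
using $\lambda^{m+2} \le \lambda^2$ and $(m+2)! \ge 2\,m!$; the last sum equals $e^{|a-b|}$.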
Note that every vector in the support of $\mathsf{p}$ and $\mathsf{p}_{\mathbf{\Sigma}}$ has Euclidean length at most $1$, while $y \sim \mathsf{p}_y$ may have large Euclidean length due to the scaling factor of $r(\ell,k)^2$.
Therefore, we decompose the distribution $\mathsf{p}_x$ appearing in the potential as $\mathsf{p}_x = \frac34 \mathsf{p}_w + \frac14 \mathsf{p}_y$, where the distribution $\mathsf{p}_w = \frac23 \mathsf{p} + \frac13 \mathsf{p}_\mathbf{\Sigma}$ is supported on vectors with Euclidean length at most $1$.
After choosing the sign $\chi_t$ for $v$, the discrepancy vector $d_t$ becomes $d + \chi_t v$. For ease of notation, define $s_{k}(x) = \sinh(\lambda \cdot d^\top\Pi_k x)$ and $c_{k}(x) = \cosh(\lambda \cdot d^\top\Pi_k x)$ for any $x \in \mathbb{R}^n$. Now \eqref{eqn:taylor_exp} implies that $\Delta\Phi := \Delta\Phi_1 + \Delta\Phi_2$ where
\begin{align*}
\ \Delta\Phi_1 &\le \chi_t \cdot \frac34\left(\sum_{k\in [\kappa]} \lambda ~\mathbb{E}_{w}\left[ s_{k}(w) v^\top\Pi_k w\right]\right) + \frac34\sum_{k\in [\kappa]}\lambda^2 ~\mathbb{E}_{w}\left[c_{k}(w) \cdot w^\top\Pi_kvv^\top\Pi_kw\right] =: \chi_t L_1 + Q_1, \text{ and}\\
\ \Delta\Phi_2 &\le \chi_t \cdot \frac14\left(\sum_{k\in [\kappa]} \lambda ~\mathbb{E}_{y}\left[ s_{k}(y) v^\top\Pi_k y\right]\right) + \frac14\sum_{k\in [\kappa]} \lambda^2 ~\mathbb{E}_{y}\left[c_{k}(y) \cdot e^{\lambda |v^\top \Pi_ky|} y^\top\Pi_kvv^\top\Pi_ky\right] =: \chi_t L_2 + Q_2.
\end{align*}
Since our algorithm chooses sign $\chi_t$ to minimize the potential increase, taking expectation over the incoming vector $v$, we get
\begin{align*}
\ \mathbb{E}_{v}[\Delta\Phi] &\le -\mathbb{E}_{v}[|L_1 + L_2|] + \mathbb{E}_{v}[Q_1 + Q_2].
\end{align*}
We will prove the following upper bounds on the quadratic terms (in $\lambda$) $Q_1$ and $Q_2$.
\begin{claim}\label{claim:quadratic-ban_1}
$ \mathbb{E}_{v}[Q_1+Q_2] \le C\cdot \lambda^2 \sum_{k\in [\kappa]} 2^{-k} ~\mathbb{E}_{x}[c_{k}(x)\|x\|_2^2]$ for an absolute constant $C>0$.
\end{claim}
On the other hand, we will show that the linear (in $\lambda$) term $L_1 + L_2$ is also large in expectation.
\begin{claim}\label{claim:linear-ban_1}
$\mathbb{E}_{v}[|L_1 + L_2|] \ge \lambda B^{-1} \sum_{k\in [\kappa]} 2^{-k}~~\mathbb{E}_{x} [c_{k}(x) \|x\|_2^2] - O(1)$ for some $B \le 4{\kappa} \log(\Phi^2 n \kappa)$.
\end{claim}
Since $\Phi \le T^5$ by assumption, our choice of $\lambda$ guarantees that $C\lambda \le B^{-1}$ for the constant $C$ from \clmref{claim:quadratic-ban_1}. Therefore, combining the above two claims,
$$\mathbb{E}_{v}[\Delta \Phi] ~~\le~~ (C\lambda^2-\lambda B^{-1}) \left(\sum_{k\in [\kappa]} 2^{-k} ~ \mathbb{E}_{x}\left[c_{k}(x)\|x\|_2^2\right]\right) + O(1) ~~\le~~ O(1),$$
which finishes the proof of \lref{lemma:gen-drift-ban} assuming the claims.
\end{proof}
\vspace*{8pt}
To prove the missing claims, we need the following property that follows from the sub-exponential tail of the input distribution $\mathsf{p}$.
\begin{lemma}\label{lemma:subexp_1}
There exists a constant $C>0$, such that for every integer $k\in [\kappa]$, and any $y \in \mathrm{im}(\Pi_k)$ satisfying $\|y\|_2 \le \frac14\sqrt{\min\{2^k, n\}}$, the following holds
\[\mathbb{E}_{v \sim \mathsf{p}}\left[e^{\lambda |v^\top y|} \cdot |v^\top y|^2\right] \le C\cdot 2^{-k}\cdot\|y\|_2^2 \text { for all } \lambda \le 1.\]
\end{lemma}
We remark that this is the only step in the proof which requires the sub-exponential tail, as otherwise the exponential term above may be quite large. It may however be possible to exploit some more structure from the test vectors $y$ and the discrepancy vector to prove the above lemma without any sub-exponential tail requirements from the input distribution.
\begin{proof}
As $y \in \mathrm{im}(\Pi_k)$, we have that $v^\top y = v^\top \Pi_k y$ which is a scalar sub-exponential random variable with zero mean and variance at most
\[\sigma_y^2 ~~:=~~ \mathbb{E}_v[|v^\top \Pi_k y|^2] ~~\le~~ \|\Pi_k \mathbf{\Sigma} \Pi_k\|_{\mathsf{op}}\|y\|_2^2 ~~\le~~ 2^{-k}\|y\|_2^2 ~~\le~~ 1/16.\]
Using Cauchy-Schwarz and \pref{prop:logconcave}, we get that
\begin{align*}
\mathbb{E}_v\left[e^{\lambda |v^\top y|} \cdot |v^\top y|^2\right] ~~\le~~ \sqrt{\mathbb{E}_v\left[e^{2\lambda |v^\top y|}\right]} \cdot \sqrt{ \mathbb{E}_v\left[|v^\top y|^4\right]} ~~\le~~ C \cdot \mathbb{E}_v\left[|v^\top \Pi_k y|^2\right] ~~\le~~ C \cdot 2^{-k}~\|y\|_2^2,
\end{align*}
where the exponential term is bounded since $\sigma_y \le 1/4$.
\end{proof}
\vspace*{8pt}
\begin{proof}[Proof of \clmref{claim:quadratic-ban_1}]
Recall that $\mathbb{E}_{v}[vv^\top] = \mathbf{\Sigma}$ which satisfies $\Pi_k \mathbf{\Sigma} \Pi_k = 2^{-k} \Pi_k $. Therefore, using linearity of expectation,
\begin{align}\label{eqn:int1_1}
\ \mathbb{E}_{v}[Q_1] ~=~ \frac34 \sum_{k\in [\kappa]} \lambda^2 ~ \mathbb{E}_{w}[c_{k}(w) \cdot w^\top\Pi_k \mathbf{\Sigma} \Pi_k w] ~&=~ \lambda^2 \cdot \frac34 \sum_{k\in [\kappa]} 2^{-k} ~ \mathbb{E}_{w}[c_{k}(w) \cdot w^\top \Pi_k w] \notag\\
\ &\le~ 2\lambda^2 \cdot \frac34 \sum_{k\in [\kappa]} 2^{-k} ~ \mathbb{E}_{w}[c_{k}(w)\|w\|_2^2].
\end{align}
We next use \lref{lemma:subexp_1} to bound the second quadratic term
\[
\mathbb{E}_v[Q_2] ~=~ \frac14\sum_{k\in [\kappa]} \lambda^2 ~\mathbb{E}_{y}\left[c_{k}(y) \cdot e^{\lambda |v^\top \Pi_ky|} y^\top\Pi_kvv^\top\Pi_ky\right] .\]
For any $k\in [\kappa]$ and any $y \in \mathrm{im}(\Pi_k)$ that is in the support of $\mathsf{p}_y$, we have that
\[\lambda \|\Pi_k y\|_2 \le \lambda\cdot \|y\|_2 ~\le~ \lambda / \eps_\mathsf{min}(k) ~\le~ \lambda \cdot \frac{2}{10\lambda} \cdot \sqrt{\dim(H_k) } ~\le~ \frac14\sqrt{\min\{n,2^k\}},\]
where the third inequality uses that $1/\eps_\mathsf{min}(k) \le \frac{2}{10\lambda}\sqrt{\dim(H_k)}$ by the rounding in the definition of $\eps_\mathsf{min}(k)$, and the last uses $\dim(H_k) \le \min\{n,2^k\}$.
On the other hand, if $y \in \mathrm{im}(\Pi_{k'})$ for $k'\neq k$, then the above quantity is zero.
\lref{lemma:subexp_1} then implies that for any $y$ in the support of $\mathsf{p}_y$,
\[ \mathbb{E}_{v}[e^{|\lambda v^\top \Pi_k y|} \cdot |\lambda v^\top\Pi_k y|^2] ~\le~ C_1 \cdot 2^{-k} \|\lambda \Pi_k y\|_2^2 ~\le~ C_1 \lambda^2 \cdot 2^{-k} \|y\|_2^2,\]
where $C_1$ is some absolute constant.
Therefore, we obtain the following bound
\begin{align}\label{eqn:int2_1}
\ \mathbb{E}_{v}[Q_2] ~\le~ C_1 \cdot \lambda^2 \cdot\sum_{k \in [\kappa]} 2^{-k}~\mathbb{E}_{y}[c_{k}(y) \|y\|_2^2] .
\end{align}
Summing up \eqref{eqn:int1_1} and \eqref{eqn:int2_1} finishes the proof of the claim.
\end{proof}
\vspace*{8pt}
\begin{proof}[Proof of \clmref{claim:linear-ban_1}]
Let $L = L_1 + L_2$. To lower bound the linear term, we proceed similarly to the proof of \clmref{claim:linear} and use the fact that $|L(v)| \ge {\|f\|^{-1}_{\infty}} \cdot f(v) \cdot L(v)$ for any real-valued non-zero function $f$. We will choose the function $f(v) = d^\top\Pi v \cdot \mathbf{1}_{\mathcal G}(v)$, where $\mathcal G$ will be the event that $|d^\top\Pi v|$ is small, which we know holds because of \lref{lemma:gen-tail-ban}.
In particular, set $\delta^{-1} = \lambda^{-2} n \cdot\Phi \cdot \log(4n \Phi)$ and let $\mathcal G$ denote the set of vectors $v$ in the support of $\mathsf{p}$ such that $\lambda|d^\top \Pi v| \le {\kappa} \cdot \log (4 \Phi/\delta) := B$.
Then, $f(v) = d^\top\Pi v \cdot \mathbf{1}_{\mathcal G}(v)$ satisfies $\|f\|_\infty \le \lambda^{-1} B$, and we can lower bound,
\begin{align}\label{eqn:lterm_1}
\ \mathbb{E}_{v}[|L|] &\ge \frac{\lambda}{\lambda^{-1} B} \cdot \frac34 \sum_{k \in [\kappa]} \mathbb{E}_{vw} [s_{k}(w) \cdot d^\top \Pi v \cdot v^\top \Pi_k w\cdot \mathbf{1}_{\mathcal G}(v)] \notag\\
\ & ~~~~~~~ + \frac{\lambda}{\lambda^{-1}B} \cdot \frac14\sum_{k \in [\kappa]} \mathbb{E}_{vy} [s_{k}(y) \cdot d^\top \Pi v \cdot v^\top \Pi_k y\cdot \mathbf{1}_{\mathcal G}(v)] \notag \\
\ &= \frac{\lambda^2}{B} \cdot \frac34 \sum_{k \in [\kappa]} \mathbb{E}_{w} [s_{k}(w) \cdot d^\top \Pi \mathbf{\Sigma} \Pi_k w] ~-~ \frac{\lambda^2}{B} \cdot \frac34 \sum_{k \in [\kappa]} \mathbb{E}_{w} [s_{k}(w) \cdot d^\top \Pi \mathbf{\Sigma}_\mathsf{err} \Pi_k w] \notag \\
\ &\qquad + \frac{\lambda^2}{B} \cdot \frac14 \sum_{k \in [\kappa]} \mathbb{E}_{y} [s_{k}(y) \cdot d^\top \Pi \mathbf{\Sigma} \Pi_k y] ~-~\frac{\lambda^2}{B}\cdot \frac14 \sum_{k \in [\kappa]} \mathbb{E}_{y} [s_{k}(y) \cdot d^\top \Pi \mathbf{\Sigma}_\mathsf{err} \Pi_k y],
\end{align}
where $\mathbf{\Sigma}_\mathsf{err} = \mathbb{E}_{v}[vv^\top (1-\mathbf{1}_{\mathcal G}(v))]$ satisfies $\|\mathbf{\Sigma}_\mathsf{err}\|_{\mathsf{op}} \le \mathbb{P}_{v \sim \mathsf{p}}(v \notin \mathcal G) \le \delta$ using \lref{lemma:gen-tail-ban}.
To bound the terms involving $\mathbf{\Sigma}$ in \eqref{eqn:lterm_1}, we recall that $s_k(x) = \sinh(\lambda d^\top \Pi_k x)$ and $c_k(x) = \cosh(\lambda d^\top \Pi_k x)$. Using $\Pi \mathbf{\Sigma} \Pi_k = 2^{-k} \Pi_k$ and the fact that $\sinh(a)a \ge \cosh(a)|a| - 2$ for any $a \in \mathbb{R}$, we have
\begin{align*}
\lambda ~\mathbb{E}_{w} [s_{k}(w) \cdot d^\top \Pi \mathbf{\Sigma} \Pi_k w] ~=~ 2^{-k} ~\mathbb{E}_{w} [s_{k}(w) \cdot \lambda d^\top \Pi_k w] ~\ge~ 2^{-k}~ \left(\mathbb{E}_{w} [c_{k}(w)|\lambda d^\top\Pi_k w|] - 2\right) ,
\end{align*}
and similarly for $y$.
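The elementary inequality $\sinh(a)\,a \ge \cosh(a)|a| - 2$ used above can be verified directly: since $\sinh(a)\,a = |\sinh(a)|\,|a|$,
\[\cosh(a)|a| - \sinh(a)\,a ~=~ \big(\cosh(a) - |\sinh(a)|\big)\,|a| ~=~ |a|\,e^{-|a|} ~\le~ e^{-1} ~<~ 2,\]
using that $t e^{-t} \le e^{-1}$ for all $t \ge 0$.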
The terms with $\mathbf{\Sigma}_\mathsf{err}$ can be upper bounded using $\| \mathbf{\Sigma}_\mathsf{err} \|_\mathsf{op} \leq \delta$.
In particular, we have
\[|d^\top \Pi \mathbf{\Sigma}_\mathsf{err} \Pi_k x | \le \|\Pi d\|_2 \|\mathbf{\Sigma}_\mathsf{err}\|_{\mathsf{op}}\|x\|_2 \le \delta \|\Pi d\|_2 \|x\|_2.\]
Since $\Pi = \sum_{k\in[\kappa]} \Pi_k$ and $(\Pi_k)_{k \in [\kappa]}$ are orthogonal projectors, \lref{lemma:chaining} implies that $\|\Pi d\|_2 \le \lambda^{-1} \log(4n\Phi) \sqrt{ n}$.
Moreover, we have $\|w\|_2 \leq 1$ and $\|y\|_2 \le \max_k \{1/\eps_\mathsf{min}(k)\} \le \frac1{5\lambda}\cdot \sqrt{n}$. Then, by our choice of $\delta^{-1} = \lambda^{-2} n\Phi \cdot \log(4n \Phi)$, we have
\[
\lambda ~|d^\top \Pi \mathbf{\Sigma}_\mathsf{err} \Pi_k x| ~~\le~~ \delta \lambda^{-1} n \log(4n \Phi) ~~\le~~ \Phi^{-1}.
\]
Plugging the above bounds in \eqref{eqn:lterm_1}, we obtain
\begin{align}\label{eq:int1_1}
\mathbb{E}_{v}[|L|] &~~\ge~~ \frac{\lambda}{B} \cdot \frac34 \sum_{k \in [\kappa]} 2^{-k} ~\mathbb{E}_{w} [c_{k}(w)|\lambda d^\top\Pi_k w|] + \frac{\lambda}{B} \cdot \frac14 \sum_{k \in [\kappa]} 2^{-k} ~ \mathbb{E}_{y} [c_{k}(y)|\lambda d^\top\Pi_k y|] - 4
\end{align}
where we used the upper bound $\sum_{k \in [\kappa]} \mathbb{E}_x[|s_k(x)|] \le \Phi$ to control the error term involving $\mathbf{\Sigma}_{\mathsf{err}}$.
To finish the proof, we bound the two terms in \eqref{eq:int1_1} separately. We first use the inequality $\cosh(a)|a| \ge \cosh(a) - 2$, valid for all $a \in \mathbb{R}$, and the fact that $\|w\|_2 \le 1$ for every $w$ in the support of $\mathsf{p}_w$ to get that
\begin{equation}\label{eq:int2_1}
\mathbb{E}_{w} [c_{k}(w)|\lambda d^\top\Pi_k w|] ~~\ge~~ \mathbb{E}_{w} [c_{k}(w)] - 2 ~~\ge~~ \mathbb{E}_{w} [c_{k}(w)\|w\|_2^2] - 2.
\end{equation}
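The inequality $\cosh(a)|a| \ge \cosh(a) - 2$ invoked for \eqref{eq:int2_1} can be checked directly:
\[\cosh(a) - \cosh(a)|a| ~=~ \cosh(a)\,\big(1 - |a|\big) ~\le~ \cosh(1) ~<~ 2 \qquad \text{when } |a| < 1,\]
while for $|a| \ge 1$ the left-hand side is nonpositive.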
To bound the second term in \eqref{eq:int1_1}, we recall that the entire probability mass assigned to length $r$ vectors (i.e. $\epsilon(\ell,k) = 1/r$) in the support of $\mathsf{p}_y$ is at most $2^{-2r^2}$, where $r \ge 1/4$. Let $\mathcal{E}$ be the event that $|\lambda d^\top\Pi_k y| \le \|y\|_2^2$.
Note that on the event $\mathcal{E}$, we have $c_k(y) \|y\|_2^2 \le e^{r^2} r^2$ whenever $\|y\|_2=r$. This implies that
\begin{align}\label{eq:int3_1}
\mathbb{E}_{y} [c_{k}(y)|\lambda d^\top\Pi_k y|] &~\ge~ \mathbb{E}_{y} [c_{k}(y)\|y\|_2^2] - \mathbb{E}_{y} [c_{k}(y) \|y\|_2^2 \cdot \mathbf{1}_\mathcal{E}(y)] \notag \\
\ &~\ge~ \mathbb{E}_{y} [c_{k}(y)\|y\|_2^2] - \int_{1/4}^{\infty} 2^{-2r^2}\, e^{r^2} r^2 \,dr ~\ge~ \mathbb{E}_{y} [c_{k}(y)\|y\|_2^2] - 2.
\end{align}
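For completeness, the integral above can be bounded explicitly. Writing the integrand as $r^2 e^{-a r^2}$ for the appropriate constant $a \ge 2\ln 2 - 1 \approx 0.386$,
\[ \int_{1/4}^{\infty} r^2 e^{-a r^2}\,dr ~\le~ \int_{0}^{\infty} r^2 e^{-a r^2}\,dr ~=~ \frac{\sqrt{\pi}}{4\,a^{3/2}} ~\le~ 2.\]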
Since $\mathsf{p}_x = \frac34 \mathsf{p}_w + \frac14 \mathsf{p}_y$, plugging \eqref{eq:int2_1} and \eqref{eq:int3_1} into \eqref{eq:int1_1} gives that
$\mathbb{E}_{v}[|L|] \ge \lambda B^{-1} \sum_{k \in [\kappa]} 2^{-k}~~\mathbb{E}_{x} [c_{k}(x) \|x\|_2^2] - C$, for some constant $C>0$,
which completes the proof of the claim.
\end{proof}
\section{Discrepancy for Arbitrary Convex Bodies}
In this section, we prove the following theorem, which uses a stronger potential than the one used in the previous section. Later, in Section \ref{sec:banasczyck} and Section \ref{sec:subgaussian}, we will use this to bound the discrepancy in an arbitrary norm, and combine it with the min-max theorem to show the existence of an online algorithm that produces a $\mathrm{polylog}(T)$-subgaussian distribution over discrepancy vectors. \\
To state the result, we use the same notation as in the previous section. Let $V \in \mathbb{R}^n$ be our input random vector with $\|V\|_2 \le 1$ and $M^+_k$ be as stated in \eqref{eqn:matdecomp}.
\mnote{For now this statement is only for a single scale.}
\begin{theorem}\label{thm:gen-disc-ban}
Let $r > 0$ and $\mathcal{Z} \subseteq \mathbb{R}^n$, $|\mathcal{Z}|\le 2^{r^2}$ be a set of test vectors of Euclidean length $r$. Then, for vectors $v_1, \ldots, v_T$ sampled i.i.d. from $\mathsf{p}(V)$, there is an online algorithm that w.h.p. maintains the following for the discrepancy vector $d_t$ at any time $t$,
\[ \max_{z \in \mathcal{Z}} |d_t^T z| \le r^2 \log T.\]
\end{theorem}
\subsection{Discrepancy for an arbitrary norm}
\subsection{Proof of Theorem \ref{thm:gen-disc-ban}}
\paragraph{New Potential Function and Algorithm.}
Recall the notation used in \eqref{eqn:eigendecomp} and \eqref{eqn:matdecomp}. As before, to define the potential, we first define a distribution which is a noisy version of the input distribution where some noise is added to account for the test directions. Let $W \in \mathbb{R}^n$ be an independent copy of $V$ and $Z$ be uniform over $\mathcal{Z}$. Let $X \in \mathbb{R}^n$ equal $W$ or $Z$ with probability $1/2$ each. As before, $\|X\|_2 \le 1$ since $\|M\|_{\mathsf{op}} \le 1$.\\
At any time step $t$, let $d_{t} = \chi_1 v_1 + \ldots + \chi_t v_t$ denote the current discrepancy vector after the signs $\chi_1, \ldots, \chi_t \in \{\pm1\}$ have been chosen. Set $\lambda^{-1} = 12\log^2(nT)$ and define the potential
\[ \Phi_t = \Phi(d_t) := \sum_{k=0}^{\kappa} \mathbb{E}_{p(x)}\left[\exp\left(\lambda^2 (d_{t}^T M^+_k x)^2\right)\right]. \]
When the vector $v_t$ arrives, the algorithm chooses the sign $\chi_t$ that minimizes the increase $\Phi_t - \Phi_{t-1}$.\\
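As a toy illustration (not part of the paper's analysis), the greedy sign rule can be sketched numerically. The potential below is a simplified single-scale $\cosh$ analogue over a fixed set of test directions; all names and parameters are illustrative assumptions.

```python
import numpy as np

def potential(d, test_vecs, lam):
    # Simplified single-scale analogue of Phi(d): the average of
    # cosh(lam * <d, x>) over a fixed set of test directions x.
    return float(np.mean(np.cosh(lam * (test_vecs @ d))))

def greedy_signs(vectors, test_vecs, lam=0.25):
    # For each arriving v_t, choose the sign chi_t in {+1, -1} that
    # minimizes the potential of the updated discrepancy vector.
    d = np.zeros(vectors.shape[1])
    signs = []
    for v in vectors:
        if potential(d + v, test_vecs, lam) <= potential(d - v, test_vecs, lam):
            chi = 1
        else:
            chi = -1
        d = d + chi * v
        signs.append(chi)
    return np.array(signs), d

rng = np.random.default_rng(0)
T, n = 200, 8
vecs = rng.normal(size=(T, n))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # i.i.d. unit vectors
signs, d = greedy_signs(vecs, np.eye(n))             # test directions: standard basis
```

In such a toy run, the final discrepancy vector stays far below the trivial bound $\|d_T\|_2 \le T$ attained by fixed signs.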
\paragraph{Analysis.} The analysis of this algorithm is quite similar to the analysis we encountered in Komlos's setting. In particular, we have the following lemma analogous to Lemma \ref{lemma:tail}. Let $\mathcal G_t = \mathcal G(d_t)$ denote the set of vectors in the support of $\mathsf{p}(V)$ such that $\lambda^2|d_t^T M^+_k v|^2 \le \log (2 \Phi_t/\delta)$ for every $k \le \kappa$.
\begin{lemma}\label{lemma:gen-tail}
For any $\delta > 0$, the following holds at any time $t$, $\mathsf{p}(V \notin \mathcal G_t) \le \delta$.
\end{lemma}
\begin{proof}
As defined above, we have that $X=W$ with probability $1/2$ where $W$ is an independent copy of $V$. Therefore, $\mathbb{E}_{w}\left[\exp(\lambda^2 |d_t^TM^+_k w|^2)\right] \le 2 \Phi_t$ for any $k \le \kappa$. By an application of Markov's inequality, it follows that $\mathsf{p}(V \notin \mathcal G_t) = \mathsf{p}(W \notin \mathcal G_t) \le \delta$.\mnote{Readjust parameters to account for a union bound.}
\end{proof}
\vspace*{8pt}
The next lemma shows that the expected increase (or drift) in the potential is small on average.
\begin{lemma}[Drift]\label{lemma:gen-drift} At any time $t$ if $T \le \Phi_{t-1} \le T^4$, then $\mathbb{E}_{v_t}[\Phi_t] - \Phi_{t-1} \le 2.$
\end{lemma}
Lemma \ref{lemma:gen-drift} implies that for any time $t \in [T]$, the expected value of the potential $\Phi_t$ over the input sequence $v_1, \ldots, v_T$ is at most $3T$. By Markov's inequality, it follows that with probability at least $1-T^{-4}$, the potential $\Phi_t \le 3T^5$ for every $t \in [T]$. This proves Theorem \ref{thm:gen-disc-ban}. To finish the proof, we prove Lemma \ref{lemma:gen-drift} next.
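Spelled out, ignoring the conditioning on the event $\{T \le \Phi_{t-1} \le T^4\}$ and assuming $T \ge \kappa+1$: since $\Phi_0 = \kappa + 1$ and each step increases the expected potential by at most $2$,
\[ \mathbb{E}[\Phi_t] ~\le~ \Phi_0 + 2t ~\le~ 3T, \qquad\qquad \mathbb{P}\big(\Phi_t > 3T^5\big) ~\le~ \frac{\mathbb{E}[\Phi_t]}{3T^5} ~\le~ T^{-4},\]
and a union bound over the $T$ time steps gives the high-probability statement up to the constant in the exponent.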
\begin{proof}[Proof of Lemma \ref{lemma:gen-drift}]
Let us fix a time $t$. To simplify the notation, let $\Phi = \Phi_{t-1}$ and $\Delta\Phi = \Phi_t - \Phi$, and let $d = d_{t-1}$ and $v = v_t$.
To bound the change $\Delta \Phi$, for any $a, b \in \mathbb{R}$ satisfying $|a-b| \le 1$, by Taylor series expansion of $f(r)=\exp(\lambda^2r^2)$, we have
\begin{align*}
\ f(a)-f(b) & \le 2\lambda^2~ bf(b) (a-b) + \frac{1}{2} \cdot 2\lambda^2~f(b)(2\lambda^2 b^2 + 1) (a-b)^2 + \frac16 \cdot \max_{r \in [a,b]}|f'''(r)|(a-b)^3 \\
\ & \le 2\lambda^2 bf(b) (a-b) + 4\lambda^4 b^2 f(b)(a-b)^2 + 2\lambda^2f(b)(a-b)^2 + \text{Remainder}.
\end{align*}
\mnote{This seems like a big problem. Don't see how to control the remainder term. The outliers may dominate it completely.}
\hnote{
We can write $\exp(\lambda^2(r + \Delta r)^2)$ as $\exp(\lambda^2(r + \Delta r)^2) = \exp(\lambda^2 r^2 + \lambda^2 r \Delta r + \lambda^2 \Delta r^2)$.
We have $|\Delta r| \leq 1$. Let's imagine $\lambda = 1/\log(T)$, then $r$ is roughly $\log^{3/2}(T)$, so we can expect the change in the exponent $\lambda^2 r \Delta r + \lambda^2 \Delta r^2$ to be less than $1$?
For the outliers, maybe the hope is that the tail probability cancels out the exponent growth?
}
\begin{align*}
\ \cosh(\lambda a) - \cosh(\lambda b) &= \lambda \sinh(\lambda b) \cdot (a-b) + \frac{\lambda^2}{2!} \cosh(\lambda b) \cdot (a-b)^2 + \frac{\lambda^3}{3!} \sinh(\lambda b)\cdot (a-b)^3 + \cdots , \\[0.8ex]
\ & \le \lambda \sinh(\lambda b) \cdot(a-b) + \lambda^2 \cosh(\lambda b) \cdot(a-b)^2,\\[1.1ex]
\ & \le \lambda \sinh(\lambda b) \cdot(a-b) + \lambda^2 |\sinh(\lambda b)| \cdot(a-b)^2 + \lambda^2(a-b)^2,
\end{align*}
where the first inequality follows since $|\sinh(a)| \le \cosh(a)$ for all $a \in \mathbb{R}$, and since $|a-b|\le 1$ and $\lambda < 1$, so the higher order terms in the Taylor expansion are dominated by the first and second order terms. The second inequality uses that $\cosh(a) \le |\sinh(a)|+1$ for $a \in \mathbb{R}$.
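Both elementary facts invoked here can be checked directly. Every derivative of $\cosh$ is $\pm\sinh$ or $\pm\cosh$, so each Taylor coefficient is at most $\cosh(\lambda b)$ in absolute value, and for $x = \lambda|a-b| \le 1$,
\[ \sum_{j \ge 2} \frac{x^j}{j!} ~=~ e^{x} - 1 - x ~\le~ x^2, \]
which gives the first inequality; the second uses $\cosh(b') - |\sinh(b')| = e^{-|b'|} \le 1$ for any $b' \in \mathbb{R}$.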
After choosing the sign $\chi_t$, the discrepancy vector $d_t = d + \chi_t v$. Defining $s_{k}(x) = \sinh(\lambda \cdot d^TM^+_k x)$ and noting that $|v^T M^+_k x| \le 1$, the above upper bound on the Taylor expansion gives us that
\begin{align*}
\ \Delta\Phi &\le \chi_t \left (\sum_{k = 0}^{\kappa} \lambda ~\mathbb{E}_{x}\left[ s_{k}(x) v^TM^+_k x\right]\right) + \sum_{k = 0}^{\kappa} \lambda^2 ~\mathbb{E}_{x}\left[|s_{k}(x)| \cdot x^TM^+_kvv^TM^+_kx\right] + \sum_{k = 0}^{\kappa} \lambda^2~\mathbb{E}_{x}\left[ x^TM^+_kvv^TM^+_kx\right]\\
\ &:= \chi_t L + Q + Q_*.
\end{align*}
where $L, Q$ and $Q_*$ denote the first, second and third terms respectively.
Since our algorithm uses the greedy strategy, choosing $\chi_t$ to be the sign that minimizes the potential, taking expectation over the incoming vector $v$, we get
\begin{align*}
\ \mathbb{E}_{v}[\Delta\Phi] &\le -\mathbb{E}_{v}[|L|] + \mathbb{E}_{v}[Q] + \mathbb{E}_{v}[Q_{*}].\\
\end{align*}
We will prove the following upper bounds on the quadratic (in $\lambda$) terms $Q$ and $Q_*$.
\begin{claim}\label{claim:gen-quadratic}
$ \mathbb{E}_{v}[Q] \le 2\lambda^2 \sum_{k=0}^{\kappa} 2^{-k} ~\mathbb{E}_{x}[|s_{k}(x)|]$ and $\mathbb{E}_{v}[Q_*] \le 4\lambda^2.$
\end{claim}
On the other hand, we will show that the linear (in $\lambda$) term $L$ is also large in expectation.
\begin{claim}\label{claim:gen-linear}
$ \mathbb{E}_{v}[|L|] \ge \lambda B^{-1} \sum_{k=0}^{\kappa} 2^{-k} ~ \mathbb{E}_{x}[|s_{k}(x)|] - 1$ where $B = \kappa \log(\Phi^2 nT).$
\end{claim}
Since $\Phi \ge T$ by assumption, it follows that $2\lambda \le B^{-1}$. Therefore, combining the above two claims, we get that
$$\mathbb{E}_{v}[\Delta \Phi] \le (2\lambda^2-\lambda B^{-1}) \left(\sum_{k=0}^{\kappa} 2^{-k} ~ \mathbb{E}_{x}[|s_{k}(x)|]\right) + 1 + 4\lambda^2 \le 2.$$
This finishes the proof of Lemma \ref{lemma:gen-drift} assuming the claims which we prove next.
\end{proof}
\begin{proof}[Proof of Claim \ref{claim:gen-quadratic}]
Recall that $\mathbb{E}_{v}[vv^T] = \mathbf{\Sigma}$ which satisfies $M^+_k \mathbf{\Sigma} M^+_k = 2^{-k} M^+_k $ and $\Pi_k \preccurlyeq M^+_k \preccurlyeq 2\Pi_k$. Therefore, using linearity of expectation,
\begin{align*}
\ \mathbb{E}_{v}[Q] &= \sum_{k=0}^{\kappa} \lambda^2 ~ \mathbb{E}_{x}[|s_{k}(x)| \cdot x^TM^+_k \mathbf{\Sigma} M^+_k x] = \lambda^2 \sum_{k=0}^{\kappa} 2^{-k} ~ \mathbb{E}_{x}[|s_{k}(x)| \cdot x^T M^+_k x]\\
\ &\le 2\lambda^2 \sum_{k=0}^{\kappa} 2^{-k} ~ \mathbb{E}_{x}[|s_{k}(x)|],
\end{align*}
where the last inequality uses that $x^T M^+_k x \le \|M^+_k\|_{\mathsf{op}}\|x\|_2^2 \le 2$ for all $x$ in the support of $q$.
Similarly, we may bound
\[ \mathbb{E}_{v}[Q_{*}] = \sum_{k = 0}^{\kappa} \lambda^2~\mathbb{E}_{x}\left[ x^TM^+_k\mathbf{\Sigma}M^+_kx\right] \le 2\lambda^2 \sum_{k=0}^{\kappa} 2^{-k} \le 4\lambda^2.\qedhere\]
\end{proof}
\vspace*{10pt}
\begin{proof}[Proof of Claim \ref{claim:gen-linear}]
To lower bound the linear term, we use the fact that $|L(v)| \ge {\|f\|^{-1}_{\infty}} \cdot f(v) \cdot L(v)$ for any real-valued non-zero function $f$. We will choose the function $f(v) = d^TM^+ v \cdot \mathbf{1}_{\mathcal G}(v)$ where $\mathcal G$ will be the event that $|d^TM^+ v|$ is small, which we know holds because of Lemma \ref{lemma:gen-tail}.\\
In particular, set $\delta^{-1} = 2\kappa (n\Phi) \log(4n \Phi)$ and let $\mathcal G$ denote the event that $\lambda|d_t^T M^+_k v| \le \log (4 \Phi/\delta)$ for every $k \le \kappa$. Recalling that $M^+ = \sum_{k=0}^{\kappa} M^+_k$, when the event $V \in \mathcal G$ occurs, then $\lambda |d^T M^+ v| \le \kappa \log (4 \Phi/\delta) := B$. Then, $f(v) = d^TM^+ v \cdot \mathbf{1}_{\mathcal G}(v)$ satisfies $\|f\|_\infty \le \lambda^{-1} B$, and we can lower bound,
\begin{align}\label{eqn:lterm}
\ \mathbb{E}_{v}[|L|] &\ge \frac{\lambda}{B} \sum_{k=0}^{\kappa} \lambda ~ \mathbb{E}_{vx} [s_{k}(x) \cdot d^T M^+ v \cdot v^T M^+_k x\cdot \mathbf{1}_{\mathcal G}(v)] \notag \\
\ &= \frac{\lambda^2}{B} \sum_{k=0}^{\kappa} \mathbb{E}_{x} [s_{k}(x) \cdot d^T M^+ \mathbf{\Sigma} M^+_k x] - \frac{\lambda^2}{B} \sum_{k=0}^{\kappa} \mathbb{E}_{x} [s_{k}(x) \cdot d^T M^+ \mathbf{\Sigma}_\mathsf{err} M^+_k x],
\end{align}
where $\mathbf{\Sigma}_\mathsf{err} = \mathbb{E}_{v}[vv^T \mathbf{1}_{\comp\mathcal G}(v)]$ satisfies $\|\mathbf{\Sigma}_\mathsf{err}\|_{\mathsf{op}} \le \mathsf{p}(V \notin \mathcal G) \le \delta$ using Lemma \ref{lemma:gen-tail}.\\
To bound the first term in \eqref{eqn:lterm}, recall that $s_k(x) = \sinh(\lambda d^T M^+_k x)$. Using $M^+ \Sigma M^+_k = 2^{-k} M^+_k$ and the fact that $\sinh(a)a \ge |\sinh(a)| - 2$ for any $a \in \mathbb{R}$, we have
\begin{align*}
\lambda ~\mathbb{E}_{x} [s_{k}(x) \cdot d^T M^+ \mathbf{\Sigma} M^+_k x] = 2^{-k} ~\mathbb{E}_{x} [s_{k}(x) \cdot \lambda d^T M^+_k x] \ge 2^{-k}~ \left(\mathbb{E}_{x} [|s_{k}(x)|] - 2\right).
\end{align*}
The second term can be upper bounded using the bounds on the operator norms of the matrices $M^+_k$ and $\mathbf{\Sigma}_\mathsf{err}$. In particular, we have that $|d^T M^+ \mathbf{\Sigma}_\mathsf{err} M^+_k x | \le 2\delta\|M^+ d\|_2\|x\|_2 \le 2\delta \|M^+ d\|_2$. Since $M^+ \preccurlyeq \sum_{k=0}^\kappa \Pi_k$, using Lemma \ref{lemma:gen-tail}, it follows that $\|M^+ d\|_2 \le \kappa n \lambda^{-1}\log(4n \Phi)$ and by our choice of $\delta$, $$\lambda |d^T M^+ \mathbf{\Sigma}_\mathsf{err} M^+_k x| \le 2\delta\kappa n \log(4n \Phi) = \Phi^{-1}.$$
Plugging the above bounds in \eqref{eqn:lterm},
\begin{align*}
\mathbb{E}_{p(v)}[|L|] &\ge \frac{\lambda}{B} \sum_{k=0}^\kappa 2^{-k} ~\left(\mathbb{E}_{x} [|s_{k}(x)|] - 2\right) - \frac{\lambda}{B} \cdot \Phi^{-1} \left( \sum_{k=0}^{\kappa} \mathbb{E}_x[|s_k(x)|] \right) \\
&\ge \frac{\lambda}{B} \sum_{k=0}^\kappa 2^{-k} ~\mathbb{E}_{x} [|s_{k}(x)|] - \frac{\lambda}{B} \sum_{k=0}^\kappa 2^{-k+1} - \frac{\lambda}{B}\\
\ & \ge \frac{\lambda}{B} \sum_{k=0}^\kappa 2^{-k} ~\mathbb{E}_{x} [|s_{k}(x)|] - 1,
\end{align*}
where the second inequality follows since $\sum_{k=0}^\kappa \mathbb{E}_x[|s_k(x)|] \le \Phi.$
\end{proof}
\newpage
For brevity, let $d \equiv d(t-1)$ be the discrepancy vector at time step $t-1$, $v \equiv v(t)$ be the input vector at time $t$ and $a_{k}(x) = \exp((\alpha \cdot d^T\widetilde{M}_k x)^2)$. Let us also assume that $\|d\|_2 \le T$.\\
First we note, by Taylor expansion, that for $f(r)=e^{\alpha^2 r^2}$ and any $\delta$ satisfying $|\delta| < 1$, it holds that
\begin{align*}
\ f(r+\delta) - f(r) & \le 2\alpha^2 rf(r) \delta + \frac12 \cdot 2\alpha^2f(r)(2\alpha^2 r^2 + 1) \delta^2 + \ldots \\
\ & \le 2\alpha^2 rf(r) \delta + 4\alpha^4 r^2 f(r)\delta^2 + 2\alpha^2f(r)\delta^2.
\end{align*}
Since $d(t) = d + \epsilon_v v$, using the above, we can write
\begin{align*}
\ \Delta\Phi &\le \epsilon_v \left(\sum_{k = \eps_\mathsf{min}}^{\eps_\mathsf{max}} 2\alpha^2 \cdot \mathbb{E}_{q(x)}\left[ a_{k}(x) d^T\widetilde{M}_k x \cdot v^T\widetilde{M}_k x\right]\right) + \sum_{k = \eps_\mathsf{min}}^{\eps_\mathsf{max}} 4\alpha^4 \cdot \mathbb{E}_{q(x)}\left[a_k(x) \cdot (d^T\widetilde{M}_k x)^2 (v^T\widetilde{M}_k x)^2\right] \\
\ & \qquad + \sum_{k = \eps_\mathsf{min}}^{\eps_\mathsf{max}} 2\alpha^2 \cdot \mathbb{E}_{q(x)}\left[ a_k(x) (v^T\widetilde{M}_k x)^2\right]\\
\ &:= \epsilon_v L + Q_1 + Q_2.
\end{align*}
Choosing $\epsilon_v$ to be the sign that minimizes the potential, and taking expectation over the incoming vector $v$, we get
\begin{align*}
\ \mathbb{E}_v[\Delta\Phi] &\le -\mathbb{E}_v[|L|] + \mathbb{E}_v[Q_1] + \mathbb{E}_v[Q_2].\\
\end{align*}
\textbf{Quadratic Term $Q_1$:} Noting that $\widetilde{M}_k \mathbf{\Sigma} \widetilde{M}_k = 2^{-k} \widetilde{M}_k $ and that $\Pi_k \preccurlyeq \widetilde{M}_k \preccurlyeq 2\Pi_k$, we get
\begin{align*}
\ \mathbb{E}_{p(v)}[Q_1] &= \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 4\alpha^4 \cdot \mathbb{E}_{q(x)}[a_{k}(x) \cdot (d^T\widetilde{M}_k x)^2 \cdot x^T\widetilde{M}_k \mathbf{\Sigma} \widetilde{M}_k x] = 4\alpha^4 \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k} \cdot \mathbb{E}_{q(x)}[a_{k}(x) \cdot (d^T\widetilde{M}_k x)^2 \cdot x^T \widetilde{M}_k x]\\
\ &\le 8\alpha^4 \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k} \cdot \mathbb{E}_{q(x)}[a_{k}(x) \cdot (d^T\widetilde{M}_k x)^2 ],
\end{align*}
where in the last line we used that $\|x\|_2^2 \le 1$ for all $x$ in the support of $q$.\\
\textbf{Quadratic Term $Q_2$:} Similarly, we have
\begin{align*}
\ \mathbb{E}_{p(v)}[Q_2] &= \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2\alpha^2 \cdot \mathbb{E}_{q(x)}[a_{k}(x) \cdot x^T\widetilde{M}_k \mathbf{\Sigma} \widetilde{M}_k x] = 2\alpha^2 \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k} \cdot \mathbb{E}_{q(x)}[a_{k}(x) \cdot x^T \widetilde{M}_k x]\\
\ &\le 4\alpha^2 \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k} \cdot \mathbb{E}_{q(x)}[a_{k}(x)].
\end{align*}
\textbf{Linear Term: } Recall that $\widetilde{M} = \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} \widetilde{M}_k$. To bound the linear term, we first notice that when $v \in \mathcal G$, then $|d^T \widetilde{M} v| \le \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} \frac{\sqrt{4\log \phi}}{\alpha} \le \frac{2\log^{3/2} (nT)}{\alpha}$. Using that $|L(v)| \ge \frac{f(v)}{\|f\|_{\infty}} L(v)$ for any real-valued function $f$ (which holds pointwise since $|f(v)| \le \|f\|_\infty$), and choosing $f(v) = d^T\widetilde{M} v \cdot \mathbf{1}_{\mathcal G}(v)$, we get that
\begin{align*}
\ \mathbb{E}_{p(v)}[|L|] &\ge \frac{\alpha^3}{2\log^{3/2} (nT)} \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} \mathbb{E}_{p(v)q(x)} [a_{k}(x) \cdot (d^T \widetilde{M}_k x) (x^T \widetilde{M}_k v) (v^T \widetilde{M} d) \mathbf{1}_{\mathcal G}(v)] \\
\ &= \frac{\alpha^3}{2\log^{3/2} (nT)} \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}}\mathbb{E}_{q(x)} [a_{k}(x) \cdot d^T \widetilde{M}_k x x^T \widetilde{M} \mathbf{\Sigma}' \widetilde{M}_k d],
\end{align*}
where $\mathbf{\Sigma}' = \mathbf{\Sigma} - A$ with $A = \mathbb{E}_{p(v)}[vv^T \cdot \mathbf{1}_{\comp\mathcal G}(v)]$ satisfying $0 \preccurlyeq A \preccurlyeq \beta I$ for $\beta = \mathrm{poly}((nT)^{-1})$.\\
Now, $\|\mathbf{\Sigma} - \mathbf{\Sigma}'\|_{op} \leq \beta$. Thus, $|d^T \widetilde{M}_k x \, x^T \widetilde{M} (\mathbf{\Sigma}' - \mathbf{\Sigma}) \widetilde{M}_k d | = O(\beta\|d\|^2_2\|x\|^2_2) = O(T^2 \beta) \le T^{-3}$ choosing $\beta$ appropriately. Further, $\widetilde{M} \mathbf{\Sigma} \widetilde{M}_k = 2^{-k} \widetilde{M}_k$. Using this, we have
\begin{align*}
\mathbb{E}_{p(v)}[|L|] &\ge \frac{\alpha^3}{4\log^{3/2} (nT)} \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k} \cdot \mathbb{E}_{q(x)} [a_{k}(x) \cdot (d^T\widetilde{M}_k x)^2] - \frac{\Phi(d)}{T^2}.
\end{align*}
\paragraph{Small Drift.}
Collecting all the terms above, we get
\begin{align}\label{eqn:1}
\mathbb{E}_v[\Delta \Phi] \le \frac{\Phi(d)}{T^2} + \left(-\frac{\alpha^3}{4\log^{3/2} (nT)} + 8 \alpha^4\right) \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k}\mathbb{E}_{q(x)}[a_k(x) (d^T \widetilde{M}_k x)^2] + 4\alpha^2 \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k}\mathbb{E}_{q(x)}[a_k(x)].
\end{align}
Let us set $\alpha = 1/(100\log^{3/2}(nT))$. Then $8\alpha^4 \le \alpha^3/(8\log^{3/2}(nT))$, since this is equivalent to $\alpha \le 1/(64\log^{3/2}(nT))$, and hence the coefficient of the second term above is negative:
$$ \left(-\frac{\alpha^3}{4\log^{3/2} (nT)} + 8 \alpha^4\right) \le -\frac{\alpha^3}{8\log^{3/2} (nT)}.$$
Further, note that $a_k(x) = \exp(\alpha^2 (d^T \widetilde{M}_k x)^2) \le e$ when $|d^T\widetilde{M}_kx| \le 1/\alpha$. Therefore,
\begin{align*}
\ \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k}\cdot \mathbb{E}_{q(x)}[a_k(x) (d^T \widetilde{M}_k x)^2] &\ge \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k} \cdot \frac1{\alpha^2} \cdot \left(\mathbb{E}_{q(x)}[a_k(x)] - \mathbb{E}_{q(x)}\left[a_k(x) \cdot \mathbf{1}_{|d^T \widetilde{M}_k x| \le \frac1\alpha}\right]\right) \\
\ & \ge \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k} \cdot \frac1{\alpha^2} \cdot \left(\mathbb{E}_{q(x)}[a_k(x)] - e\right).
\end{align*}
Plugging the above in \eqref{eqn:1}, we get that
\begin{align*}
\mathbb{E}_v[\Delta \Phi] \le \frac{\Phi(d)}{T^2} + O(1) + \left(-\frac{\alpha}{8\log^{3/2} (nT)} + 4\alpha^2\right) \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} 2^{-k}\mathbb{E}_{q(x)}[a_k(x)].
\end{align*}
One can check that for our choice of $\alpha = 1/(100\log^{3/2}(nT))$, we have $4\alpha^2 \le \alpha/(8\log^{3/2}(nT))$, so that $$\left(-\frac{\alpha}{8\log^{3/2} (nT)} + 4\alpha^2\right) \le 0$$ and hence,
\[ \mathbb{E}_{p(v)}[\Delta\Phi] \le \frac{\Phi(d)}{T^2} + O\left(1\right).\]
Therefore, with high probability, the potential remains bounded by $\mathrm{poly}(nT)$ at all times $t \le T$.
\section{Discrepancy with respect to Arbitrary Convex Bodies}
\label{sec:banasczyck}
\subsection{Discrepancy Bound}
For a set $T \subseteq \mathbb{R}^n$, let $w(T) = \mathbb{E}_G[\sup_{t \in T} \ip{G}{t}]$ denote the Gaussian width of $T$. Note that the Gaussian width is a $\Theta(\sqrt{n})$ factor larger than the spherical width $\mathbb{E}_\theta[\sup_{t \in T} \ip{\theta}{t}]$, where $\theta$ is chosen uniformly at random from the unit sphere $\mathbb{S}^{n-1}$. Let $\mathsf{diam}(T)$ denote the diameter of the set $T$.
\begin{lemma} Let $K \subseteq \mathbb{R}^n$ be a symmetric convex body with $\gamma_n(K) \ge 1/2$ and $G \in \mathbb{R}^n$ be a standard Gaussian. Then, $w(K^\circ) \le \frac{3}{2}$. Moreover, $\mathsf{diam}(K^\circ) \le 2$.
\end{lemma}
Let $\mathcal{N}(T, \epsilon)$ denote the size of the smallest $\epsilon$-net of $T$ in the Euclidean metric, \emph{i.e.}, the smallest number of closed balls of radius $\epsilon$ whose union covers $T$.
\begin{lemma}[Sudakov minoration] For any set $T \subseteq \mathbb{R}^n$ and any $\epsilon > 0$
\[ w(T) \gtrsim \epsilon \sqrt{\log \mathcal{N}(T,\epsilon)}.\]
\end{lemma}
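In the form in which Sudakov minoration is used below, it is convenient to rearrange the bound: for every scale $\epsilon > 0$,
\[ \log \mathcal{N}(T,\epsilon) \lesssim \frac{w(T)^2}{\epsilon^2}, \]
so in particular, for $T = K^\circ$ with $w(K^\circ) \le 3/2$ as guaranteed by the lemma above, every scale satisfies $2^{-k}\sqrt{\log \mathcal{N}(K^\circ, 2^{-k})} \lesssim w(K^\circ) = O(1)$.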
Given a convex body $K^\circ$, let $\eps_\mathsf{max} = \lceil 4\log T \rceil$ be the finest scale and $\eps_\mathsf{min} = \lceil \log (1/\mathsf{diam}(K^\circ)) \rceil$ be the coarsest scale. (Note that $\eps_\mathsf{min} \ge -1$ because of the lemma above). Let $T_k$ denote an optimal $2^{-k}$-net of $K^\circ$ where $k \in \{\eps_\mathsf{min}, \cdots, \eps_\mathsf{max}\}$. \\
Define the following layered graph (not a tree) where we index the layers starting from $\eps_\mathsf{min}$: the first layer labeled with index $\eps_\mathsf{min}$ consists of the origin and the vertices of layer $k$ are elements of $T_{k}$. There is an edge between $u \in T_k$ and $v \in T_{k+1}$ iff $v \in B(u,2^{-k})$ where $B(u,r)$ denotes the Euclidean ball of radius $r$ centered at $u$. We identify an edge $(u,v)$ with the vector $u-v$ and define its length as $\|u - v\|$. Let $E_k$ denote the edges between layer $k$ and $k+1$. Note that $|E_k| \le |T_{k+1}|^2 \le \mathcal{N}(K^\circ, 2^{-k-1})^2$ and any $e \in E_k$ has length at most $2^{-k}$.
In our potential we will choose $y$ by first sampling an integer $k$ uniformly in $[\eps_\mathsf{min}, \eps_\mathsf{max}]$ and then taking $y$ uniform in $2^k E_k$. Note that $\|y\|_2 \le 1$. Suppose that the quadratic potential is bounded by $\phi$. Set $M = \frac{\sqrt{\log \phi}}{\alpha}$. Then, we have that for any layer $k$ and any edge $e \in E_k$,
\[\ip{d}{e} \lesssim M \cdot 2^{-(k+1)} \sqrt{\log |T_{k+1}|} = M \cdot 2^{-(k+1)} \sqrt{\log \mathcal{N}(K^\circ, 2^{-(k+1)})}.\]
Let $e_{\eps_\mathsf{min}}, \ldots, e_{\eps_\mathsf{max}}$ denote the edges along any path from the origin to $v \in T_{\eps_\mathsf{max}}$ and note that $v = \sum_{k=\eps_\mathsf{min}}^{\eps_\mathsf{max}} e_k$. Then, for any $d \in \mathbb{R}^n$ with $\|d\|_2 \le T$, we have that
\begin{align*}
\ \sup_{y \in K^\circ} \ip{d}{y} &\le \sup_{v \in T_{\eps_\mathsf{max}}} \ip{d}{v} + o(1) \\
\ & \le (\eps_\mathsf{max} - \eps_\mathsf{min}) \cdot M \cdot \left(\max_k 2^{-(k+1)} \sqrt{\log \mathcal{N}(K^\circ, 2^{-(k+1)})}\right) + o(1) \\
\ & \le c w(K^\circ)\log T \cdot M = O(M \log T) = O(\log^3 T).
\end{align*}
\newpage
\section{Introduction}
We consider the following online vector balancing question, originally
proposed by Spencer \cite{Spencer77}: vectors $v_1,v_2,\ldots,v_T \in \mathbb{R}^n$
arrive online, and upon the arrival of $v_t$, a sign $\chi_t \in \{\pm 1\}$ must be
chosen irrevocably, so that the $\ell_\infty$-norm of the \emph{discrepancy vector} (signed sum)
$d_t := \chi_1 v_1 + \ldots + \chi_t v_t$ remains as small as possible. That is, find the
smallest $B$ such that $\max_{t \in [T]} \|d_t\|_\infty \leq B$.
More generally, one can consider the problem of minimizing $\max_{t \in [T]} \|d_t\|_K$ with respect to arbitrary norms given by a symmetric convex body $K$.
\paragraph{Offline setting.} The offline version of the problem, where the vectors $v_1,\ldots,v_T$ are given in advance,
has been extensively studied in discrepancy theory, and has various applications \cite{Matousek-Book09,Chazelle-Book01,ChenST-Book14}. Here we study three important problems in this vein:
\begin{description}
\item[Tusn\'ady's problem.] Given points $x_1,\ldots, x_T \in [0,1]^d$, we want to assign $\pm$ signs to the points, so that for every axis-parallel box, the difference between the number of points inside the box that are assigned a plus sign and those assigned a minus sign is minimized.
\item[Beck-Fiala and Koml{\'o}s problem.] Given $v_1,\ldots,v_T \in \mathbb{R}^n$ with Euclidean norm at most one, we want to minimize $\max_{t \in [T]} \|d_t\|_\infty$. After scaling, a special case of the Koml{\'o}s problem is the Beck-Fiala setting where $v_1,\ldots,v_T \in [-1,1]^n$ are $s$-sparse (with at most $s$ non-zeros).
\item[Banaszczyk's problem.] Given $v_1,\ldots,v_T \in \mathbb{R}^n$ with Euclidean norm at most one, and a convex body $K \subseteq \mathbb{R}^n$ with Gaussian measure\footnote{The Gaussian measure $\gamma_n(\mathscr{E})$ of a set $\mathscr{E} \subseteq \mathbb{R}^n$ is defined as $\mathbb{P}[G \in \mathscr{E}]$ where $G$ is standard Gaussian in $\mathbb{R}^n$.} $\gamma_n(K) \geq 1-1/(2T)$, find the smallest $B$ so that there exist signs such that $d_t \in B \cdot K$ for all $t\in [T]$.
\end{description}
One of the most general and powerful results here is due to Banaszczyk~\cite{B12}: there exist signs such that $d_t \in O(1) \cdot K$ for all $t\in [T]$ for any convex body $K \subseteq \mathbb{R}^n$ with Gaussian measure\footnote{We remark that if one only cares about the final discrepancy $d_T$, the condition in Banaszczyk's result can be improved to $\gamma_n(K)\geq 1/2$ (though, in all applications we are aware of, this makes no difference if $T=\text{poly}(n)$ and makes a difference of at most $\sqrt{\log T}$ for general $T$).} $\gamma_n(K)\ge 1-1/(2T)$.
In particular, this gives the best known bounds of $O((\log T)^{1/2})$ for the {Koml\'{o}s problem}; for the Beck-Fiala setting, when the vectors are $s$-sparse, the bound is $O((s \log T)^{1/2})$.
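To spell out the first of these implications, one can take $K$ to be a scaled $\ell_\infty$ ball: for $K = \{y \in \mathbb{R}^n : \|y\|_\infty \le C\sqrt{\log (nT)}\}$ with a suitable constant $C$, a standard Gaussian tail bound together with a union bound over the $2n$ facets gives
$$\gamma_n(K) \ge 1 - 2n\, e^{-C^2 \log(nT)/2} \ge 1 - \frac{1}{2T},$$
so Banaszczyk's theorem yields signs with $\|d_t\|_\infty = O(\sqrt{\log (nT)})$, which is $O(\sqrt{\log T})$ in the typical regime $T \ge \mathrm{poly}(n)$.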
An extensively studied case, where sparsity plays a key role, is that of {Tusn\'ady's problem} (see~\cite{Matousek-Book09} for a history), where the best known (non-algorithmic) results, building on a long line of work, are an $O(\log^{d-1/2}T)$ upper bound of~\cite{Nikolov-Mathematika19} and an almost matching $\Omega(\log^{d-1}T)$ lower bound of~\cite{MN-SoCG15}.
In general, several powerful techniques have been developed for offline discrepancy problems over the last several decades, starting with initial non-constructive approaches such as \cite{Beck-Combinatorica81, Spencer85, gluskin-89, Giannopoulos,Banaszczyk-Journal98,B12}, and more recent algorithmic ones such as \cite{Bansal-FOCS10, Lovett-Meka-SICOMP15, Rothvoss14, MNT14, BansalDG16, LevyRR17, EldanS18, BansalDGL18,DNTT18}. However, none of them applies to the online setting that we consider here.
\paragraph{Online setting.}
A na\"ive algorithm is to pick each sign $\chi_t$ randomly and independently, which by standard tail bounds gives $B = \Theta((T \log n)^{1/2})$ with high probability. In typical interesting settings, we have $T \ge \mathrm{poly}(n)$,
and hence a natural question is whether the dependence on $T$ can be
improved from $T^{1/2}$ to say, poly-logarithmic in $T$, and ideally to even match the known offline bounds.
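For a quick numerical illustration of this baseline (a hypothetical simulation, not part of the formal results; all names are ours), one can compare uniformly random signs against the greedy sign rule discussed in Section~\ref{sec:high-level-tech}:

```python
import numpy as np

def max_linf_discrepancy(vectors, signs):
    # Track max_t ||d_t||_inf for the discrepancy vector d_t = sum_{s<=t} chi_s v_s.
    d = np.zeros(vectors.shape[1])
    worst = 0.0
    for v, chi in zip(vectors, signs):
        d += chi * v
        worst = max(worst, float(np.abs(d).max()))
    return worst

def greedy_signs(vectors):
    # Greedy rule chi_t = -sign(<d_{t-1}, v_t>), breaking ties with +1.
    d = np.zeros(vectors.shape[1])
    signs = []
    for v in vectors:
        chi = -1.0 if d @ v > 0 else 1.0
        signs.append(chi)
        d += chi * v
    return signs

rng = np.random.default_rng(0)
T, n = 4000, 8
V = rng.choice([-1.0, 1.0], size=(T, n))  # i.i.d. uniform on {-1,1}^n
random_worst = max_linf_discrepancy(V, rng.choice([-1.0, 1.0], size=T))
greedy_worst = max_linf_discrepancy(V, greedy_signs(V))
```

Empirically, the random baseline grows on the order of $T^{1/2}$ while the greedy rule stays far smaller; the exact values depend on the seed.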
Unfortunately, the $\Omega(T^{1/2})$ dependence is necessary if the adversary is adaptive\footnote{In the sense that the adversary can choose the next vector $v_t$ based on the current discrepancy vector $d_{t-1}$.}: at each time $t$, the adversary can choose the next input vector $v_t$ to be \emph{orthogonal} to $d_{t-1}$, causing $\|d_t\|_2$ to grow as $\Omega(T^{1/2})$ (see \cite{Spencer-Book87} for an even stronger lower bound). Even for very special cases, such as for vectors in $\{-1,1\}^n$, strong $\Omega(2^n)$ lower bounds are known \cite{Barany79}.
Hence, we focus on a natural \emph{stochastic} model where we relax the power of the adversary and assume that the arriving vectors are chosen in an i.i.d.~manner from some---possibly adversarially chosen---distribution $\ensuremath{\mathsf{p}}$. In this case, one could hope to exploit that $\ip{d_{t-1}}{v_t}$ is not always zero, \emph{e.g.}, due to anti-concentration properties of the input distribution, and beat the $\Omega(T^{1/2})$ bound.
Recently, Bansal and Spencer~\cite{BansalSpencer-arXiv19}, considered the special case where $\ensuremath{\mathsf{p}}$ is the uniform distribution on all $\{-1,1\}^n$ vectors, and gave an almost optimal $O(n^{1/2} \log T)$ bound for the $\ell_\infty$ norm that holds with high probability for all $t \in [T]$.
The setting of general distributions $\ensuremath{\mathsf{p}}$ turns out to be harder and was considered recently by \cite{JiangKS-arXiv19} and \cite{BJSS20}, motivated by \emph{envy minimization} problems and an online version of Tusn\'ady's problem. The latter was also considered independently by Dwivedi, Feldheim, Gurel-Gurevich, and Ramadas~\cite{DFGR19} motivated by the problem of placing points uniformly in a grid.
For an arbitrary distribution $\ensuremath{\mathsf{p}}$ supported on vectors in $[-1,1]^n$, \cite{BJSS20} give an algorithm achieving an $O(n^2 \log T)$ bound for the $\ell_\infty$-norm.
In contrast, the best offline bound is $O((n \log T)^{1/2})$, and hence the online bound is an $\widetilde{\Omega}(n^{3/2})$ factor worse, where $\widetilde{\Omega}(\cdot)$ ignores poly-logarithmic factors in $n$ and $T$.
More significantly, the existing bounds for the online version are much worse than those of the offline version for the case of $s$-sparse vectors (\emph{Beck-Fiala} setting) --- \cite{BJSS20} obtain a much weaker bound of $O(s n \log T)$ for the online setting while the offline bound of $O((s\log T)^{1/2})$ is independent of the ambient dimension $n$. These technical limitations also carry over to the online Tusn\'ady problem, where previous works~\cite{JiangKS-arXiv19,DFGR19,BJSS20} could only handle product distributions.
To this end, \cite{BJSS20} propose two key problems in the i.i.d.~setting. First, for a general distribution $\ensuremath{\mathsf{p}}$ on vectors in $[-1,1]^n$, can one get an optimal $\widetilde{O}(n^{1/2})$ or even $\widetilde{O}(n)$ dependence? Second, can one get $\text{poly}(s, \log T)$ bounds when the vectors are $s$-sparse? In particular, as a special case, can one get $(\log T)^{O(d)}$ bounds for the Tusn\'ady problem, when points arrive from an {\em arbitrary} non-product distribution on $[0,1]^d$?
\subsection{Our Results}
In this paper we resolve both the above questions of~\cite{BJSS20}, and prove much more general results that obtain bounds within poly-logarithmic factors of those achievable in the offline setting.
\paragraph{Online Koml{\'o}s and Tusn\'ady settings.}
We first consider Koml{\'o}s' setting for online discrepancy minimization where the vectors have $\ell_2$-norm at most $1$. Recall, the best known offline bound in this setting is $O((\log T)^{1/2})$~\cite{B12}. We achieve the same result, up to poly-logarithmic factors, in the online setting.
\begin{restatable}[Online Koml{\'o}s setting]{theorem}{komlos} \label{thm:komlos}
Let $\mathsf{p}$ be a distribution in $\mathbb{R}^n$ supported on vectors with Euclidean norm at most $1$.
Then, for vectors $v_1, \ldots, v_T$ sampled i.i.d. from $\mathsf{p}$, there is an online algorithm that with high probability maintains a discrepancy vector $d_t$ such that $\| d_t \|_\infty = O(\log^4 (nT))$ for all $t \in [T]$.
\end{restatable}
In particular, for vectors in $[-1,1]^n$ this gives an $O(n^{1/2} \log^4 (nT))$ bound, and for $s$-sparse vectors in $[-1,1]^n$, this gives an $O(s^{1/2} \log^4 (nT))$ bound, both of which are optimal up to poly-logarithmic factors.
The above result implies significant savings for the online Tusn\'ady problem. Call a set $B \subseteq [0,1]^d$ an axis-parallel box if $B = I_1 \times \cdots \times I_d$ for intervals $I_i \subseteq [0,1]$. In the online Tusn\'ady problem, we see points $x_1,\ldots,x_T \in [0,1]^d$ and need to assign signs $\chi_1,\ldots,\chi_T$ in an online manner to minimize the discrepancy of every axis-parallel box at all times. More precisely, for an axis-parallel box $B$, define\footnote{Here, and henceforth, for a set $S$, we denote by $\mathbf{1}_S(x)$ the indicator function that is $1$ if $x \in S$ and $0$ otherwise.}
\[ \mathsf{disc}_t(B) := \Big|\chi_1 \mathbf{1}_B(x_1) + \ldots + \chi_t \mathbf{1}_B(x_t)\Big|.\]
Our goal is to assign the signs $\chi_1,\ldots,\chi_t$ so as to minimize $\max_{t \le T} \mathsf{disc}_t(B)$ for every axis-parallel box $B$.
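Concretely, $\mathsf{disc}_t(B)$ is just a running signed count of the points falling in $B$; the helper below is an illustrative implementation (the function and variable names are ours):

```python
def box_discrepancy(points, signs, box):
    """Return max_{t <= T} disc_t(B) for an axis-parallel box B.

    `box` is a list of (lo, hi) interval endpoints, one per coordinate,
    and disc_t(B) = |sum_{s <= t} chi_s * 1_B(x_s)|.
    """
    running, worst = 0, 0
    for x, chi in zip(points, signs):
        if all(lo <= xi <= hi for xi, (lo, hi) in zip(x, box)):
            running += chi
        worst = max(worst, abs(running))
    return worst
```

For example, three points with signs $+1, +1, -1$ all falling in $B$ give running sums $1, 2, 1$, so the maximum is $2$.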
There is a standard reduction (see \secref{subsec:tusnady}) from the online Tusn\'ady problem to the case of $s$-sparse vectors in $\mathbb{R}^N$ where $s = (\log T)^d$ but the ambient dimension $N$ is $O_d(T^d)$. Using this reduction, along with \thmref{thm:komlos}, directly gives an $O(\log^{3d/2+4} T)$ bound for the online Tusn\'ady problem that works for any {\em arbitrary} distribution on points, instead of just product distributions as in \cite{BJSS20}. In fact, we prove a more general result where we can choose arbitrary directions to test discrepancy and we use this flexibility (see \thmref{thm:gen-disc} below) to improve the exponent of the bound further, and essentially match the best offline bound of $O(\log^{d-1/2} T)$ \cite{Nikolov-Mathematika19}.
\begin{restatable}[Online Tusn\'ady's problem for arbitrary $\ensuremath{\mathsf{p}}$]{theorem}{tusnady}
\label{thm:tusnady}
Let $\ensuremath{\mathsf{p}}$ be an arbitrary distribution on $[0,1]^d$.
For points $x_1,\ldots,x_T$ sampled i.i.d.\ from $\ensuremath{\mathsf{p}}$, there is an algorithm which selects signs $\chi_t \in \{\pm 1\}$ such that with high probability, for every axis-parallel box $B$, we have $\max_{t \in [T]} \mathsf{disc}_t(B) = O_d(\log^{d+4} T)$.
\end{restatable}
\thmref{thm:komlos} and \thmref{thm:tusnady} follow from the more general result below.
\begin{restatable}[Discrepancy for Arbitrary Test Directions]{theorem}{test}
\label{thm:gen-disc}
Let $\mathscr{E} \subseteq \mathbb{R}^n$ be a finite set of test vectors with Euclidean norm at most $1$ and $\mathsf{p}$ be a distribution in $\mathbb{R}^n$ supported on vectors with Euclidean norm at most $1$. Then, for vectors $v_1, \ldots, v_T$ sampled i.i.d. from $\mathsf{p}$, there is an online algorithm that with high probability maintains a discrepancy vector $d_t$ satisfying
\begin{align*}
\max_{z \in \mathscr{E}} |d_t^\top z| = O((\log (|\mathscr{E}|) + \log T)\cdot \log^3(nT)) ~\text{ for every } t \in [T].
\end{align*}
\end{restatable}
In fact, the proof of the above theorem also shows that given any arbitrary distribution on unit test vectors $z$, one can maintain a bound on the exponential moment $\mathbb{E}_z[\exp(|\ip{d_t}{z}|)]$ at all times.
The key idea involved in proving \thmref{thm:gen-disc} above, is a novel potential function approach. In addition to controlling the discrepancy $d_t$ in the test directions, we also control how the distribution of $d_t$ relates to the input vector distribution $\ensuremath{\mathsf{p}}$. This leads to better anti-concentration properties, which in turn gives better bounds on discrepancy in the test directions.
We describe this idea in more detail in Sections
\ref{sec:high-level-tech} and \ref{sec:proofOverview}.
\paragraph{Online Banaszczyk setting.}
Next, we consider discrepancy with respect to general norms given by an arbitrary convex body $K$. To recall, in the offline setting, Banaszczyk's seminal result \cite{B12} shows that if $K$ is any convex body with Gaussian measure $1-1/(2T)$, then for any vectors $v_1,\ldots,v_T$ of $\ell_2$-norm at most $1$, there exist signs $\chi_1,\ldots, \chi_T$ such that the discrepancy vectors $d_t \in O(1) \cdot K$ for all $t \in [T]$.
Here we study the online version when the input distribution $\mathsf{p} \in \mathbb{R}^n$ has sufficiently good tails. Specifically, we say a univariate random variable $X$ has \emph{sub-exponential tails} if for all $r > 0$, $\mathbb{P}\big[ |X - \mathbb{E}[X]| > r \sigma(X)\big] \leq e^{-\Omega(r)}$,
where $\sigma(X)$ denotes the standard-deviation of $X$. We say a multi-variate distribution $\mathsf{p} \in \mathbb{R}^n$ has sub-exponential tails if all its one-dimensional projections have sub-exponential tails. That is, $$\mathbb{P}_{v \sim p}\left[\Big|\ip{v}{\theta} - \mu_\theta \Big| \ge \sigma_\theta \cdot r\right] \le e^{-\Omega(r)} ~~ \text{ for every } \theta \in \mathbb{S}^{n-1} \text{ and every } r>0,$$
where $\mu_\theta$ and $\sigma_\theta$ are the mean and standard deviation\footnote{Note that when the input distribution $\mathsf{p}$ is $\alpha$-isotropic, \emph{i.e.} the covariance is $\alpha I_n$, then $\sigma_\theta = \alpha$ for every direction $\theta$, but the above definition is a natural generalization to handle an arbitrary covariance structure.} of the scalar random variable $X_\theta = \ip{v}{\theta}$.
Many natural distributions have sub-exponential tails, such as when $v$ is chosen uniformly over the vertices of the $\{\pm 1\}^n$ hypercube (scaled to have Euclidean norm one), uniformly from a convex body, from a Gaussian distribution (scaled to have bounded norm with high probability), or uniformly on the unit sphere; in these cases our bounds match the offline bounds up to poly-logarithmic factors.
\begin{restatable}[Online Banaszczyk Setting]{theorem}{Banaszczyk}
\label{thm:gen-disc-ban}
Let $K \subseteq \mathbb{R}^n$ be a symmetric convex body with $\gamma_n(K) \ge 1/2$ and $\mathsf{p}$ be a distribution with sub-exponential tails that is supported over vectors of Euclidean norm at most 1. Then, for vectors $v_1, \ldots, v_T$ sampled i.i.d. from $\mathsf{p}$, there is an online algorithm that with high probability maintains a discrepancy vector $d_t$ satisfying $d_t \in C \log^5(nT) \cdot K$ for all $t \in [T]$ and a universal constant $C$.
\end{restatable}
The proof of the above theorem, while similar in spirit to \thmref{thm:gen-disc}, is much more delicate. In particular, we cannot use that theorem directly as capturing a general convex body as a polytope may require an exponential number of constraints (the set $\mathscr{E}$ of test vectors).
\paragraph{Online Weighted Multi-Color Discrepancy.}
Finally we consider the setting of weighted multi-color discrepancy, where we are given vectors $v_1, \ldots, v_T \in \mathbb{R}^n$ sampled i.i.d. from a distribution $\mathsf{p}$ on vectors with $\ell_2$-norm at most one, an integer $R$ which is the number of colors available, positive weights $w_c \in [1,\eta]$ for each color $c \in [R]$, and a norm $\arbnorm{\cdot}$. At each time $t$, the algorithm has to choose a color $c \in [R]$ for the arriving vector, so that the discrepancy $\mathsf{disc}_t$ with respect to $\arbnorm{\cdot}$, defined below, is minimized for every $t \in [T]$:
\begin{align*}
\mathsf{disc}_t(\arbnorm{\cdot}) := \max_{c\neq c'} \mathsf{disc}_t(c,c') ~\text{ where }~ \mathsf{disc}_t(c,c') := \arbnorm{ \frac{d_c(t) / w_c - d_{c'}(t) / w_{c'} } {1/w_c + 1/w_{c'} }},
\end{align*}
with $d_c(t)$ being the sum of all the vectors that have been given the color $c$ till time $t$. We note that (up to a factor of two) the case of unit weights and $R=2$ is the same as assigning $\pm$ signs to the vectors $(v_i)_{i \le T}$, and we will also refer to this setting as \emph{signed discrepancy}.
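The definition above can be made concrete with a short helper (an illustrative sketch; the function and variable names are ours):

```python
import numpy as np

def multicolor_disc(d, w, norm):
    """Compute disc_t = max_{c != c'} ||(d_c/w_c - d_{c'}/w_{c'}) / (1/w_c + 1/w_{c'})||.

    d: dict mapping each color c to its accumulated vector d_c(t);
    w: dict mapping each color c to its weight w_c;
    norm: the norm ||.||_* as a callable.
    """
    colors = list(d)
    worst = 0.0
    for i, c in enumerate(colors):
        for cp in colors[i + 1:]:
            diff = (d[c] / w[c] - d[cp] / w[cp]) / (1.0 / w[c] + 1.0 / w[cp])
            worst = max(worst, float(norm(diff)))
    return worst
```

With unit weights and $R=2$ colors, $d_1(t) - d_2(t)$ is the signed sum and the formula returns half of its norm, matching the factor-of-two remark above.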
We show that the bounds from the previous results also extend to the setting of multi-color discrepancy.
\begin{restatable}[Weighted multi-color discrepancy]{theorem}{multicolor}\label{thm:multicolor-intro}
For any input distribution $\mathsf{p}$ and any set $\mathscr{E}$ of $\mathrm{poly}(nT)$ test vectors with Euclidean norm at most one, there is an online algorithm for the weighted multi-color discrepancy problem that maintains discrepancy $O(\log^2(R \eta) \cdot \log^4(nT))$ with the norm $\|\cdot\|_* = \max_{z \in \mathscr{E}} |\ip{\cdot}{z}|$.
Further, if the input distribution $\mathsf{p}$ has sub-exponential tails then one can maintain multi-color discrepancy $O(\log^2(R \eta)\cdot \log^5(nT))$ for any norm $\|\cdot\|_*$ given by a symmetric convex body $K$ satisfying $\gamma_n(K) \ge 1/2$.
\end{restatable}
As an application, the above theorem implies upper bounds for multi-player envy minimization in the online stochastic setting, as defined in~\cite{BenadeKPP-EC18}, by reductions similar to those in \cite{JiangKS-arXiv19} and \cite{BJSS20}.
We remark that in the offline setting, such a statement with logarithmic dependence in $R$ and $\eta$ is easy to prove by identifying the various colors with leaves of a binary tree and recursively using the offline algorithm for signed discrepancy. It is not clear how to generalize such a strategy to the online stochastic setting, since the algorithm for signed discrepancy might use the stochasticity of the inputs quite strongly.
By exploiting the idea of working with the Haar basis, we show how to implement such a strategy in the online stochastic setting: we prove that if there is a greedy strategy for the signed discrepancy setting that uses a potential satisfying certain requirements, then it can be converted to the weighted multi-color discrepancy setting in a black-box manner.
\subsection{High-Level Approach} \label{sec:high-level-tech}
Before describing our ideas, it is useful to discuss the bottlenecks in the previous approach. In particular,
the quantitative bounds for the online Koml\'{o}s problem, as well as for the case of sparse vectors obtained in \cite{BJSS20} are the best possible using their approach, and improving them further required new ideas. We describe these ideas at a high-level here, and refer to Section~\ref{sec:proofOverview}
for a more technical overview.
\paragraph{Limitations of previous approach.}
For intuition, let us first consider the simpler setting, where we care about minimizing the Euclidean norm of the discrepancy vector $d_t$ --- this will already highlight the main issues. As mentioned before, if the adversary is adaptive in the online setting, then they can always choose the next input vector $v_t$ to be orthogonal to $d_{t-1}$ (i.e., $\ip{d_{t-1}}{v_t} = 0$) causing $\|d_t\|_2$ to grow as $T^{1/2}$. However, if $\ip{d_{t-1}}{v_t}$ is typically large, then one can reduce $\|d_t\|_2$ by choosing $\chi_t = - \mathsf{sign}(\ip{d_{t-1}}{v_t})$, as the following shows:
\begin{equation}\label{eqn:l2}
\ \|d_t\|_2^2 - \|d_{t-1}\|_2^2 ~=~ 2\chi_t \cdot \ip{d_{t-1}}{v_t} + \|v_t\|^2_2 ~\le~ -2 |\ip{d_{t-1}}{v_t}| + 1.
\end{equation}
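The identity above is elementary, and the snippet below checks it numerically for the greedy sign choice (purely illustrative; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d_prev = rng.normal(size=5)             # current discrepancy vector d_{t-1}
v = rng.normal(size=5)                  # incoming vector v_t
chi = -1.0 if d_prev @ v > 0 else 1.0   # greedy: chi_t = -sign(<d_{t-1}, v_t>)
d_new = d_prev + chi * v

lhs = d_new @ d_new - d_prev @ d_prev
rhs = 2 * chi * (d_prev @ v) + v @ v
# The expansion ||d_t||^2 - ||d_{t-1}||^2 = 2 chi_t <d_{t-1}, v_t> + ||v_t||^2 ...
assert abs(lhs - rhs) < 1e-9
# ... and with the greedy sign the cross term becomes -2|<d_{t-1}, v_t>|.
assert lhs <= -2 * abs(d_prev @ v) + v @ v + 1e-9
```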
The key idea in \cite{BJSS20} was that if the vector $v_t$ has uncorrelated coordinates (i.e.~$\mathbb{E}_{v_t \sim \ensuremath{\mathsf{p}}} [v_t(i)v_t(j)]=0$ for $i\neq j$), then one can exploit \emph{anti-concentration} properties to essentially argue that $|\ip{d_{t-1}}{v_t}|$ is typically large when $\|d_{t-1}\|_2$ is somewhat big, and the greedy choice above works, as it gives a \emph{negative drift} for the $\ell_2$-norm. However, uncorrelated vectors satisfy provably weaker anti-concentration properties, by up to an $n^{1/2}$ factor ($s^{1/2}$ for $s$-sparse vectors), compared to those with independent coordinates.
This can lead to an extra $n^{1/2}$ loss in general.
Moreover, to ensure uncorrelated coordinates one has to work in the eigenbasis of the covariance matrix of $\ensuremath{\mathsf{p}}$, which could destroy sparsity in the input vectors and give bounds that scale polynomially with $n$. \cite{BJSS20} also show that one can combine the above high-level uncorrelation idea with a potential function that tracks a soft version of the maximum discrepancy in any coordinate,
\begin{align}\label{eq:potential}
\ \Phi_{t-1} = \sum_{i=1}^n \exp( \lambda d_{t-1}(i)),
\end{align}
to even get bounds on the $\ell_\infty$-norm of $d_t$. However, this is also problematic as it might lead to another factor $n$ loss, due to a change of basis (twice).
To achieve sparsity based bounds in the special case of online Tusn\'ady's problem, previous approaches use the above ideas and exploit the special problem structure. In particular, when the input distribution $\mathsf{p}$ is a product distribution, \cite{BJSS20} (and \cite{DFGR19}) observe that one can work with the natural Haar basis which also has a product structure in $[0,1]^d$ --- this makes the input vectors uncorrelated, while simultaneously preserving the sparsity due to the recursive structure of the Haar basis. However, this severely restricts $\ensuremath{\mathsf{p}}$ to product distributions and previously, it was unclear how to even handle a mixture of two product distributions.
\paragraph{New potential: anti-concentration from exponential moments.}
Our results are based on a new potential. Typical potential analyses for online problems show that no matter what the current state is, the potential does not rise much when the next input arrives. As discussed above, this is typically exploited in the online discrepancy setting using \emph{anti-concentration} properties of the incoming vector $v_t \sim \mathsf{p}$ --- one argues that no matter the current discrepancy vector $d_{t-1}$, the inner product $\ip{d_{t-1}}{v_t}$ is typically large so that a sign can be chosen to decrease the potential (recall \eqref{eqn:l2}).
However, as in \cite{BJSS20}, such a worst-case analysis is restrictive as it requires $\ensuremath{\mathsf{p}}$ to have additional desirable properties such as uncorrelated coordinates. A key conceptual idea in our work is that instead of just controlling a suitable proxy for the norm of the discrepancy vectors $d_t$, we also seek to control structural properties of the distribution $d_t$. Specifically, we also seek to evolve the distribution of $d_t$ so that it has better anti-concentration properties with respect to the input distribution. In particular, one can get much better anti-concentration for a random variable if one also has control on the higher moments. For instance, if we can bound the fourth moment of the random variable $Y_t \equiv \ip{d_{t-1}}{v_t}$, in terms of its variance, say $\mathbb{E}[Y_t^4] \ll \mathbb{E}[Y_t^2]^2$, then the Paley-Zygmund inequality implies that $Y_t$ is far from zero. However, working with $\mathbb{E}[Y_t^4]$ itself is too weak as an invariant and necessitates looking at even higher moments.
A key idea is that these hurdles can be handled cleanly by looking at another potential that controls the \emph{exponential moment} of $Y_t$. Specifically, all our results are based on an aggregate potential function based on combining a potential of the form \eqref{eq:potential}, which enforces \emph{discrepancy constraints}, together with variants of the following potential, for a suitable parameter $\lambda$, which enforces \emph{anti-concentration constraints}:
$$\Phi_t \sim \mathbb{E}_v[\exp(\lambda |\ip{d_{t}}{v}|)].$$
This clearly allows us to control higher moments of $\ip{d_t}{v}$, in turn allowing us to show strong anti-concentration properties without any assumptions on $\ensuremath{\mathsf{p}}$. We believe that the above idea of controlling the space of possible states that the algorithm can be in could prove useful for other applications.
To illustrate the idea in the concrete setting of $\ell_2$-discrepancy, let us consider the case when the input distribution $\mathsf{p}$ is mean-zero and $1/n$-\emph{isotropic}, meaning the covariance $\mathbf{\Sigma}=\mathbb{E}_{v\sim \mathsf{p}}[vv^\top]= I_n/n$. Here, if we knew that the exponential moment $\mathbb{E}_{v\sim \mathsf{p}}[\exp(|\ip{d_{t-1}}{v}|)] \le T$, then it implies that with high probability $|\ip{d_{t-1}}{v}| \le \log T$ for $v \sim \mathsf{p}$. To avoid technicalities, let us assume that $|\ip{d_{t-1}}{v}| \le \log T$ holds with probability one. Therefore, when $v_t$, sampled independently from $\mathsf{p}$, arrives, then since $\mathbb{E}\big[|A|\big] \ge {\mathbb{E}[AB]}/{\|B\|_\infty}$ for any coupled random variables $A$ and $B$ with $B \neq 0$, taking $A=\ip{d_{t-1}}{v_t}$ and $B = \ip{d_{t-1}}{v_t}/\log T$ (so that $\|B\|_\infty \le 1$), we get that
\[ \mathbb{E}[|\ip{d_{t-1}}{v_t}|] ~\ge~ \frac{1}{\log T}\cdot \mathbb{E}_{v_t}[ d_{t-1}^\top v_tv_t^\top d_{t-1}] ~=~ \frac{1}{\log T} \cdot d_{t-1}^\top \mathbf{\Sigma} d_{t-1} ~=~ \frac{\|d_{t-1}\|_2^2}{n\log T}.\]
Therefore, whenever $\|d_{t-1}\|_2 \gg (n\log T)^{1/2}$, the drift in the $\ell_2$-norm of the discrepancy vector is negative. Thus, we can obtain the optimal $\ell_2$-discrepancy bound of $O((n\log T)^{1/2})$.\\
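For intuition, this negative drift can be seen from the following one-step computation (a sketch under the simplifying assumptions above, with the greedy choice $\chi_t = -\mathrm{sign}(\ip{d_{t-1}}{v_t})$ and using $\|v_t\|_2 \le 1$):
\[ \mathbb{E}\big[\|d_t\|_2^2 - \|d_{t-1}\|_2^2\big] ~=~ \mathbb{E}\big[2\chi_t \ip{d_{t-1}}{v_t} + \|v_t\|_2^2\big] ~\le~ -2\,\mathbb{E}\big[|\ip{d_{t-1}}{v_t}|\big] + 1 ~\le~ -\frac{2\|d_{t-1}\|_2^2}{n\log T} + 1,\]
which is indeed negative once $\|d_{t-1}\|_2^2 > n\log T$.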
\noindent {\bf Banaszczyk setting.} In the Banaszczyk setting, the algorithm uses a carefully chosen set of test vectors at different scales that come from \emph{generic chaining}. In particular, we use a potential function based on test vectors derived from the generic chaining decomposition of the polar $K^{\circ}$ of the body $K$.
However, as there can now be exponentially many such test vectors, more care is needed. First, we use that the Gaussian measure of $K$ is large to control the number of test vectors at each scale in the generic chaining decomposition of $K^{\circ}$.
Second, to be able to perform a union bound over the test vectors at each scale, one needs substantially stronger tail bounds than in \thmref{thm:gen-disc}. To do this, we scale the test vectors to be quite large, but this becomes problematic with standard tools for potential analysis, such as Taylor approximation, as the update to each term in the potential can be much larger than potential itself, and hard to control. Nevertheless, we show that if the distribution has sub-exponential tails, then such an approximation holds ``on average'' and the growth in the potential can be bounded.
\paragraph{Concurrent and Independent Work.}
In a concurrent and independent work, Alweiss, Liu, and Sawhney \cite{ALS-arXiv20} obtained online algorithms achieving poly-logarithmic discrepancy bounds for the Koml\'os and Tusn\'ady problems in the more general setting where the adversary is oblivious. Their techniques, however, are completely different from the potential function based techniques of the present paper. In fact, as noted by the authors of \cite{ALS-arXiv20}, a potential function analysis encounters significant difficulties here --- the algorithm is required to control the evolution of the discrepancy vectors and such an invariant is difficult to maintain with a potential function, even for stochastic inputs. With the techniques and conceptual ideas we introduce, we can overcome this barrier in the stochastic setting. We believe that our potential-based approach to control the state space of the algorithm could prove useful for other stochastic problems.
\section{Discrepancy for Isotropic Convex Bodies}
\label{sec:isotropic}
We say that a symmetric convex body $K \subseteq \mathbb{R}^n$ is \emph{dual isotropic} if the covariance matrix for the uniform distribution on $K^\circ$ is the identity matrix $I_n$. In this scenario, we also call the polar body $K^\circ$ isotropic.
We remark that this is not the usual notion of the isotropic position of a convex body as we do not require $K^\circ$ to be of unit volume.
In this section, we prove the following.
\begin{theorem}\label{thm:iso-disc}
Let $K \subseteq \mathbb{R}^n$ be a symmetric convex body such that $K/n$ is dual isotropic. Let $\mathsf{p}$ be a distribution supported over vectors with Euclidean norm at most $1$ in $\mathbb{R}^n$. Then, for vectors $v_1, \ldots, v_T$ sampled i.i.d. from $\mathsf{p}$, there is an online algorithm that w.h.p. maintains a discrepancy vector $d_t$ satisfying $\|d_t\|_K = O(\log^6(nT))$ at all times $t \in [T]$.
\end{theorem}
Note that the scaling above is chosen to make it comparable to the Koml\'os and Banaszczyk settings.
\subsection{Preliminaries on Isotropic Convex Bodies and Log-concave Measures}
For a linear subspace $H \subseteq \mathbb{R}^n$ and a convex body $K \subseteq H$, we say that $K$ is dual isotropic with respect to $H$ if the covariance matrix of the uniform distribution on the polar body $K^\circ$ (taken within $H$) is $\Pi_H$, where $\Pi_H$ is the orthogonal projection onto the subspace $H$.
\begin{proposition} \label{prop:isotropic}
Let $K \subseteq \mathbb{R}^n$ be a symmetric convex body that is dual isotropic, and let $H \subseteq \mathbb{R}^n$ be a linear subspace. Then, $K\cap H$ is dual isotropic with respect to $H$.
\end{proposition}
\begin{proof}
Let $X$ be drawn uniformly from $K^\circ$, so that $Y=\Pi_HX$ is uniform over $(K \cap H)^\circ$. Then the covariance matrix of $Y$ is given by
\[ \mathbb{E}_y[yy^T] = \mathbb{E}_x[\Pi_H xx^T \Pi_H] = \Pi_H \cdot I_n \cdot \Pi_H = \Pi_H. \qedhere\]
\end{proof}
Isotropic bodies are approximated by balls as the following proposition shows.
\begin{proposition}[\cite{KLS95}]
\label{prop:kls}
If $K \subseteq \mathbb{R}^n$ is a dual isotropic convex body, and $B$ is the unit Euclidean ball, then
\[ \sqrt{\frac{n+2}{n}}B \subseteq K^\circ \subseteq \sqrt{n(n+2)}B.\]
\end{proposition}
\begin{lemma}\label{lemma:john}
Let $K \subseteq \mathbb{R}^n$ be a dual isotropic convex body. Then, for any vector $d \in \mathbb{R}^n$,
\begin{align*}
\mathbb{P}_{y \sim K^\circ}\left[\ip{d}{y} \geq \frac12 \|d\|_K\right] \geq n^{-2n}.
\end{align*}
\end{lemma}
\newcommand{\mathsf{Cov}}{\mathsf{Cov}}
\begin{proof}
The following sandwiching condition is by John's theorem:
\begin{align*}
E(\mathsf{Cov}(K^\circ)^{-1}) \subseteq K^\circ \subseteq \sqrt{n} \cdot E(\mathsf{Cov}(K^\circ)^{-1}),
\end{align*}
where $E(A) = \{y: y^\top A y \leq 1\}$.
Let $v = d / \norm{d}_2$ be the unit vector in the direction of $d$.
We consider cross-sections of $K^\circ$ of the form $K^\circ \cap (v^\perp + t v)$, whose $(n-1)$-dimensional volume is denoted by $g_{K,v}(t)$.
The largest $t_{\max}$ satisfying $K^\circ \cap (v^\perp + t_{\max} \cdot v) \neq \emptyset$ gives an extreme point $x_d$ of $K^\circ$ (this might be a face of $K^\circ$, but the proof is the same) such that $\langle d, x_d \rangle = \|d\|_K$.
We are interested in the ratio between the volume of $K^\circ \cap (\cup_{t \geq t_{\max}/2} (v^\bot + tv))$ and the volume of $K^\circ$.
We note that the cone formed by $x_d$ and $K^\circ \cap v^\bot$ (the cross-section through the centroid) lies in $K^\circ$ and has volume at least $\frac{1}{2n}$ times the volume of the inscribed ellipsoid (the factor $n$ comes from enlarging the cone to a cylinder, which has volume at least half of the inscribed ellipsoid).
This proves that the top $1/2$ height of the cone has volume at least $\frac{1}{2n \cdot 2^n}$ times the volume of the inscribed ellipsoid, which is at least $\frac{1}{2n \cdot 2^n \cdot \sqrt{n}^n}$ times the volume of $K^\circ$.
This proves the lemma.
\end{proof}
\paragraph{Log-concave Measures.}
A probability measure $\mathsf{p}$ on $\mathbb{R}^n$ is called log-concave if for all non-empty measurable sets $\mathcal{A}, \mathcal{B} \subseteq \mathbb{R}^n$ and for all $0< \lambda < 1$, we have
\[ \mathsf{p}(\lambda \mathcal{A} + (1-\lambda)\mathcal{B}) \ge \mathsf{p}(\mathcal{A})^\lambda\mathsf{p}(\mathcal{B})^{1-\lambda}.\]
It is well-known that the uniform distribution on a convex body is log-concave. Log-concavity is preserved under affine transformations and under taking marginals (cf.\ \cite{LV07}, Theorem 5.1).
Log-concave distributions on the real line are sub-exponential (see Lemma 5.7 and Theorem 5.22 in \cite{LV07}) and in particular, \pref{prop:logconcave} also holds for log-concave probability distributions on $\mathbb{R}$.
\subsection{Potential Function and Algorithm}
Our algorithm will use a similar greedy strategy as before with a suitable test distribution $\mathsf{p}_z$ that will be chosen later. Let $(\mathbf{\Sigma}, (\Pi_k)_k, (M_k)_k)$ be the covariance decomposition of $\mathsf{p}$. Define the noisy distribution $\mathsf{p}_x = \mathsf{p}/2 + \mathsf{p}_z/2$.
At any time step $t$, let $d_{t} = \chi_1 v_1 + \ldots + \chi_t v_t$ denote the current discrepancy vector after the signs $\chi_1, \ldots, \chi_t \in \{\pm1\}$ have been chosen. Set $\lambda^{-1} = 12\log^2(nT)$, and define the potential
\[ \Phi_t = \Phi(d_t) := \sum_{k=0}^{\kappa} \mathbb{E}_{x\sim \mathsf{p}_x}\left[\exp\left(\lambda ~d_{t}^\top M^+_k x \right)\right].\]
When the vector $v_t$ arrives, the algorithm chooses the sign $\chi_t$ that minimizes the increase $\Phi_t - \Phi_{t-1}$.
\paragraph{Test Distribution.} To complete the description of the algorithm, we need to choose a suitable distribution on test vectors to give us control on the norm $\|\cdot\|_K = \sup_{y \in K^\circ} \ip{\cdot}{y}$.
As before, let us denote by $H_k = \mathrm{im}(\Pi_k)$ the linear subspace that is the image of the projection matrix $\Pi_k$ where the subspaces $\{H_k\}_{k \le \kappa}$ are orthogonal and span $\mathbb{R}^n$. Let us denote by $K_k = K \cap H_k$ the slice of the convex body $K$ with the subspace $H_k$. \pref{prop:isotropic} implies that the dual polar bodies $K^\circ_k := (K_k)^\circ = \Pi_k (K^\circ)$ are also in the dual isotropic position with respect to the corresponding subspace $H_k$.
Pick the final test distribution as $\mathsf{p}_z = \mathsf{p}_\mathbf{\Sigma}/2 + \mathsf{p}_y/2$ where $\mathsf{p}_\mathbf{\Sigma}$ and $\mathsf{p}_y$ denote the distributions given in \figref{fig:iso-test}.
\begin{figure}[!h]
\begin{tabular}{|l|}
\hline
\begin{minipage}{\textwidth}
\vspace{1ex}
\begin{enumerate}[(a)]
\item $\mathsf{p}_\mathbf{\Sigma}$ is uniform over the eigenvectors of the covariance matrix $\mathbf{\Sigma}$.
\item $\mathsf{p}_y$ samples a random vector as follows: pick an integer $k$ uniformly from $[0,\kappa]$ and choose a uniform vector from $r nK^\circ_k$, where the polar body $nK^\circ$ is isotropic and $r=1/(C\log(nT))$.
\end{enumerate}
\vspace{0.1ex}
\end{minipage}\\
\hline
\end{tabular}
\caption{Test distributions $\mathsf{p}_\mathbf{\Sigma}$ and $\mathsf{p}_y$}
\label{fig:iso-test}
\end{figure}
As before, adding the eigenvectors allows us to control the Euclidean length of the discrepancy vectors in the subspaces $H_k$, as they form an orthonormal basis for these subspaces. As in the previous section, the test vectors chosen above may have large Euclidean length; in particular, \pref{prop:kls} implies that any vector in the support of $\mathsf{p}_y$ has Euclidean norm at most $O(n)$.
The test distribution $\mathsf{p}_z$ is useful because of the following lemma.
\begin{lemma} \label{lemma:isotropic}
At any time $t$, if $\Phi_t \le T$, then we have that
\[ \|\Pi_kd\|_2 \le \sqrt{\lambda^{-1}\dim(H_k)\log(nT)} ~\text{ and }~ \|d\|_K \le \log^6(nT).\]
\end{lemma}
\begin{proof}
The first statement is the same as that of \lref{lemma:chaining}. To see the bound on $\|d\|_K$, since $M^+_k M_k = \Pi_k$, our choice of the test distribution gives us the following for every $k$,
\[ \sup_{y \in K^\circ_k} \exp\left( \lambda r n \cdot |d^\top\Pi_k y |\right) ~\le~ n^{\dim(H_k)} 4\kappa \Phi,\]
where the inequality follows from \lref{lemma:john}.
Therefore, $|d^\top \Pi_k y| \le (r\lambda)^{-1}\log^2(nT) \le \log^5(nT)$ for every $k$, and hence
\[ \|d\|_K = \sup_{y \in K^\circ} \ip{d}{y} \le \sum_{k=0}^\kappa \sup_{y \in K^\circ_k} \ip{d}{\Pi_k y} \le \log^6(nT). \qedhere\]
\end{proof}
The next lemma shows that most of the time the expected increase (or drift) in the potential is small.
\begin{lemma}[Bounded Positive Drift]\label{lemma:iso-drift} If at any time $t$, it holds that $ \Phi_{t-1} \le T$, then there exists an event $\mathcal{E}_t$ such that
\[\mathbb{P}_{v_t \sim \mathsf{p}}(\mathcal{E}_t) \ge 1-T^{-4} ~\text{ and } ~\mathbb{E}_{v_t \sim \mathsf{p}}[\Phi_t \cdot \mathbf{1}_{\mathcal{E}_t}] - \Phi_{t-1} = O(1).\]
\end{lemma}
By a union bound and a truncation argument as used in the previous proofs, this implies that w.h.p. the potential satisfies $\Phi_t \le 3T^5$ for every $t \in [T]$. Combined with \lref{lemma:isotropic}, this proves \thmref{thm:iso-disc}.
We prove \lref{lemma:iso-drift} in the next section.
\subsection{Drift Analysis: Proof of \lref{lemma:iso-drift}} We first note the following tail bound analogous to \lref{lemma:tail} and \lref{lemma:gen-tail-ban}. Let $\mathcal G_t = \mathcal G(d_t)$ denote the set of vectors $v$ in the support of $\mathsf{p}$ such that $\lambda |d_t^\top M^+_k v| \le \log (4\kappa \Phi_t/\delta)$ for every $k \le \kappa$.
\begin{lemma}\label{lemma:iso-tail}
For any $\delta > 0$ and any time $t$, we have $\mathbb{P}_{v \sim \mathsf{p}}(v \notin \mathcal G_t) \le \delta$.
\end{lemma}
\begin{proof}[Proof of \lref{lemma:iso-drift}]
Let us fix a time $t$. To simplify the notation, let $\Phi = \Phi_{t-1}$ and $\Delta\Phi = \Phi_t - \Phi$, and let $d = d_{t-1}$ and $v = v_t$.
To bound the change $\Delta \Phi$, we use the following inequality, which follows from a modification of the Taylor series expansion of $\cosh(r)$ and holds for any $a,b \in \mathbb{R}$,
\begin{align}\label{eqn:taylor}
\ \cosh(\lambda a)-\cosh(\lambda b) & \le \lambda \sinh(\lambda b) \cdot (a - b) + \frac{\lambda^2}{2} \cosh(\lambda b) \cdot e^{|a-b|}(a - b)^2.
\end{align}
Note that when $|a-b| \ll 1$, then $e^{|a-b|} \le 2$, so one gets the first two terms of the Taylor expansion as an upper bound, but here we will also use it when $|a-b|\gg 1$.
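For completeness, \eqref{eqn:taylor} can be verified as follows (a sketch assuming $\lambda \le 1$, which holds for our choice of $\lambda$): by Taylor's theorem with the Lagrange form of the remainder, there is a $\xi$ between $\lambda a$ and $\lambda b$ such that
\[ \cosh(\lambda a)-\cosh(\lambda b) ~=~ \lambda \sinh(\lambda b)\cdot(a-b) + \frac{\lambda^2}{2}\cosh(\xi)\cdot(a-b)^2,\]
and since $|\xi - \lambda b| \le \lambda|a-b| \le |a-b|$, we have $\cosh(\xi) \le \cosh(\lambda b)\, e^{|a-b|}$.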
Define the distribution $\mathsf{p}_w = \frac23 \mathsf{p} + \frac13\mathsf{p}_\mathbf{\Sigma}$ and note that the distribution $\mathsf{p}_x$ appearing in the potential $\Phi$ satisfies $\mathsf{p}_x = \frac34 \mathsf{p}_w + \frac14 \mathsf{p}_y$. Furthermore, observe that every vector in the support of $\mathsf{p}_w$ has Euclidean length at most $1$, while $y \sim \mathsf{p}_y$ may have large Euclidean length.
After choosing the sign $\chi_t$, the discrepancy vector is $d_t = d + \chi_t v$. Defining $s_{k}(x) = \sinh(\lambda \cdot d^\top M^+_k x)$ and $c_{k}(x) = \cosh(\lambda \cdot d^\top M^+_k x)$ for any $x \in \mathbb{R}^n$, and decomposing $\Delta\Phi = \Delta\Phi_1 + \Delta\Phi_2$ into the contributions from $\mathsf{p}_w$ and $\mathsf{p}_y$ respectively, \eqref{eqn:taylor} implies that
\begin{align*}
\ \Delta\Phi_1 &\le \chi_t \cdot \frac34\left (\sum_{k = 0}^{\kappa} \lambda ~\mathbb{E}_{w}\left[ s_{k}(w) v^\top M^+_k w\right]\right) + \frac34\sum_{k = 0}^{\kappa} \lambda^2 ~\mathbb{E}_{w}\left[c_{k}(w) \cdot w^\top M^+_k vv^\top M^+_k w\right] := \chi_t L_1 + Q_1, \text{ and}\\
\ \ \Delta\Phi_2 &\le \chi_t \cdot \frac14\left (\sum_{k = 0}^{\kappa} \lambda ~\mathbb{E}_{y}\left[ s_{k}(y) v^\top M^+_k y\right]\right) + \frac14\sum_{k = 0}^{\kappa} \lambda^2 ~\mathbb{E}_{y}\left[c_{k}(y) \cdot e^{\lambda |v^\top M^+_k y|} y^\top M^+_k vv^\top M^+_k y\right] := \chi_t L_2 + Q_2.
\end{align*}
Since our algorithm uses the greedy strategy, choosing $\chi_t$ to be the sign that minimizes the potential, we get that for every $v$,
\begin{align*}
\ \Delta\Phi &\le -|L_1 + L_2| + Q_1 + Q_2.
\end{align*}
Unlike before, we will not be able to prove that the expected increase in the potential is always small, so let us define the good event $\mathcal G:=\mathcal G_t$ that $\lambda |d^\top\Pi_k v| \le 12\log T$ for every $k$. \lref{lemma:iso-tail} implies that $\mathbb{P}_v(v \notin \mathcal G_t) \le T^{-4}$.
Given $v \in \mathbb{R}^n$, let $P^\bot_v$ denote the projection on to the $(n-1)$-dimensional subspace orthogonal to $v$. We will prove the following upper bounds on the quadratic (in $\lambda$) terms $Q_1$ and $Q_2$.
\begin{claim}\label{claim:quadratic-ban}
$$\mathbb{E}_{v}[(Q_1+Q_2)\cdot \mathbf{1}_\mathcal G] \le 4\cdot \lambda^2 \cdot \frac34\sum_{k=0}^{\kappa} 2^{-k} ~\mathbb{E}_{w}[c_{k}(w)] + 4\cdot \lambda^2 \cdot \frac14\sum_{k=0}^{\kappa} ~\mathbb{E}_{vy}[c_{k}(P^\bot_v y)\|\Pi_k v\|_2^2].$$
\end{claim}
On the other hand, we will show that the linear (in $\lambda$) term $L_1 + L_2$ is also large in expectation.
\begin{claim}\label{claim:linear-ban}
For a universal constant $C>0$ and some $B \le \kappa \log(\Phi^2 nT)$,
$$\mathbb{E}_{v}[|L_1 + L_2|\cdot \mathbf{1}_{\mathcal G}] \ge \frac{\lambda}{B}\cdot \frac34\sum_{k=0}^\kappa 2^{-k}~~\mathbb{E}_{w} [c_{k}(w)] + \frac{\lambda}{B}\cdot\frac14\sum_{k=0}^\kappa 2^{-k}~~\mathbb{E}_{y} [c_{k}(y) |\lambda d^T M^+_k y|] - C.$$
\end{claim}
By our assumption, $\Phi \le T$, so it follows that $2\lambda \le B^{-1}$. Therefore, combining the above two claims,
$$\mathbb{E}_{v}[\Delta \Phi \cdot \mathbf{1}_\mathcal G] \le C.$$
This finishes the proof of \lref{lemma:iso-drift}, assuming the claims which we prove next.
\end{proof}
\vspace*{8pt}
To bound the drift, we need the following consequence of log-concavity of the test distribution $\mathsf{p}_y$.
\begin{lemma}\label{lemma:logconcave}
There exists a constant $C>0$, such that for every integer $k \le \kappa$, and every $v \in \mathrm{im}(\Pi_k)$,
\[\mathbb{E}_{y}\left[c_k(P^\bot_v y) \cdot \exp\left(s |v^T y|\right)|v^T y|^2\right] \le C \cdot \mathbb{E}_{y}\left[c_k(P^\bot_v y) \cdot \|v\|_2^2\right], \text{ for all } s \le \frac{\lambda^{-1}}{2\|v\|_2^2}.\]
\end{lemma}
\begin{proof}
First note that as $v \in \mathrm{im}(\Pi_k)$, it suffices to assume that $y$ is uniformly drawn from $\lambda n K^\circ_k$ as otherwise the contribution is zero. Note that for any fixed value of $P^\bot_v y$, it still holds that $v^Ty$ is a scalar mean-zero log-concave random variable with variance
$$\sigma_v^2 := \mathbb{E}_y[|v^\top y|^2] = v^\top (\mathbb{E}_y[yy^\top ]) v = \lambda\|v\|_2^2.$$
Using Cauchy-Schwarz and \pref{prop:logconcave}, we get that for any fixed value of $P^\bot_v y$,
\begin{align*}
\mathbb{E}\left[e^{s |v^\top y|} \cdot |v^\top y|^2\right] \le \sqrt{\mathbb{E}\left[e^{2s |v^\top y|}\right]} \cdot \sqrt{ \mathbb{E}\left[|v^\top y|^4\right]} \le C \cdot \mathbb{E}\left[|v^\top y|^2\right] \le C~\|v\|_2^2,
\end{align*}
where the expectation is over the scalar random variable $v^Ty$ conditioned on $P^\bot_v y$.
\end{proof}
\vspace*{8pt}
\begin{proof}[Proof of \clmref{claim:quadratic-ban}]
Recall that $\mathbb{E}_{v}[vv^\top] = \mathbf{\Sigma}$ which satisfies $M^+_k \mathbf{\Sigma} M^+_k = 2^{-k} M^+_k $ and $\Pi_k \preccurlyeq M^+_k \preccurlyeq 2\Pi_k$. Therefore, using linearity of expectation,
\begin{align}\label{eqn:int1}
\ \mathbb{E}_{v}[Q_1\cdot\mathbf{1}_\mathcal G] \le \mathbb{E}_{v}[Q_1] &=\frac34 \sum_{k=0}^{\kappa} \lambda^2 ~ \mathbb{E}_{w}[c_{k}(w) \cdot w^\top M^+_k \mathbf{\Sigma} M^+_k w] = \lambda^2 \cdot \frac34 \sum_{k=0}^{\kappa} 2^{-k} ~ \mathbb{E}_{w}[c_{k}(w) \cdot w^\top M^+_k w] \notag\\
\ & \le 2\lambda^2 \cdot \frac34 \sum_{k=0}^{\kappa} 2^{-k} ~ \mathbb{E}_{w}[c_{k}(w)\|w\|_2^2] \le 2\lambda^2 \cdot \frac34 \sum_{k=0}^{\kappa} 2^{-k} ~ \mathbb{E}_{w}[c_{k}(w)].
\end{align}
To bound the term $\mathbb{E}_v[Q_2 \cdot \mathbf{1}_\mathcal G]$, for a fixed $y \in \mathrm{im}(\Pi_k)$, let us decompose $y = \alpha_v \Pi_k v + P^\bot_v y$ where $\alpha_v = v^\top\Pi_k y/{\|\Pi_k v\|^2_2}$. For any $v \in \mathcal G$, we have that $\lambda |d^\top M^+_k v| \le 12\log T$. Recall that $c_k(y) = \cosh(\lambda d^\top M^+_k y)$ for any $k$ and that $\Pi_k \preccurlyeq M^+_k \preccurlyeq 2\Pi_k$. Therefore, using that $\cosh(a+b)\le \cosh(a)\cdot e^{|b|}$ for any $a,b \in \mathbb{R}$, we obtain
\[ c_k(y)e^{\lambda|v^\top M^+_k y|} \le c_k(P^\bot_v y)e^{2\lambda|d^\top \Pi_k v||\alpha_v|+2\lambda|v^\top\Pi_k y|} \le c_k(P^\bot_v y) \cdot \exp\left(\frac{12\log T}{\|\Pi_k v\|_2^2}\cdot |v^\top\Pi_k y|\right) .\]
Therefore, for any fixed $v \in \mathcal G$, using \lref{lemma:logconcave},
\begin{align*}
\ ~\mathbb{E}_{y}\left[c_{k}(y) \cdot e^{\lambda |v^\top M^+_k y|} y^\top M^+_k vv^\top M^+_k y\right] &\le 4~ \mathbb{E}_{y}\left[c_k(P^\bot_v y) \cdot \exp\left(\frac{12\log T}{\|\Pi_k v\|_2^2}\cdot |v^\top\Pi_k y|\right)|v^\top\Pi_k y|^2\right]\\
\ & \le 4~ \mathbb{E}_{y}\left[c_k(P^\bot_v y) \cdot \|\Pi_k v\|_2^2\right].
\end{align*}
Therefore, we may bound,
\begin{align}
\ \mathbb{E}_{v}[Q_2\cdot \mathbf{1}_\mathcal G] &\le 4 \cdot \lambda^2 \cdot \frac14\sum_{k=0}^{\kappa} \mathbb{E}_{vy}[c_{k}(P^\bot_v y) \|\Pi_k v\|_2^2]. \notag\qedhere
\end{align}
\end{proof}
\vspace*{8pt}
\begin{proof}[Proof of \clmref{claim:linear-ban}]
Let $L = L_1 + L_2$. To lower bound the linear term, we proceed similarly to the proof of \clmref{claim:linear} and use the fact that $|L(v)| \ge {\|f\|^{-1}_{\infty}} \cdot f(v) \cdot L(v)$ for any real-valued non-zero function $f$. We will choose the function $f(v) = d^\top M^+ v \cdot \mathbf{1}_{\mathcal G}(v)$.\\
Recalling that $M^+ = \sum_{k=0}^{\kappa} M^+_k$, when the event $v \in \mathcal G$ occurs, then $\lambda |d^\top M^+ v| \le \kappa \log (4 \kappa\Phi/\delta) := B$. Then, $f(v) = d^\top M^+ v \cdot \mathbf{1}_{\mathcal G}(v)$ satisfies $\|f\|_\infty \le \lambda^{-1} B$, and we can lower bound,
\begin{align}\label{eqn:lterm-iso}
\ \mathbb{E}_{v}[|L|\cdot \mathbf{1}_\mathcal G] &\ge \frac{\lambda}{\lambda^{-1} B} \cdot \frac34 \sum_{k=0}^{\kappa} \mathbb{E}_{vw} [s_{k}(w) \cdot d^\top M^+ v \cdot v^\top M^+_k w\cdot \mathbf{1}_{\mathcal G}(v)] + \frac{\lambda}{\lambda^{-1}B} \cdot \frac14\sum_{k=0}^{\kappa} \mathbb{E}_{vy} [s_{k}(y) \cdot d^\top M^+ v \cdot v^\top M^+_k y\cdot \mathbf{1}_{\mathcal G}(v)] \notag \\
\ &= \frac{\lambda^2}{B} \cdot \frac34 \sum_{k=0}^{\kappa} \mathbb{E}_{w} [s_{k}(w) \cdot d^\top M^+ \mathbf{\Sigma} M^+_k w] ~-~ \frac{\lambda^2}{B} \cdot \frac34 \sum_{k=0}^{\kappa} \mathbb{E}_{w} [s_{k}(w) \cdot d^\top M^+ \mathbf{\Sigma}_\mathsf{err} M^+_k w] \notag \\
\ &\qquad + \frac{\lambda^2}{B} \cdot \frac14 \sum_{k=0}^{\kappa} \mathbb{E}_{y} [s_{k}(y) \cdot d^\top M^+ \mathbf{\Sigma} M^+_k y] ~-~\frac{\lambda^2}{B}\cdot \frac14 \sum_{k=0}^{\kappa} \mathbb{E}_{y} [s_{k}(y) \cdot d^\top M^+ \mathbf{\Sigma}_\mathsf{err} M^+_k y],
\end{align}
where $\mathbf{\Sigma}_\mathsf{err} = \mathbb{E}_{v}[vv^\top (1-\mathbf{1}_{\mathcal G}(v))]$ satisfies $\|\mathbf{\Sigma}_\mathsf{err}\|_{\mathsf{op}} \le \mathbb{P}_{v \sim \mathsf{p}}(v \notin \mathcal G) := \delta$ where $\delta \le T^{-4}$ using \lref{lemma:iso-tail}.\\
To bound the terms involving $\mathbf{\Sigma}$ in \eqref{eqn:lterm-iso}, we recall that $s_k(x) = \sinh(\lambda d^\top M^+_k x)$. Using $M^+ \mathbf{\Sigma} M^+_k = 2^{-k} M^+_k$ and the fact that $\sinh(a)a \ge \cosh(a)|a| - 2$ for any $a \in \mathbb{R}$, we have
\begin{align*}
\lambda ~\mathbb{E}_{w} [s_{k}(w) \cdot d^\top M^+ \mathbf{\Sigma} M^+_k w] ~=~ 2^{-k} ~\mathbb{E}_{w} [s_{k}(w) \cdot \lambda d^\top M^+_k w] ~\ge~ 2^{-k}~ \left(\mathbb{E}_{w} [c_{k}(w)|\lambda d^\top M^+_k w|] - 2\right),
\end{align*}
and similarly for $y$.
The terms involving $\mathbf{\Sigma}_\mathsf{err}$ can be upper bounded using the bounds on the operator norms of the matrices $M^+_k$ and $\mathbf{\Sigma}_\mathsf{err}$. In particular, we have that
$$|d^\top M^+ \mathbf{\Sigma}_\mathsf{err} M^+_k x | \le 2\|M^+ d\|_2 \|\mathbf{\Sigma}_\mathsf{err}\|_{\mathsf{op}}\|x\|_2 \le 2\delta \|M^+ d\|_2 \|x\|_2.$$
Since $M^+ \preccurlyeq \sum_{k=0}^\kappa \Pi_k$, using \lref{lemma:chaining}, it follows that $\|M^+ d\|_2 \le \kappa \sqrt{\lambda^{-1} n \log(nT)}$ while $\|x\|_2 \le n$ because of \pref{prop:kls}. Then, by our choice of $\delta$, $$\lambda ~|d^\top M^+ \mathbf{\Sigma}_\mathsf{err} M^+_k x| \le \Phi^{-1}.$$
Plugging the above bounds in \eqref{eqn:lterm-iso},
\begin{align}\label{eq:int1}
\mathbb{E}_{v}[|L|\cdot\mathbf{1}_{\mathcal G}] &\ge \frac{\lambda}{B} \cdot \frac34 \sum_{k=0}^\kappa 2^{-k} ~\mathbb{E}_{w} [c_{k}(w)|\lambda d^\top M^+_k w|] + \frac{\lambda}{B} \cdot \frac14 \sum_{k=0}^\kappa 2^{-k} ~ \mathbb{E}_{y} [c_{k}(y)|\lambda d^\top M^+_k y|] - 4
\end{align}
where the last inequality follows since $\sum_{k=0}^\kappa \mathbb{E}_x[|s_k(x)|] \le \Phi.$\\
To finish the proof, we use the inequality $\cosh(a)|a| \ge \cosh(a) - 2$ for all $a \in \mathbb{R}$, so that the first term satisfies
\begin{align*}
\mathbb{E}_{w} [c_{k}(w)|\lambda d^\top M^+_k w|] ~\ge~ \mathbb{E}_{w} [c_{k}(w)] - 2.
\end{align*}
\end{proof}
\section{Reduction to $\kappa$-Dyadic Covariance}\label{sec:dyadiccov}
For all our problems, we may assume without loss of generality that the distribution $\mathsf{p}$ has zero mean, i.e. $\mathbb{E}_{v \sim \mathsf{p}}[v] = 0$, since our algorithm can toss an unbiased random coin and work with either $v$ or $-v$. Now the covariance matrix $\mathbf{\Sigma}$ of the input distribution $\mathsf{p}$ is given by $\mathbf{\Sigma} = \mathbb{E}_{v \sim \mathsf{p}}[vv^\top]$. Since $\|v\|_2 \le 1$, we have that $0 \preccurlyeq \mathbf{\Sigma} \preccurlyeq I$ and $\mathrm{Tr}(\mathbf{\Sigma})\le 1$.
However, it will be more convenient for the proof to assume that all the non-zero eigenvalues of the covariance matrix $\mathbf{\Sigma}$ are of the form $2^{-k}$ for an integer $k$. In this section, by slightly rescaling the input distribution and the test vectors, we show that one can assume this without any loss of generality.
Consider the spectral decomposition of $\mathbf{\Sigma} = \sum_{i=1}^n \sigma_i u_iu_i^\top$, where $0 \le \sigma_n \le \ldots \le \sigma_1 \le 1$ and $u_1, \ldots, u_n$ form an orthonormal basis of $\mathbb{R}^n$. Moreover, since we only get $T$ vectors, we can essentially ignore all eigenvalues smaller than, say $(nT)^{-8}$, as this error will not affect the discrepancy too much.
For a positive integer $\kappa$ denoting the number of different scales, we say that $\mathbf{\Sigma}$ is $\kappa$-dyadic if every non-zero eigenvalue $\sigma$ is $2^{-k}$ for some $k \in [\kappa]$.
\begin{lemma} \label{lem:covariance_reduction}
Let $\mathscr{E} \subseteq \mathbb{R}^n$ be an arbitrary set of test vectors with Euclidean norm at most $nT$ and $v \sim \mathsf{p}$ with covariance $\mathbf{\Sigma} = \sum_i \sigma_i u_iu_i^\top$. Then, there exists a positive semi-definite matrix $M$ with $\|M\|_{\mathsf{op}}\le 1$ such that the covariance of $Mv$ is $\kappa$-dyadic for $\kappa = \lceil8\log (nT)\rceil$. Moreover, there exists a test set $\mathscr{E}'$ consisting of vectors with Euclidean norm at most $\max_{y \in \mathscr{E}} \|y\|$, such that for any signs $(\chi_t)_{t \in [T]}$, the discrepancy vector $d_t = \sum_{\tau=1}^t \chi_\tau v_\tau$ satisfies
\[ \max_{y\in \mathscr{E}} |d_t^\top y| \le 2\cdot\max_{z \in \mathscr{E}'} |(Md_t)^\top z| + O(1).\]
\end{lemma}
\begin{proof}
For notational simplicity, we use $d$ to denote $d_t$.
We construct the matrix $M$ to be positive semi-definite with eigenvectors $u_1, \ldots, u_n$.
For any $i \in [n]$ such that $\sigma_i \in (2^{-k}, 2^{-k+1}]$ for some $k \in [\kappa]$, we set $M u_i = (2^{k}\sigma_i)^{-1/2} \cdot u_i$, and for every $i \in [n]$ such that $\sigma_i \leq 2^{-\kappa}$, we set $M u_i = 0$.
It is easy to check that the covariance of $Mv$ for $v \sim \mathsf{p}$ is $\kappa$-dyadic.
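Spelling out this computation: writing $M = \sum_i \mu_i u_iu_i^\top$ with $\mu_i = (2^{k}\sigma_i)^{-1/2}$ when $\sigma_i \in (2^{-k}, 2^{-k+1}]$ for $k \in [\kappa]$, and $\mu_i = 0$ when $\sigma_i \le 2^{-\kappa}$, the covariance of $Mv$ is
\[ M\mathbf{\Sigma}M ~=~ \sum_{i} \mu_i^2 \sigma_i \, u_iu_i^\top ~=~ \sum_{i :\, \sigma_i > 2^{-\kappa}} 2^{-k(i)} \, u_iu_i^\top,\]
where $k(i) \in [\kappa]$ denotes the scale of $\sigma_i$, so every non-zero eigenvalue is of the form $2^{-k}$. Moreover, $\mu_i^2 = (2^{k}\sigma_i)^{-1} \le 1$, so $\|M\|_{\mathsf{op}} \le 1$ as required.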
We define the new test set to be $\mathscr{E}' = \{\frac12M^+ y\mid y \in \mathscr{E}\}$ where $M^+$ is the pseudo-inverse of $M$. Note that $\|M^+\|_{\mathsf{op}} \le 2$, so every $z\in \mathscr{E}'$ satisfies $\|z\|_2 \le \max_{y \in \mathscr{E}} \|y\| \le nT$. To upper bound the discrepancy with respect to the test set, let $\Pi_\mathsf{err}$ be the projector onto the span of eigenvectors $u_i$ with $\sigma_i \le 2^{-\kappa}$ and let $\Pi$ be the projector onto its orthogonal subspace. Then, for any $y \in \mathscr{E}$, we have
\[ |d^\top y| \le |d^\top \Pi y| + |d^\top \Pi_\mathsf{err} y| \le |(Md)^\top (M^+ y)| + nT\cdot \|\Pi_\mathsf{err} d\|_2.\]
By Markov's inequality, with probability at least $1 - (nT)^{-4}$, we have that $\|\Pi_{\mathsf{err}}d\|_2 \le (nT)^{-1}$ and hence, $|d^\top\Pi_{\mathsf{err}}y| = O(1)$ for every $y \in \mathscr{E}$. It follows that
\[ \max_{y \in \mathscr{E}}|d^\top y| \le 2\cdot \max_{z\in \mathscr{E}'}|(Md)^\top z| + O(1). \qedhere\]
\end{proof}
For all applications in this paper, the test vectors will always have Euclidean norm at most $nT$, so we can always assume without loss of generality that the input distribution $\mathsf{p}$, which is supported over vectors with Euclidean norm at most one, has mean $\mathbb{E}_{v\sim \mathsf{p}}[v]=0$, and its covariance $\mathbf{\Sigma} = \mathbb{E}_v[vv^\top]$ is $\kappa$-dyadic for $\kappa = \lceil 8\log(nT) \rceil$. We will make this assumption in the rest of the paper, sometimes without stating it explicitly.
\section{Discrepancy for Arbitrary Test Vectors} \label{sec:arbitTestVectors}
In this section, we consider discrepancy minimization with respect to an arbitrary set of test vectors with Euclidean length at most $1$.
\test*
Before getting into the details of the proof, we first give two important applications of \thmref{thm:gen-disc}: to the Koml\'os problem in \secref{subsec:komlos} and to Tusn\'ady's problem in \secref{subsec:tusnady}.
The proof of \thmref{thm:gen-disc} will be discussed in \secref{subsec:test_theorem_proof}.
\subsection{Discrepancy for Online Koml{\'o}s Setting}
\label{subsec:komlos}
\komlos*
\begin{proof}[Proof of \thmref{thm:komlos}]
Taking the set of test vectors $\mathscr{E} = \{e_1, \cdots, e_n\}$ where $e_i$'s are the standard basis vectors in $\mathbb{R}^n$, \thmref{thm:gen-disc} implies an algorithm that w.h.p. maintains a discrepancy vector $d_t$ such that $\|d_t\|_{\infty} = O(\log^4(nT))$ for all $t \in [T]$.
\end{proof}
\subsection{An Application to Online Tusn\'ady's Problem}
\label{subsec:tusnady}
\tusnady*
First, using the probability integral transformation along each dimension, we may assume without loss of generality that the marginal of $\mathsf{p}$ along each dimension $i \in [d]$, denoted $\mathsf{p}_i$, is the uniform distribution on $[0,1]$.
More specifically, we replace each incoming point $x \in [0,1]^d$ by $(F_1(x_1), \cdots, F_d(x_d))$, where $F_i$ is the cumulative distribution function for $\mathsf{p}_i$.
Note that $F_i(x_i)$ is uniform on $[0,1]$ when $x_i \sim \mathsf{p}_i$.
We make such an assumption throughout this subsection.
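To see why this transformation works (assuming for simplicity that each $F_i$ is continuous and strictly increasing), for any $s \in [0,1]$,
\[ \mathbb{P}_{x_i \sim \mathsf{p}_i}\big[F_i(x_i) \le s\big] ~=~ \mathbb{P}\big[x_i \le F_i^{-1}(s)\big] ~=~ F_i\big(F_i^{-1}(s)\big) ~=~ s,\]
so each transformed coordinate is indeed uniform on $[0,1]$. Moreover, since each $F_i$ is monotone, preimages of axis-parallel boxes are axis-parallel boxes, so a discrepancy bound for the transformed points transfers back to the original distribution.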
A standard approach in tackling Tusn\'ady's problem is to decompose the unit cube $[0,1]^d$ into a canonical set of boxes known as dyadic boxes~(see \cite{Matousek-Book09}).
Define dyadic intervals $I_{j,k} = [k2^{-j}, (k+1)2^{-j})$ for $j \in \mathbb{Z}_{\ge 0}$ and $0\le k <2^j$. A dyadic box is one of the form
\[ B_{\mathbf{j},\mathbf{k}} := I_{\mathbf{j}(1),\mathbf{k}(1)} \times \ldots \times I_{\mathbf{j}(d),\mathbf{k}(d)},\]
with $\mathbf{j},\mathbf{k} \in \mathbb{Z}^d$ such that $0\le \mathbf{j}$ and $0 \le \mathbf{k} < 2^{\mathbf{j}}$, and each side has length at least $1/T$. One can handle the error from the smaller dyadic boxes separately since few points will land in each such box.
Denoting the set of dyadic boxes as $\mathcal{D} = \{ B_{\mathbf{j},\mathbf{k}} \mid 0 \le \mathbf{j} \le (\log T) \mathbf{1} ~,~ 0 \le \mathbf{k} <2^{\mathbf{j}}\}$, where $\mathbf{1} \in \mathbb{R}^d$ is the all ones vector, we note that $|\mathcal{D}| = O_d(T^d)$.
Usually, one proves a discrepancy upper bound on the set of dyadic boxes, which implies a discrepancy upper bound on all axis-parallel boxes since each axis-parallel box can be expressed roughly as the disjoint union of $O_d(\log^d T)$ dyadic boxes.
This was precisely the approach used for the online Tusn\'ady's problem in~\cite{BJSS20}.
However, such an argument has a fundamental barrier. Since each arrival lands in approximately $O_d(\log^d T)$ boxes in $\mathcal{D}$,
one can at best obtain a discrepancy upper bound of $O_d(\log^{d/2} T)$ for the set of dyadic boxes, which leads to $O_d(\log^{3d/2} T)$ discrepancy for all boxes.
Using the idea of test vectors in \thmref{thm:gen-disc}, we can save a factor of $O_d(\log^{d/2} T)$ over the approach above.
Roughly, this saving comes from the fact that the discrepancy of dyadic boxes accumulates in an $\ell_2$ manner as opposed to adding up directly.
A similar idea was previously exploited by~\cite{BansalG17} for the offline Tusn\'ady's problem.\\
\vspace{5pt}
\begin{proof}[Proof of \thmref{thm:tusnady}]
We view Tusn\'ady's problem as a vector balancing problem in $|\mathcal{D}|$-dimensions with coordinates indexed by dyadic boxes, where we define $v_t(B) = \mathbf{1}_B(x_t)$ for each arrival $t \in [T]$ and every dyadic box $B \in \mathcal{D}$.
Each coordinate $B$ of the discrepancy vector $d_t = \sum_{i=1}^t \chi_i v_i$ is exactly $\mathsf{disc}_t(B)$.
Notice that $\| v_t \|_2 \leq O_d(\log^{d/2} T)$ since $v_t$ is $O_d(\log^d T)$-sparse; these $v_t$'s serve as the input vectors for the vector balancing problem.
Now we define the set of test vectors $\mathscr{E}$ that will allow us to bound the discrepancy of any axis-parallel box. For every box $B$ that can be exactly expressed as the disjoint union of several dyadic boxes, i.e. $B = \cup_{B' \in \mathcal{D}'} B'$ for some subset $\mathcal{D}' \subseteq \mathcal{D}$ of disjoint dyadic boxes, we create a test vector $z_B \in \{0,1\}^{|\mathcal{D}|}$ with $z_B(B') = 1$ if and only if $B' \in \mathcal{D}'$.
We call such a box $B$ a {\em dyadic-generated} box. Since multiple choices of $\mathcal{D}'$ may give the same dyadic-generated box $B$, we take $\mathcal{D}'$ to be one with the smallest number of dyadic boxes. The set $\mathscr{E}$ consists of the test vectors $z_B$ over all dyadic-generated boxes $B$.
Since each dimension has at most $2T$ dyadic intervals, and each box in $\mathscr{E}$ is determined by the $2d$ endpoints of the dyadic intervals forming its sides, it follows that $|\mathscr{E}| = O_d(T^{2d})$. Moreover, every test vector $z_B \in \mathscr{E}$ is $O_d(\log^d T)$-sparse and thus $\| z_B \|_2 \leq O_d(\log^{d/2} T)$.
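The sparsity bound reflects the fact that, in one dimension, an interval whose endpoints are multiples of $1/T$ can be covered by $O(\log T)$ disjoint dyadic intervals; taking products over the $d$ dimensions gives the $O_d(\log^d T)$ bound. A greedy one-dimensional decomposition can be sketched in Python as follows (illustrative; endpoints are kept as integers $0 \le a < b \le T$ with $T$ a power of two):

```python
def greedy_dyadic_decomposition(a, b, T):
    """Cover [a/T, b/T) by disjoint dyadic intervals, greedily taking
    the largest dyadic interval that starts at a and fits in [a, b).
    Uses O(log T) pieces (lengths first grow, then shrink)."""
    pieces = []
    while a < b:
        # Largest power of two aligned at a (T itself when a == 0) ...
        length = T if a == 0 else (a & -a)
        # ... shrunk until the interval [a, a + length) fits in [a, b).
        while a + length > b:
            length //= 2
        pieces.append((a, a + length))
        a += length
    return pieces

pieces = greedy_dyadic_decomposition(3, 11, 16)
# -> [(3, 4), (4, 8), (8, 10), (10, 11)]: four dyadic pieces cover [3/16, 11/16).
```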
Using \thmref{thm:gen-disc} with both the input and test vectors scaled down by $O_d(\log^{d/2} T)$, we obtain an algorithm that w.h.p. maintains discrepancy vector $d_t$ such that for all $t \in [T]$,
\begin{align*}
\max_{z_B \in \mathscr{E}} |d_t^\top z_B| \leq O_d(\log^{d + 4} T) .
\end{align*}
Since $d_t^\top z_B = \mathsf{disc}_t(B)$ which follows from $B$ being a disjoint union of dyadic boxes, we have $\mathsf{disc}_t(B) \leq O_d(\log^{d + 4} T)$ for any dyadic-generated box $B$.
To upper bound the discrepancy of arbitrary axis-parallel boxes, we first introduce the notion of {\em stripes}.
A stripe in $[0,1]^d$ is an axis-parallel box of the form $I_1 \times \cdots \times I_d$ where exactly one of the intervals $I_i$ is allowed to be a proper sub-interval $[a,b] \subseteq [0,1]$; the width of such a stripe is $b-a$. A stripe whose projection onto dimension $i$ is an interval $[a,b]$ with $b - a = 1/T$ corresponds to a smallest dyadic interval in dimension $i$.
We call such stripes {\em minimum dyadic} stripes.
There are exactly $T$ minimum dyadic stripes for each dimension $i \in [d]$. Since minimum dyadic stripes have width $1/T$ and the marginal of $\mathsf{p}$ along any dimension is the uniform distribution over $[0,1]$, each such stripe contains $1$ point in expectation, and a standard application of the Chernoff bound together with a union bound implies that w.h.p. every minimum dyadic stripe contains at most $O_d(\log T)$ points.
For a general axis-parallel box $\widetilde{B}$, it is well-known that $\widetilde{B}$ can be expressed as the disjoint union of a dyadic-generated box $B$ together with at most $k \leq 2d$ boxes $B_1, \ldots, B_{k}$, where each $B_i \subseteq S_i$ for some minimum dyadic stripe $S_i$. We can thus upper bound
\[
\mathsf{disc}_t(\widetilde{B}) \leq \mathsf{disc}_t(B) + \sum_{i=1}^k \mathsf{disc}_t(B_i) \leq \mathsf{disc}_t(B) + \sum_{i=1}^k r_i,
\]
where $r_i$ is the total number of points in the stripe $S_i$.
As mentioned, w.h.p. we can upper bound $\sum_{i=1}^k r_i = O_d(\log T)$, and thus one obtains $\mathsf{disc}_t(\widetilde{B}) = O_d(\log^{d + 4} T)$ for any axis-parallel box $\widetilde{B}$.
This proves the theorem.
\end{proof}
\subsection{Proof of \thmref{thm:gen-disc}}
\label{subsec:test_theorem_proof}
\paragraph{Potential Function and Algorithm.}
By \lref{lem:covariance_reduction}, it is without loss of generality to assume that $\mathsf{p}$ is $\kappa$-dyadic, where $\kappa = 8 \lceil \log(nT)\rceil$. For any $k \in [\kappa]$, we use $\Pi_k$ to denote the projection matrix onto the eigenspace of $\mathbf{\Sigma}$ corresponding to the eigenvalue $2^{-k}$ and define $\Pi = \sum_{k=1}^{\kappa} \Pi_k$ to be the sum of these projection matrices.
Let $\Pi_\mathsf{err}$ be the projection matrix onto the subspace spanned by eigenvectors corresponding to eigenvalues of $\mathbf{\Sigma}$ that are at most $2^{-\kappa}$.
The algorithm for \thmref{thm:gen-disc} will use a greedy strategy that chooses the next sign so that a certain potential function is minimized. To define the potential, we first define a distribution where some noise is added to the input distribution $\mathsf{p}$ to account for the test vectors.
Let $\mathsf{p}_z$ be the uniform distribution over the set of test vectors $\mathscr{E}$.
We define the noisy distribution $\mathsf{p}_x$ to be $\mathsf{p}_x := \mathsf{p}/2 + \mathsf{p}_z/2$, i.e., a random sample from $\mathsf{p}_x$ is drawn with probability $1/2$ each from $\mathsf{p}$ or $\mathsf{p}_z$. Note that any vector $x$ in the support of $\mathsf{p}_x$ satisfies $\|x\|_2 \le 1$ since both the input distribution $\mathsf{p}$ and the set of test vectors $\mathscr{E}$ lie inside the unit Euclidean ball.
At any time step $t$, let $d_{t} = \chi_1 v_1 + \ldots + \chi_t v_t$ denote the current discrepancy vector after the signs $\chi_1, \ldots, \chi_t \in \{\pm1\}$ have been chosen. Set $\lambda^{-1} = 100 {\kappa} \log(nT)$ and define the potential
\[ \Phi_t ~~=~~ \Phi(d_t) ~~:= ~~ \sum_{k=1}^{\kappa} \mathbb{E}_{x \sim \mathsf{p}_x}\left[\cosh\left(\lambda d_{t}^\top \Pi_k x\right)\right]. \]
When the vector $v_t$ arrives, the algorithm greedily chooses the sign $\chi_t$ that minimizes the increase $\Phi_t - \Phi_{t-1}$.
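As a concrete illustration, the following Python sketch implements the greedy strategy in a simplified form: it drops the eigenspace decomposition (using a single $\cosh(\lambda\, d^\top x)$ term rather than the layered sum over $\Pi_k$) and replaces the distribution $\mathsf{p}_x$ by the empirical mixture of input samples and test vectors. All parameter values are illustrative, not the ones used in the analysis.

```python
import math
import random

def greedy_signs(vectors, test_vectors, lam=0.5):
    """Greedy online signing: pick chi_t in {-1, +1} minimizing a
    cosh potential averaged over a finite set of directions
    (a stand-in for the noisy distribution p_x)."""
    n = len(vectors[0])
    directions = test_vectors + vectors
    d = [0.0] * n
    signs = []
    for v in vectors:
        def potential(chi):
            return sum(
                math.cosh(lam * sum((d[i] + chi * v[i]) * x[i]
                                    for i in range(n)))
                for x in directions) / len(directions)
        chi = 1 if potential(+1) <= potential(-1) else -1
        signs.append(chi)
        d = [d[i] + chi * v[i] for i in range(n)]
    return signs, d

random.seed(1)
n, T = 8, 100
vecs = []
for _ in range(T):
    g = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in g))
    vecs.append([x / norm for x in g])
# Standard basis vectors as test vectors, as in the corollary above.
basis = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
signs, d = greedy_signs(vecs, basis)
```

With the basis vectors included as tests, the coordinate discrepancies $\|d_t\|_\infty$ stay small throughout the run (polylogarithmic in $T$ in the analysis).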
\paragraph{Analysis.}
The above potential is useful because it allows us to give tail bounds on the length of the discrepancy vectors in most directions given by the distribution $\mathsf{p}$ while simultaneously controlling the discrepancy in the test directions. In particular, let $\mathcal G_t$ denote the set of {\em good} vectors $v$ in the support of $\mathsf{p}$ that satisfy $\lambda|d_t^\top \Pi v| \le {\kappa} \cdot \log (4 \Phi_t/\delta)$. Then, we have the following lemma.\\
\begin{lemma}\label{lemma:tail}
For any $\delta > 0$ and any time $t$, we have
\begin{enumerate}[label=({\alph*})]
\item $\mathbb{P}_{v \sim \mathsf{p}}(v \notin \mathcal G_t) \le \delta$.
\item $|d_t^\top \Pi_k z| \le \lambda^{-1}\log (4 |\mathscr{E}| \Phi_t)$ for all $z \in \mathscr{E}$ and $k \in [\kappa]$.
\end{enumerate}
\end{lemma}
\begin{proof}
\vspace*{1ex}
\begin{enumerate}[label=({\alph*})]
\itemsep1em
\item Recall that with probability $1/2$ a sample from $\mathsf{p}_x$ is drawn from the input distribution $\mathsf{p}$. Using this and the fact that $0 \leq \exp(x) \le 2\cosh(x)$ for any $x \in \mathbb{R}$, we have
$ \sum_{k\in [\kappa]} \mathbb{E}_{v \sim \mathsf{p}}\left[\exp(\lambda |d_t^\top\Pi_k v|)\right] \le 4 \Phi_t$.
Note that for any $v \notin \mathcal G_t$, we have $\lambda|d_t^\top \Pi v| > {\kappa} \cdot \log (4 \Phi_t/\delta)$ by definition, and since $\Pi = \sum_{k \in [\kappa]} \Pi_k$, it follows that $\lambda|d_t^\top \Pi_k v| > \log(4 \Phi_t / \delta)$ for at least one $k \in [\kappa]$.
Thus, applying Markov's inequality we get that $\mathbb{P}_{v \sim \mathsf{p}}(v \notin \mathcal G_t) \le \delta$.
\item Similarly, a random sample from $\mathsf{p}_x$ is drawn from the uniform distribution over $\mathscr{E}$ with probability $1/2$, so $\exp\left(\lambda |d_t^\top \Pi_k z|\right) \le 4 |\mathscr{E}| \Phi_t$
for every $z \in \mathscr{E}$ and $k \in [\kappa]$.
This implies that $|d_t^\top\Pi_k z| \le \lambda^{-1} \log (4 |\mathscr{E}| \Phi_t)$. \qedhere
\end{enumerate}
\end{proof}
\vspace*{8pt}
The next lemma shows that the expected increase in the potential is small on average.
\begin{lemma}[Bounded positive drift]\label{lemma:drift-komlos} At any time step $t \in [T]$, if $\Phi_{t-1} \leq 3T^5$, then $\mathbb{E}_{v_t}[\Phi_t] - \Phi_{t-1} \le 2$.
\end{lemma}
Using \lref{lemma:drift-komlos}, we first finish the proof of \thmref{thm:gen-disc}.
\begin{proof}[Proof of \thmref{thm:gen-disc}]
We first use \lref{lemma:drift-komlos} to prove that with probability at least $1-T^{-4}$, the potential $\Phi_t \le 3T^5$ for every $t \in [T]$.
Such an argument is standard and has previously appeared in~\cite{JiangKS-arXiv19,BJSS20}.
In particular, we consider a truncated process $\widetilde{\Phi}_t$ that agrees with $\Phi_t$ until the first time step $t_0$ at which $\Phi_{t_0} > 3T^5$, and is frozen at $\widetilde{\Phi}_t = 3T^5$ for all $t \ge t_0$. It follows that $\mathbb{P}[\widetilde{\Phi}_T \geq 3T^5] \ge \mathbb{P}[\exists\, t \in [T]: \Phi_t > 3T^5]$.
\lref{lemma:drift-komlos} implies that for any time $t \in [T]$, the expected value of the truncated process $\widetilde{\Phi}_t$ over the input sequence $v_1, \ldots, v_T$ is at most $3T$. By Markov's inequality, with probability at least $1-T^{-4}$, the potential $\Phi_t \le 3T^5$ for every $t \in [T]$.
When the potential $\Phi_t \le 3T^5$, part (b) of \lref{lemma:tail} implies that $|d_t^\top \Pi_k z| = O(\lambda^{-1} \cdot (\log(|\mathscr{E}|) + \log T))$ for any $z \in \mathscr{E}$ and $k \in [\kappa]$. Thus, it follows that for every $z \in \mathscr{E}$,
\[ |d_t^\top z| ~\le~~ {\sum_{k \in [\kappa]} |d_t^\top \Pi_k z|} = O({\kappa}\lambda^{-1}(\log(|\mathscr{E}|) + \log T)) = O((\log(|\mathscr{E}|) + \log T)\cdot \log^3(nT)),
\]
which completes the proof of the theorem.
\end{proof}
To finish the proof, we prove the remaining \lref{lemma:drift-komlos} next.
\begin{proof}[Proof of \lref{lemma:drift-komlos}]
Let us fix a time $t$. To simplify the notation, let $\Phi = \Phi_{t-1}$ and $\Delta\Phi = \Phi_t - \Phi$, and let $d = d_{t-1}$ and $v = v_t$.
To bound the change $\Delta \Phi$, we use Taylor expansion. Since $\cosh'(a) = \sinh(a)$ and $\sinh'(a) = \cosh(a)$, for any $a, b \in \mathbb{R}$ satisfying $|a-b| \le 1$, we have
\begin{align*}
\ \cosh(\lambda a) - \cosh(\lambda b) &= \lambda \sinh(\lambda b) \cdot (a-b) + \frac{\lambda^2}{2!} \cosh(\lambda b) \cdot (a-b)^2 + \frac{\lambda^3}{3!} \sinh(\lambda b)\cdot (a-b)^3 + \cdots , \\[0.8ex]
\ & \le \lambda \sinh(\lambda b) \cdot(a-b) + \lambda^2 \cosh(\lambda b) \cdot(a-b)^2,\\[1.1ex]
\ & \le \lambda \sinh(\lambda b) \cdot(a-b) + \lambda^2 |\sinh(\lambda b)| \cdot(a-b)^2 + \lambda^2(a-b)^2,
\end{align*}
where the first inequality follows since $|\sinh(a)| \le \cosh(a)$ for all $a \in \mathbb{R}$, and since $|a-b|\le 1$ and $\lambda < 1$, so the higher order terms in the Taylor expansion are dominated by the first and second order terms. The second inequality uses that $\cosh(a) \le |\sinh(a)|+1$ for $a \in \mathbb{R}$.
After choosing the sign $\chi_t$, the discrepancy vector $d_t = d + \chi_t v$. Defining $s_{k}(x) = \sinh(\lambda \cdot d^\top\Pi_k x)$ and noting that $|v^\top \Pi_k x| \le 1$, the above upper bound on the Taylor expansion gives us that
\begin{align*}
\ \Delta\Phi &= \sum_{k\in [\kappa]} \mathbb{E}_{x}\left[\cosh\left(\lambda (d + \chi_t v)^\top \Pi_k x\right)\right] - \sum_{k\in [\kappa]} \mathbb{E}_{x}\left[\cosh\left(\lambda d^\top \Pi_k x\right)\right] \\
&\le \underbrace{ \chi_t \left (\sum_{k\in [\kappa]} \lambda ~\mathbb{E}_{x}\left[ s_{k}(x) v^\top\Pi_k x\right]\right)}_{:=~\chi_t L} + \underbrace{\sum_{k\in [\kappa]} \lambda^2 ~\mathbb{E}_{x}\left[|s_{k}(x)| \cdot x^\top\Pi_kvv^\top\Pi_kx\right]}_{:=~Q} + \underbrace{\sum_{k\in [\kappa]} \lambda^2~\mathbb{E}_{x}\left[ x^\top\Pi_kvv^\top\Pi_kx\right]}_{:=~Q_*},
\end{align*}
where $\chi_t L, Q$, and $Q_*$ denote the first, second, and third terms respectively.
Recall that our algorithm uses the greedy strategy by choosing $\chi_t$ to be the sign that minimizes the potential.
Taking expectation over the random incoming vector $v \sim \mathsf{p}$, we get
\begin{align*}
\ \mathbb{E}_{v}[\Delta\Phi] &\le -\mathbb{E}_{v}[|L|] + \mathbb{E}_{v}[Q] + \mathbb{E}_{v}[Q_{*}].
\end{align*}
We will prove the following upper bounds on the quadratic (in $\lambda$) terms $Q$ and $Q_*$.
\begin{claim}\label{claim:quadratic}
$ \mathbb{E}_{v}[Q] \le 2\lambda^2 \sum_{k\in [\kappa]} 2^{-k} ~\mathbb{E}_{x}[|s_{k}(x)|]$ and $\mathbb{E}_{v}[Q_*] \le 4\lambda^2.$
\end{claim}
On the other hand, we will show that the linear (in $\lambda$) term $L$ is also large in expectation.
\begin{claim}\label{claim:linear}
$ \mathbb{E}_{v}[|L|] \ge \lambda B^{-1} \sum_{k\in [\kappa]} 2^{-k} ~ \mathbb{E}_{x}[|s_{k}(x)|] - 1$ for some value $B \leq 2 {\kappa} \cdot \log(2\Phi T)$.
\end{claim}
By our assumption that $\Phi \leq 3T^5$, we have that $2\lambda \le B^{-1}$. Therefore, combining the above two claims, we get that
\[\mathbb{E}_{v}[\Delta \Phi] \le (2\lambda^2-\lambda B^{-1}) \left(\sum_{k\in [\kappa]} 2^{-k} ~ \mathbb{E}_{x}[|s_{k}(x)|]\right) + 1 + 4\lambda^2 \le 2.\]
This finishes the proof of \lref{lemma:drift-komlos} assuming the claims which we prove next.
\end{proof}
\vspace*{10pt}
\begin{proof}[Proof of \clmref{claim:quadratic}]
Recall that $\mathbb{E}_{v}[vv^\top] = \mathbf{\Sigma}$ and that $\Pi_k \mathbf{\Sigma} \Pi_k = 2^{-k} \Pi_k $.
Using linearity of expectation,
\begin{align*}
\ \mathbb{E}_{v}[Q] ~~=~~ \sum_{k\in [\kappa]} \lambda^2 ~ \mathbb{E}_{x}[|s_{k}(x)| \cdot x^\top\Pi_k \mathbf{\Sigma} \Pi_k x] ~~&=~~ \lambda^2 \sum_{k\in [\kappa]} 2^{-k} ~ \mathbb{E}_{x}[|s_{k}(x)| \cdot x^\top \Pi_k x]\\
&\le ~~ 2\lambda^2 \sum_{k\in [\kappa]} 2^{-k} ~ \mathbb{E}_{x}[|s_{k}(x)|],
\end{align*}
where the last inequality uses that $\norm{x}_2 \leq 1$.
Similarly,
\[ \mathbb{E}_{v}[Q_{*}] ~~=~~ \sum_{k\in [\kappa]} \lambda^2~\mathbb{E}_{x}\left[ x^\top\Pi_k\mathbf{\Sigma}\Pi_kx\right] ~~\le ~~ 2\lambda^2 \sum_{k\in [\kappa]} 2^{-k} \le 4\lambda^2.\qedhere\]
\end{proof}
\vspace*{10pt}
\begin{proof}[Proof of \clmref{claim:linear}]
To lower bound the linear term, we use the fact that $|L(v)| \ge {\|f\|^{-1}_{\infty}} \cdot f(v) \cdot L(v)$ for any bounded non-zero function $f$, where $L(v)$ denotes the value of the linear term on input vector $v$. We will choose the function $f(v) = d^\top\Pi v \cdot \mathbf{1}_{\mathcal G}(v)$, where $\mathcal G$ will be the event that $|d^\top\Pi v|$ is small, which holds with high probability by \lref{lemma:tail}.\\
In particular, set $\delta^{-1} = \lambda \Phi T$ and let $\mathcal G$ denote the set of vectors $v$ in the support of $\mathsf{p}$ such that $\lambda|d^\top \Pi v| \le {\kappa} \cdot \log (4 \Phi/\delta) := B$.
Then, $f(v) = d^\top\Pi v \cdot \mathbf{1}_{\mathcal G}(v)$ satisfies $\|f\|_\infty \le \lambda^{-1} B$, and we can lower bound,
\begin{align} \label{eqn:lterm}
\mathbb{E}_{v}[|L|] &\ge \frac{\lambda}{\lambda^{-1} B} \sum_{k\in [\kappa]} \mathbb{E}_{v,x} [s_{k}(x) \cdot d^\top \Pi v \cdot v^\top \Pi_k x\cdot \mathbf{1}_{\mathcal G}(v)] \nonumber \\
&= \frac{\lambda^2}{B} \sum_{k\in [\kappa]} \mathbb{E}_{x} [s_{k}(x) \cdot d^\top \Pi \mathbf{\Sigma} \Pi_k x] - \frac{\lambda^2}{B} \sum_{k\in [\kappa]} \mathbb{E}_{x} [s_{k}(x) \cdot d^\top \Pi \mathbf{\Sigma}_\mathsf{err} \Pi_k x],
\end{align}
where $\mathbf{\Sigma}_\mathsf{err} = \mathbb{E}_{v}[vv^\top (1 - \mathbf{1}_{\mathcal G}(v))]$ satisfies $\|\mathbf{\Sigma}_\mathsf{err}\|_{\mathsf{op}} \le \mathbb{P}_{v \sim \mathsf{p}}(v \notin \mathcal G) \le \delta$ using \lref{lemma:tail}.
To bound the first term in \eqref{eqn:lterm}, recall that $s_k(x) = \sinh(\lambda d^\top \Pi_k x)$. Using $\Pi \mathbf{\Sigma} \Pi_k = 2^{-k} \Pi_k$ and the fact that $\sinh(a)a \ge |\sinh(a)| - 2$ for any $a \in \mathbb{R}$, we have
\begin{align*}
\lambda ~\mathbb{E}_{x} [s_{k}(x) \cdot d^\top \Pi \mathbf{\Sigma} \Pi_k x] ~~=~~ 2^{-k} ~\mathbb{E}_{x} [s_{k}(x) \cdot \lambda d^\top \Pi_k x] ~~\ge~~ 2^{-k}~ \left(\mathbb{E}_{x} [|s_{k}(x)|] - 2\right).
\end{align*}
For the second term, we use the bound $\|\mathbf{\Sigma}_\mathsf{err}\|_\mathsf{op} \leq \delta$ to obtain
\begin{align*}
|d^\top \Pi \mathbf{\Sigma}_\mathsf{err} \Pi_k x | ~~\le ~~ \| \mathbf{\Sigma}_\mathsf{err} \|_{\mathsf{op}} \cdot \| d\|_2 \cdot \|x\|_2 ~~\le ~~ \delta \|d\|_2.
\end{align*}
Since $\|d\|_2 \le T$ always holds, by our choice of $\delta$,
\begin{align*}
\lambda |d^\top \Pi \mathbf{\Sigma}_\mathsf{err} \Pi_k x| \le \Phi^{-1}.
\end{align*}
Plugging the above bounds in \eqref{eqn:lterm},
\begin{align*}
\mathbb{E}_{v}[|L|] &\ge \frac{\lambda}{B} \sum_{k \in [\kappa]} 2^{-k} ~\left(\mathbb{E}_{x} [|s_{k}(x)|] - 2\right) - \frac{\lambda}{B} \cdot \Phi^{-1} \left( \sum_{k \in [\kappa]} \mathbb{E}_x[|s_k(x)|] \right) \\
&\ge \frac{\lambda}{B} \sum_{k \in [\kappa]} 2^{-k} ~\mathbb{E}_{x} [|s_{k}(x)|] - \frac{\lambda}{B} \sum_{k \in [\kappa]} 2^{-k+1} - \frac{\lambda}{B}\\
\ & \ge \frac{\lambda}{B} \sum_{k \in [\kappa]} 2^{-k} ~\mathbb{E}_{x} [|s_{k}(x)|] - 1,
\end{align*}
where the second inequality follows since $\sum_{k \in [\kappa]} \mathbb{E}_x[|s_k(x)|] \le \Phi$.
\end{proof}
\section{Generalization to Weighted Multi-Color Discrepancy}
\label{sec:multicolor}
In this section, we prove \thmref{thm:multicolor-intro} which we restate below for convenience.
\multicolor*
\thmref{thm:multicolor-intro} follows from a black-box way of converting an algorithm for the signed discrepancy setting to the multi-color setting.
In particular, for a parameter $0\le \lambda \le 1$, let $\Phi: \mathbb{R}^n \to \mathbb{R}_+$ be a potential function satisfying
\begin{equation}\label{eqn:potential}
\begin{aligned}
\ & \Phi(d+\alpha v) \le \Phi(d) + \lambda\alpha L_d(v) + \lambda^2\alpha^2 Q_d(v) ~~~~~~~\text{ for every } d,v \in \mathbb{R}^n \text{ and } |\alpha|\le 1, \text{ and}, \\
\ & -\lambda \cdot\mathbb{E}_{v \sim \mathsf{p}}[|L_d(v)|] + \lambda^2\cdot \mathbb{E}_{v \sim \mathsf{p}}[Q_d(v)] = O(1) ~~~\text{ for any } d \text{ such that } \Phi(d) \le 3T^5,
\end{aligned}
\end{equation}
where $L_d: \mathbb{R}^n \to \mathbb{R}$ and $Q_d: \mathbb{R}^n \to \mathbb{R}_+$ are arbitrary functions of $v$ that depend on $d$.
One can verify that the first condition is always satisfied for the potential functions used for proving \thmref{thm:gen-disc} and \thmref{thm:gen-disc-ban}, while the second condition holds for $\lambda = O(1/\log^{2}(nT))$ because of \lref{lemma:drift-komlos} and \lref{lemma:gen-drift-ban}.
Moreover, for parameters $n$ and $T$, let $B_{\arbnorm{\cdot}}$ be such that if the potential $\Phi(d) = \Phi$, then the corresponding norm $\arbnorm{d} \le B_{\arbnorm{\cdot}} \log(nT\Phi)$. Part (b) of \lref{lemma:tail} implies that for any test set $\mathscr{E}$ of $\mathrm{poly}(nT)$ vectors contained in the unit Euclidean ball, if the norm $\|\cdot\|_* = \max_{z \in \mathscr{E}} |\ip{\cdot}{z}|$, then $B_{\arbnorm{\cdot}} = O(\log^3(nT))$. Similarly, if $\arbnorm{\cdot}$ is given by a symmetric convex body with Gaussian measure at least $1/2$, then \lref{lemma:chaining} implies that $B_{\arbnorm{\cdot}}= O(\log^4(nT))$.
We will use the above properties of the potential $\Phi$ to give a greedy algorithm for the multi-color discrepancy setting.
\subsection{Weighted Binary Tree Embedding}
We first show how to embed the weighted multi-color discrepancy problem into a binary tree $\mathcal{T}$ of height $O(\log(R\eta))$.
For each color $c$, we create $\lfloor w_c \rfloor $ nodes with weight $w_c/\lfloor w_c \rfloor \in [1,2]$ each.
The total number of nodes is thus $M_{\ell} = \sum_{c \in [R]} \lfloor w_c \rfloor = O(R \eta)$.
In the following, we place these nodes as the leaves of an (incomplete) binary tree.
Take the height $h$ to be the smallest integer such that $2^h \geq M_{\ell}$; note that $h = O(\log (R \eta))$.
We first remove $2^h - M_{\ell} < 2^{h-1}$ leaves from the complete binary tree of height $h$ such that none of the removed leaves are siblings.
Denote the set of remaining leaves as $\mathcal{L}(\mathcal{T})$.
Then from left to right, assign the leaves in $\mathcal{L}(\mathcal{T})$ to the $R$ colors so that leaves corresponding to the same color are consecutive.
For each leaf node $\ell \in \mathcal{L}(\mathcal{T})$ that is assigned the color $c \in [R]$, we assign it the weight $w_\ell = w_c/\lfloor w_c \rfloor$.
We index the internal nodes of the tree as follows: for integers $0 \le j \le h-1$ and $0\le k < 2^{j}$, we use $(j,k)$ to denote the $k$-th node at depth $j$. Note that the left and right children of a node $(j,k)$ are the nodes $(j+1,2k)$ and $(j+1,2k+1)$. The weight $w_{j,k}$ of an internal node $(j,k)$ is defined to be the sum of the weights of all the leaves in the sub-tree rooted at $(j,k)$. This embedding satisfies certain desirable properties, which we give in the following lemma.
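The leaf construction is easy to implement; the Python sketch below (an illustrative helper of our own, assuming each $w_c \ge 1$) produces the leaf weights and checks the properties used above:

```python
import math

def embed_leaves(weights):
    """Split each color weight w_c >= 1 into floor(w_c) leaves of
    weight w_c / floor(w_c), each lying in [1, 2]; leaves of the same
    color are consecutive.  Returns (leaf_weights, leaf_colors, h)
    where h is the smallest integer with 2^h >= number of leaves."""
    leaf_weights, leaf_colors = [], []
    for c, w in enumerate(weights):
        m = int(w)  # floor(w_c)
        for _ in range(m):
            leaf_weights.append(w / m)
            leaf_colors.append(c)
    h = max(1, math.ceil(math.log2(len(leaf_weights))))
    return leaf_weights, leaf_colors, h

w = [1.0, 2.5, 3.9, 7.2]  # example color weights
leaves, colors, h = embed_leaves(w)
# 1 + 2 + 3 + 7 = 13 leaves, so h = 4 and 2^h = 16 >= 13.
```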
\begin{lemma}[Balanced tree embedding] \label{lem:balanced_tree}
For the weighted (incomplete) binary tree $\mathcal{T}$ defined above, for any two nodes $(j,k)$ and $(j,k')$ in the same level,
\begin{align*}
1/4 \leq w_{j,k}/ w_{j,k'} \leq 4.
\end{align*}
\end{lemma}
\begin{proof}
Observe that each leaf node $\ell \in \mathcal{L}(\mathcal{T})$ has weight $w_\ell \in [1,2]$. Moreover, for each internal node $(h-1,k)$ in the level just above the leaves, at least one of its children is not removed in the construction of $\mathcal{T}$. Therefore, it follows that $w_{j,k} = a_{j,k} 2^{h-j}$ for some $a_{j,k} \in [1/2,2]$ and similarly for $(j,k')$. The lemma now immediately follows from these observations.
\end{proof}
\paragraph{Induced random walk on the weighted tree.}
Randomly choosing a leaf with probability proportional to its weight induces a natural random walk on the tree $\mathcal{T}$: the walk starts from the root and moves down the tree until it reaches one of the leaves. Conditioned on the event that the walk is at some node $(j,k)$ in the $j$-th level, it goes to the left child $(j+1, 2k)$ with probability $q^l_{j,k} = w_{j+1,2k}/w_{j,k}$ and to the right child $(j+1,2k+1)$ with probability $q^r_{j,k} = w_{j+1,2k+1}/w_{j,k}$. Note that by Lemma~\ref{lem:balanced_tree} above, we have that both $q^l_{j,k}, q^r_{j,k} \in [1 / 20 , 19/20]$ for each internal node $(j,k)$ in the tree. Moreover, $w_{j,k}/w_{0,0}$ is exactly the probability that the random walk passes through the node $(j,k)$.
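The last observation is easy to verify computationally. In the Python sketch below (a complete binary tree with heap indexing, for simplicity), the product of branching probabilities along the path from the root to any node telescopes to $w_{j,k}/w_{0,0}$:

```python
def internal_weights(leaf_weights):
    """Node weights of a complete binary tree in heap order: node i has
    children 2i+1 and 2i+2; the leaves occupy the last level."""
    n = len(leaf_weights)  # assumed to be a power of two
    w = [0.0] * (n - 1) + list(leaf_weights)
    for i in range(n - 2, -1, -1):
        w[i] = w[2 * i + 1] + w[2 * i + 2]
    return w

def reach_probability(w, node):
    """Probability that the weighted random walk passes through `node`:
    the product of child/parent weight ratios along the root path."""
    p = 1.0
    while node > 0:
        parent = (node - 1) // 2
        p *= w[node] / w[parent]
        node = parent
    return p

leaf_w = [1.0, 1.25, 1.25, 1.3, 1.3, 1.3, 1.5, 1.5]
w = internal_weights(leaf_w)
# For every node, the walk reaches it with probability w_node / w_root.
```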
\subsection{Algorithm and Analysis}
Recall that each leaf $\ell \in \mathcal{L}(\mathcal{T})$ of the tree $\mathcal{T}$ is associated with a color. Our online algorithm will assign each arriving vector $v_t$ to one of the leaves $\ell \in \mathcal{L}(\mathcal{T})$, and its color will then be the color of the corresponding leaf.
For a leaf $\ell \in \mathcal{L}(\mathcal{T})$, let $d_\ell(t)$ denote the sum of all the input vectors that are associated with the leaf $\ell$ at time $t$. For an internal node $(j,k)$, we define $d_{j,k}(t)$ to be the sum $\sum_{\ell \in \mathcal{L}(\mathcal{T}_{j,k})} d_\ell(t)$, where $\mathcal{L}(\mathcal{T}_{j,k})$ is the set of all the leaves in the sub-tree rooted at $(j,k)$. Also, let $d^l_{j,k}(t) = d_{j+1,2k}(t)$ and $d^r_{j,k}(t) = d_{j+1,2k+1}(t)$ be the vectors associated with the left and right child of the node $(j,k)$.
Finally, let
$$d^-_{j,k}(t) = \frac{d^l_{j,k}(t)/q^l_{j,k} - d^r_{j,k}(t)/q^r_{j,k}}{1/q^l_{j,k} + 1/q^r_{j,k}} = q^r_{j,k} d^l_{j,k}(t) - q^l_{j,k}d^r_{j,k}(t),$$
denote the weighted difference between the two children vectors at the $(j,k)$-th node of the tree, where the second equality uses that $q^l_{j,k} + q^r_{j,k} = 1$.
\paragraph{Algorithm.} For $\beta = 1/(400h)$, consider the following potential function
\begin{align*}
\Psi_t = \sum_{(j,k) \in \mathcal{T}} \Phi(\beta~ d^-_{j,k}(t)),
\end{align*}
where the sum is over all the internal nodes $(j,k)$ of $\mathcal{T}$.
The algorithm assigns the incoming vector $v_t$ to the leaf $\ell \in \mathcal{L}(\mathcal{T})$ for which the increase in the potential $\Psi_t - \Psi_{t-1}$ is minimized. The color assigned to the vector $v_t$ is then the color of the corresponding leaf $\ell$.
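The following Python sketch illustrates the greedy assignment in a heavily simplified setting: equal color weights with one leaf per color (so $q^l_{j,k} = q^r_{j,k} = 1/2$ and $d^-_{j,k} = (d^l_{j,k} - d^r_{j,k})/2$), and a plain coordinate-wise $\cosh$ potential standing in for $\Phi$. It is not the algorithm analyzed above, only a toy version of its structure.

```python
import math
import random

def assign_colors(vectors, R, lam=0.5):
    """Greedy multi-color assignment on a complete binary tree with one
    leaf per color (R a power of two).  Heap indexing: node i has
    children 2i+1, 2i+2; the leaves are nodes R-1, ..., 2R-2."""
    h = R.bit_length() - 1
    n = len(vectors[0])
    beta = 1.0 / (400 * h)
    d = [[0.0] * n for _ in range(2 * R - 1)]  # node sums d_{j,k}

    def phi(vec):  # toy stand-in for the potential Phi
        return sum(math.cosh(lam * beta * x) for x in vec)

    def psi_increase(leaf, v):
        """Increase of Psi if v were assigned to `leaf`."""
        inc, node = 0.0, leaf
        while node > 0:
            parent = (node - 1) // 2
            left, right = 2 * parent + 1, 2 * parent + 2
            sign = 0.5 if node == left else -0.5  # effect on d^-
            dm = [(d[left][i] - d[right][i]) / 2 for i in range(n)]
            inc += phi([dm[i] + sign * v[i] for i in range(n)]) - phi(dm)
            node = parent
        return inc

    colors = []
    for v in vectors:
        leaf = min(range(R - 1, 2 * R - 1), key=lambda l: psi_increase(l, v))
        colors.append(leaf - (R - 1))
        node = leaf
        while True:  # add v to every node on the root path
            d[node] = [d[node][i] + v[i] for i in range(n)]
            if node == 0:
                break
            node = (node - 1) // 2
    return colors, d

random.seed(2)
vecs = [[random.gauss(0.0, 1.0) / 4 for _ in range(4)] for _ in range(64)]
colors, sums = assign_colors(vecs, R=4)
```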
We show that if the potential $\Phi$ satisfies \eqref{eqn:potential}, then the drift for the potential $\Psi$ can be bounded.
\begin{lemma} \label{clm:driftmulti}
If at any time $t$ we have $\Psi_{t-1} \le T^5$, then the following holds:
$$ \mathbb{E}_{v_t \sim \mathsf{p}}[\Delta\Psi_t] := \mathbb{E}_{v_t \sim \mathsf{p}}[\Psi_t - \Psi_{t-1}] = O(1).$$
\end{lemma}
By the same argument as in the proof of \thmref{thm:gen-disc}, this implies that with high probability $\Psi_t \le T^5$ at all times $t$.
Moreover, the above potential also gives a bound on the discrepancy because of the following lemma.
\begin{lemma} \label{lem:node_prop_to_weights}
If $\Psi_t \le T^5$, then
$\mathsf{disc}_t = O(\beta^{-1} h \cdot B_{\arbnorm{\cdot}} \cdot \log (nT \Psi_t)) = O(h^2 \cdot B_{\arbnorm{\cdot}} \cdot \log (nT))$.
\end{lemma}
Combined with part (b) of \lref{lemma:tail} and \lref{lemma:chaining}, the above implies \thmref{thm:multicolor-intro}. Next we prove \lref{lem:node_prop_to_weights} and \lref{clm:driftmulti} in that order.
\subsubsection*{Bounded Potential Implies Low Discrepancy}
For notational simplicity, we fix a time $t$ and drop the time index below.
\begin{proof}[Proof of \lref{lem:node_prop_to_weights}]
First note that $\Phi(\beta \cdot d^-_{j,k}) \le \Psi$, and therefore, $\arbnorm{d^-_{j,k}} \le \beta^{-1} B_{\arbnorm{\cdot}} \log(nT\Psi) := U$ for every internal node $(j,k)$.
We next claim by induction that the above implies the following for every internal node $(j,k)$,
\begin{align} \label{eq:disc_from_root}
\arbnorm{d_{j,k} - d_{0,0} \cdot \frac{w_{j,k}}{w_{0,0}}} \leq \beta_j U,
\end{align}
where $\beta_j = 1+19/20+\cdots+(19/20)^j \le 20$.
The claim is trivially true for the root. For a node $(j+1,2k)$ at depth $j+1$ that is the left child of some node $(j,k)$, we have
\begin{align*}
\arbnorm{{d_{j+1,2k}} - d_{0,0}\cdot \frac{w_{j+1,2k}}{w_{0,0}}} &\le \arbnorm{{d_{j+1,2k}} - d_{j,k}\cdot \frac{w_{j+1,2k}}{w_{j,k}}} + q^l_{j,k} \cdot \arbnorm{d_{j,k} - d_{0,0} \cdot \frac{w_{j,k}}{w_{0,0}}} \\
\ & \le \arbnorm{{d^l_{j,k}} - d_{j,k}\cdot q^l_{j,k}} + q^l_{j,k} \beta_j U,
\end{align*}
since $w_{j+1,2k}/w_{j,k}=q^l_{j,k}$ and $q^l_{j,k},q^r_{j,k} \in[1/20,19/20]$. Note that $d_{j,k} = d^l_{j,k} + d^r_{j,k}$, so the first term above equals $\arbnorm{d^-_{j,k}} \le U$. Therefore, it follows that $\arbnorm{{d_{j+1,2k}} - d_{0,0}\cdot ({w_{j+1,2k}}/{w_{0,0}})} \le (1 + (19/20)\beta_j) U = \beta_{j+1} U$. The claim follows analogously for nodes that are the right children of their parents.
To see the statement of the lemma, consider any color $c \in [R]$. We say that an internal node has color $c$ if all its leaves are assigned color $c$. A maximal color-$c$ node is a node that has color $c$ but whose parent does not have color $c$. We denote the set of maximal color-$c$ nodes by $\mathcal{M}_c$.
Notice that $|\mathcal{M}_c| \leq 2 h$ since $c$-color leaves are consecutive. Also, note that $\sum_{(j,k) \in \mathcal{M}_c} w_{j,k} = w_c$ and that $\sum_{(j,k) \in \mathcal{M}_c} d_{j,k} = d_c$ is exactly the sum of vectors with color $c$.
Therefore, we have
\begin{align*}
\arbnorm{d_c/w_c - d_{0,0}/w_{0,0}} \le \arbnorm{d_c - d_{0,0} \cdot \frac{w_c}{w_{0,0}}} \le \sum_{(j,k) \in \mathcal{M}_c} \arbnorm{ d_{j,k} - d_{0,0} \cdot\frac{w_{j,k} }{w_{0,0}} } = O(h \cdot U),
\end{align*}
where the first inequality follows since $w_c \ge 1$ and the last follows from \eqref{eq:disc_from_root}.
Thus, for any two colors $c \neq c'$, we have
\begin{align*}
\mathsf{disc}_t(c,c') = \arbnorm{ \frac{d_c / w_c - d_{c'} / w_{c'}}{1/ w_c + 1/w_{c'}}}
\le \arbnorm{ \frac{d_c / w_c - d_{0,0} / w_{0,0}}{1/ w_c + 1/w_{c'}}} + \arbnorm{ \frac{d_{c'} / w_{c'} - d_{0,0} / w_{0,0}}{1/ w_c + 1/w_{c'}}} = O(h \cdot U).
\end{align*}
This finishes the proof of the lemma.
\end{proof}
\subsubsection*{Bounding the Drift}
\begin{proof}[Proof of \clmref{clm:driftmulti}]
We fix the time $t$ and write $d^{-}_{j,k} = d^{-}_{j,k}(t-1)$. Let $X_{j,k}(\ell) \cdot v_t$ denote the change of $d^-_{j,k}$ when the leaf chosen for $v_t$ is $\ell$. More specifically,
$X_{j,k}(\ell)$ is $q^r_{j,k}$ if the leaf $\ell$ belongs to the left sub-tree of node $(j,k)$, is $-q^l_{j,k}$ if it belongs to the right sub-tree, and is $0$ otherwise. Then, $d^-_{j,k}(t) = d^-_{j,k} + X_{j,k}(\ell)\cdot v_t$ if the leaf $\ell$ is chosen.
By our assumption on the potential, we have that $\Delta\Psi_t \leq \beta \lambda L + \beta^2 \lambda^2 Q$ where
\begin{align*}
L &= \sum_{(j,k) \in \ensuremath{\mathcal{P}}(\ell)} X_{j,k}(\ell) \cdot L_{j,k}(v_t) \\
Q &= \sum_{(j,k) \in \ensuremath{\mathcal{P}}(\ell)} X_{j,k}(\ell)^2 \cdot Q_{j,k}(v_t) ,
\end{align*}
and $\ensuremath{\mathcal{P}}(\ell)$ is the root-leaf path to the leaf $\ell$.
Consider choosing leaf $\ell$ (and hence the root-leaf path $\ensuremath{\mathcal{P}}(\ell)$) randomly in the following way:
First pick a uniformly random layer $j^* \in \{0,1, \cdots, h-1\}$ (i.e., a level of the tree), then starting from the root randomly choose a child according to the random walk probabilities for all layers except $j^*$; for layer $j^*$, if we arrive at node $(j^*,k)$, we pick the left child if $L_{j^*,k}(v_t) \leq 0$, and the right child otherwise. Note that conditioned on a fixed value of $j^*$, this ensures that $\mathbb{E}_\ell[X_{j,k}(\ell) L_{j,k}(v_t)]$ is non-positive if $j=j^*$ and is zero otherwise.
Since we follow the random walk before layer $j^*$, for a fixed choice of $j^*$ the walk reaches each node in that layer with probability proportional to its weight. Let us write $\mathcal{N}_j$ for the set of all nodes at depth $j$. In expectation over the randomness of the input vector $v_t$ and our random choice of the leaf $\ell$, we have
\begin{align*}
\mathbb{E}_{v_t,\ell}[L]
&\leq - \frac{1}{h} \cdot \sum_{j=0}^{h-1} \sum_{k \in \mathcal{N}_j} \frac{w_{j,k}}{\sum_{k' \in \mathcal{N}_j} w_{j,k'}} \cdot \min\{q^l_{j,k},q^r_{j,k}\} \cdot \mathbb{E}_{v_t}[|L_{j,k}|].
\end{align*}
For the $Q$ term, recall that the process picks a random child at every layer except $j^*$, where the child is chosen based on $L_{j^*,k}$, and then continues randomly for the remaining layers. Since $Q$ is always positive and $q^l_{j,k}, q^r_{j,k} \in [1/20,19/20]$, the resulting expectation is at most $20$ times that of the process that picks a random root-leaf path at every layer. Therefore, we have
\begin{align*}
\mathbb{E}_{v_t, \ell}[Q] \leq 20 \cdot \sum_{j=0}^{h-1} \sum_{k \in \mathcal{N}_j} \frac{w_{j,k}}{\sum_{k' \in \mathcal{N}_j} w_{j,k'}} \cdot \mathbb{E}_{v_t}[Q_{j,k}].
\end{align*}
By our choice of $\beta=1/(400h)$, the above implies that
\begin{align*}
\mathbb{E}_{v_t}[\Delta\Psi_t] &\leq \sum_{j=0}^{h-1} \sum_{k \in \mathcal{N}_j} \frac{w_{j,k}}{\sum_{k' \in \mathcal{N}_j} w_{j,k'}} \cdot \left(-\frac{\beta \lambda }{20h}\mathbb{E}_{v_t}[|L_{j,k}|] + 20\beta^2\lambda^2\mathbb{E}_{v_t}[Q_{j,k}]\right)\\
\ & \leq \sum_{j=0}^{h-1} \sum_{k \in \mathcal{N}_j} \frac{w_{j,k}}{\sum_{k' \in \mathcal{N}_j} w_{j,k'}} \cdot \frac{1}{8000h^2}\cdot \left(-\lambda\mathbb{E}_{v_t}[|L_{j,k}|] + \lambda^2\mathbb{E}_{v_t}[Q_{j,k}]\right) = O(1).
\end{align*}
Since the algorithm is greedy, the leaf $\ell$ it assigns to the incoming vector $v_t$ produces an even smaller drift, which completes the proof.
\end{proof}
\section{Proof Overview} \label{sec:proofOverview}
Recall the setting: the input vectors $(v_\tau)_{\tau\le T}$ are sampled i.i.d. from $\mathsf{p}$ and satisfy $\|v\|_2 \le 1$, and we need to assign signs $\chi_1,\ldots,\chi_T$ in an online manner so as to minimize some target norm of the discrepancy vectors $d_t = \sum_{\tau \leq t} \chi_\tau v_\tau$. Moreover, we may assume, without loss of generality, that the distribution is mean-zero, as the algorithm can toss a fair coin and work with either $v$ or $-v$. This means that the covariance matrix $\mathbf{\Sigma} = \mathbb{E}_v[vv^\top]$ satisfies $0 \preccurlyeq \mathbf{\Sigma} \preccurlyeq I_n$.
\subsection{Koml\'os Setting} Here our goal is to minimize $\|d_t\|_\infty$. First, consider the potential function $\mathbb{E}_{v\sim \mathsf{p}}[\cosh(\lambda ~{d_t^\top v})]$ where $\cosh(a)=\frac12\cdot({e^a+e^{-a}})$. This, however, only puts anti-concentration constraints on the discrepancy vector and does not track the discrepancy in the coordinate directions. It is natural to add a potential term to enforce discrepancy constraints. In particular, let $\mathsf{p}_x = \frac12 \mathsf{p} + \frac12 \mathsf{p}_y$, where $\mathsf{p}_y$ is uniform over the standard basis vectors $(e_i)_{i \le n}$; then the potential \begin{align}
\ \Phi_t = \mathbb{E}_{x\sim \mathsf{p}_x}[\cosh(\lambda~ {d_t^\top x})],
\end{align}
allows us to control the exponential moments of $\ip{d_{t-1}}{v_t}$ as well as the discrepancy in the target test directions. In particular, if the above potential $\Phi_t \le \mathrm{poly}(T)$, then we get a bound of $O(\lambda^{-1}\log T)$ on $\|d_t\|_{\infty}$. Next we sketch a proof that for the greedy strategy using the above potential, one can take $\lambda = 1/\log T$, so that the potential remains bounded by $\mathrm{poly}(T)$ at all times.
\begin{claim}[Informal: Bounded Drift] If $\Phi_{t-1} \le T^2$, then $\mathbb{E}_{v_t}[\Delta\Phi_t] := \mathbb{E}_{v_t}[\Phi_t - \Phi_{t-1}] \le 2$.
\end{claim}
The above implies, using standard martingale arguments, that the potential remains bounded by $T^2$ with high probability, and hence $\|d_t\|_\infty = \mathrm{polylog}(T)$ at all times $t \in [T]$.
Let us first make a simplifying assumption that $\mathbf{\Sigma} = I_n/n$ and that at time $t$, the condition $ \lambda |{d^\top_{t-1}} {v_t}| \le 2\log T$ holds with probability $1$. We give an almost complete proof below under these conditions. The first condition can be dealt with by an appropriate decomposition of the covariance matrix as sketched below. The second condition only holds with high probability ($1-1/\mathrm{poly}(T)$), because we have a bound on the exponential moment, but the error event can be handled straightforwardly.
By Taylor expansion, we have that for all $a$,
\begin{equation}\label{eqn:taylor}
\cosh(\lambda (a+\delta)) - \cosh(\lambda a) ~~\le~~ \lambda \sinh(\lambda a)\cdot\delta + \lambda^2|\sinh(\lambda a)|\cdot\delta^2 \qquad \text{ for all } |\delta| \le 1,
\end{equation}
where $\sinh(a) = \frac12\cdot({e^{a} - e^{-a}})$ and we used the approximation that $\cosh(a)\approx |\sinh(a)|$. Therefore, since $d_t = d_{t-1} + \chi_t v_t$, by the above inequality we have
\begin{align*}
\ \Delta \Phi_{t} ~~\le~~ \chi_t \cdot \lambda \mathbb{E}_x\left[\sinh(\lambda d_{t-1}^\top x)\cdot x^\top v_t\right] + \lambda^2\mathbb{E}_x\left[|\sinh(\lambda d_{t-1}^\top x)|\cdot |x^\top v_t|^2\right] ~~:=~~ \chi_t \lambda L + \lambda^2Q.
\end{align*}
Since the algorithm chooses $\chi_t$ to minimize the potential, we have that $\mathbb{E}_{v_t}[\Delta \Phi_{t}] \le -\lambda \mathbb{E}_{v_t}[|L|] + \lambda^2 \mathbb{E}_{v_t}[Q]$.
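For concreteness, the greedy rule just described — evaluate the potential for both signs and keep the smaller — can be sketched in a few lines of numpy. This is a toy illustration only: a finite set of test directions stands in for samples from $\mathsf{p}_x$, and the function name and the parameter $\lambda$ are our own choices, not part of the paper's formal algorithm.

```python
import numpy as np

def greedy_signs(vectors, test_dirs, lam):
    """Greedy online signing: pick the sign chi_t minimizing the potential
    Phi_t = E_x[cosh(lam * d_t^T x)], averaged over the rows of `test_dirs`
    (a finite stand-in for the test distribution p_x)."""
    n = vectors.shape[1]
    d = np.zeros(n)
    signs = []
    for v in vectors:
        best_sign, best_phi = 1, np.inf
        for chi in (+1, -1):
            phi = np.mean(np.cosh(lam * (test_dirs @ (d + chi * v))))
            if phi < best_phi:
                best_sign, best_phi = chi, phi
        d += best_sign * v
        signs.append(best_sign)
    return np.array(signs), d
```

The analysis that follows bounds the expected drift of exactly this potential, one step at a time.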
\paragraph{Upper bounding the quadratic term:}
Using that $\mathbf{\Sigma} = \mathbb{E}_{v_t}[v_tv_t^\top]=I_n/n$, we have
\begin{align*}
\ \mathbb{E}_{v_t}[Q] & ~~=~~ \mathbb{E}_{v_tx}[|\sinh(\lambda d^\top_{t-1}x)|\cdot x^\top v_t v_t^\top x] ~~=~~ \mathbb{E}_{x}[|\sinh(\lambda d^\top_{t-1}x)| \cdot x^\top \mathbf{\Sigma} x] \\
\ &~~=~~ \frac1n \cdot \mathbb{E}_{x}[|\sinh(\lambda d^\top_{t-1}x)| \cdot \|x\|_2^2] ~~\le~~ \frac1n \cdot \mathbb{E}_{x}[|\sinh(\lambda d^\top_{t-1}x)|],
\end{align*}
where the last inequality used that $\|x\|_2 \le 1$.
\paragraph{Lower bounding the linear term:} For this we use the aforementioned coupling trick: $\mathbb{E}_{v_t}[|L|] \ge \mathbb{E}_{v_t}[LY]/\|Y\|_{\infty}$ for any coupled random variable $Y$\footnote{Here $\|Y\|_\infty$ denotes the largest value of $Y$ in its support.}. Taking $Y=|d^\top_{t-1}v_t|$, we have that $\|Y\|_{\infty} \le \log T$. Therefore,
\begin{align*}
\mathbb{E}_{v_t}[|L|] &~~=~~ \mathbb{E}_{v_t}\Big|\mathbb{E}_x\left[\sinh(\lambda d^\top_{t-1}x)\cdot x^\top v_t\right]\Big| ~\ge~ \frac{1}{\log T}\cdot \mathbb{E}_{v_tx}\left[\sinh(\lambda d^\top_{t-1}x)\cdot x^\top v_t v_t^\top d_{t-1}\right] \\
\ &~~=~~ \frac1{2n\log T} \cdot \mathbb{E}_{x}[\sinh(\lambda d^\top_{t-1}x) \cdot d^\top_{t-1}x] ~\ge~ \frac{1}{2n \lambda \log T}\cdot \mathbb{E}_{x}[|\sinh(\lambda d^\top_{t-1}x)|] - 2,
\end{align*}
using that $\sinh(a)a \ge |\sinh(a)|-2$ for all $a \in \mathbb{R}$.
Therefore, if $\lambda = 1/(2\log T)$, we can bound the drift in the potential
\[ \mathbb{E}_{v_t}[\Delta \Phi_{t}] ~~\le~~ -\frac{\lambda}{2n\log T}\cdot \mathbb{E}_{x}[|\sinh(\lambda d^\top_{t-1}x)|] + \frac{\lambda^2}{n} \cdot \mathbb{E}_{x}[|\sinh(\lambda d^\top_{t-1}x)|] + 2 ~~\le~~ 2.\]
\paragraph{Non-Isotropic Covariance.}
To handle the general case when the covariance $\mathbf{\Sigma}$ is not isotropic, let us assume that all the non-zero eigenvalues are of the form $2^{-k}$ for integers $k\ge 0$. One can always rescale the input vectors and any potential set of test vectors, so that the covariance satisfies the above, while the discrepancy is affected only by a constant factor. See Section \ref{sec:dyadiccov} for details.
Under the above assumption, $\mathbf{\Sigma} = \sum_{k} 2^{-k}\Pi_k$ where $\Pi_k$ is the orthogonal projection on to the subspace with eigenvalues $2^{-k}$. Since we only get $T$ vectors, we can ignore the eigenvalues smaller than $(nT)^{-4}$ and only need to consider $O(\log (nT))$ different scales. Then, one can work with the following potential, which imposes the alignment constraint in each such subspace:
$$\Phi_t = \sum_{k} \mathbb{E}_{x\sim \mathsf{p}_x}[\cosh(\lambda~ {d_t^\top \Pi_k x})].$$
As we have $O(\log (nT))$ pairwise orthogonal subspaces, we can still choose $\lambda=1/\mathrm{polylog}(nT)$ and, with some care, the drift can be bounded using the aforementioned ideas. Once the potential is bounded, we can bound $\|d_t\|_\infty$ as before along with the triangle inequality.
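The dyadic rescaling of the covariance can be made concrete with a small numpy sketch (our own illustration, with an ad-hoc cutoff for the tiny eigenvalues that the text says can be ignored): round every eigenvalue up to the nearest power of two and collect the projections $\Pi_k$; the rounded covariance sandwiches the original within a factor of $2$, which is the constant-factor loss mentioned above.

```python
import numpy as np

def dyadic_decomposition(cov, cutoff=1e-8):
    """Round each eigenvalue of `cov` up to the nearest power of two 2^{-k}
    (dropping eigenvalues below `cutoff`) and return the projections Pi_k
    onto the corresponding eigenspaces."""
    eigvals, eigvecs = np.linalg.eigh(cov)
    buckets = {}
    for lam, u in zip(eigvals, eigvecs.T):
        if lam < cutoff:
            continue
        # floor(-log2(lam)) is the unique k with 2^{-k-1} < lam <= 2^{-k}.
        k = int(np.floor(-np.log2(lam)))
        buckets.setdefault(k, []).append(u)
    # Pi_k = sum of u u^T over the eigenvectors in bucket k.
    return {k: np.array(us).T @ np.array(us) for k, us in buckets.items()}
```

By construction, $\sum_k 2^{-k}\Pi_k$ dominates the input covariance and is dominated by twice of it.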
\subsection{Banaszczyk Setting}
Recall that here we are given a convex body $K$ with Gaussian volume at least $1/2$ and our goal is to bound the $K$-norm of the discrepancy vector, $\|d_t\|_K$. Intuitively, $\|d\|_K$ is the minimum scaling $\gamma$ of $K$ so that $d \in \gamma K$. To this end, we will use the dual characterization of $K$: let $K^\circ = \{y: \sup_{x \in K} |\ip{x}{y}| \leq 1\}$; then $\|d\|_K = \sup_{y \in K^\circ} |\ip{d}{y}|$.
To approach this, first note that the arguments from the previous section allow us to bound not only $\|d_t\|_\infty$ but also $\max_{z \in \mathscr{E}} \ip{d_t}{z}$ for an arbitrary set of \emph{test directions} $\mathscr{E}$ (of norm at most $1$). As long as $|\mathscr{E}| \leq \mathrm{poly}(nT)$, we can bound $\max_{z \in \mathscr{E}} \ip{d_t}{z} = \mathrm{poly}(\log(nT))$.
However, to handle a norm given by an arbitrary convex body $K$, one needs exponentially many test vectors, and the previous ideas are not enough. To design a suitable test distribution for an arbitrary convex body $K$, we use \emph{generic chaining} to bound $\|d_t\|_K = \sup_{z \in K^\circ} \ip{d_t}{z}$ by choosing epsilon-nets\footnote{We remark that one can also work with admissible nets that come from Talagrand's majorizing measures theorem and probably save a logarithmic factor, but for simplicity we work with epsilon-nets at different scales.} of $K^\circ$ at geometrically decreasing scales. Again, let us assume that $\mathbf{\Sigma}=I_n/n$ for simplicity.
First, assuming the Gaussian measure of $K$ is at least $1/2$, it follows that $\mathsf{diam}(K^\circ)=O(1)$ (see \secref{sec:prelims}). So, one can choose the coarsest epsilon-net at $O(1)$-scale, while the finest epsilon-net can be taken at scale $\approx 1/\sqrt{n}$ since, by adding the standard basis vectors to the test set, one can control $\|d_t\|_2 \le \sqrt{n}$ (ignoring polylog factors) using the previous ideas from the Koml\'os setting.
\begin{figure}[h!]
\centering
{\includegraphics[width=\textwidth]{chaining.pdf}}%
\caption{\footnotesize The chaining graph $\mathcal{G}$ showing epsilon-nets of the convex body at various scales. The edges connect near neighbors at two consecutive scales. Note that any point $z \in K^\circ$ can be expressed as the sum of the edge vectors $w_\ell$ where $w_\ell = v_\ell - v_{\ell-1}$, and $(v_{\ell-1}, v_\ell)$ is an edge between two points at scale $2^{-(\ell-1)}$ and $2^{-\ell}$.
}%
\label{fig:chaining}%
\end{figure}
Now, one can use generic chaining as follows: define the directed layered graph $\mathcal{G}$ (see \figref{fig:chaining}) where the vertices $\mathcal{T}_\ell$ in layer $\ell$ are the elements of an optimal $\epsilon_\ell$-net of $K^\circ$ with $\epsilon_\ell=2^{-\ell}$. We add a directed edge from a vertex $u \in \mathcal{T}_{\ell}$ to vertex $v \in \mathcal{T}_{\ell+1}$ if $\|u-v\|_2 \le \epsilon_\ell$ and identify the corresponding edge with the vector $v-u$. The length of any such edge $v-u$, defined as $\|v-u\|_2$, is at most $\epsilon_\ell$.
Let us denote the set of edges between layer $\ell$ and $\ell+1$ by $\mathscr{E}_\ell$. Now, one can express any $z \in K^\circ$ as $\sum_\ell w_\ell + w_\mathsf{err}$ where $w_\ell \in \mathscr{E}_\ell$ and $\|w_\mathsf{err}\|_2 \le 1/\sqrt{n}$. Then, since we can control $\|d_t\|_2 \le \sqrt{n}$, we have
\[ \sup_{z \in K^\circ} \ip{d_t}{z} \le \sum_{\ell} \max_{w \in \mathscr{E}_\ell} \ip{d}{w} + \max_{\|w\|_2 \le n^{-1/2}} \ip{d}{w_\mathsf{err}} = O(\log n) \cdot \max_\ell \max_{w \in \mathscr{E}_\ell} \ip{d}{w}.\]
Thus, it suffices to control $\max_{w \in \mathscr{E}_\ell} \ip{d}{w}$ for each scale using a suitable test distribution in the potential.
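A minimal numpy sketch of this chaining decomposition may help fix ideas. Here a finite point cloud stands in for $K^\circ$, the nets are built greedily, and the helper names are ours; the telescoping sum of edge vectors recovers the finest-net approximation of a point $z$, exactly as in the figure.

```python
import numpy as np

def greedy_net(points, eps):
    """Greedy epsilon-net: keep a point iff it is farther than eps from
    every point kept so far.  Every input point then lies within eps of
    some net point."""
    net = []
    for p in points:
        if all(np.linalg.norm(p - q) > eps for q in net):
            net.append(p)
    return np.array(net)

def chain_decompose(z, nets):
    """Express z as v_0 + sum_l (v_l - v_{l-1}) + err, where v_l is the
    nearest point to z in the net at scale 2^{-l}.  The first "edge" is the
    coarsest net point itself; the residual has norm at most the finest scale."""
    vs = [net[np.argmin(np.linalg.norm(net - z, axis=1))] for net in nets]
    edges = [vs[0]] + [vs[l] - vs[l - 1] for l in range(1, len(vs))]
    err = z - vs[-1]
    return edges, err
```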
For example, suppose we knew that $\mathbb{E}_{\widetilde{w}}[\cosh(\lambda d^\top \widetilde{w})] \le T$ for $\widetilde{w}$ uniform in $r^2\cdot\mathscr{E}_\ell$ for a scaling factor $r^2$. Then, it would follow that $\max_{w \in \mathscr{E}_\ell} \ip{d}{w} = O(\lambda^{-1}r^{-2} \log |\mathscr{E}_\ell| \cdot \log T)$. Standard results in convex geometry (see \secref{sec:prelims}) imply that $|\mathscr{E}_\ell| \le e^{O(1/\epsilon_\ell^2)}$, so to obtain a $\mathrm{polylog}(nT)$ bound, one needs to scale the vectors $w \in \mathscr{E}_\ell$ by a factor of $r = 1/\epsilon_\ell$. This implies that the $\ell_2$-norm of scaled vector $r^2 \cdot w$ could be as large as $\sqrt{n}$.
This makes the drift analysis for the potential more challenging because the Taylor expansion in \eqref{eqn:taylor} is no longer always valid: the update $\delta$ could be as large as $\sqrt{n}$. This is where the sub-exponential tail of the input distribution is useful for us. Since the input distribution is $1/n$-isotropic with a sub-exponential tail, we know that if $\|w\|_2 \le \sqrt{n}$, then for a typical choice of $v \sim \mathsf{p}$, the following holds
$$\ip{v_t}{w} \approx \mathbb{E}_{v_t}[\ip{v_t}{w}^2] = \mathbb{E}_{v_t}[w^\top vv^\top w] = \frac{\|w\|_2^2}{n} \le 1.$$
Thus, with some work one can show that the previous Taylor expansion essentially holds ``on average'' and the drift can be bounded. The case of general covariances can be handled by a decomposition as before. Although the full analysis becomes somewhat technical, all the main ideas are presented above.
\subsection{Multi-color Discrepancy}
For the multi-color discrepancy setting, we show that if there is an online algorithm that uses a greedy strategy with respect to a certain kind of potential $\Phi$, then one can adapt the same potential to the multi-color setting in a black-box manner.
In particular, let the number of colors be $R=2^h$ for an integer $h$ and let all weights be unit. Let us identify each leaf of a complete binary tree $\mathcal{T}$ of height $h$ with a color. Our goal is then to assign the incoming vector to one of the leaves. In the offline setting, this is easy to do with a logarithmic dependence on $R$ --- we start at the root, use the algorithm for the signed discrepancy setting to decide to which sub-tree the vector should be assigned, and then recurse until the vector is assigned to one of the leaves. Such a strategy in the online stochastic setting is not obvious, as the distribution of the incoming vector might change as one decides which sub-tree it belongs to.
By exploiting the idea, used in \cite{BJSS20} and \cite{DFGR19}, of working with the Haar basis, we can implement such a strategy if the potential $\Phi$ satisfies certain requirements. For a leaf $\ell$, let us define $d_\ell(t)$ to be the sum of all the input vectors assigned to that leaf by time $t$. In the same way, for an internal node $u$ of $\mathcal{T}$, we define $d_u(t)$ to be the sum of the vectors $d_\ell(t)$ over all the leaves $\ell$ in the sub-tree rooted at $u$. The crucial insight is that one can track the difference $d^{-}_u(t)$ of the discrepancy vectors of the two children of every internal node $u$ of the tree $\mathcal{T}$. In particular, one can work with the potential
$$\Psi_t = \sum_{u \in \mathcal{T}} \Phi\big(\beta\:d^-_u(t)\big),$$ for some parameter $\beta$, and assign the incoming vector to the leaf that minimizes the increase in $\Psi_t$. Then, essentially we show that the analysis for the potential $\Phi$ translates to the setting of the potential $\Psi_t$ if $\Phi$ satisfies certain requirements (see \secref{sec:multicolor}).
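To make the reduction concrete, here is a toy Python sketch of the greedy rule over $\Psi_t$. The coordinate-wise $\cosh$ potential standing in for $\Phi$, the parameters $\beta,\lambda$, and all names are our own illustrative choices; the point is only the mechanics of scoring every leaf against the tree of children-differences.

```python
import numpy as np

def phi(d, lam=0.1):
    # Toy stand-in for Phi: average cosh over coordinate directions.
    return np.mean(np.cosh(lam * d))

def assign_color(v, leaf_sums, beta=0.05, lam=0.1):
    """Greedily assign v to the leaf (color) minimizing
    Psi = sum over internal nodes u of phi(beta * (d_left(u) - d_right(u))).
    `leaf_sums` has shape (R, n) with R = 2^h and is updated in place."""
    R, n = leaf_sums.shape
    h = int(np.log2(R))

    def psi(sums):
        total, level = 0.0, sums.copy()
        for _ in range(h):
            diff = level[0::2] - level[1::2]   # sibling differences d_u^-
            total += sum(phi(beta * dm, lam) for dm in diff)
            level = level[0::2] + level[1::2]  # merge siblings into parents
        return total

    best, best_val = 0, np.inf
    for ell in range(R):
        trial = leaf_sums.copy()
        trial[ell] += v
        val = psi(trial)
        if val < best_val:
            best, best_val = ell, val
    leaf_sums[best] += v
    return best
```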
\section{Preliminaries}
\subsection{Notation}
Throughout this paper, $\log$ denotes the natural logarithm unless the base is explicitly mentioned. We use $[k]$ to denote the set $\{1,2,\dotsc, k\}$. Sets will be denoted by script letters (e.g. $\mathcal{T}$).
Random variables are denoted by capital letters (e.g.\ $A$) and values they attain are denoted by lower-case letters possibly with subscripts and superscripts (e.g.\ $a,a_1,a'$, etc.). Events in a probability space will be denoted by calligraphic letters (e.g.\ $\mathcal{E}$). We also use $\mathbf{1}_\mathcal{E}$ to denote the indicator random variable for the event $\mathcal{E}$. We write $\lambda \mathsf{p} + (1-\lambda) \mathsf{p}'$ to denote the convex combination of the two distributions.
Given a distribution $\mathsf{p}$, we use the notation $x \sim \mathsf{p}$ to denote an element $x$ sampled from the distribution $\mathsf{p}$. For a real function $f$, we will write $\mathbb{E}_{x \sim \mathsf{p}}[f(x)]$ to denote the expected value of $f(x)$ under $x$ sampled from $\mathsf{p}$. If the distribution is clear from the context, then we will abbreviate the above as $\mathbb{E}_{x}[f(x)]$.
For a symmetric matrix $M$, we use $M^+$ to denote the Moore-Penrose pseudo-inverse, $\|M\|_{\mathsf{op}}$ for the operator norm of $M$ and $\mathrm{Tr}(M)$ for the trace of $M$.
\subsection{Sub-exponential Tails}
Recall that a sub-exponential distribution $\mathsf{p}$ on $\mathbb{R}$ satisfies, for every $r>0$, $\mathbb{P}_{x\sim \mathsf{p}}[|x - \mu| \ge \sigma r] \le e^{-\Omega(r)}$, where $\mu=\mathbb{E}_x[x]$ and $\sigma^2=\mathbb{E}_x[(x-\mu)^2]$. Standard properties of a distribution with a sub-exponential tail are \emph{hypercontractivity} and a bound on the exponential moment (c.f. \S2.7 in~\cite{V18}).
\begin{proposition}
\label{prop:logconcave}
Let $\mathsf{p}$ be a distribution on $\mathbb{R}$ that has a sub-exponential tail with mean zero and variance $\sigma^2$. Then, for a constant $C>0$, we have that $\mathbb{E}_{x \sim \mathsf{p}}[e^{s |x|}] \le C$ for all $|s| \le 1/(2\sigma)$. Moreover, for every $k>0$, we have $\mathbb{E}_{x \sim \mathsf{p}}[|x|^k]^{1/k} \le C \cdot k \sigma$.
\end{proposition}
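As a quick numeric sanity check of the proposition (our own illustration, not part of the paper), one can estimate both quantities for the Laplace distribution, a canonical sub-exponential law:

```python
import numpy as np

# Laplace(0, b) has variance 2*b^2, so b = sigma/sqrt(2) gives variance sigma^2.
rng = np.random.default_rng(0)
sigma = 1.0
x = rng.laplace(0.0, sigma / np.sqrt(2.0), size=200_000)

# Exponential moment at the edge |s| = 1/(2*sigma) stays bounded by a constant
# (the closed form here is 1/(1 - s*b) ~ 1.55).
s = 1.0 / (2.0 * sigma)
exp_moment = np.mean(np.exp(s * np.abs(x)))

# Moment growth: E[|x|^k]^{1/k} grows at most linearly in k.
moment_ratios = [np.mean(np.abs(x) ** k) ** (1.0 / k) / (k * sigma)
                 for k in range(1, 9)]
```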
\subsection{Convex Geometry}
\label{sec:prelims}
Given a convex body $K \subseteq \mathbb{R}^n$, its \emph{polar} convex body is defined as $K^\circ = \{ y \mid \sup_{x \in K} |\ip{x}{y}| \le 1\}$. If $K$ is symmetric, then it defines a norm given by $\|z\|_K = \sup_{y \in K^\circ} \ip{z}{y}$.
For a linear subspace $H \subseteq \mathbb{R}^n$, we have that $(K \cap H)^\circ = \Pi_H(K^\circ)$ where $\Pi_H$ is the orthogonal projection on to the subspace $H$.
\paragraph{Gaussian Measure.} We denote by $\gamma_n$ the $n$-dimensional standard Gaussian measure on $\mathbb{R}^n$. More precisely, for any measurable set $\mathcal{A}\subseteq \mathbb{R}^n$, we have
\[ \gamma_n(\mathcal{A}) = \frac{1}{(\sqrt{2\pi})^n} \int_\mathcal{A} e^{-\|x\|_2^2/2} dx .\]
For a $k$-dimensional linear subspace $H$ of $\mathbb{R}^n$ and a set $\mathcal{A} \subseteq H$, we denote by $\gamma_k(\mathcal{A})$ the Gaussian measure of the set $\mathcal{A}$ where $H$ is taken to be the whole space. For convenience, we will sometimes write $\gamma_H(\mathcal{A})$ to denote $\gamma_{\dim(H)}(\mathcal{A} \cap H)$.
The following is a standard inequality for the Gaussian measure of slices of a convex body. For a proof, see Lemma 14 in \cite{DGLN16}.
\begin{proposition}\label{prop:slicemeasure}
Let $K \subseteq \mathbb{R}^n$ with $\gamma_n(K) \ge 1/2$ and $H \subseteq \mathbb{R}^n$ be a linear subspace of dimension $k$. Then, $\gamma_k(K \cap H) \ge \gamma_n(K)$.
\end{proposition}
\paragraph{Gaussian Width.}
For a set $\mathcal{T} \subseteq \mathbb{R}^n$, let $w(\mathcal{T}) = \mathbb{E}_g[\sup_{x \in \mathcal{T}} \ip{g}{x}]$ denote the \emph{Gaussian width} of $\mathcal{T}$ where $g \in \mathbb{R}^n$ is sampled from the standard normal distribution. Let $\mathsf{diam}(\mathcal{T}) = \sup_{x,y \in \mathcal{T}} \|x-y\|_2$ denote the diameter of the set $\mathcal{T}$.
The following lemma is standard up to the exact constants. For a proof, see Lemmas 26 and 27 in \cite{DGLN16}.
\begin{proposition}\label{prop:width}
Let $K \subseteq \mathbb{R}^n$ be a symmetric convex body with $\gamma_n(K) \ge 1/2$. Then, $w(K^\circ) \le \frac{3}{2}$ and $\mathsf{diam}(K^\circ) \le 4$.
\end{proposition}
To prevent confusion, we remark that the Gaussian width is a $\Theta(\sqrt{n})$ factor larger than the \emph{spherical width}, defined as $\mathbb{E}_\theta[\sup_{x \in \mathcal{T}} \ip{\theta}{x}]$ for $\theta$ chosen uniformly at random from the unit sphere $\mathbb{S}^{n-1}$. So the above proposition implies that the spherical width of $K^\circ$ is $O(1/\sqrt{n})$.
For a linear subspace $H \subseteq \mathbb{R}^n$ and a subset $\mathcal{T} \subseteq H$, we will use the notation $w_H(\mathcal{T}) = \mathbb{E}_g[\sup_{x \in \mathcal{T}} \ip{g}{x}]$ to denote the Gaussian width of $\mathcal{T}$ in the subspace $H$, where $g$ is sampled from the standard normal distribution on the subspace $H$. \pref{prop:slicemeasure} and \pref{prop:width} also imply that $w_H(\mathcal{T}) \le 3/2$.
\paragraph{Covering Numbers.} For a set $\mathcal{T} \subseteq \mathbb{R}^n$, let $N(\mathcal{T}, \epsilon)$ denote the size of the smallest $\epsilon$-net of $\mathcal{T}$ in the Euclidean metric, \emph{i.e.}, the smallest number of closed Euclidean balls of radius $\epsilon$ whose union covers $\mathcal{T}$. Then, we have the following inequality (c.f. \cite{W19}, \S5.5).
\begin{proposition}[Sudakov minoration] \label{prop:sudakov}
For any set $\mathcal{T} \subseteq \mathbb{R}^n$ and any $\epsilon > 0$
\[ w(\mathcal{T}) \ge \frac{\epsilon}{2} \sqrt{\log N(\mathcal{T},\epsilon)}, ~\text{ or equivalently, } ~ N(\mathcal{T}, \epsilon) \le e^{4w(\mathcal{T})^2/\epsilon^2}.\]
\end{proposition}
Analogously, for a linear subspace $H \subseteq \mathbb{R}^n$ and a subset $\mathcal{T} \subseteq H$, we also have $w_H(\mathcal{T}) \ge \frac{\epsilon}{2} \sqrt{\log N_H(\mathcal{T},\epsilon)}$, where $N_H(\mathcal{T},\epsilon)$ denotes the covering number of $\mathcal{T}$ when $H$ is considered the whole space.
"file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz"
} |
\section{Introduction}
\label{intro}
Super-resolution aims at producing high-resolution (HR) images from the corresponding low-resolution (LR) ones by filling in missing details.
For single image super-resolution, an HR image is estimated by exploring natural image priors and self-similarity within the image.
For video super-resolution, both spatial information across positions and temporal information across frames can be used to enhance details for an LR frame.
Recently, the task of video super-resolution has drawn much attention in both the research and industrial communities. For example, video super-resolution is required when surveillance videos are zoomed in to recognize a person's identity or a car's license plate, or when videos are projected onto a high-definition display device for visually pleasant watching.
Most video super-resolution methods~\cite{kappeler2016video,caballero2017real,tao2017detail,xue2019video,liu2017robust} adopt the following pipeline: motion estimation, motion compensation, fusion and upsampling.
They estimate optical flow between a reference frame and other frames in either an offline or online manner, and then align all other frames to the reference with backward warping. However, this is not optimal for video SR. Methods with explicit motion compensation rely heavily on the accuracy of motion estimation. Inaccurate motion estimation and alignment, especially when there is occlusion or complex motion, result in distortion and errors, deteriorating the final super-resolution performance. Besides, per-pixel motion estimation such as optical flow often incurs a heavy computational cost.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{img/calendar_37_intro.pdf}
\caption{VSR results for the \textit{Calendar} clip in Vid4~\cite{caballero2017real}. Our method produces a result with more details ({\color{cyan}cyan} arrow) and fewer artifacts ({\color{red}red} arrow) than DUF~\cite{jo2018deep} and the recently proposed EDVR~\cite{wang2019edvr}.
} \vspace{-5mm}
\label{inter-group}
\end{figure}
Recently Jo~\emph{et al.}~\cite{jo2018deep} proposed the DUF method which implicitly utilizes motion information among LR frames to recover HR frames by means of dynamic upsampling filters. It is less influenced by the accuracy of motion estimation but its performance is limited by the size of the dynamic upsampling filters. In addition, the temporal information integration process from other frames to the reference frame is conducted without explicitly taking the reference frame into consideration. This leads to ineffective information integration for border frames in an input sequence.
In this work, we propose a novel deep neural network which hierarchically utilizes motion information in an implicit manner and is able to make full use of complementary information across frames to recover missing details for the reference frame. Instead of aligning all other frames to the reference frame with optical flow or applying 3D convolution to the whole sequence, we propose to divide a sequence into several groups and conduct information integration in a hierarchical way, that is, first integrating information within each group and then integrating information across groups. The proposed grouping method produces groups of subsequences with different frame rates, which provide different kinds of complementary information for the reference frame. Such complementary information is modeled with an attention module, and the groups are deeply fused with a 3D dense block and a 2D dense block to generate a high-resolution version of the reference frame. Overall, the proposed method follows a hierarchical manner. It is able to handle various kinds of motion and adaptively borrow information from groups of different frame rates. For example, if an object is occluded in one frame, the model would pay more attention to frames in which the object is not occluded.
However, the capability of the proposed method is still limited in dealing with video sequences of large motion since the receptive field is finite. To address this issue, a fast homography based method is proposed for rough motion compensation among frames. The resulting warped frames are not perfectly aligned but they suffer less distortion artifacts compared to existing optical flow based methods. Appearance difference among frames is indeed reduced such that the proposed neural network model can focus on object motion and produce better super-resolution result.
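The paper defers the details of the homography-based alignment to a later section, but its core operation — fitting a single homography to point correspondences and warping with it — can be sketched with a plain least-squares DLT in numpy. This is our own minimal sketch under simplifying assumptions: a real pipeline would extract correspondences from image features and warp whole frames rather than points.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: solve for the 3x3 matrix H (up to scale)
    mapping src -> dst from >= 4 point correspondences, via the SVD of the
    stacked linear constraints."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)       # right singular vector of smallest value
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply H to 2D points in homogeneous coordinates."""
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

Because a homography is a single global transform, such warps cannot align independently moving objects — which is why the paper treats this only as rough compensation and leaves the residual motion to the network.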
The proposed method is evaluated on several video super-resolution benchmarks and achieves state-of-the-art performance. We conduct further analysis to demonstrate its effectiveness.
To sum up, we make the following contributions:
\begin{itemize}
\item We propose a novel neural network which efficiently fuses spatio-temporal information through frame-rate-aware groups in a hierarchical manner.
\item We introduce a fast spatial alignment method to handle videos with large motion.
\item The proposed method achieves state-of-the-art performance on two popular VSR benchmarks.
\end{itemize}
\section{Related Work}
\label{related}
\subsection{Single Image Super Resolution}
Single image super-resolution (SISR) has benefited greatly from progress in deep learning. Dong~\emph{et al.}~\cite{dong2014learning} first proposed to use a three-layer CNN for SISR and showed impressive potential in super-resolving LR images. New architectures have been designed since then, including a very deep CNN with residual connections~\cite{kim2016accurate}, a recursive architecture with skip-connections~\cite{kim2016deeply}, and an architecture with a sub-pixel layer and multi-channel output that directly works on LR images as input~\cite{shi2016real}. More recent networks, including EDSR~\cite{lim2017enhanced}, RDN~\cite{zhang2018residual}, DBPN~\cite{haris2018deep}, and RCAN~\cite{zhang2018image}, outperformed previous works by a large margin when trained on the novel large dataset DIV2K~\cite{timofte2017ntire}. More discussions can be found in the recent survey \cite{yang2019deep}.
\subsection{Video Super Resolution}
Video super-resolution relies heavily on temporal alignment, either explicit or implicit, to make use of complementary information from neighboring low-resolution frames. VESPCN \cite{caballero2017real} is the first end-to-end video SR method that jointly trains optical flow estimation and spatio-temporal networks. SPMC \cite{tao2017detail} proposed a new sub-pixel motion compensation layer for inter-frame motion alignment, achieving motion compensation and upsampling simultaneously. \cite{xue2019video} proposed to jointly train motion analysis and video super-resolution in an end-to-end manner through a proposed task-oriented flow. \cite{haris2019recurrent} proposed to use a recurrent encoder-decoder module to exploit spatial and temporal information, where explicit inter-frame motion was estimated. Methods using implicit temporal alignment have shown superior performance on several benchmarks. \cite{kim20183dsrnet} exploited the spatio-temporal feature representation capability of 3D CNNs to avoid motion alignment, stacking several 3D convolutional layers for video SR. \cite{jo2018deep} proposed to use 3D convolutional layers to compute dynamic filters~\cite{jia2016dynamic} for implicit motion compensation and upsampling. Instead of image-level motion alignment, TDAN~\cite{tian2018tdan} and EDVR~\cite{wang2019edvr} perform motion alignment at the feature level. TDAN \cite{tian2018tdan} proposed a temporal deformable alignment module to align features of different frames for better performance. EDVR \cite{wang2019edvr} extended TDAN in two aspects: 1) using deformable alignment in a coarse-to-fine manner, and 2) proposing a new temporal and spatial attention fusion module, instead of naively concatenating the aligned LR frames as TDAN does.
The work most closely related to ours is~\cite{liu2017robust}, which also re-organizes the input frames into several groups. However, in~\cite{liu2017robust}, groups are composed of different numbers of input frames. In addition, that method generates a super-resolution result for each group and computes an attention map to combine these results, which is computationally expensive and not very effective. Our method divides input frames into several groups based on frame rate and effectively integrates temporal information in a hierarchical way.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{img/pipeline.pdf}
\caption{The proposed method with temporal group attention. }
\vspace{-2mm}
\label{pipeline}
\end{figure*}
\section{Methodology}
\label{method}
\subsection{Overview}
Given a consecutive low-resolution video frame sequence consisting of one reference frame $I_t^L$ and $2N$ neighboring frames \{$I_{t-N}^L:I_{t-1}^L,I_{t+1}^L:I_{t+N}^L$\}, the goal of VSR is to reconstruct a high-resolution version of reference frame $\hat{I_{t}}$ by fully utilizing the spatio-temporal information across the sequence.
The overall pipeline of the proposed method is shown in Fig.~\ref{pipeline}. It is a generic framework suitable for processing sequences of different input lengths. Taking seven frames $\{I_1^L, I_2^L, ..., I_7^L\}$ as an example, we denote the middle frame $I_4^L$ as the reference frame and the other frames as neighboring ones.
The seven input frames are divided into three groups based on decoupled motion, with each group representing a certain frame rate. An intra-group fusion module with shared weights is proposed to extract and fuse spatio-temporal information within each group. Information across groups is further integrated through an attention-based inter-group fusion module. Finally, the output high-resolution frame $\hat{I_4}$ is generated by adding the residual map produced by the network to the bicubic upsampling of the input reference frame.
Additionally, a fast spatial alignment module is proposed to further help deal with video sequences of large motion.
\subsection{Temporal Group Attention}
A crucial problem with implicit motion compensation lies in the inefficient fusion of temporal information from neighboring frames. In~\cite{jo2018deep}, input frames are stacked along the temporal axis and 3D convolutions are directly applied to the stacked frames. In such a scheme, fusion of distant neighboring frames is not explicitly guided by the reference frame, which impedes the reference frame from borrowing information from them. To address this issue, we propose to split the neighboring $2N$ frames into $N$ groups based on their temporal distance from the reference frame. Spatio-temporal information is then extracted and fused in a hierarchical manner: an intra-group fusion module integrates information within each group, followed by an inter-group fusion module which effectively handles group-wise features.
\textbf{Temporal Grouping.} In contrast to previous work, the neighboring $2N$ frames are split into $N$ groups based on their temporal distance to the reference frame.
The original sequence is reordered as $\{G_1,...,G_n\}$, $n\in[1:N]$, where $G_n=\{I^L_{t-n}, I_t^L,I^L_{t+n}\}$ is a subsequence consisting of a former frame $I_{t-n}^L$, the reference frame $I_{t}^L$ and a latter frame $I_{t+n}^L$. Notice that the reference frame appears in each group. It is noteworthy that our method can be easily generalized to an arbitrary number of input frames.
The grouping allows explicit and efficient integration of neighboring frames at different temporal distances, for two reasons:
1) The contributions of neighboring frames at different temporal distances are not equal, especially for frames with large deformation, occlusion and motion blur.
When a region is missing in one group (for example, due to occlusion), it can be recovered from other groups. That is, the information of different groups complements each other. 2)
The reference frame in each group guides the model to extract beneficial information from neighboring frames, allowing efficient information extraction and fusion.
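The grouping step itself is a one-liner; for concreteness (a toy Python sketch, with our own function name — not the authors' implementation):

```python
def temporal_groups(frames):
    """Split a window of 2N+1 frames into N groups
    G_n = [I_{t-n}, I_t, I_{t+n}], ordered from slow to fast frame rate.
    `frames` is any indexable sequence with the reference frame in the middle."""
    assert len(frames) % 2 == 1, "expects an odd-length window"
    t = len(frames) // 2  # index of the reference frame
    return [[frames[t - n], frames[t], frames[t + n]]
            for n in range(1, t + 1)]
```

For a seven-frame window this yields the three groups used throughout the paper, each containing the reference frame.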
\textbf {Intra-group Fusion.} For each group, an intra-group fusion module is deployed for feature extraction and fusion within each group.
The module consists of three parts. The first part contains three units as the spatial feature extractor, where each unit is composed of a $3\times 3$ convolutional layer followed by batch normalization (BN) \cite{normalization2015accelerating} and a ReLU \cite{glorot2011deep}. All convolutional layers are equipped with a dilation rate to model the motion level associated with a group. The dilation rate is determined according to the frame rate of each group, under the assumption that a distant group exhibits large motion while a near group exhibits small motion. Subsequently, in the second part, an additional 3D convolutional layer with a $3\times3\times3$ kernel is used to perform spatio-temporal feature fusion. Finally, group-wise features $F_n^g$ are produced by applying eighteen 2D units in a 2D dense block to deeply integrate information within each group.
The weights of the intra-group fusion module are shared for each group for efficiency. The effectiveness of the proposed temporal grouping are presented in Sec.4.3.
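To illustrate why the dilation rate can serve as a cheap proxy for motion level, note that the effective receptive field of a $3\times3$ convolution grows linearly with its dilation; the group-to-dilation assignment below is an assumption for illustration, not the exact configuration:

```python
# Minimal sketch: effective kernel size of a dilated convolution,
# k_eff = k + (k - 1) * (d - 1). Larger dilation -> larger receptive
# field, suited to groups assumed to contain larger motion.
def effective_kernel(k, dilation):
    return k + (k - 1) * (dilation - 1)

# Illustrative assumption: groups {345, 246, 147} assigned dilations
# 1, 2, 3 respectively, giving effective kernels 3, 5 and 7.
sizes = [effective_kernel(3, d) for d in (1, 2, 3)]
```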
\textbf{Inter-group Fusion with Temporal Attention.} To better integrate features from different groups, a temporal attention module is introduced. Temporal attention has been widely used in video-related tasks~\cite{song2017end,zanfir2016spatio,zang2018attention,yan2019stat}. In this work, we show that temporal attention also benefits the task of VSR by enabling the model to pay different attention across time. In the previous section, a frame sequence is categorized into groups according to different frame rates. These groups contain complementary information. Usually, a group with slow frame rate is more informative because the neighboring frames are more similar to the reference one. Simultaneously, groups with fast frame rate may also capture information about some fine details which are missing in the nearby frames. Hence, temporal attention works as a guidance to efficiently integrate features from different temporal interval groups.
\begin{figure}[thbp]
\label{2}
\centering
\includegraphics[width=1\columnwidth]{img/softmax.pdf}
\caption{Computation of group attention maps. $F_n^a$ corresponds to group-wise features while $M_n$ is the attention mask.}
\vspace{-1mm}
\label{softmax}
\end{figure}
For each group, a one-channel feature map $F^a_n$ is computed by applying a $3\times3$ convolutional layer on top of the corresponding feature maps $F^g_n$. These maps are then concatenated, and a softmax function is applied along the temporal axis at each spatial position to compute the attention maps $M_n(x,y)$, as shown in Fig.~\ref{softmax}.
\begin{equation}
\label{4}
M_n(x,y)_j=\dfrac{e^{F^{a}_{n}(x,y)_j}}{\sum_{i=1}^N e^{F_i^a(x,y)_j}}
\end{equation}
Attention weighted feature for each group $\widetilde{F}^g_n$ is calculated as:
\begin{equation}
\label{5}
\widetilde{F}^g_n = M_n \odot {F_n^g}, n\in[1:N]
\end{equation}
where $M_n(x,y)_j$ represents the weight of the temporal group attention mask at location $(x,y)_j$. ${F_n^g}$ represents the group-wise features produced by intra-group fusion module. `$\odot$' denotes element-wise multiplication.
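A minimal NumPy sketch of Eqs. (4) and (5) follows; the array shapes are assumptions for illustration:

```python
import numpy as np

# Minimal sketch of Eqs. (4)-(5): per-position softmax over the N
# group activations, then element-wise reweighting of group features.
def temporal_group_attention(F_a, F_g):
    """F_a: (N, H, W) one-channel activations per group.
       F_g: (N, C, H, W) group-wise features.
       Returns attention-weighted features of shape (N, C, H, W)."""
    e = np.exp(F_a - F_a.max(axis=0, keepdims=True))  # numerically stable
    M = e / e.sum(axis=0, keepdims=True)              # (N, H, W), sums to 1 over N
    return M[:, None] * F_g                           # broadcast over channels

F_a = np.zeros((3, 4, 4))        # equal activations -> uniform 1/3 attention
F_g = np.ones((3, 8, 4, 4))
out = temporal_group_attention(F_a, F_g)
```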
The goal of the inter-group fusion module is to aggregate information across different temporal groups and produce a high-resolution residual map.
In order to make full use of the attention-weighted features over temporal groups, we first aggregate those features by concatenating them along the temporal axis and feed the result into a 3D dense block. Then a 2D dense block is applied on top for further fusion, as shown in Fig.~\ref{inter-group}. The 3D unit has the same structure as the 2D unit used in the intra-group fusion module. A convolutional layer with $1\times3\times3$ kernel is inserted at the end of the 3D dense block to reduce the number of channels. The design of the 2D and 3D dense blocks is inspired by RDN~\cite{zhang2018residual} and DUF~\cite{jo2018deep}, modified to fit our pipeline efficiently.
Finally, similar to several single image super-resolution methods, the aggregated features are upsampled with a depth-to-space operation~\cite{shi2016real} to produce the high-resolution residual map $R_t$. The high-resolution reconstruction $\hat{I}_t$ is computed as the sum of the residual map $R_t$ and a bicubic upsampled reference image $I^{\uparrow}_t$.
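The depth-to-space operation itself can be sketched as follows (a minimal NumPy stand-in for the pixel-shuffle layer of \cite{shi2016real}; the shapes are illustrative):

```python
import numpy as np

# Minimal sketch of depth-to-space (pixel shuffle): r*r*C channels are
# rearranged into an r-times larger spatial grid.
def depth_to_space(x, r):
    """x: (C*r*r, H, W) -> (C, H*r, W*r)."""
    c, h, w = x.shape
    C = c // (r * r)
    x = x.reshape(C, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)      # (C, H, r, W, r)
    return x.reshape(C, h * r, w * r)

x = np.arange(16.0).reshape(16, 1, 1)   # 16 channels, 1x1 spatial
y = depth_to_space(x, 4)                # -> 1 channel, 4x4 spatial
```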
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{img/modulate.pdf}
\caption{Structure of the inter-group fusion module.
}
\vspace{-3mm}
\label{inter-group}
\end{figure}
\subsection{Fast Spatial Alignment}
\begin{figure*}[t]
\label{1}
\centering
\includegraphics[width=\textwidth]{img/flow_homo.pdf}
\caption{Fast spatial alignment compared with optical flow. (a) Original 5 consecutive frames, of which \textbf{\color{red}frame 3} is the reference frame. (b) Alignment with optical flow. The flow for each neighboring frame is estimated independently. (c) The proposed alignment only estimates basic homographies for consecutive frames. The frame-level alignment suppresses pixel-level distortion. Zoom in for better visualization.}
\vspace{-2mm}
\label{flow_homo}
\end{figure*}
Although the proposed model is able to effectively use temporal information across frames, it has difficulty in dealing with videos with large motion. To improve its performance in the case of large motion, we further propose a fast spatial alignment module. Different from previous methods~\cite{ma2015handling,caballero2017real,xue2019video} which either use offline optical flow or an integrated optical flow network for motion estimation and compensation, we estimate a homography between every two consecutive frames and warp neighboring frames to the reference frame, as shown in Fig.~\ref{flow_homo}.
Interest points are detected by feature detectors such as SIFT~\cite{lowe2004distinctive} or ORB~\cite{rublee2011orb}, and point correspondences are computed to estimate the homography.
The homography from frame $A$ to $C$ can be computed as the product of the homography from $A$ to $B$ and that from $B$ to $C$:
\begin{equation}
H_{A\to C} = H_{A\to B}\cdot H_{B\to C}
\end{equation}
For a homography, the inverse transform is represented by the inverse of the matrix:
\begin{equation}
H_{B\to A} = H_{A\to B}^{-1}
\end{equation}
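These two properties can be checked numerically with toy translation homographies (a minimal sketch; the composition order follows the equation above, and for pure translations the product simply accumulates the shifts):

```python
import numpy as np

# Minimal sketch: homographies compose by matrix product and invert by
# matrix inverse. Pure translations are used here only for illustration.
def translation_H(tx, ty):
    H = np.eye(3)
    H[0, 2], H[1, 2] = tx, ty
    return H

H_ab = translation_H(1.0, 0.0)   # A -> B: shift right by 1
H_bc = translation_H(2.0, 0.0)   # B -> C: shift right by 2
H_ac = H_ab @ H_bc               # A -> C: shift right by 3
H_ba = np.linalg.inv(H_ab)       # B -> A: shift left by 1
```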
Since optical flow is computed for each pixel, imperfect optical flow estimation introduces unexpected pixel-level distortion into warping, destroying structure in the original images. In addition, most optical-flow-based methods~\cite{liao2015video,caballero2017real,tao2017detail,xue2019video} estimate optical flow between each neighboring frame and the reference frame independently, which incurs substantial redundant computation when super-resolving a long sequence. In our method, since a homography is a global transformation, it preserves structure better and introduces few artifacts. In addition, the associative composition of homographies allows decomposing a homography between two frames into a product of homographies between every two consecutive frames in that interval, which avoids redundant computation and speeds up pre-alignment. Note that the pre-alignment here does not need to be perfect: as long as it does not introduce much pixel-level distortion, the proposed VSR network can give good performance. We also introduce an exit mechanism for pre-alignment for robustness. That is, when few interest points are detected or a frame differs substantially from the result of applying $H$ followed by $H^{-1}$, the frames are kept as they are without any pre-alignment. In other words, a conservative strategy is adopted in the pre-alignment procedure.
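The conservative exit check can be sketched as follows; the keypoint threshold, error tolerance, and point-based round-trip test are assumptions standing in for the image-domain comparison described above:

```python
import numpy as np

# Minimal sketch of the exit mechanism: skip pre-alignment when too few
# interest points are found, or when applying H followed by H^{-1} does
# not round-trip a set of probe points. Thresholds are illustrative.
def should_align(H, n_keypoints, min_kp=20, max_err=1.0):
    if n_keypoints < min_kp:
        return False                      # too few matches: keep frames as-is
    # Probe points in homogeneous coordinates (columns).
    pts = np.array([[0, 0, 1], [100, 0, 1],
                    [0, 100, 1], [100, 100, 1]], float).T
    back = np.linalg.inv(H) @ (H @ pts)
    back /= back[2]                       # re-normalize homogeneous coords
    err = np.abs(back - pts).max()
    return err <= max_err

ok = should_align(np.eye(3), n_keypoints=50)  # identity round-trips exactly
```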
\section{Experiments}
\label{experiment}
\begin{table*}[t]
\centering
\scalebox{0.9}{
\begin{tabular}{lccccccc}
\toprule
Method &\# Frames & Calendar (Y) & City (Y) & Foliage (Y) & Walk (Y) &Average (Y) & Average (RGB)
\\
\midrule
Bicubic & 1 &18.83/0.4936 &23.84/0.5234 &21.52/0.4438 &23.01/0.7096 & 21.80/0.5426 & 20.37/0.5106
\\
SPMC $^\dagger$~\cite{tao2017detail} & 3 & - &- &- &- & 25.52/0.76~~~~ &-
\\
Liu$^\dagger$~\cite{liu2017robust} & 5 &21.61/~~~~~-~~~~~ &26.29/~~~~~-~~~~~ &24.99/~~~~~-~~~~~ &28.06/~~~~~-~~~~~ &25.23/~~~~~-~~~~~ & -
\\
TOFlow~\cite{xue2019video} & 7 & 22.29/0.7273 & 26.79/0.7446 & 25.31/0.7118 & 29.02/0.8799 & 25.85/0.7659 &24.39/0.7438
\\
FRVSR~$^\dagger$\cite{sajjadi2018frame} &recurrent & - &- &- &- & 26.69/0.822~~ &-
\\
DUF-52L~\cite{jo2018deep} & 7 &24.17/0.8161 &28.05/0.8235 &26.42/0.7758 & 30.91/ \textbf{\color{blue}0.9165} &27.38/0.8329 &\textbf{\color{blue} 25.91}/\textbf{\color{blue}0.8166}
\\
RBPN~\cite{haris2019recurrent} & 7 &24.02/0.8088 &27.83/0.8045 &26.21/0.7579 &30.62/0.9111 &27.17/0.8205 &25.65/0.7997
\\
EDVR-L$^\dagger$~\cite{wang2019edvr} & 7 & 24.05/0.8147 &28.00/0.8122 &26.34/0.7635 &\textbf{\color{red}31.02}/0.9152 & 27.35/0.8264 &25.83/0.8077
\\
PFNL$^\dagger$~\cite{yi2019progressive} &7 &\textbf{\color{blue} 24.37}/\textbf{\color{blue}0.8246} &\textbf{\color{blue} 28.09}/\textbf{\color{blue}0.8385} &\textbf{\color{blue}26.51}/\textbf{\color{blue}0.7768} &30.65/0.9135 &\textbf{\color{blue}27.40}/\textbf{\color{blue}0.8384} &-
\\
TGA (Ours) & 7 &\textbf{\color{red}24.47}/\textbf{\color{red}0.8286} &\textbf{\color{red}28.37}/\textbf{\color{red}0.8419} & \textbf{\color{red}26.59}/\textbf{\color{red}0.7793} &\textbf{\color{blue}30.96}/\textbf{\color{red}0.9181} &\textbf{\color{red}27.59}/\textbf{\color{red}0.8419} &\textbf{\color{red}26.10}/\textbf{\color{red}0.8254}
\\
\bottomrule
\end{tabular}
}
\vspace{3mm}
\caption{Quantitative comparison (PSNR(dB) and SSIM) on \textbf{Vid4} for $4\times$ video super-resolution. {\color{red}Red} text indicates the best and {\color{blue} blue} text indicates the second best performance. Y and RGB indicate the luminance and RGB channels, respectively. `$\dagger$' means the values are taken from original publications or calculated by provided models. Best viewed in color.}
\label{vid4_table}
\end{table*}
\begin{table*}[t]
\centering
\scalebox{0.92}{
\begin{tabular}{lcccccc}
\toprule
&Bicubic &TOFlow~\cite{xue2019video} &DUF-52L ~\cite{jo2018deep} &RBPN~\cite{haris2019recurrent} &EDVR-L$^\dagger$~\cite{wang2019edvr} &TGA(Ours)
\\
\midrule
\# Param. &N/A &1.4M &5.8M &12.1M &20.6M &5.8M
\\
FLOPs &N/A &0.27T &0.20T & 3.08T &0.30T &0.07T
\\
Y Channel &31.30/0.8687 &34.62/0.9212 &36.87/0.9447 &37.20/0.9458 &\textbf{\color{red} 37.61}/\textbf{\color{blue}0.9489} &\textbf{\color{blue}37.59}/\textbf{\color{red}0.9516}
\\
RGB Channels &29.77/0.8490 &32.78/0.9040 &34.96/0.9313 &35.39/0.9340 &\textbf{\color{red} 35.79}/\textbf{\color{blue} 0.9374} &\textbf{\color{blue}35.57}/\textbf{\color{red} 0.9387}
\\
\bottomrule
\end{tabular}
}
\vspace{3mm}
\caption{Quantitative comparison (PSNR(dB) and SSIM) on \textbf{Vimeo-90K-T} for $4\times$ video super-resolution. {\color{red}Red} text indicates the best result and {\color{blue} blue} text indicates the second best. FLOPs are calculated on an LR image of size 112$\times$64. `$\dagger$' means the values are taken from original publications. Note that the deformable convolution and offline pre-alignment are not included in calculating FLOPs. Best viewed in color.}
\vspace{-5mm}
\label{vimeo_table}
\end{table*}
\begin{figure*}[thbp]
\centering
\includegraphics[width=\textwidth]{img/vid4_v2.jpg}
\caption{Qualitative comparison on the \textbf{Vid4} for 4$\times$SR. Zoom in for better visualization.}
\vspace{-2mm}
\label{vid_figure}
\end{figure*}
\begin{figure*}[thbp]
\centering
\includegraphics[width=\textwidth]{img/vimeo.jpg}
\caption{Qualitative comparison on the \textbf{Vimeo-90K-T} for 4$\times$SR. Zoom in for better visualization.}
\vspace{-3mm}
\label{vimeo_figure}
\end{figure*}
To evaluate the proposed method, a series of experiments are conducted and results are compared with existing state-of-the-art methods. Subsequently, a detailed ablation study is conducted to analyze the effectiveness of the proposed temporal grouping, group attention and fast spatial alignment. Results demonstrate the effectiveness and superiority of the proposed method.
\subsection{Implementation Details}
\textbf{Dataset.}
Similar to \cite{haris2019recurrent,xue2019video}, we adopt Vimeo-90k~\cite{xue2019video} as our training set, which is widely used for the task of video super-resolution.
We sample regions with spatial resolution 256$\times$256 from high-resolution video clips. Similar to \cite{jo2018deep,xue2019video,yi2019progressive}, low-resolution patches of $64\times64$ are generated by applying a Gaussian blur with a standard deviation of $\sigma=1.6$ followed by $4\times$ downsampling. We evaluate the proposed method on two popular benchmarks: Vid4~\cite{liu2013bayesian} and Vimeo-90K-T~\cite{xue2019video}. Vid4 consists of four scenes with various motion and occlusion. Vimeo-90K-T contains about 7$k$ high-quality video clips with diverse motion types.
\textbf{Implementation details.}
In the intra-group fusion module, three 2D units are used for spatial features extractor, which is followed by a 3D convolution and eighteen 2D units in the 2D dense block to integrate information within each group.
For the inter-group fusion module, we use four 3D units in the 3D dense block and twenty-one 2D units in the 2D dense block. The channel size is set to 16 for convolutional layers in the 2D and 3D units. Unless specified otherwise, our network takes seven low resolution frames as input.
The model is supervised by pixel-wise $L1$ loss and optimized with Adam \cite{kingma2014adam} optimizer in which $\beta_1=0.9$ and $\beta_2=0.999$. Weight decay is set to $5\times10^{-4}$ during training.
The learning rate is initially set to $2\times10^{-3}$ and is down-scaled by a factor of 0.1 every 10 epochs, for a total of 30 epochs.
The size of mini-batch is set to 64. The training data is augmented by flipping and rotating with a probability of 0.5.
All experiments are conducted on a server with Python 3.6.4, PyTorch 1.1 and Nvidia Tesla V100 GPUs.
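The step schedule described above amounts to the following (a minimal sketch; the epoch-to-rate mapping is inferred from the text):

```python
# Minimal sketch: base learning rate 2e-3, decayed by a factor of 0.1
# every 10 epochs over a 30-epoch run.
def lr_at(epoch, base=2e-3, gamma=0.1, step=10):
    return base * (gamma ** (epoch // step))

lrs = [lr_at(e) for e in (0, 10, 20)]   # 2e-3, 2e-4, 2e-5
```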
\subsection{Comparison with State-of-the-arts}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{img/temporal_profile.pdf}
\caption{Visualization of temporal consistency for \textit{calendar} sequence. Temporal profile is produced by recording a single pixel line ({\color{green}green} line) spanning time and stacked vertically.
}
\vspace{-5mm}
\label{temporal-profile}
\end{figure}
We compare the proposed method with seven state-of-the-art VSR approaches, including TOFlow~\cite{xue2019video}, SPMC~\cite{tao2017detail}, Liu~\cite{liu2017robust}, DUF~\cite{jo2018deep}, RBPN~\cite{haris2019recurrent}, EDVR~\cite{wang2019edvr} and PFNL~\cite{yi2019progressive}. Both TOFlow and SPMC apply
explicit pixel-level motion compensation with optical flow estimation, while RBPN uses pre-computed optical flow as additional input. DUF, EDVR and PFNL conduct VSR with implicit motion compensation. We carefully implement TOFlow and DUF on our own, and rebuild RBPN and EDVR based on the publicly available code. We reproduce the performance of most of these methods as reported in the original papers, except for EDVR.
Tab.~\ref{vid4_table} and Tab.~\ref{vimeo_table} give quantitative results of state-of-the-art methods on Vid4 and Vimeo-90K-T, which are either reported in the original papers or computed by us. In the evaluation, we take all frames into account except for the DUF method~\cite{jo2018deep}, which crops 8 pixels on each of the four borders of every frame since it suffers from severe border artifacts. In addition, we also report the number of parameters and FLOPs for most methods on an LR image of size $112\times64$ in Tab.~\ref{vimeo_table}.
On the Vid4 test set, the proposed method achieves 27.59dB PSNR in the Y channel and 26.10dB PSNR in the RGB channels, outperforming other state-of-the-art methods by a large margin.
The qualitative results in Fig.~\ref{vid_figure} also validate the superiority of the proposed method. Owing to the proposed temporal group attention, which makes full use of complementary information among frames, our model produces sharper edges and finer textures than other methods.
In addition, we extract temporal profiles to evaluate temporal consistency in Fig.~\ref{temporal-profile}. A temporal profile is produced by taking the same horizontal row of pixels from consecutive frames and stacking them vertically. The temporal profiles show that the proposed method gives temporally consistent results that suffer from fewer flickering artifacts than other approaches.
Vimeo-90K-T is a large and challenging dataset covering scenes with large motion and complicated illumination changes. The proposed method is compared with several methods including TOFlow, DUF, RBPN and EDVR.
As shown in Tab.~\ref{vimeo_table} and Fig.~\ref{vimeo_figure}, the proposed method also achieves very good performance on this challenging dataset. It outperforms most state-of-the-art methods such as TOFlow, DUF and RBPN by a large margin in both PSNR and SSIM. The only exception is EDVR-L, whose model size and computation are about four times larger than ours. Despite this, our method remains comparable in PSNR and slightly better in SSIM.
\subsection{Ablation Study}
In this section, we conduct several ablation studies on the proposed temporal group attention and fast spatial alignment to further demonstrate the effectiveness of our method.
\textbf{Temporal Group Attention.} First we experiment with different ways of organizing the input sequence.
One baseline method is to simply stack input frames along the temporal axis and directly feed them to several 3D convolutional layers, similar to DUF~\cite{jo2018deep}. Apart from our grouping method $\{345, 246, 147\}$, we also experiment with other ways of grouping: $\{123, 345, 567\}$ and $\{345, 142, 647\}$. As shown in Tab.~\ref{group_mannar}, the DUF-like input performs worst among these methods, illustrating that integrating temporal information in a hierarchical manner is more effective. Both $\{345, 246, 147\}$ and $\{345, 142, 647\}$ are better than $\{123, 345, 567\}$, which implies the advantage of adding the reference frame to each group. Having the reference in the group encourages the model to extract complementary information that is missing in the reference frame. The additional 0.05dB improvement of our grouping method $\{345, 246, 147\}$ can be attributed to the effectiveness of motion-based grouping in exploiting temporal information.
\begin{table}[t]
\centering
\scalebox{0.65}{
\begin{tabular}{lcccc}
\toprule
Model & DUF-like &$\{123,345,567\}$ &$\{345,142,647\}$ &$\{345,246,147\}$
\\
TG? &\XSolidBrush &\Checkmark &\Checkmark &\Checkmark
\\
\midrule
Vid4 &27.18/0.8258 &27.47/0.8384 &27.54/0.8409 & \textbf{27.59}/\textbf{0.8419}
\\
Vimeo-90K-T &37.06/0.9465 &37.46/0.9487 &37.51/0.9509 &\textbf{37.59}/\textbf{0.9516}
\\
\bottomrule
\end{tabular}
}
\vspace{1mm}
\caption{Ablation on: different grouping strategies.}
\vspace{-5mm}
\label{group_mannar}
\end{table}
In addition, we also evaluate a variant in which the attention module is removed from our full model.
As shown in Tab.~\ref{frames}, this model performs a little worse than our full model.
We also train our full model with a sequence of 5 frames as input. The result in Tab.~\ref{frames} shows that the proposed method can effectively borrow information from additional frames. We notice that the proposed method outperforms DUF even with 2 fewer frames in the input.
In addition, we conduct a toy experiment where a part of a neighboring frame is occluded and visualize the maps of temporal group attention. As shown in Fig.~\ref{occlusion}, the model does attempt to borrow more information from other groups when a group can not provide complementary information to recover the details of that region.
\begin{table}[th]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{lccc}
\toprule
Model & Model 1 & Model 2 & Model 3
\\
\midrule
\# Frames & 7 & 5 & 7
\\
GA? &\XSolidBrush &\Checkmark &\Checkmark
\\
\midrule
Vid4 & 27.51/0.8394 &27.39/0.8337 & \textbf{27.59}/\textbf{0.8419}
\\
Vimeo-90K-T & 37.43/0.9506 & 37.34/0.9491 &\textbf{37.59}/\textbf{0.9516}
\\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{1mm}
\caption{Ablation on: the group attention (GA) module and the influence of the number of input frames in our hierarchical information aggregation scheme.}
\vspace{-5mm}
\label{frames}
\end{table}
~~\\
\textbf{Fast Spatial Alignment.} To investigate the effectiveness and efficiency of the proposed fast spatial alignment, we equip the proposed TGA model with three different pre-alignment strategies: TGA without alignment, TGA with PyFlow~\cite{pathak2017learning}, and TGA with FSA. The evaluation is conducted on Vimeo-90K-T, whose video clips contain various motion.
Tab.~\ref{pre-align} shows that the performance of TGA with PyFlow is significantly inferior to that of the TGA model without any pre-alignment. This implies that imperfect optical flow estimation leads to inaccurate motion compensation, such as distortion in regions with large motion (see the green box in Fig.~\ref{flow_homo}), which confuses the model during training and hurts the final video super-resolution performance. In contrast, the proposed FSA boosts the performance of the TGA model from 37.32dB to 37.59dB. This demonstrates that the proposed FSA, although it does not perfectly align frames, is capable of reducing appearance differences among frames in a proper way. We also measure the time cost of this module on the Vimeo-90K-T dataset and present it in Tab.~\ref{pre-align}. Our FSA method is much more efficient than PyFlow. Note that since every sequence in Vimeo-90K-T contains only 7 frames, the advantage of FSA in reducing redundant computation is not fully exploited. Both PyFlow and our FSA run on CPU, and FSA could be further accelerated with an optimized GPU implementation.\\
\begin{table}[t]
\centering
\scalebox{0.8}{
\begin{tabular}{lccc}
\toprule
Pre-alignment & w/o & w/ PyFlow~\cite{pathak2017learning} & w/ FSA
\\
\midrule
PSNR/SSIM & 37.32/0.9482 &35.14/0.9222 &\textbf{37.59}/\textbf{0.9516}
\\
Time (CPU+GPU) & 0+70.8ms &760.2+70.8ms & 18.6+70.8ms
\\
\bottomrule
\end{tabular}
}
\vspace{1mm}
\caption{Ablation on: the effectiveness and efficiency of the fast spatial alignment module. The elapsed time is measured on processing a seven-frame sequence with LR size of 112$\times$64.}
\vspace{-1mm}
\label{pre-align}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{img/occlusion_2_v2.jpg}
\caption{Visualization of group attention masks under occlusion settings. $G_1, G_2$ and $G_3$ denote three groups.
}
\vspace{-5mm}
\label{occlusion}
\end{figure}
\section{Conclusion}
\label{conclusion}
In this work, we proposed a novel deep neural network which hierarchically integrates temporal information in an implicit manner. To effectively leverage complementary information across frames, the input sequence is reorganized into several groups of subsequences with different frame rates. The grouping allows extracting spatio-temporal information in a hierarchical manner: an intra-group fusion module extracts features within each group, while an inter-group fusion module adaptively borrows complementary information from different groups. Furthermore, a fast spatial alignment module is proposed to handle videos with large motion. The proposed method is able to reconstruct high-quality HR frames while maintaining temporal consistency. Extensive experiments on several benchmark datasets demonstrate the effectiveness of the proposed method.
\section{Introduction}
\IEEEPARstart{P}{erson} re-identification (re-id) is a cross-camera instance retrieval problem which aims at searching for persons across multiple non-overlapping cameras \cite{ye2020purifynet,chen2019spatial,chen2019self,chen2020learning,meng2019weakly,yu2020weakly,ye2019dynamic,ye2017dynamic}.
This problem has attracted extensive research, but most of the existing works focus on supervised learning approaches \cite{chen2019spatial,rao2019learning,sun2018beyond,ye2020purifynet,si2018dual}. While these techniques are extremely effective, they require a substantial amount of annotations which becomes infeasible to obtain for large camera networks.
Aiming to reduce this huge requirement of labeled data, unsupervised methods have drawn a great deal of attention \cite{lin2019bottom,fan2018unsupervised,qi2019novel,yu2019unsupervised,li2018unsupervised,li2019cross}. However, the performance of these methods is significantly weaker compared to supervised alternatives, as the absence of labels makes it extremely challenging to learn a generalizable model.
\begin{figure}[]
\centerline{\includegraphics[width=\columnwidth]{Figures/fig1.pdf}}
\caption{An illustrative example of video person re-id data with multi-level supervisions. (a) shows some raw videos tagged by video-level labels, such as person \{A, B, C, D, E\} for video 1; (b) illustrates the strong labeling setting. The annotators label and associate the person images with the same identity in each video, so each person image in the video is labeled with its corresponding identity; (c) shows weakly labeled samples {\bf (OURS)}, in which each bag contains all person images obtained in the corresponding video clip and is annotated by the video label without data association and precise data annotations. (d) demonstrates some semi-weakly labeled samples used in \cite{meng2019weakly}, in which strongly labeled tracklets (one for each identity) are required in addition to the weakly labeled data.
}
\label{fig:1}
\vspace{-0.3cm}
\end{figure}
To bridge this gap in performance, some recent works have focused on the broad area of learning with limited labels. This includes settings such as the one-shot, the active learning and the intra-camera labeling scenarios. The one-shot setting \cite{wu2018exploit,wu2019unsupervised,wu2019progressive,bak2017one} assumes a singular labeled tracklet for each identity along with a large pool of unlabeled tracklets, the active learning strategy \cite{roy2018exploiting,liu2019deep,wang2016human} tries to select the most informative instances for annotation, and the intra-camera setting \cite{zhu2019intra,wang2019weakly} works with labels which are provided only for tracklets within an individual camera view. All of these methods assume smaller proportions of labeling in contrast to the fully supervised setting, but assume \emph{strong labeling} in the form of identity labels similar to the supervised scenario. In this paper, we focus on the problem of learning with \emph{weak labels} - labels which are obtained at a higher level of abstraction, at a much lower cost compared to strong labels. In the context of video person re-id, weak labels correspond to video-level labels instead of the more specific labels for each image/tracklet within a video.
\begin{figure*}[t]
\centerline{\includegraphics[width=1.99\columnwidth]{Figures/fig2.pdf}}
\caption{A brief illustration of our proposed multiple instance attention learning framework for video person re-id with weak supervision. For each video, we group all person images obtained by pedestrian detection and tracking algorithms in a bag and use it as the inputs of our framework. The bags are passed through a backbone CNN to extract features for each person image. Furthermore, a fully connected (FC) layer and an identity projection layer are used to obtain identity-wise activations. On top of that, the MIL loss based on \emph k-max-mean-pooling strategy is calculated for each video. For a pair of videos $(i,j)$ with common person identities, we compute the CPAL loss by using high and low attention region for the common identity. Finally, the model is optimized by jointly minimizing the two loss functions.
}
\label{fig:2}
\end{figure*}
To illustrate this further, consider Figure \ref{fig:1}, which shows some video clips annotated with video-level labels, such as video 1 with \{A, B, C, D, E\}. This indicates that Persons A, B, C, D and E appear in this clip. By using pedestrian detection and tracking algorithms \cite{lin2017feature,ren2015faster,jin2019multi}, we can obtain the person images (tracklets) for this video clip, but can make no direct correspondence between each image (tracklet) and identity due to the weak nature of our labels. Specifically, we group all person images obtained in one video clip into a bag and tag it with the video label as shown in Figure \ref{fig:1}(c). On the contrary, strong supervision requires identity labels for each image (tracklet) in a video clip, and thus annotation is a more tedious procedure compared to our setting.
Thus, in weakly labeled person re-id data, we are given bags, with each such bag containing all person images in a video and the video's label; our goal is to train a person re-id model using these bags which can perform retrieval during test time at two different levels of granularity. The first level of granularity, which we define as \emph{Coarse-Grained Re-id}, involves retrieving the videos (bags) that a given target person appears in. The second level entails finding the exact tracklets with the same identity as the target person in all obtained gallery tracklets - this is defined as \emph{Fine-Grained Re-id}. Moreover, we also consider a more practical scenario where the weak labels are not reliable - the annotators may not tag the video clip accurately.
In order to achieve this goal, we propose a multiple instance attention learning framework for video person re-id which utilizes pairwise bag similarity constraints via a novel co-person attention mechanism. Specifically, we first cast the video person re-id task as a multiple instance learning (MIL) problem, a general paradigm for solving weakly supervised problems \cite{meng2019weakly,bilen2016weakly}. In this paper, a novel {\em k}-max-mean-pooling strategy is used to obtain a probability mass function over all person identities for each bag, and the cross-entropy between the estimated distribution and the ground-truth identity labels of each bag is minimized to optimize our model. However, MIL considers each bag in isolation and ignores the correlations between bags. We address this by introducing the Co-Person Attention Loss (CPAL), which is based on the motivation that a pair of bags having at least one person identity ({\em e.g.}, Person A) in common should have similar features for images corresponding to that identity. Also, the features from one bag corresponding to {\em A} should differ from the features of the other bag not corresponding to {\em A}. We jointly minimize these two complementary loss functions to learn our multiple instance attention learning framework for video person re-id, as shown in Figure \ref{fig:2}.
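To make the aggregation step concrete, a minimal sketch of {\em k}-max-mean-pooling over instance activations follows; the activation values and the choice of $k$ are assumptions for illustration, and the subsequent softmax/cross-entropy against the bag's label set is omitted:

```python
import numpy as np

# Minimal sketch of k-max-mean-pooling for MIL: for each identity,
# average the k largest instance activations in a bag to obtain a
# bag-level score. k and the activations below are illustrative.
def k_max_mean_pool(activations, k):
    """activations: (num_instances, num_identities) -> (num_identities,)"""
    topk = np.sort(activations, axis=0)[-k:]   # k largest per identity
    return topk.mean(axis=0)

acts = np.array([[0.9, 0.1],
                 [0.8, 0.2],
                 [0.1, 0.7]])                  # 3 instances, 2 identities
bag_scores = k_max_mean_pool(acts, k=2)        # -> [0.85, 0.45]
```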
To the best of our knowledge, this is the first work in video person re-id which solely utilizes the concept of weak supervision. A recent work \cite{meng2019weakly} presents a weakly supervised framework to learn re-id models from videos. However, they require strong labels, one for each identity, in addition to the weak labels, resulting in a semi-weak supervision setting. In contrast, our setting is much more practical forgoing the need for \emph{any} strong supervision. A more detailed discussion on this matter is presented in Section \ref{sec:cv_miml}, where we empirically evaluate the dependence of \cite{meng2019weakly} on the strong labels and demonstrate the superior performance of our framework.
\emph{Main contributions.} The contributions of our work are as follows:
\begin{itemize}
\item[$\bullet$] We introduce the problem of learning a re-id model from videos with weakly labeled data and propose a multiple instance attention learning framework to address this task.
\item[$\bullet$] By exploiting the underlying characteristics of weakly labeled person re-id data, we present a new co-person attention mechanism to utilize the similarity relationships between videos with common person identities.
\item[$\bullet$] We conduct extensive experiments on two weakly labeled datasets and demonstrate the superiority of our method on coarse and fine-grained person re-id tasks. We also validate that the proposed method is promising even when the weak labels are not reliable.
\end{itemize}
\section{Related Works}
Existing person re-id works can be summarized into three categories, such as learning from strongly labeled data (supervised and semi-supervised), learning from unlabeled data (unsupervised) and learning from weakly labeled data (weakly supervised) depending on the level of supervision. This section briefly reviews some person re-id works, which are related with this work.
{\bf Learning from strongly labeled data.} Most studies on person re-id are supervised learning-based methods and require fully labeled data \cite{chen2018improving,sun2018beyond,chen2019spatial,rao2019learning,si2018dual,zheng2016mars,chen2018video}, {\em i.e.}, the identity labels of all the images/tracklets from multiple cross-view cameras. These fully supervised methods have led to impressive progress in the field of re-id; however, it is impractical to annotate very large-scale surveillance videos due to the dramatically increasing annotation cost.
To reduce annotation cost, some recent works have focused on the broad area of learning with limited labels, such as the one-shot settings \cite{wu2018exploit,wu2019unsupervised,wu2019progressive,bak2017one}, the active learning strategy \cite{roy2018exploiting,liu2019deep,wang2016human} and the intra-camera labeling scenarios \cite{zhu2019intra,wang2019weakly}. All of these methods assume smaller proportions of labeling in contrast to the fully supervised setting, but assume strong labeling in the form of identity labels similar to the supervised scenario.
{\bf Learning from unlabeled data.} Researchers have developed unsupervised person re-id models \cite{lin2019bottom,fan2018unsupervised,qi2019novel,yu2019unsupervised,li2018unsupervised,li2019cross} that do not require any person identity information.
Most of these methods follow a similar principle: alternately assigning pseudo labels to unlabeled data with high confidence and updating the model using these pseudo-labeled data. It is easy to adapt this procedure to large-scale person re-id since the unlabeled data can be captured automatically by camera networks.
However, most of these approaches perform worse than their supervised alternatives due to the lack of efficient supervision.
{\bf Learning from weakly labeled data.}
The problem of learning from weakly labeled data has been addressed in several computer vision tasks, including object detection \cite{heidarivincheh2019weakly,yu2019temporal,bilen2016weakly}, segmentation \cite{khoreva2017simple,ahn2018learning}, text and video moment retrieval \cite{mithun2019weakly}, activity classification and localization \cite{chen2017attending,paul2018w,nguyen2019weakly}, video captioning \cite{shen2017weakly} and summarization \cite{panda2017weakly,cai2018weakly}.
Three weakly supervised person re-id models have been proposed. Wang {\em et al.} introduced a differentiable graphical model \cite{wang2019weakly} to capture the dependencies among all images in a bag and generate a reliable pseudo label for each person image.
Yu {\em et al.} introduced the weakly supervised feature drift regularization \cite{yu2020weakly} which employs the state information as weak supervision to iteratively refine pseudo labels for improving the feature invariance against distractive states.
Meng {\em et al.} proposed a cross-view multiple instance multiple label learning method \cite{meng2019weakly} that exploits similar instances within a bag for intra-bag alignment and mines potentially matched instances between bags. However, our weak labeling setting is more practical than these three works for video person re-id. First, we do not require any strongly labeled tracklets or state information for model training.
Second, we consider a scenario in which the weak labels in the training data are not reliable.
Our task of learning person re-id models from videos with weak supervision is also related to the problem of person search \cite{yan2019learning,xiao2019ian,han2019re} whose objective is to simultaneously localize and recognize a person from raw images. The difference lies in the annotation requirement for training - the person search methods assume large amounts of manually annotated bounding boxes for model training. Thus, these approaches utilize strong supervision in contrast to our weak supervision.
\section{Methodology}
In this section, we present our proposed multiple instance attention learning framework for video person re-id. We first present an identity projection layer used to obtain the identity-wise activations for the input person images in a bag. Thereafter, two learning tasks, multiple instance learning and the co-person attention mechanism, are introduced and jointly optimized to learn our model. The overview of our proposed method is shown in Figure \ref{fig:2}; it may be noted that only the video-level labels of the training data are required for model training. Before going into the details of our multiple instance attention learning framework, let us first compare the annotation cost between weakly labeled and strongly labeled video person re-id data, and then define the notations and problem statement formally.
\subsection{Annotation Cost}
We focus on person re-id in videos, where labels can be collected in two ways:
\begin{itemize}
\item[$\bullet$]\emph {Perfect tracklets}: The annotators label each person in each video frame with an identity and associate persons with the same identity (DukeMTMC-VideoReID \cite{wu2018exploit}). The resulting tracklets are perfect, and each tracklet contains a single person identity. However, this labeling is much more time-consuming than ours, which requires only video-level labels.
\item[$\bullet$]\emph{Imperfect tracklets}: The tracklets are obtained automatically by pedestrian detection and tracking algorithms \cite{lin2017feature,ren2015faster,jin2019multi} (MARS \cite{zheng2016mars}). They are bound to have errors of different kinds, like wrong associations, missed detection, etc. Thus, human intervention is required to segregate individual tracklets into the person identities.
\end{itemize}
Our method uses only video-level annotations, reducing the labeling effort in both the above cases. We put all person images in a video into a bag and label the bag with the video-level labels obtained from annotators. We develop our algorithm without any knowledge of the tracklets, using only a bag of images. Further, we do not use any intra-tracklet loss, as one tracklet can contain multiple persons in case of imperfect tracking. Table \ref{table:coarse-grained re-id} and Table \ref{table:fine-grained re-id} show that our method is robust against the missing annotation scenario, where a person might appear in the video but not be labeled by annotators. Hence, our framework has remarkable real-world value, where intra-camera tracking is almost certain to be performed by automated software and will be prone to errors.
Next, we present an approximate analysis of the reduction in annotation cost by utilizing weak supervision. Assume that the cost to label a person in an image is $b$. Also, let the average number of persons per image be $p$ and the average number of frames per video be $f$. The total number of videos from all cameras is $n$. So, the annotation cost for strong supervision is $fpnb$. Now, let the cost for labeling a video with video-level labels be $b'$, where $b'\ll b$. Thus, the annotation cost for weak supervision amounts to $nb'$. This results in an improvement in the annotation efficiency by $fpb/b'\times 100\%$.
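The cost comparison above can be sketched numerically. The following is a minimal illustration, not part of the method; all concrete numbers for $f$, $p$, $n$, $b$, and $b'$ are hypothetical and chosen only to exercise the formulas $fpnb$, $nb'$, and the efficiency ratio $fpb/b'$:

```python
# Illustrative annotation-cost comparison; all numbers are hypothetical.
def strong_cost(f, p, n, b):
    """Cost of labeling every person in every frame: f * p * n * b."""
    return f * p * n * b

def weak_cost(n, b_prime):
    """Cost of one video-level label per video: n * b'."""
    return n * b_prime

# Example: 100 frames/video, 2 persons/frame, 1000 videos,
# per-person cost b = 1, per-video cost b' = 5.
f, p, n, b, b_prime = 100, 2, 1000, 1.0, 5.0
ratio = strong_cost(f, p, n, b) / weak_cost(n, b_prime)  # f*p*b/b' = 40
print(f"annotation efficiency improvement: {ratio * 100:.0f}%")
```

With these hypothetical numbers, strong supervision costs 200{,}000 units against 5{,}000 for weak supervision, a 40-fold difference.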
\subsection{Problem Statement}
Assume that we have {\em C} known identities that appear in $N$ video clips. In our weak labeling setting, each video clip is conceptualized as a bag of person images detected in the video, and is assigned a label vector indicating which identities appear in the bag.
Therefore, the training set can be denoted as $\mathcal D=\{(\mathcal X_i,y_i)|i=1,...,N\}$, where $\mathcal X_i=\{I_i^1,I_i^2,...,I_i^{n_i}\}$ is the $i$th bag (video clip) containing $n_i$ person images. Using some feature extractors, we can obtain the corresponding feature representations for these images, which we stack in the form of a feature matrix $X_i\in \mathbb R^{d\times n_i}$;
$y_i=\{y_i^1,y_i^2,...,y_i^C\}\in\{0,1\}^C$ is the label vector of bag {\em i} containing {\em C} identity labels, in which $y_i^c=1$ if the $c$th identity is tagged for $\mathcal X_i$ (person {\em c} appears in video {\em i}) and $y_i^c=0$ otherwise. For the testing probe set, each query is composed of a set of detected images with the same person identity (a person tracklet) in a video clip.
We define two different settings for the testing gallery set as follows:
\begin{itemize}
\item[$\bullet$] {\bf Coarse-grained person re-id} tries to retrieve the videos that the given target person appears in. The testing gallery set should have the same settings as the training set - each testing gallery sample is a bag with one or multiple persons.
\item[$\bullet$] {\bf Fine-grained person re-id} aims at finding the exact tracklets with the same identity as the target person among all obtained tracklets. It has the same goal as general video person re-id: each gallery sample is a tracklet with a single person identity.
\end{itemize}
\subsection{Multiple Instance Attention Learning for Person Re-id}
\subsubsection{Identity Space Projection}
In our work, feature representation $X_i$ is used to identify person identities in bag $i$. We project $X_i$ to the identity space ($\mathbb R^{C}$, {\em C} is the number of person identities in training set).
Thereafter, the identity-wise activations for bag {\em i} can be represented as follows:
\begin{equation}
\mathcal {W}_i=f(X_i;\theta)
\end{equation}
where $f(\cdot;\theta)$ is a $C$-dimensional fully connected layer and $\mathcal W_i\in\mathbb R^{C\times n_i}$ is the identity-wise activation matrix.
These identity-wise activations represent the likelihood that each person image in a bag is predicted to belong to a certain identity.
\subsubsection{Multiple Instance Learning} In weakly labeled person re-id data, each bag contains multiple instances of person images with person identities. So the video person re-id task can be turned into a multiple instance learning problem. In MIL, the estimated label distribution for each bag is expected to eventually approximate the ground truth weak label (video label); thus, we need to represent each bag using a single confidence score per identity.
In our case, for a given bag, we compute the activation score corresponding to a particular identity as the average of top {\em k} largest activations for that identity ({\em k}-max-mean-pooling strategy).
For example, the identity-$j$ confidence probability for the bag $i$ can be represented as,
\begin{equation}
p_i^j=\frac{1}{k}\sum_{w\in\textup{topk}(\mathcal W_i[j,:])}w
\end{equation}
where $\textup{topk}(\cdot)$ is an operation that selects the {\em k} largest activations for a particular identity, and $\mathcal W_i[j,:]$ denotes the activation scores corresponding to identity {\em j} for all person images in bag {\em i}.
Thereafter, a softmax function is applied to compute the probability mass function (pmf) over all the identities for bag {\em i} as follows: $\hat y_i^j=\frac{\exp(p_i^j)}{\sum\limits_{c=1}^{C}\exp (p_i^c)}$.
The MIL loss is the cross-entropy between the predicted pmf $\hat y_i$ and the normalized ground-truth $y_i$, which can then be represented as follows,
\begin{figure}
\centerline{\includegraphics[width=\columnwidth]{Figures/fig3.pdf}}
\caption{This figure illustrates the procedure of the co-person attention mechanism. We assume that bag {\em m} and {\em n} have person identity {\em j} in common. We first obtain the feature representations $X_m$ and $X_n$, and identity-{\em j} activation vectors $\hat{\mathcal W}_m[j,:]$ and $\hat{\mathcal W}_n[j,:]$ by passing the bags through our model. Thereafter, high and low identity-{\em j} attention features $^Hf_m^j$ and $^Lf_m^j$ can be obtained for each bag.
Finally, we want the high identity-{\em j} attention features of the two bags to be close to each other, while pushing the high-attention feature of one bag away from the low-attention feature of the other.
}
\label{fig:3}
\end{figure}
\begin{equation}
\mathcal L_{MIL}=\frac{1}{N_b}\sum\limits_{i=1}^{N_b}\sum\limits_{j=1}^{C}-y_i^j\log(\hat y_i^j)
\end{equation}
where $y_i$ is the normalized ground truth label vector and $N_b$ is the size of training batch.
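The MIL loss for a single bag can be sketched as follows. This is a minimal NumPy illustration of the {\em k}-max-mean-pooling, softmax, and cross-entropy steps of Equations 2 and 3, not the authors' implementation; variable names are ours and the array shapes follow the notation in the text:

```python
import numpy as np

def mil_loss(W, y, k=5):
    """Sketch of the MIL loss for one bag (Equations 2-3).
    W: (C, n_i) identity-wise activation matrix for the bag.
    y: (C,) binary weak-label vector (normalized inside)."""
    C, n_i = W.shape
    k = min(k, n_i)
    # k-max-mean-pooling: mean of the k largest activations per identity.
    p = np.sort(W, axis=1)[:, -k:].mean(axis=1)      # (C,)
    # Softmax over identities gives the pmf for the bag.
    e = np.exp(p - p.max())
    y_hat = e / e.sum()
    # Cross-entropy with the normalized ground-truth label vector.
    y_norm = y / y.sum()
    return float(-(y_norm * np.log(y_hat + 1e-12)).sum())
```

A bag whose top activations agree with its weak labels yields a small loss, while misaligned activations are penalized.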
The MIL only considers each bag in isolation. Next, we present a Co-Person Attention Mechanism for mining the potential relationships between bags.
\subsubsection{Co-Person Attention Mechanism}
In a network of cameras, the same person may appear at different times and on different cameras, so multiple video clips (bags) may contain common person identities. This motivates us to explore the similarity correlations between bags. Specifically, for bags with at least one person identity in common, we want the following properties in the learned feature representations: first, a pair of bags with {\em Person j} in common should have similar feature representations in the portions of the bags where {\em Person j} appears;
second, for the same bag pair, the feature representation of the portion where {\em Person j} occurs in one bag should be different from that of the portions of the other bag where {\em Person j} does not occur.
We introduce the Co-Person Attention Mechanism to integrate the desired properties into the learned feature representations. Since the weakly labeled data contain no frame-wise labels, the identity-wise activation matrix obtained in Equation 1 is employed to identify the relevant person identity portions.
Specifically, for bag {\em i}, we normalize the bag identity-wise activation matrix $\mathcal W_i$ along the frame index using softmax function as follows:
\begin{equation}
\hat{\mathcal W_i}[j,t]=\frac{\exp(\mathcal W_i[j,t])}{\sum_{t^{'}=1}^{n_i}\exp(\mathcal W_i[j,t^{'}])}
\end{equation}
Here {\em t} indexes the person images in bag {\em i} and $j\in\{1,2,...,C\}$ denotes the person identity. $\hat{\mathcal W}_i$ can be referred to as {\em identity attention}, because it indicates the probability that each person image in a bag is predicted to a certain identity.
Specifically, a high attention value for a particular identity indicates a high occurrence probability of that identity.
Under the guidance of the identity attention, we can define the identity-wise feature representations of regions with high and low identity attention for a bag as follows:
\begin{equation}
\left\{
\begin{array}{lr}
^H f_i^j=X_i\hat{\mathcal W}_i[j,:]^T, & \\
^L f_i^j=\frac{1}{n_i-1}X_i(\boldsymbol{1}-\hat{\mathcal W}_i[j,:]^T)&
\end{array}
\right.
\end{equation}
where $^Hf_i^j, ^Lf_i^j\in\mathbb R^{d}$ represent the aggregated feature representations of bag {\em i} with high and low identity-{\em j} attention, respectively.
It may be noted that in Equation 5, the low attention feature is not defined if a bag contains only one person identity and a single person image, {\em i.e.}, $n_i = 1$.
This is conceptually valid; in such cases, we cannot compute the CPAL loss.
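The attention normalization and the high/low attention features of Equations 4 and 5 can be sketched as follows; this is an illustrative NumPy version under our own variable names, not the authors' code:

```python
import numpy as np

def attention_features(X, W, j):
    """Sketch of Equations 4-5 for one bag.
    X: (d, n_i) image features; W: (C, n_i) identity-wise activations;
    j: identity index. Returns (f_high, f_low); f_low is None when n_i == 1,
    in which case the CPAL loss cannot be computed."""
    n_i = X.shape[1]
    # Softmax over the frame index: identity-j attention (Equation 4).
    a = np.exp(W[j] - W[j].max())
    a = a / a.sum()                               # (n_i,)
    f_high = X @ a                                # high-attention feature, (d,)
    if n_i == 1:
        return f_high, None
    f_low = X @ (1.0 - a) / (n_i - 1)             # low-attention feature, (d,)
    return f_high, f_low
```

The attention weights sum to one over the frames, so the high-attention feature is a convex combination of the frame features.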
We use ranking hinge loss to enforce the two properties discussed above. Given a pair of bags {\it m} and {\it n} with person identity {\em j} in common, the co-person attention loss function may be represented as follows:
\begin{equation}
\begin{aligned}
&&\mathcal L_{m,n}^j=\frac{1}{2}\{\max(0,s(^Hf_m^j,^Lf_n^j)-s(^Hf_m^j,^Hf_n^j)+\delta)\\
&&+\max(0,s(^Lf_m^j,^Hf_n^j)-s(^Hf_m^j,^Hf_n^j)+\delta)\}
\end{aligned}
\end{equation}
where $\delta=0.5$ is the margin parameter in our experiments and $s(\cdot,\cdot)$ denotes the cosine similarity between two feature vectors. The two terms in the loss function are symmetric: they enforce that the high identity attention features of the two bags are more similar to each other than the high-attention feature of one bag is to the low-attention feature of the other, as shown in Figure \ref{fig:3}.
The total CPAL loss for the entire training set may be represented as follows:
\begin{equation}
\mathcal L_{\textit {CPAL}}=\frac{1}{C}\sum_{j=1}^{C}\frac{1}{\binom{|\mathcal S^j|}{2}}\sum_{m,n\in \mathcal S^j}\mathcal L_{m,n}^j
\end{equation}
where $\mathcal S^j$ is the set of all bags with person identity {\em j} among their labels, $\binom{|\mathcal S^j|}{2}=\frac{|\mathcal S^j| \cdot (|\mathcal S^j|-1)}{2}$ is the number of bag pairs in $\mathcal S^j$, and $m,n$ are bag indices.
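For one bag pair, the ranking hinge of the CPAL can be sketched as below. The snippet enforces the stated property that the high-high similarity must exceed each high-low similarity by the margin $\delta$; it is an illustrative NumPy version, not the authors' implementation:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def cpal_pair_loss(fH_m, fL_m, fH_n, fL_n, delta=0.5):
    """Ranking hinge for one bag pair (m, n) sharing identity j:
    features with high identity-j attention in both bags should be more
    similar to each other than to the low-attention feature of the other bag."""
    s_hh = cosine(fH_m, fH_n)
    return 0.5 * (max(0.0, cosine(fH_m, fL_n) - s_hh + delta)
                  + max(0.0, cosine(fL_m, fH_n) - s_hh + delta))
```

When the two high-attention features align and the low-attention features point elsewhere, both hinge terms vanish and the loss is zero.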
\subsubsection{Optimization}
The MIL considers each bag in isolation and ignores the correlations between bags, while the CPAL mines the similarity correlations between bags; the two are clearly complementary. We therefore jointly minimize these two complementary loss functions to learn our multiple instance attention learning framework for person re-id:
\begin{equation}
\mathcal L=\lambda\mathcal L_{MIL}+(1-\lambda)\mathcal L_{CPAL}
\end{equation}
where $\lambda$ is a hyper-parameter that controls the relative contributions of $\mathcal L_{MIL}$ and $\mathcal L_{CPAL}$ during model learning. In Section \ref{sec:lambda}, we discuss the contribution of each loss to the recognition performance.
\subsection{Coarse and Fine-Grained Person Re-id}
In the testing phase, each query is composed of a set of detected images in a bag with the same person identity (a person tracklet). Following our goals, we have two different settings for the testing gallery set.
Coarse-Grained Person Re-id finds the bags (videos) in which the target person appears, so the testing gallery set is formed in the same manner as the training set.
We define the distance between probe and gallery bags as the minimum distance between the average-pooled feature of the probe bag and the frame features in the gallery bag. Specifically, we use the average-pooled feature $x_p$ to represent bag {\em p} in the testing probe set, and $x_{g,r}$ denotes the feature of the $r$th frame in the $g$th testing gallery bag. Then, the distance between bag $p$ and bag $g$ may be represented as follows:
\begin{equation}
D(p,g)=\min\{d(x_p,x_{g,1}),d(x_p,x_{g,2}),...,d(x_p,x_{g,n_g})\}
\end{equation}
where $d(\cdot,\cdot)$ is the Euclidean distance operator. $n_g$ is the number of person images in bag $g$.
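As a minimal sketch of Equation 9 (illustrative only, with our own variable names):

```python
import numpy as np

def bag_distance(x_p, X_g):
    """Sketch of Equation 9: the probe bag is represented by its
    average-pooled feature x_p (d,); the gallery bag by its per-frame
    features X_g (d, n_g). The bag distance is the minimum Euclidean
    distance from x_p to any gallery frame feature."""
    return float(np.min(np.linalg.norm(X_g - x_p[:, None], axis=0)))
```

Taking the minimum over gallery frames means a gallery bag matches the probe as soon as any one of its frames is close.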
Fine-Grained Person Re-id finds the tracklets with the same identity as the target person. This goal is the same as general video person re-id, so the testing gallery samples are all person tracklets. We evaluate the fine-grained person re-id performance following the general person re-id setting.
\section{Experiments}
\begin{table*}
\caption{Detailed information of two weakly labeled person re-id datasets.}
\centering
\begin{threeparttable}
\setlength{\tabcolsep}{2mm}
\begin{tabular}{lllllllllll}
\toprule
\multirow{3}{*}{Dataset} & \multirow{3}{*}{Settings} & \multicolumn{3}{c}{Training Set} & \multicolumn{6}{c}{Testing Set} \\ \cline{3-11}
& & \multirow{2}{*}{IDs} & \multirow{2}{*}{Tracks} & \multirow{2}{*}{Bags} & \multicolumn{3}{c}{Probe Set} & \multicolumn{3}{c}{Gallery Set} \\ \cline{6-11}
& & & & & IDs & Tracks & Bags & IDs & Tracks & Bags \\ \hline
\multirow{2}{*}{WL-MARS} & Coarse & 625 & - & 2081 & 626 & - & 626 & 634 & - & 1867 \\
& Fine & 625 & - & 2081 & 626 & 1980 & - & 636 & 12180 & - \\ \hline
\multirow{2}{*}{WL-DukeV} & Coarse & 702 & - & 3842 & 702 & - & 702 & 1110 & - & 483 \\
& Fine & 702 & - & 3842 & 702 & 702 & - & 1110 & 2636 & - \\ \bottomrule
\end{tabular}
IDs, Tracks and Bags denote the numbers of identities, tracklets and bags, respectively. Coarse and Fine represent the coarse-grained and fine-grained person re-id settings.
\end{threeparttable}
\label{table:dataset}
\end{table*}
\begin{table*}[t]
\caption{Coarse-grained person re-id performance comparisons. $\downarrow$ represents the decreased recognition performance compared to perfect annotation.}
\centering
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{lllllllll}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Methods}} & \multicolumn{4}{c}{WL-MARS} & \multicolumn{4}{c}{WL-DukeV} \\ \cmidrule{2-9}
& R-1 & R-5 & R-10 & mAP & R-1 & R-5 & R-10 & mAP \\ \midrule
WSDDN \cite{bilen2016weakly} & 63.4 & 81.9 & 86.6 &30.3 &72.4 & 89.6 & 93.6 & 62.2 \\
HSLR \cite{dong2019single} & 69.6 & 85.9 & 89.8 & 35.4 & 77.5 & 93.0 & 95.2 & 66.0 \\
SSLR \cite{dong2019single} & 66.6 & 82.7 & 86.6 & 31.8 & 76.2 & 90.5 & 93.6 & 64.2 \\
MIL &73.2 & 89.9 & 93.3 & 41.3 &80.8 &93.4 & 95.6 & 69.1\\
OURS (MIL+CPAL) & \bf{78.6} & \bf{90.1} & \bf{93.9} & \bf{47.1} & \bf{82.6} & \bf{93.6} & \bf{95.6} & \bf{72.1} \\
OURS* & 78.1$\downarrow$\tiny{0.5} & 88.3$\downarrow$\tiny{1.8} &91.5$\downarrow$\tiny{2.4} & 42.7$\downarrow$\tiny{4.4} & 79.3$\downarrow$\tiny{3.3} & 92.7$\downarrow$\tiny{0.9} &95.4$\downarrow$\tiny{0.2} & 68.3$\downarrow$\tiny{3.8} \\
\bottomrule
\multicolumn{9}{l}{OURS* represents the proposed method under missing annotation.}
\end{tabular}}
\label{table:coarse-grained re-id}
\end{table*}
\subsection{Datasets and Settings}
\subsubsection{Weakly Labeled Datasets} We conduct experiments on two weakly labeled person re-id datasets: the Weakly Labeled MARS (WL-MARS) dataset and the Weakly Labeled DukeMTMC-VideoReID (WL-DukeV) dataset. These are based on the existing video-based person re-id datasets MARS \cite{zheng2016mars} and DukeMTMC-VideoReID \cite{wu2018exploit}, respectively. They are formed as follows: first, 3 - 6 tracklets from the same camera are randomly selected to form a bag; thereafter, we tag the bag with the set of tracklet labels. It may be noted that only bag-level labels are available and the specific label of each individual is unknown. More detailed information about these two weakly labeled datasets is shown in Table \ref{table:dataset}.
We also consider a more practical scenario in which the annotator may miss some labels for a video clip, namely, \emph{missing annotation}. For example, one person may appear only for a short time and be missed by the annotator. This leads to a situation in which the weak labels are not reliable. To simulate this circumstance, for each weakly labeled bag, we randomly add 3 - 6 short tracklets with different identities, each containing 5 - 30 person images. The new bags thus contain the original person images and the newly added ones, but the labels remain the original bag labels. In Section \ref{sec:sota}, we evaluate the proposed method in this situation.
\begin{table*}[t]
\centering
\caption{Fine-grained person re-id performance comparisons. $\downarrow$ represents the decreased recognition performance compared to perfect annotation.}
\begin{threeparttable}
\setlength{\tabcolsep}{1.2mm}
\begin{tabular}{llllllllll}
\toprule
\multirow{2}{*}{Settings} &\multicolumn{1}{c}{\multirow{2}{*}{Methods}} & \multicolumn{4}{c}{WL-MARS} & \multicolumn{4}{c}{WL-DukeV} \\ \cline{3-10}
\multicolumn{1}{c}{} & & R-1 & R-5 & R-10 & mAP & R-1 & R-5 & R-10 & \multicolumn{1}{l}{mAP} \\ \hline
\multirow{5}{*}{\begin{tabular}[c]{@{}l@{}}Weak sup.\end{tabular}}
&WSDDN \cite{bilen2016weakly} &59.2 &76.4 &82.4 &41.7 &65.4 &84.0 &90.2 &60.7 \\
&HSLR \cite{dong2019single} & 56.4 & 72.6 & 78.3 & 35.8 & 61.7 & 79.8 & 85.0 & 54.7 \\
&SSLR \cite{dong2019single} & 51.9 & 69.3 & 75.7 & 31.2 & 56.3 & 76.1 & 83.0 & \multicolumn{1}{l}{50.0} \\
& MIL &63.6 &79.1 & 84.2 &43.7 &69.1 &83.3 &89.5 &62.0\\
&OURS (MIL+CPAL) &\bf{65.0} & \bf{81.5} & \bf{86.1} & \bf{46.0} & 70.5 & \bf{87.2} & \bf{92.2} & \multicolumn{1}{l}{\bf{64.9}} \\
&OURS* & 59.8$\downarrow$\tiny{5.2} & 77.3$\downarrow$\tiny{4.2} & 82.8$\downarrow$\tiny{3.3}& 40.6$\downarrow$\tiny{5.4} & 69.5$\downarrow$\tiny{1.0} & 86.2$\downarrow$\tiny{1.0} &90.9$\downarrow$\tiny{1.3}& \multicolumn{1}{l}{63.7$\downarrow$\tiny{1.2}} \\\hline
Unsup. &BUC \cite{lin2019bottom} & 61.1 & 75.1 & 80.0 & 38.0 & 69.2 & 81.1 & 85.8 & 61.9 \\
One-shot &EUG \cite{wu2018exploit} & 62.6 & 74.9 & - & 42.4 & \bf{72.7} & 84.1 & - & 63.2 \\
Intra &UGA \cite{wu2019unsupervised} & 59.9 & - & - & 40.5 & - & - & - & - \\
Fully sup. &Baseline & 78.4 & - & - & 65.5 & 86.4 & - & - & 82.0 \\ \bottomrule
\end{tabular}
OURS* represents the proposed method under missing annotation. Fully sup. denotes fully supervised. Intra indicates intra-camera supervised.
\end{threeparttable}
\label{table:fine-grained re-id}
\vspace{-0.3cm}
\end{table*}
\subsubsection{Implementation Details}
In this work, an ImageNet \cite{deng2009imagenet} pre-trained ResNet50 network \cite{he2016deep} is used as our feature extractor, with its last average pooling layer replaced by a {\em d}-dimensional fully connected layer $(d=2048)$.
Stochastic gradient descent with a momentum of 0.9 and a batch size of 10 is used to optimize our model. The learning rate is initialized to 0.01 and changed to 0.001 after 10 epochs. We create each batch in a way such that it has a minimum of three pairs of bags and each pair has at least one identity in common.
We train our model end-to-end on two Tesla K80 GPUs using PyTorch, and set $k=5$ in Equation 2 for both datasets.
The number of person images in each training bag is set to a fixed value of 100. If a bag contains more images, we randomly select 100 of them and assign the labels of the bag to the selected subset. It may be noted that for the WL-DukeV dataset, we split each original person tracklet into 7 parts to increase the number of weakly labeled training samples. To evaluate the performance of our method, the widely used cumulative matching characteristics (CMC) curve and mean average precision (mAP) are used.
\subsection{Comparison with the Related Methods}
\label{sec:sota}
\subsubsection{Coarse-Grained Person Re-id}
We compare the performance of our method (MIL and MIL+CPAL) to the existing state-of-the-art multiple instance learning methods - the weakly supervised deep detection network (WSDDN) \cite{bilen2016weakly} (Section 3.3 of their paper, which is relevant for our case), multi-label learning-based hard selection logistic regression (HSLR) \cite{dong2019single} and soft selection logistic regression (SSLR) \cite{dong2019single} - on the task of coarse-grained person re-id. It should be noted that we use the same network architecture for all five methods for a fair comparison. From Table \ref{table:coarse-grained re-id}, it can be seen that the proposed {\emph k}-max-mean-pooling based MIL method performs much better than the other compared methods. Compared to WSDDN, the rank-1 accuracy increases by 9.8\% and the mAP score by 11.0\% on the WL-MARS dataset. When combined with CPAL (OURS), the recognition performance is further improved: compared to WSDDN, the rank-1 accuracy and mAP score improve by $15.2\%$ and $16.8\%$ on the WL-MARS dataset, and by $10.2\%$ and $9.9\%$ on the WL-DukeV dataset.
In this subsection, we also evaluate our method under the \emph{missing annotation} scenario. As shown in Table \ref{table:coarse-grained re-id}, under missing annotation, the rank-1 accuracy and mAP score on the WL-MARS dataset decrease by 0.5\% (78.6\% to 78.1\%) and 4.4\% (47.1\% to 42.7\%), respectively, and on the WL-DukeV dataset they decrease by 3.3\% and 3.8\%. Thus, our method is not very sensitive to missing annotation for the coarse-grained re-id task. Furthermore, the proposed method with missing annotation still performs significantly better than the other methods with perfect annotation (where the annotator labels all appearing identities). For example, compared to HSLR on the WL-MARS dataset, the rank-1 accuracy and mAP score improve by 8.5\% and 7.3\%, respectively.
\subsubsection{Fine-Grained Person Re-id}
In Table \ref{table:fine-grained re-id}, we compare our framework against methods which utilize strong labels, as well as other weakly supervised methods, for fine-grained person re-id. It can be seen that the proposed {\emph k}-max-mean-pooling-based MIL method performs much better than most of the other compared methods, and when combined with CPAL (OURS) the recognition performance is further improved. In particular, compared to HSLR, our method obtains 8.6\% and 10.2\% improvements in rank-1 accuracy and mAP score, respectively, on WL-MARS, and similarly 8.8\% and 10.2\% improvements on the WL-DukeV dataset.
The efficacy of using weak labels is strengthened by the improvement over methods which use strong labels, such as EUG (strong labeling: one-shot setting) \cite{wu2018exploit} and UGA (strong labeling: intra-camera supervision) \cite{wu2019unsupervised}. Weak labels also improve performance compared to unsupervised methods such as BUC \cite{lin2019bottom}, with gains of 6.4\% and 8.0\% in rank-5 accuracy and mAP score on WL-MARS dataset, and similarly, 6.1\% and 3.0\% on WL-DukeV dataset. Compared to EUG, the recognition performance is improved from 74.9\% to 81.5\% (6.6\% difference) for rank-5 accuracy on WL-MARS dataset and 84.1\% to 87.2\% (3.1\% difference) on the WL-DukeV dataset.
We also evaluate our method under the \emph{missing annotation} scenario for fine-grained re-id. As shown in Table \ref{table:fine-grained re-id}, under missing annotation, the rank-1 accuracy and mAP score decrease by 5.2\% and 5.4\% on the WL-MARS dataset, and by 1.0\% and 1.2\% on the WL-DukeV dataset.
Our results under missing annotation are still very competitive compared to other methods under perfect annotation.
For example, compared to HSLR, the rank-1 accuracy and mAP score improve by 3.4\% and 4.8\% on the WL-MARS dataset, and by 7.8\% and 9.0\% on the WL-DukeV dataset. Compared to the unsupervised method BUC, we also obtain better results; in particular, our mAP score is 2.6\% and 1.8\% higher on the WL-MARS and WL-DukeV datasets, respectively.
\begin{figure}[t]
\centerline{\includegraphics[width=1\columnwidth]{Figures/fig4.pdf}}
\caption{(a) presents the variations in rank-1 accuracy on WL-MARS dataset for coarse and fine-grained re-id tasks by changing parameter $\lambda$. Higher $\lambda$ represents more weights on the MIL and vice versa. (b) presents the variations in mAP score on WL-MARS dataset for both coarse-grained and fine-grained re-id tasks by changing $\lambda$ as discussed in the text.
}
\label{fig:lambda}
\end{figure}
\subsection{Weights Analysis on Loss Functions}
\label{sec:lambda}
In our framework, we jointly optimize MIL and CPAL to learn the weights of the multiple instance attention learning module. In this section, we investigate the relative contributions of the two loss functions to the recognition performance. In order to do that, we perform experiments on WL-MARS dataset, with different values of $\lambda$ (higher value indicates larger weight on MIL), and present the rank-1 accuracy and mAP score on coarse and fine-grained person re-id tasks in Figure \ref{fig:lambda}.
As may be observed from the plot, when $\lambda = 0.5$, the proposed method performs best, {\em i.e.}, both the loss functions have equal weights. Moreover, using only MIL, {\em i.e.}, $\lambda=1.0$, results in a decrease of 5.8\% and 2.3\% in mAP (5.4\% and 1.4\% in rank-1 accuracy) on coarse and fine-grained person re-id tasks, respectively.
This shows that the CPAL introduced in this work has a major effect towards the better performance of our framework.
\subsection{Parameter Analysis}
We adopt a $k$-max-mean-pooling strategy to compute the activation score corresponding to a particular identity in a bag. In this section, we evaluate the effect of varying $k$ in Equation 2. As shown in Table \ref{tab:lambda_param}, the proposed multiple instance attention learning framework is evaluated with four different $k$ values $(k=1,5,10,20)$ on the WL-MARS dataset for fine-grained person re-id. When $k=5$, we obtain the best recognition performance: 65.0\% rank-1 accuracy and 46.0\% mAP score. Compared to $k=1$, which selects only the largest activation for each identity, the performance improves by 4.0\% in both rank-1 accuracy and mAP score. We use $k=5$ for all the experiments.
\subsection{Comparison with CV-MIML} \label{sec:cv_miml}
In this section, we compare the proposed framework with CV-MIML \cite{meng2019weakly}, which has recently been proposed for the weakly supervised person re-id task. Although \cite{meng2019weakly} is presented as a weakly supervised method, it uses a \emph{strongly labeled tracklet} for each identity (one-shot labels) in addition to the weak labels; this is not a true weakly supervised setting, and we term it \emph{semi-weakly supervised}. On the contrary, our method does not require the strong labels and is more in line with the weakly supervised frameworks proposed for object recognition, activity recognition and segmentation \cite{heidarivincheh2019weakly,yu2019temporal,bilen2016weakly,khoreva2017simple,ahn2018learning}. Thus, CV-MIML is not directly applicable to our scenario, where one only has access to bags of person images. However, for the sake of comparison, we implemented CV-MIML without the probe set-based MIML loss term $(\mathcal L_p)$ and the cross-view bag alignment term $(\mathcal L_{CA})$, since these require the one-shot labels to calculate the cost or the distribution prototype for each class. We refer to this variant as CV-MIML* and compare it to our method on the WL-MARS dataset for the coarse-grained re-id task. We also briefly compare our results with those reported in \cite{meng2019weakly} on the MARS dataset.
As shown in Table \ref{tab:cvmiml}, despite the lack of strong labels, our method performs comparably with CV-MIML and completely outperforms its label-free variant CV-MIML* (more than 300\% relative improvement in mAP). In addition, comparing the recognition performance of CV-MIML* and CV-MIML, we find that CV-MIML relies heavily on the strong labels.
\begin{table}[t]
\centering
\caption{Fine-grained re-id performance comparisons with different parameter $k$ on WL-MARS dataset.}
\setlength{\tabcolsep}{3mm}
\begin{tabular}{llllll}
\toprule
$k$ & Rank-1 & Rank-5 & Rank-10 & Rank-20 & mAP \\\hline
$k=1$ & 61.0 & 78.5 & 83.4 & 88.0 & 42.0 \\
$k=5$ & \bf 65.0 & \bf 81.5 & \bf 86.1 & \bf 89.7 & \bf 46.0 \\
$k=10$ & 60.6 & 78.0 & 83.2 & 88.4 & 41.7 \\
$k=20$ & 57.5 & 75.9 & 81.2 & 86.0 & 38.4\\
\bottomrule
\end{tabular}
\label{tab:lambda_param}
\end{table}
\begin{table}[t]
\centering
\caption{Coarse-grained re-id performance comparisons with CV-MIML on WL-MARS dataset.}
\setlength{\tabcolsep}{5mm}
\begin{tabular}{lllll}
\toprule
Methods & R1 & R5 & R10 & mAP \\ \hline
\begin{tabular}[c]{@{}l@{}}CV-MIML*\end{tabular} & 33.3 & 51.3 & 58.5 & 10.7 \\
\begin{tabular}[c]{@{}l@{}}CV-MIML \cite{meng2019weakly}\end{tabular} & 66.8 & 82.0 & 87.2 & 55.1 \\
OURS & 78.6 & 90.1 & 93.9 & 47.1 \\
\bottomrule
\end{tabular}
\label{tab:cvmiml}
\end{table}
\begin{table}[t]
\centering
\caption{Fine-grained person re-id performance comparisons with tracklet setting.}
\begin{threeparttable}
\setlength{\tabcolsep}{2mm}
\begin{tabular}{llllll}
\toprule
Methods & \multicolumn{1}{l}{Settings} & Rank-1 & Rank-5 & Rank-10 & mAP \\\hline
HSLR \cite{dong2019single} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Weak\end{tabular}} & 55.4 & 72.8 & 78.6 & 34.7 \\
SSLR \cite{dong2019single} & & 49.0 & 67.9 & 74.0 & 28.7 \\
OURS & & 62.2 & \bf 79.4 & \bf 84.3 & \bf 43.0 \\\hline
BUC \cite{lin2019bottom} & \multicolumn{1}{l}{None} & 61.1 & 75.1 & 80.0 & 38.0\\
EUG \cite{wu2018exploit} & \multicolumn{1}{l}{One-shot} & \bf 62.6 & 74.9 & - & 42.4\\
UGA \cite{wu2019unsupervised} & \multicolumn{1}{l}{Intra} & 59.9 & - & - & 40.5\\
\bottomrule
\end{tabular}
Weak: weak supervision; None: unsupervised; One-shot: a single labeled tracklet for each identity; Intra: intra-camera supervision, in which labels are provided only for samples within an individual camera view.
\end{threeparttable}
\label{tab:tracklet}
\end{table}
\subsection{Evaluation of Multiple Instance Attention Learning with Tracklet Setting}
Our proposed method works with the individual frames of the tracklets given in a bag (video). In this section, we perform an ablation study in which we use tracklet features instead of frame-level features. Each training sample can then be denoted as $(\mathcal X_i,y_i)$, where $\mathcal X_i=\{T_i^1,T_i^2,..,T_i^{m_i}\}$ contains $m_i$ person tracklets, $T_i^k$ is the $k$th tracklet obtained in the $i$th video clip, and $y_i$ is a weak label for the bag. Tracklet features are computed by a mean-pooling strategy over the frame features. Table \ref{tab:tracklet} reports the fine-grained person re-id performance on the WL-MARS dataset. Even in this setting, our method still performs better than the others. Compared to the multiple label learning-based HSLR \cite{dong2019single}, we achieve 6.8\% and 8.3\% improvements in rank-1 accuracy and mAP score, respectively. Compared to the state-of-the-art unsupervised BUC \cite{lin2019bottom}, we also obtain better recognition performance, notably a 5\% improvement in mAP score. Moreover, the proposed method is very competitive with semi-supervised person re-id methods such as EUG \cite{wu2018exploit} and UGA \cite{wu2019unsupervised} under the tracklet setting; in particular, the mAP score is improved by 0.6\% and 2.5\% compared to EUG and UGA, respectively. Next, we present a more practical scenario (noisy tracking), in which each tracklet may contain more than a single identity due to imperfect person tracking in a video clip.
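The tracklet feature construction used in this setting -- mean-pooling frame features within each tracklet -- can be sketched in a few lines (feature dimensions and IDs here are illustrative):

```python
import numpy as np

def tracklet_features(frame_feats, tracklet_ids):
    """Mean-pool frame-level features into one feature per tracklet.

    frame_feats: (n_frames, d) array of frame features.
    tracklet_ids: (n_frames,) array assigning each frame to a tracklet.
    """
    ids = np.unique(tracklet_ids)
    return np.stack([frame_feats[tracklet_ids == t].mean(axis=0) for t in ids])

feats = np.array([[1.0, 0.0], [3.0, 2.0], [0.0, 4.0]])
ids = np.array([0, 0, 1])
print(tracklet_features(feats, ids))  # [[2. 1.] [0. 4.]]
```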
{\emph{Noisy tracking.}}
Assuming correct tracking over the entire duration of a tracklet is a strong and often unrealistic assumption. Thus, in a practical setting, a tracklet may contain more than a single identity. Our method naturally handles this scenario because it operates on frame features. Here, we present the performance using tracklets with noisy tracking. Specifically, we randomly divide the person images in the same bag into 4 parts and regard each of them as a person tracklet that may contain one or multiple person identities. Based on this setting, we compare the fine-grained person re-id performance of the proposed method to a few different methods on the WL-MARS dataset. Table \ref{tab:wrong tracking} presents this comparison. Obviously, under the noisy tracking setting, the recognition performance of all methods declines substantially compared with the results reported in Table \ref{tab:tracklet}. However, weak supervision-based methods consistently outperform the state-of-the-art unsupervised BUC \cite{lin2019bottom} by a large margin; in particular, the proposed method obtains 12.4\% and 11.9\% improvements in rank-1 accuracy and mAP score, respectively.
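The noisy-tracking construction above -- randomly splitting the frames of a bag into 4 pseudo-tracklets -- can be emulated as follows (a sketch; the split count and random seed are arbitrary choices):

```python
import numpy as np

def noisy_tracklets(n_frames, n_parts=4, seed=0):
    """Randomly partition frame indices of a bag into n_parts
    pseudo-tracklets, each of which may mix several identities."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_frames)
    return np.array_split(idx, n_parts)

parts = noisy_tracklets(10)
print([len(p) for p in parts])  # [3, 3, 2, 2]
```

Because the partition ignores identity labels, each pseudo-tracklet can contain frames of several people, emulating tracking failures.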
\begin{table}[t]
\centering
\caption{Fine-grained person re-id performance comparisons with noisy tracking.}
\begin{threeparttable}
\setlength{\tabcolsep}{2mm}
\begin{tabular}{llllll}
\toprule
Methods & \multicolumn{1}{l}{Settings} & Rank-1 & Rank-5 & Rank-10 & mAP \\\hline
HSLR \cite{dong2019single} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Weak\end{tabular}} & 45.0 & 62.5 & 68.9 & 25.2 \\
SSLR \cite{dong2019single} & & 39.0 & 58.1 & 64.2 & 20.4 \\
OURS & & \bf 48.1 & \bf 66.2 & \bf 73.0 & \bf 28.0 \\\hline
BUC \cite{lin2019bottom} & \multicolumn{1}{l}{None} & 35.7 & 50.7 & 55.9 & 16.1\\
\bottomrule
\end{tabular}
Weak denotes weak supervision; None denotes no supervision (unsupervised).
\end{threeparttable}
\label{tab:wrong tracking}
\end{table}
\begin{table}[]
\caption{Ablation studies of the proposed framework on WL-MARS dataset.}
\centering
\begin{tabular}{llllll}
\toprule
\multirow{2}{*}{Settings} & \multirow{2}{*}{Methods} & \multicolumn{4}{c}{WL-MARS} \\ \cline{3-6}
& & R-1 & R-5 & R-10 & mAP \\ \hline
\multirow{8}{*}{\begin{tabular}[c]{@{}l@{}}Coarse-Grained\\ Re-id\end{tabular}}
& HSLR & 69.6 & 85.9 & 89.8 & 35.4 \\
& HSLR+CPAL & 74.0 & 87.5 & 93.0 & 42.3 \\
& SSLR & 66.6 & 82.7 & 86.6 & 31.8 \\
& SSLR+CPAL & 70.0 & 86.3 & 91.1 & 37.9 \\
& WSDDN & 63.4 &81.9 & 86.6 & 30.3\\
& WSDDN+CPAL & 76.4 & 89.7 & 93.4 & 45.9\\
& MIL & 73.2 & 89.9 & 93.3 & 41.3 \\
& OURS (MIL+CPAL) & 78.6 & 90.1 & 93.9 & 47.1 \\ \hline
\multirow{8}{*}{\begin{tabular}[c]{@{}l@{}}Fine-Grained\\ Re-id\end{tabular}}
& HSLR & 56.4 & 72.6 & 78.3 & 35.8 \\
& HSLR+CPAL & 63.2 & 78.6 & 83.3 & 42.8 \\
& SSLR & 51.9 & 69.3 & 75.7 & 31.2 \\
& SSLR+CPAL & 59.3 & 76.2 & 82.2 & 39.4 \\
& WSDDN &59.2 & 76.4 & 82.4 & 41.7\\
& WSDDN+CPAL &63.6 & 80.3 & 84.0 & 43.1\\
& MIL & 63.6 & 79.1 & 84.2 & 43.7 \\
& OURS (MIL+CPAL) & 65.0 & 81.5 & 86.1 & 46.0 \\
\bottomrule
\end{tabular}
\label{table:ablation}
\end{table}
\subsection{Ablation Study}
In this section, we conduct ablation studies to evaluate the advantages of our proposed MIL loss and CPAL loss. We validate our methods on the WL-MARS dataset under two different tasks - coarse-grained person re-id and fine-grained person re-id. From Table \ref{table:ablation}, we can see that (1) adding CPAL to other methods (HSLR+CPAL, SSLR+CPAL and WSDDN+CPAL) consistently improves recognition performance by a large margin, e.g., 4.4\% rank-1 accuracy and 6.9\% mAP improvement for HSLR-based coarse-grained re-id, and 6.8\% rank-1 accuracy and 7.0\% mAP improvement for HSLR-based fine-grained re-id; (2) the MIL loss performs better than the other deep logistic regression-based methods: compared to HSLR on coarse-grained re-id, the rank-1 accuracy is improved from 69.6\% to 73.2\% and the mAP score from 35.4\% to 41.3\%; (3) combining MIL and CPAL (MIL+CPAL), we obtain the best recognition performance: 78.6\% rank-1 accuracy and 47.1\% mAP on coarse-grained re-id, and 65.0\% and 46.0\% on fine-grained re-id, respectively.
\begin{figure}
\centerline{\includegraphics[width=\columnwidth]{Figures/fig5.pdf}}
\caption{Illustration of coarse-grained person re-id and fine-grained person re-id results on WL-MARS dataset. (a) shows the results of coarse-grained person re-id. It demonstrates 4 retrieved bags (video clips) for a target person. Bounding boxes indicate the most similar frame in a bag to the target person. Blue and red represent correct and wrong retrieval results, respectively. Yellow dots indicate the tracklets with the same identity as the query person. (b) shows the results of fine-grained person re-id. It illustrates 4 retrieved tracklets for a target tracklet.
}
\label{fig:results display}
\end{figure}
\subsection{Matching Examples}
To have better visual understanding, we show some coarse and fine-grained person re-id results achieved by our proposed multiple instance attention learning framework on WL-MARS dataset in Figure \ref{fig:results display}.
Figure \ref{fig:results display}(a) shows the coarse-grained person re-id results. Each query is a bag containing one tracklet with one person identity, and 4 returned bags (video clips) are shown in this figure. The bounding boxes indicate the most similar frame in a bag to the query person. Blue and red show the correct and wrong retrieval results, respectively. Yellow dots indicate the tracklets with the same identity as the query person. We observe cases in which the most similar frame is wrong while the retrieval result is still correct, as in Figure \ref{fig:results display}(a), Bag 3. This may partly explain why the coarse-grained rank-1 accuracy is better than that of fine-grained re-id. Figure \ref{fig:results display}(b) shows some results of fine-grained person re-id, in which both query and gallery samples are tracklets.
\section{Conclusions}
In this paper, we introduce a novel problem of learning a person re-identification model from videos using weakly labeled data. In the proposed setting, only video-level labels (person identities who appear in the video) are required, instead of annotating each frame in the video - this significantly reduces the annotation cost. To address this weakly supervised person re-id problem, we propose a multiple instance attention learning framework, in which the video person re-identification task is converted to a multiple instance learning setting, on top of that, a co-person attention mechanism is presented to explore the similarity correlations between videos with common person identities. Extensive experiments on two weakly labeled datasets - WL-MARS and WL-DukeV datasets demonstrate that the proposed framework achieves the state-of-the-art results in the coarse-grained and fine-grained person re-identification tasks. We also validate that the proposed method is promising even when the weak labels are not reliable.
\iffalse
\section*{Acknowledgment}
This research is supported by China Scholarship Council and the National Natural Science Foundation of China under Grant No. 61771189.
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Quantum annealing (QA) has been studied as a way to solve combinatorial
optimization problems~\cite{kadowaki1998quantum,farhi2000quantum,farhi2001quantum}
where the goal is to minimize a cost function. Such a problem is mapped
into a finding of a ground state of Ising Hamiltonians that contain the information of the problem.
QA is designed to find an energy eigenstate of the target Hamiltonian by using adiabatic dynamics.
So, by using the QA, we can find the ground state of the Ising Hamiltonian for the combinatorial optimization problem.
D-Wave Systems, Inc. has realized a quantum device that implements the QA \cite{johnson2011quantum}.
Superconducting flux qubits \cite{orlando1999superconducting,mooij1999josephson,harris2010experimental}
have been used in the device
for the QA. Since superconducting qubits are artificial atoms,
they offer many degrees of freedom for controlling parameters
by changing the design and external fields, which is suitable for a programmable device.
QA with the D-Wave machines can be used not only for finding the ground state, but also for
quantum simulations \cite{harris2018phase,king2018observation}
and machine learning \cite{mott2017solving,amin2018quantum}.
Quantum chemistry is one of the important applications of
quantum information processing~\cite{levine2009quantum,serrano2005quantum,mcardle2020quantum}, and
it was recently shown that the QA can be also used for quantum chemistry
calculations~\cite{perdomo2012finding,aspuru2005simulated,lanyon2010towards,du2010nmr,peruzzo2014variational,
mazzola2017quantum,streif2019solving,babbush2014adiabatic}.
Important properties of molecules can be investigated by the second quantized Hamiltonian of the molecules.
Especially, the energy gap between the ground state and excited states is essential information for
calculating optical spectra and reaction rates
~\cite{serrano2005quantum}.
The second quantized Hamiltonian can be mapped into the Hamiltonian of
qubits~\cite{jordanwigner,bravyi2002fermionic,aspuru2005simulated,seeley2012bravyi,tranter2015b}.
Importantly, not only the ground state
but also the excited state of the Hamiltonian can be prepared by the QA \cite{chen2019demonstration,seki2020excited}. By measuring suitable observables on
such states prepared by the QA, we can estimate the eigenenergy of the Hamiltonian. In the conventional approaches,
we need to perform two separate experiments to estimate an energy gap between the ground state and the excited state.
In the first (second) experiment, we measure the eigenenergy of the ground (excited) state prepared by the QA. From the subtraction between the estimation of the eigenenergy of the ground state and that of the excited state, we can obtain the information of the energy gap \cite{seki2020excited}.
Here, we propose a way to estimate an energy gap between the ground state and excited state in a more direct manner.
The key idea is to use the Ramsey type measurement where a superposition between the ground state and excited state
acquires a relative phase that depends on the energy gap \cite{ramsey1950molecular}. By performing the Fourier transform of the signal from the
Ramsey type experiments, we can estimate the energy gap. We numerically study the performance of our protocol to estimate
the energy gap between the ground state and first excited state. We show robustness of our scheme against non-adiabatic
transitions between the ground state and first excited state.
\section{Estimation of the energy gap between the ground state and excited state based on the Ramsey type measurement
}
We use the following time-dependent Hamiltonian in our scheme
\begin{eqnarray}
H&=&A(t)H_{\rm{D}}+(1-A(t))H_{\rm{P}}\nonumber \\
A(t)&=&\left\{ \begin{array}{ll}
1 -\frac{t}{T}& (0\leq t \leq T) \\
0 & (T \leq t \leq T +\tau ) \\
\frac{t-(T+\tau )}{T} & (T+\tau \leq t \leq 2T+\tau )
\\
\end{array} \right.
\end{eqnarray}
where $A(t)$ denotes an external control parameter (as shown in the Fig. \ref{aatfigure}), $H_{\rm{D}}$ denotes the driving Hamiltonian that is typically chosen as the transverse magnetic field term,
and $H_{\rm{P}}$ denotes the target (or problem) Hamiltonian whose energy gap we want to know.
This means that, depending on the time period,
we have three types of the Hamiltonian as follows
\begin{eqnarray}
H_{\rm{QA}}&=&(1-\frac{t}{T})H_{\rm{D}}+\frac{t}{T}H_{\rm{P}}
\nonumber \\
H_{\rm{R}}&=&H_{\rm{P}}\nonumber \\
H_{\rm{RQA}}&=&\frac{t-(T+\tau )}{T}H_{\rm{D}}+(1-\frac{t-(T+\tau )}{T})H_{\rm{P}}\nonumber
\end{eqnarray}
In the first time period of $0\leq t \leq T$,
the Hamiltonian is $H_{\rm{QA}}$, and this
is the same as that is used in the standard QA.
In the next time period of $T \leq t \leq T +\tau$,
the Hamiltonian becomes $H_{\rm{R}}$, and
the dynamics induced by this Hamiltonian
corresponds to that of the Ramsey type evolution \cite{ramsey1950molecular} where the superposition
of the state acquires a relative phase depending on the energy gap.
In the last time period of $T+\tau \leq t \leq 2T+\tau$,
the Hamiltonian becomes $H_{\rm{RQA}}$, and this has a similar form of that
is used in a reverse QA
\cite{perdomo2011study,ohkuwa2018reverse,yamashiro2019dynamics,arai2020mean}.
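The piecewise schedule $A(t)$ above translates directly into code. A minimal sketch (time units arbitrary; the specific $T$ and $\tau$ below are only for illustration):

```python
def A(t, T, tau):
    """External control parameter A(t): forward anneal (1 -> 0),
    Ramsey hold (0), then reverse anneal (0 -> 1)."""
    if 0 <= t <= T:
        return 1.0 - t / T
    if T < t <= T + tau:
        return 0.0
    if T + tau < t <= 2 * T + tau:
        return (t - (T + tau)) / T
    raise ValueError("t outside the protocol window")

T, tau = 150.0, 50.0
print(A(0, T, tau), A(T, T, tau), A(T + tau / 2, T, tau), A(2 * T + tau, T, tau))
# 1.0 0.0 0.0 1.0
```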
\begin{figure}[bhtp]
\centering
\includegraphics[width=16cm]{atfigure}
\caption{
An external control parameter $A(t)$ of our time-dependent Hamiltonian
$H(t)=A(t)H_{\rm{D}}+(1-A(t))H_{\rm{P}}$ where $H_{\rm{D}}$ denotes
the driving Hamiltonian
and $H_{\rm{P}}$ denotes the target (problem) Hamiltonian.
With a time period of $0\leq t \leq T$, we have the Hamiltonian $H_{\rm{QA}}$ that is used in the standard QA.
With the next time period of $T \leq t \leq T+\tau $, we have the Ramsey time Hamiltonian $H_{\rm{R}}$
where the quantum state acquires a relative phase induced from the energy gap.
In the final time period of $T+\tau \leq t \leq 2T+\tau $, we have the Hamiltonian
$H_{\rm{RQA}}$ which is used in a reverse QA. By using the dynamics induced by these Hamiltonians, we can estimate the energy gap of the target Hamiltonian.
}\label{aatfigure}
\end{figure}
We explain the details of our scheme.
Firstly, prepare an initial state of
$|\psi _0\rangle =\frac{1}{\sqrt{2}}(|E_0^{\rm{(D)}}\rangle +|E_1^{\rm{(D)}}\rangle)$
where $|E_0^{\rm{(D)}}\rangle$ ($|E_1^{\rm{(D)}}\rangle$)
denotes the ground (excited) state of the driving Hamiltonian.
Secondly, let this state evolve in an adiabatic way
by the Hamiltonian of $H_{\rm{QA}}$
and we obtain a state of
$\frac{1}{\sqrt{2}}(|E_0^{\rm{(P)}}\rangle +e^{-i\theta }|E_1^{\rm{(P)}}\rangle)$
where $|E_0^{\rm{(P)}}\rangle$ ($|E_1^{\rm{(P)}}\rangle$)
denotes the ground (excited) state of the target Hamiltonian and $\theta $
denotes a relative phase acquired during the dynamics. Thirdly, let the state evolve by the Hamiltonian
of $H_{\rm{R}}$
for a time $T\leq t \leq T+\tau $, and we obtain
$\frac{1}{\sqrt{2}}(|E_0^{\rm{(P)}}\rangle +e^{-i\Delta E \tau
-i\theta }|E_1^{\rm{(P)}}\rangle)$ where $\Delta E= E_1^{(\rm{P})}-E_0^{(\rm{P})}$ denotes
an energy gap and $E_0^{(\rm{P})}$ ($E_1^{(\rm{P})}$) denotes the eigenenergy of the ground
(first excited) state of the target Hamiltonian.
Fourthly, let this state evolve in an adiabatic way
by the Hamiltonian of $H_{\rm{RQA}}$ from $t=T+\tau $ to $t=2T+\tau $,
and we obtain a state of
$\frac{1}{\sqrt{2}}(|E_0^{\rm{(D)}}\rangle +e^{-i\Delta E \tau
-i\theta '}|E_1^{\rm{(D)}}\rangle)$
where $\theta '$ denotes
a relative phase acquired during the dynamics. Fifthly, we readout the state by using a projection operator of
$|\psi _0\rangle \langle \psi _0|$, and the projection probability is
$P_{\tau }=\frac{1}{2}+\frac{1}{2} \cos (\Delta E \tau +\theta ')$, which is
an oscillating signal with a frequency of the energy gap.
Finally, we repeat the above five steps by sweeping $\tau $, and obtain several values of $P_{\tau }$.
We can perform the Fourier transform of $P_{\tau }$ such as
\begin{eqnarray}
f(\omega )= \sum_{n=1}^{N}(P_{\tau _n}-\frac{1}{2})e^{-i\omega \tau _n}
\end{eqnarray}
where $\tau _n= t_{\rm{min}}+\frac{n-1}{N-1}(t_{\rm{max}} - t_{\rm{min}})$
denotes the $n$-th Ramsey time, $t_{\rm{min}}$ ($t_{\rm{max}}$)
denotes the minimum (maximum) time considered,
and $N$ denotes the number of the steps. The peak in $f(\omega )$ shows the energy gap $\Delta E$.
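The Fourier sum above and the subsequent peak search can be sketched numerically. The signal below is synthetic, generated with an assumed gap of $\Delta E/2\pi = 1.067$ GHz and an arbitrary phase offset, purely to illustrate the peak extraction:

```python
import numpy as np

def ramsey_spectrum(P, taus, omegas):
    """|f(omega)| = |sum_n (P(tau_n) - 1/2) exp(-i omega tau_n)|."""
    phases = np.exp(-1j * np.outer(omegas, taus))
    return np.abs(phases @ (P - 0.5))

# synthetic Ramsey signal (times in ns, frequencies in GHz)
gap = 2 * np.pi * 1.067                    # assumed energy gap, rad/ns
taus = np.linspace(0.0, 100.0, 2000)
P = 0.5 + 0.5 * np.cos(gap * taus + 0.3)   # arbitrary phase offset theta'
omegas = 2 * np.pi * np.linspace(0.5, 1.5, 401)
peak = omegas[np.argmax(ramsey_spectrum(P, taus, omegas))] / (2 * np.pi)
print(f"estimated gap: {peak:.3f} GHz")    # close to the assumed 1.067
```

The frequency resolution is limited by the total sweep window $t_{\rm{max}}-t_{\rm{min}}$, so a longer sweep sharpens the peak.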
\begin{figure}[bhtp]
\centering
\includegraphics[width=16cm]{combinedfigure-jjap}
\caption{Fourier function against a frequency. Here, we set parameters as $\lambda _1/2\pi =1$ GHz, $g/2\pi =0.5$ GHz, $\omega _1/2\pi = 0.2$ GHz, $\omega _2/\omega _1=1.2$, $g'/g=2.1$, $\lambda _2/\lambda _1 =10.7$, $L=2$,
$N=10000$, $t_{\rm{min}}=0$, $t_{\rm{max}}=100$ ns.
(a) We set $T=150 \ (75)$ ns for the blue (red) plot,
where we have a peak around $1.067$ GHz, which corresponds to
the energy difference between the ground state and first excited state of the target Hamiltonian. We have another
peak around $\omega =0$, and this comes from non-adiabatic transition during the QA.
(b) We set $T=37.5$($12.5$) ns for the blue (red) plot. We have an additional peak around $1.698$ GHz ($2.7646$ GHz),
which corresponds to
the energy difference between the first excited state (ground state) and second excited state of the target Hamiltonian.
}\label{dcodmr}
\end{figure}
To check the efficiency, we perform the numerical simulations to estimate the energy gap between the ground state and first excited state,
based on typical parameters for superconducting qubits. We choose the following Hamiltonians
\begin{eqnarray}
H_{\rm{D}}&=&\sum_{j=1}^{L}\frac{\lambda _j}{2}\hat{\sigma }_x^{(j)}\nonumber \\
H_{\rm{P}} &=&\sum_{j=1}^{L} \frac{\omega _j}{2}\hat{\sigma }_z^{(j)}
+\sum_{j=1}^{L-1}g \hat{\sigma }_z^{(j)}\hat{\sigma }_z^{(j+1)}
+g'(\hat{\sigma }_+^{(j)} \hat{\sigma }_-^{(j+1)} + \hat{\sigma }_-^{(j)} \hat{\sigma }_+^{(j+1)} )
\end{eqnarray}
where $\lambda _j$ denotes the amplitude of the transverse magnetic fields of the $j$-th qubit,
$\omega _j$ denotes the frequency of the $j$-th qubit, and $g$($g'$) denotes the Ising (flip-flop)
type coupling strength between qubits.
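For two qubits, the target Hamiltonian $H_{\rm{P}}$ above can be assembled from Pauli matrices and diagonalized directly to read off the gap. The sketch below uses the parameter ratios quoted in the text, but the sign conventions and the choice of units ($2\pi\cdot$GHz) are our assumptions, so the printed gap need not coincide with the 1.067 GHz peak reported here:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_+
sm = sp.conj().T                                # sigma_-
I2 = np.eye(2, dtype=complex)

# parameters in assumed units of 2*pi GHz (ratios as in the text)
w1 = 0.2
w2 = 1.2 * w1
g = 0.5
gp = 2.1 * g

HP = (0.5 * w1 * np.kron(sz, I2) + 0.5 * w2 * np.kron(I2, sz)
      + g * np.kron(sz, sz)
      + gp * (np.kron(sp, sm) + np.kron(sm, sp)))

evals = np.linalg.eigvalsh(HP)   # ascending eigenvalues
gap = evals[1] - evals[0]
print("eigenvalues:", np.round(evals, 4))
print("gap E1 - E0:", round(gap, 4))
```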
We consider the case of two qubits, and
the initial state is $|1\rangle |-\rangle $ where $|1\rangle $ ($|-\rangle $)
is an eigenstate of $\hat{\sigma }_z$($\hat{\sigma }_x$)
with an eigenvalue of +1 (-1). In the Fig. \ref{dcodmr} (a), we plot the Fourier function $|f(\omega )|$ against $\omega $ for this
case. When we set $T=150$ (ns) or $75$ (ns),
we have a peak around $\omega = 1.067$ GHz, which corresponds to the energy gap $\Delta E$
of the problem Hamiltonian in our parameter. So this result shows that we can estimate the energy gap by using our scheme.
Also, we have a smaller peak of around $\omega =0$ in the Fig. \ref{dcodmr} (a),
and this comes from non-adiabatic transitions between the ground state and first excited state.
If the dynamics is perfectly adiabatic, the population of both the ground state and first excited state should be
$\frac{1}{2}$ at
$t=T$.
However, in the parameters with $T=150$ ($T=75$) ns,
the population of the ground state and excited state is around 0.6 (0.7) and 0.4 (0.3) at $t=T$, respectively.
In this case, the probability at the readout step should be modified as
$P'_{\tau }=a+b \cos (\Delta E \tau +\theta ')$ where the parameters
$a$ and $b$ deviates from $\frac{1}{2}$ due to the non-adiabatic transitions. This induces the peak of around
$\omega =0$ in the Fourier function $f(\omega )$. As we decrease $T$,
the dynamics becomes less adiabatic, and the peak at $\omega =0$ becomes higher
while the target peak corresponding to the energy gap $\Delta E $ becomes smaller, as shown in Fig. \ref{dcodmr}.
Importantly, we can still identify the peak corresponding to the energy gap for the following reasons.
First, there is a large separation between the peaks.
Second, the non-adiabatic transitions between
the ground state and first excited state
do not affect the peak
position. So our scheme is robust against the non-adiabatic transition between
the ground state and first excited state.
This is in stark contrast to a previous scheme, which is fragile against such non-adiabatic transitions
\cite{seki2020excited}.
Moreover, we have two more peaks in the Fig. \ref{dcodmr} (b) where we choose
$T=37.5$ ($12.5$) ns for the blue (red) plot, which is shorter than that of the Fig. \ref{dcodmr} (a).
The peaks are around $1.698$ GHz and $2.7646$ GHz, respectively.
The former (latter)
peak corresponds to
the energy difference between the first excited state (ground state) and second excited state.
We can interpret these peaks as follows.
Due to the non-adiabatic dynamics, not only the first excited state but also the second excited state is
induced in this case. The state after the evolution with $H_{\rm{R}}$ at $t=T+\tau $
is approximately given as
a superposition between the ground state, the first excited state,
and the second excited state such as
$c_0e^{-i E_0^{\rm{(P)}} \tau
-i\theta _0} |E_0^{\rm{(P)}}\rangle +c_1e^{-i E_1^{\rm{(P)}} \tau
-i\theta _1}
|E_1^{\rm{(P)}}\rangle + c_2e^{-i E_2^{\rm{(P)}} \tau
-i\theta _2}
|E_2^{\rm{(P)}}\rangle$ where $c_{i}$ $(i=0,1,2)$ denote real values and $\theta _i$ ($i=0,1,2$)
denotes
the relative phase induced by the QA.
So the Fourier transform of the probability distribution obtained from the measurements provides us with three frequencies
such as
$(E_0^{\rm{(P)}}-E_1^{\rm{(P)}})$, $(E_1^{\rm{(P)}}-E_2^{\rm{(P)}})$, and $(E_2^{\rm{(P)}}-E_0^{\rm{(P)}})$.
In the actual experiment, we do not know which peak corresponds to the energy gap between the ground state and first excited state, because there are other relevant peaks.
However, it is worth mentioning that we can still obtain some information of the energy spectrum (or energy eigenvalues of the Hamiltonian) from the experimental data, even under the effect of the non-adiabatic
transitions between the ground state and other excited states.
Again, this shows the robustness of our scheme against the non-adiabatic transitions compared with the previous schemes \cite{seki2020excited}.
\section{Conclusion}
In conclusion, we propose a scheme that
allows the direct estimation of an energy gap
of the target Hamiltonian by using quantum annealing (QA). While a ground state of a driving Hamiltonian
is prepared as an initial state for the conventional QA, we prepare a superposition between a ground state
and the first excited state of the driving Hamiltonian as the initial state. Also, the key idea in our scheme
is to use a Ramsey type measurement after the quantum annealing process where information of the energy gap
is encoded as a relative phase between the superposition. The readout of the relative phase by sweeping the
Ramsey measurement time duration provides a direct estimation of the energy gap of the target Hamiltonian.
We show that, unlike the previous scheme, our scheme is robust against non-adiabatic transitions.
Our
scheme paves an alternative way to estimate the energy gap of the target Hamiltonian for applications of quantum chemistry.
While this manuscript was being written,
an independent article also proposes to use a Ramsey measurement to estimate an energy
gap by using a quantum device \cite{2020quantumenergygap}.
This paper is partly based on results obtained from a
project commissioned by the New Energy and Industrial Technology Development Organization (NEDO), Japan.
This work was also supported
by Leading Initiative for Excellent Young Researchers MEXT Japan, JST PRESTO (Grant No. JPMJPR1919), Japan,
KAKENHI Scientific Research C (Grant No. 18K03465), and JST-PRESTO (JPMJPR1914).
\section{Introduction}
\label{intro}
The adiabatic evolution of a selected quantum state, modified via slow temporal changes of a Hamiltonian is of widespread utility for quantum state engineering and unravelling complex quantum dynamics. Examples are adiabatic quantum computation protocols \cite{albash2018adiabatic, gosset2015universal, sarandy2005adiabatic,Phys_adiabgate_PhysRevA}, quantum optimisation \cite{steffen2003experimental} and nuclear motion in molecules \cite{may2011charge}. Adiabaticity is also frequently central to state preparation schemes in cold atomic physics such as stimulated Raman adiabatic passage (STIRAP) \cite{vitanov_stirap_RevModPhys} and other adiabatic quantum state transfer schemes \cite{chen2012long, chen2016robust, eckert2007efficient}.
Due to the central role of adiabaticity in all of the above, it may be of interest to quantify how adiabatic a certain dynamical process is.
If only net total adiabaticity is of interest, this is relatively straightforward using the non-adiabatic coupling terms in Schr{\"o}dinger's equation for the instantaneous eigenbasis.
These therefore have been used extensively to constrain conditions under which evolution due to a certain Hamiltonian should remain adiabatic. Nonetheless such conditions remain non-trivial to identify in general \cite{Comparat_general_adiab_PhysRevA,Jiangfeng_exp_adiab_PhysRevLett,Marzlin_inconsist_PhysRevLett}.
Many of the protocols above begin with the system in a specific initial quantum state and are designed such that it reaches a specific target state, following an eigenstate of the Hamiltonian. In contrast, there are scenarios in which the initial state is less clear or random and in which we care to what extent quantum transitions in a non-eigenbasis are due to adiabatic evolution or would have also occurred without manipulation of the Hamiltonian. One such scenario is the quantum transport of an electronic excitation in a molecular aggregate \cite{2013photonics} through molecular motion \cite{ritesh_adiabatic_molagg,Asadian_2010,semiao2010vibration,behzadi2017effects,o2014non,mulken2011directed}. There, the instantaneous eigenbasis diagonalizes long-range dipole-dipole interactions between molecules in the aggregate and thus is made of states describing a delocalized excitation, while the basis most useful for studying quantum transport consists of states in which the electronic excitation is localized on a certain molecule.
\begin{figure}[htb]
\includegraphics[width=0.99\columnwidth]{Plot_model_tranport_v11}
\caption{Quantum transitions due to beating (c) versus adiabatic changes (d). In (c) we show the population in three states $\ket{A}$, $\ket{B}$ and $\ket{C}$ for the
level scheme shown in (a) with constant $\Delta E= 3$ and $J_0=2$. Population periodically reaches $\ket{C}$ due to beating between eigenstates.
(d) For a suitable time-dependent variation of coupling strengths $J_k(t)$ as sketched in (b) with constants $J_{10} = J_{20}=8$ and same $\Delta E$ as in (a), the population reaching $\ket{C}$ can be significantly increased through adiabatic following of eigenstates. Discriminating contributions as in (b) from those as in (d) is the central objective of this article.
\label{sketch_problem_statement}}
\end{figure}
Here we propose a measure to quantify whether transitions in an arbitrary basis are due to adiabatic following of time-evolving eigenstates or rather due to beating between different eigenstates in a superposition. Our motivation is to clarify, whether the enhanced energy transport efficiencies in molecular aggregates due to molecular motion reported in \cite{ritesh_adiabatic_molagg} can be attributed to adiabaticity. However, we expect that the measure is of much broader utility.
We illustrate the challenge to be addressed in \fref{sketch_problem_statement} with an abstract three-level system in quantum state
%
\begin{align}
\ket{\Psi(t)}=\sum_{n\in\{A,B,C\} }c_n(t) \ket{n}
\label{threelevel_state}
\end{align}
and with Hamiltonian
%
\begin{align}
\hat{H}(t)=\sum_n E_n \ket{n}\bra{n} &+ J_1(t)[\ket{A}\bra{B} + \mbox{c.c.}] \nonumber\\[0.15cm]
&+ J_2(t)[\ket{B}\bra{C} + \mbox{c.c.}].
\label{threelevel_Hamil}
\end{align}
When the couplings are constant, $J_k(t)=J_0$, a system initialized in state $\ket{A}$ will typically eventually reach state $\ket{C}$ with some probability, since $\ket{A}$ is not an eigenstate and the initially populated eigenstates will in general contain a $\ket{C}$ contribution. In that case, transport to $\ket{C}$ thus arises through beating of eigenstates. In contrast, consider time dependent couplings $J_1(t)$ and $J_2(t)$ of the form \cite{chen2016robust},
%
\begin{align}
J_1(t) &= J_{10} \sin^2{ \Big(\frac{\pi t}{2 t_{max}} \Big)},
\label{coupling_J_1} \\
J_2(t) &= J_{20} \cos^2{ \Big(\frac{\pi t}{2 t_{max}} \Big)},
\label{coupling_J_2}
\end{align}
where $J_{10}$ and $J_{20}$ are the maximal coupling strength and $t_{max}$ defines the time-scale for changes of this coupling. As we show in \fref{sketch_problem_statement},
this can result in a final state $\ket{C}$ with unit fidelity, without ever populating more than one eigenstate. Any generic time-evolution will contain both these features, as we shall see later.
Obtaining a relative measure for the importance of the latter, adiabatic, type of transition is our objective here.
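This adiabatic-transfer protocol is straightforward to reproduce numerically. The following Python sketch (illustrative and not part of the article; it assumes $\hbar=1$ and degenerate site energies $E_A=E_B=E_C=0$) propagates the three-level state under the couplings \bref{coupling_J_1} and \bref{coupling_J_2}:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative sketch (hbar = 1, degenerate site energies assumed):
# propagate the three-level system under the sin^2/cos^2 coupling sweep.
J10 = J20 = 8.0
t_max = 50.0          # slow sweep, J10 * t_max >> 1
n_steps = 4000
dt = t_max / n_steps

def hamiltonian(t):
    J1 = J10 * np.sin(np.pi * t / (2 * t_max)) ** 2   # couples |A>, |B>
    J2 = J20 * np.cos(np.pi * t / (2 * t_max)) ** 2   # couples |B>, |C>
    return np.array([[0.0, J1, 0.0],
                     [J1, 0.0, J2],
                     [0.0, J2, 0.0]])

c = np.array([1.0, 0.0, 0.0], dtype=complex)          # start in |A>
for i in range(n_steps):
    c = expm(-1j * hamiltonian((i + 0.5) * dt) * dt) @ c   # midpoint steps

p_C = abs(c[2]) ** 2
print(f"final population in |C>: {p_C:.4f}")
```

For this slow sweep the final population of $\ket{C}$ comes out close to one; replacing the sweeps by constant couplings, the same propagation only shows beating between eigenstates.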
The article is organised as follows: In \sref{measure} we formulate our proposal for a measure of the adiabaticity of quantum transitions, the functionality of which is demonstrated for a few simple examples in \sref{molagg}. In \sref{complex_tests} we then explore its performance in the context of energy transport in molecular aggregates.
\section{Adiabatic transition measure}
\label{measure}
We now formulate our proposal, first reviewing the general framework of quantum dynamics in either a time-independent basis or the adiabatic basis, and then proceeding
to quantify adiabatic state changes based on this framework.
\subsection{Quantum dynamics}
\label{generic_dyn}
Consider a generic time-dependent Hamiltonian $\hat{H}(t)$ with a discrete spectrum and let $\{\ket{n}\}$ be an arbitrary but time-independent basis of its Hilbert space, which we refer to as the diabatic basis. Meanwhile, the $\ket{\varphi_k(t)}$ are solutions of the instantaneous eigenproblem
%
\begin{align}
\hat{H}(t)\ket{\varphi_k(t)}&=U_k(t)\ket{\varphi_k(t)},
\label{instantSE}
\end{align}
for energy $U_k(t)$; these also form a basis, referred to as the adiabatic basis. The total time-evolving state $\ket{\Psi(t)}$ can be expressed in either basis as
$\ket{\Psi(t)}=\sum_n c_n(t) \ket{n}$ or $\ket{\Psi(t)}=\sum_k \tilde{c}_k (t) \ket{\varphi_k(t)}$, which defines two different sets of expansion coefficients, related by $\tilde{c}_k (t) = \sum_n U_{kn}(t) c_n(t) = \sum_n\braket{\varphi_k(t) }{ n } c_n(t)$.
Projecting Schr{\"o}dinger's equation into either basis we reach
%
\begin{align}
i\hbar \frac{\partial}{\partial t} {c}_n(t) = \sum_m H_{nm}(t) {c}_m(t)
\label{diabSE}
\end{align}
or
%
\begin{align}
i\hbar \frac{\partial}{\partial t} \tilde{c}_k(t)&= U_k(t) \tilde{c}_k(t) - i\hbar\sum_m \kappa_{km} \tilde{c}_m(t).
\label{adiabSE}
\end{align}
It can be seen from \bref{adiabSE} that as long as the non-adiabatic couplings $\kappa_{km}=\bra{\varphi_k(t)}\frac{\partial}{\partial t} \ket{\varphi_m(t)}$ remain small, the system evolves adiabatically, with all populations of eigenstates fixed to their initial values $|\tilde{c}_k(t)|^2=|\tilde{c}_k(0)|^2$. Thus deviations of these populations from their initial values provide a measure of net non-adiabaticity, while the size of the non-adiabatic coupling terms provides a measure of instantaneous non-adiabaticity.
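The size of these quantities is easy to probe numerically. The Python sketch below (illustrative, $\hbar=1$, not from the article) estimates the non-adiabatic couplings by finite differences for the coupling sweep of the introduction and compares them with the instantaneous eigenvalue gaps:

```python
import numpy as np

# Illustrative sketch (hbar = 1): finite-difference estimate of the
# non-adiabatic couplings kappa_km for the sin^2/cos^2 coupling sweep,
# compared with the instantaneous eigenvalue gaps.
J10 = J20 = 8.0
t_max = 50.0

def hamiltonian(t):
    J1 = J10 * np.sin(np.pi * t / (2 * t_max)) ** 2
    J2 = J20 * np.cos(np.pi * t / (2 * t_max)) ** 2
    return np.array([[0, J1, 0], [J1, 0, J2], [0, J2, 0]], float)

ts = np.linspace(0.0, t_max, 2001)
dt = ts[1] - ts[0]
kappa_max, gap_min = 0.0, np.inf
V_prev = None
for t in ts:
    lam, V = np.linalg.eigh(hamiltonian(t))
    if V_prev is not None:
        V = V * np.sign(np.sum(V * V_prev, axis=0))  # sign continuity
        K = V_prev.T @ (V - V_prev) / dt             # kappa_km estimate
        K -= np.diag(np.diag(K))
        kappa_max = max(kappa_max, np.max(np.abs(K)))
    gap_min = min(gap_min, np.min(np.diff(lam)))
    V_prev = V

print(kappa_max, gap_min)   # kappa_max stays far below the smallest gap
```

Since the couplings stay far below the gaps, populations in the adiabatic basis remain essentially frozen for this sweep.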
The situation becomes more subtle if one is instead interested in the root cause of population changes in the basis $\{\ket{n}\}$, for which there are two possibilities:
(i) The system may be in a superposition of eigenstates, such as $\ket{\Psi(0)}=( \ket{\varphi_1(0)} + \ket{\varphi_2(0)})/\sqrt{2}$. Then, even for a time-independent Hamiltonian, the population in a given basis state $p_n = |\braket{n}{\Psi(t)}|^2$ will experience beating
%
\begin{align}
p_n &= \frac{1}{2} \bigg( |U_{1n}|^2 + |U_{2n}|^2
+ 2\,\mbox{Re}[ U_{2n}^* U_{1n} ] \cos{\left[ \frac{(U_2 - U_1) t}{\hbar} \right]} \nonumber\\[0.15cm]
&+ 2\,\mbox{Im}[U_{2n}^* U_{1n}] \sin{\left[\frac{(U_2 - U_1) t}{\hbar}\right]}\bigg).
\label{beating}
\end{align}
(ii) The system may be in a unique eigenstate, such as
$\ket{\Psi(0)}= \ket{\varphi_1(0)}$, but the Hamiltonian is time-dependent, such that the population $|\braket{n}{\Psi(t)}|^2\approx |\braket{n}{\varphi_1(t)}|^2$ varies due to the resultant change of that eigenstate. In a generic quantum dynamics scenario, both contributions mix and are non-straightforward to disentangle.
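Case (i) can be verified in a few lines. The Python sketch below (illustrative, $\hbar=1$) compares the exact diabatic population of an equal superposition of two eigenstates of a random static Hamiltonian with the two-frequency interference expression; for real eigenvectors only the cosine term survives:

```python
import numpy as np

# Illustrative sketch (hbar = 1): for a static Hamiltonian, the diabatic
# population of an equal superposition of two eigenstates is a pure
# beating term, case (i) in the text.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 3))
H = (H + H.T) / 2                         # random real symmetric H
U, V = np.linalg.eigh(H)                  # eigenvalues U_k, eigenvectors V[:, k]

psi0 = (V[:, 0] + V[:, 1]) / np.sqrt(2)   # superposition of two eigenstates
t = 1.7                                   # any probe time
n = 0                                     # any basis state

# exact evolution
c_t = V @ (np.exp(-1j * U * t) * (V.T @ psi0))
p_exact = abs(c_t[n]) ** 2

# interference ("beating") expression built from d_n^{(k)} = <n|phi_k>
d1, d2 = V[n, 0], V[n, 1]
p_beat = 0.5 * (d1**2 + d2**2) + d1 * d2 * np.cos((U[1] - U[0]) * t)

print(p_exact, p_beat)   # agree to machine precision
```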
\subsection{Extracting adiabatic contributions}
\label{adiab_cont}
We shall refer with $d^{(k)}_n(t)=\braket{n}{\varphi_k(t)}$ to the component amplitude in basis state $\ket{n}$ for system eigenstate $\ket{\varphi_k(t)}$. Using this shorthand, we can write the diabatic amplitude as
%
\begin{align}
c_n(t)&=\sum_k \tilde{c}_k(t) d^{(k)}_n(t),
\label{cdiab}
\end{align}
and hence the population in state $\ket{n}$ as
%
\begin{align}
p_n(t)&=|c_n(t)|^2=\sum_{k,k'} \tilde{c}^*_{k'}(t)\: \tilde{c}_k(t) \:d^{(k)}_n(t)\: d^{(k')*}_n(t).
\label{pop}
\end{align}
Its rate of change using the chain rule is
%
\begin{align}
\dot{p}_n(t)&=\sum_{k,k'}\big[ \left(\dot{\tilde{c}}^*_{k'}(t)\: \tilde{c}_k(t) + \tilde{c}^*_{k'}(t)\: \dot{\tilde{c}}_k(t)\right) \:d^{(k)}_n(t)\: d^{(k')*}_n(t)\nonumber\\[0.15cm]
&+ \tilde{c}^*_{k'}(t)\: \tilde{c}_k(t) \: \left( \dot{d}^{(k)}_n(t)\: d^{(k')*}_n(t) + d^{(k)}_n(t)\: \dot{d}^{(k')*}_n(t) \right) \big].
\label{pop_dot}
\end{align}
The last line already clearly contains contributions to $\dot{p}_n(t)$ from temporal changes of $\ket{\varphi_k(t)}$ and thus will be related to adiabatic state following.
Let us inspect the first line more closely for the case of a time-independent Hamiltonian. In that case we simply have $\tilde{c}_k(t) = \tilde{c}_k(0) \exp{[-i U_k t/\hbar]}$ and hence
%
\begin{align}
&\dot{\tilde{c}}^*_{k'}(t)\: \tilde{c}_k(t) + \tilde{c}^*_{k'}(t)\: \dot{\tilde{c}}_k(t)\nonumber\\[0.15cm]
&= i \frac{U_{k'}-U_k}{\hbar}\,\tilde{c}^*_{k'}(0)\: \tilde{c}_k(0)\, e^{i \left(\frac{U_{k'}-U_k}{\hbar}\right) t}.
\label{Vdd}
\end{align}
This expression is non-zero even for a time-independent Hamiltonian, simply quantifying the temporal changes of $p_n(t)$ due to beating between different eigenstates as in \bref{beating}. Importantly, the resultant time-dependence for this term affects the phase of \bref{Vdd} only, not the modulus.
To exploit this, let us write the coefficient $\tilde{c}_k(t)$ in polar representation $\tilde{c}_k(t)=\tilde{a}_k(t) e^{i \tilde{b}_k(t)}$, with $\tilde{a}_k,\tilde{b}_k\in \mathbb{R}$, $\tilde{a}_k>0$. Then $\dot{\tilde{c}}_k(t)=\dot{\tilde{a}}_ke^{i \tilde{b}_k(t)} + \tilde{a}_k(t) [i\dot{\tilde{b}}_k]e^{i \tilde{b}_k(t)}$. We insert this expansion into \bref{pop_dot}, remove the phase evolution $\dot{\tilde{b}}_k$ and introduce a new notation for the remaining expression:
%
\begin{align}
\label{pop_dot_no_phase}
t_n(t)=&\sum_{k,k'}\bigg[ \bigg(\dot{\tilde{a}}_{k'}(t)e^{-i \tilde{b}_{k'}(t)} \: \tilde{c}_k(t) \\
&+ \tilde{c}^*_{k'}(t)\:\dot{\tilde{a}}_ke^{i \tilde{b}_k(t)} \bigg) \:d^{(k)}_n(t)\: d^{(k')*}_n(t)\nonumber\\[0.15cm]
&+ \tilde{c}^*_{k'}(t)\: \tilde{c}_k(t) \: \left( \dot{d}^{(k)}_n(t)\: d^{(k')*}_n(t) + d^{(k)}_n(t)\: \dot{d}^{(k')*}_n(t) \right) \bigg].\nonumber
\end{align}
The resultant real variable $t_n(t)$ is now a measure for the rate of change of the population in state $n$ due to temporal changes in the eigen-spectrum of the Hamiltonian.
By construction it does not contain any contribution from beating between several occupied eigenstates.
\subsection{Assembling the measure}
\label{assembling_measure}
We now further proceed to reach a single number to quantify adiabatic transitions between basis states using the $t_n(t)$. Several variants will be possible and
the best choice may depend on the type of quantum dynamics for which one intends to characterise adiabaticity. We shall explore the following two options.
\ssection{Variant 1}
%
\begin{align}
T_1(t)&=\frac{1}{2}\sum_n \int_0^t dt' |t_n(t')|.
\label{totcrit_1}
\end{align}
This expression gives unity if the system makes a transition from one state $\ket{a}$ into a second state $\ket{b}$ entirely due to adiabatic following of a single eigenstate. It treats transitions between all basis states $\ket{n}$ on an equal footing and provides a time-integrated result for the entire duration $t$ of interest.
However, owing to the modulus, \bref{totcrit_1} treats transitions \emph{into} some state the same as transitions \emph{out of} that state, which can be a problem in some cases, as demonstrated shortly. That problem is remedied by \ssection{Variant 2}
%
\begin{align}
T_2(t)&= \int_0^t dt' t_X (t'),
\label{totcrit_2}
\end{align}
evaluated for a target state $\ket{X}$ only.
We shall explore characteristic features of measures \bref{totcrit_1} and \bref{totcrit_2} in the next section for a diverse selection of examples, in the context of \rref{ritesh_adiabatic_molagg}.
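As a concrete sketch of how the measures can be evaluated numerically (illustrative Python, assuming $\hbar=1$; not the authors' implementation), the following code computes $t_n(t)$, $T_1$ and $T_2$ for the three-level coupling sweep of the introduction, for which transport $\ket{A}\to\ket{C}$ is essentially fully adiabatic, so both measures should come out close to unity. Eigenvector sign continuity between time steps is enforced by aligning each eigenvector with its predecessor.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative sketch (hbar = 1, assumed names; not the authors' code):
# evaluate t_n(t) and the measures T_1, T_2 for the three-level coupling
# sweep, where transport |A> -> |C> is essentially fully adiabatic.
J10 = J20 = 8.0
t_max, n_steps = 50.0, 4000
ts = np.linspace(0.0, t_max, n_steps + 1)
dt = ts[1] - ts[0]

def hamiltonian(t):
    J1 = J10 * np.sin(np.pi * t / (2 * t_max)) ** 2
    J2 = J20 * np.cos(np.pi * t / (2 * t_max)) ** 2
    return np.array([[0, J1, 0], [J1, 0, J2], [0, J2, 0]], float)

# instantaneous eigenvectors d^{(k)}_n(t), with sign continuity in time
ds, V_prev = [], None
for t in ts:
    _, V = np.linalg.eigh(hamiltonian(t))
    if V_prev is not None:
        V = V * np.sign(np.sum(V * V_prev, axis=0))
    ds.append(V)
    V_prev = V
ds = np.array(ds)                              # ds[t, n, k] = d^{(k)}_n(t)

# diabatic amplitudes c_n(t), starting from |A>
c = np.array([1, 0, 0], complex)
cs = [c.copy()]
for i in range(n_steps):
    c = expm(-1j * hamiltonian(ts[i] + dt / 2) * dt) @ c
    cs.append(c.copy())
cs = np.array(cs)

ct = np.einsum('tnk,tn->tk', ds, cs)           # adiabatic coefficients
a = np.abs(ct)                                 # moduli a_k(t)
phase = np.exp(1j * np.angle(ct))              # e^{i b_k(t)}
a_dot = np.gradient(a, dt, axis=0)
d_dot = np.gradient(ds, dt, axis=0)

# t_n(t) with the phase evolution of the adiabatic coefficients removed;
# for real eigenvectors the double sum factorizes into the terms below
s = np.einsum('tk,tk,tnk->tn', a_dot, phase, ds)
w = np.einsum('tk,tnk->tn', ct, d_dot)
tn = 2.0 * np.real(np.conj(cs) * (s + w))

T1 = 0.5 * dt * np.sum(np.abs(tn))             # variant 1
T2 = dt * np.sum(tn[:, 2])                     # variant 2, target |C>
print(f"T1 = {T1:.3f}, T2 = {T2:.3f}")
```

For a time-independent Hamiltonian the same code gives $T_1=T_2=0$ identically, since both $\dot{\tilde{a}}_k$ and $\dot{d}^{(k)}_n$ vanish.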
\section{Adiabatic excitation transport in molecular aggregates}
\label{molagg}
\subsection{Model}
\label{model}
We first briefly review the specific scenario of \rref{ritesh_adiabatic_molagg} as a setting where a measure of adiabaticity is desirable for physical analysis, yet difficult to obtain, since the dynamics is at best partially adiabatic and the quantum state usually involves a superposition of eigenstates.
\begin{figure}[htb]
\includegraphics[width=0.99\columnwidth]{Long_potential_v3}
\caption{ (a) Energy level schematic for a one-dimensional chain of $N$ molecules, with electronic ground state $\ket{g}$, excited state $\ket{e}$, dipole-dipole interaction $V_{dd}(X)$ and site energy $E_n$ of the $n$'th molecule. (b) Inter-molecular Morse potential for $\alpha=0.3$ \AA$^{-1}$ (blue dashed) and $\alpha=0.9$ \AA$^{-1}$ (red dot-dashed), and the strength of the dipole-dipole interactions $V_{nm}^{(dd)}$ (red solid line).
\label{fig_molagg_geometry}}
\end{figure}
To model a molecular aggregate, we consider $N$ monomers of some molecular dye with mass $M$, arranged in a one-dimensional (1D) chain along the $X$ direction, as sketched in \frefp{fig_molagg_geometry}{a}. The positions of the molecules are given by $\mathbf{X} = (X_1, X_2, \ldots, X_N)$, i.e., the $n$'th monomer is located at a definite, classical position $X_n$ and treated as a point particle. Adjacent monomers are assumed to bind to each other with a Morse-type potential
\begin{eqnarray}
\label{Morse_potential}
\sub V{mn}(\mathbf{X}) = D_e\Big[ e^{-2\alpha(|X_{mn}| - X_0)} - 2 e^{-\alpha(|X_{mn}| - X_0)} \Big],
\end{eqnarray}
where $D_e$ is the depth of the well, $|X_{mn}|=|X_n-X_m|$ is the separation of monomers $n$ and $m$ with equilibrium value $X_0$, and $\alpha$ controls the width of the binding potential, shown in \frefp{fig_molagg_geometry}{b}.
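As a quick numerical sanity check of the Morse form (illustrative Python with made-up parameter values, not those of the article), the potential indeed has its minimum $-D_e$ at the equilibrium separation $X_0$:

```python
import numpy as np

# Quick sanity check of the Morse form; parameter values are
# illustrative, not those used in the article.
D_e, alpha, X0 = 0.02, 0.3, 10.0   # well depth, width parameter, equilibrium

def morse(x):
    return D_e * (np.exp(-2 * alpha * (x - X0)) - 2 * np.exp(-alpha * (x - X0)))

xs = np.linspace(5.0, 40.0, 200001)
i_min = np.argmin(morse(xs))
print(xs[i_min], morse(xs[i_min]))   # minimum near X0, with depth -D_e
```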
Additionally, each monomer may be in an electronically excited state $\ket{e}$ or its ground state $\ket{g}$. Among the resultant many-body states, we restrict ourselves to the so-called single-exciton manifold; the state in which just monomer number $n$ is excited is denoted by $\ket{n}$. The excitation can then migrate to any other monomer via long-range dipole-dipole interactions. Altogether we thus have a classical Hamiltonian for molecular motion
\begin{eqnarray}\label{class_Hamiltonian}
\sub{H}{class} = \sum_{n=1}^{N} \frac{1}{2}M \dot{X}_n^2 + \sum_{n<m}\sub V{mn}(\mathbf{X}) ,
\end{eqnarray}
and a quantum mechanical one for excitation transport through dipole-dipole interactions.
\begin{eqnarray}\label{Hamiltonian}
\hat{H}(\mathbf{X}) = \sum_{n=1}^{N}E_{n} \ket{n}\bra{n} + \sum_{\stackrel{n\neq m}{n,m}}\frac{ \mu^2}{|X_{mn}|^3} \ket{n}\bra{m},
\end{eqnarray}
where $E_n$ is the electronic transition energy of the $n$'th monomer and $\mu$ is the transition dipole moment. We find the system dynamics in a quantum-classical approach, where the motion of the molecules is treated classically using Newton's equations,
\begin{eqnarray}
\label{newton_longit}
M \frac{\partial^2}{\partial t^2}{X}_n = - \frac{\partial}{\partial X_n} U_s(\textbf{X}) - \sum_m \frac{\partial}{\partial X_n}V_{mn}(\mathbf{X}).
\end{eqnarray}
Here $U_s(\textbf{X})$ are the potential energy surfaces defined by using the adiabatic basis $\ket{\varphi_s[\textbf{X}(t)]}$ as in \bref{instantSE}, i.e. solving
$H(\textbf{X}) \ket{\varphi_s[\textbf{X}(t)]}= U_s[\textbf{X}(t)] \ket{\varphi_s[\textbf{X}(t)]} $. The dynamics of excitation transport is obtained by writing the electronic aggregate state as $\ket{\Psi(t)}=\sum_n c_n(t)\ket{n}$ and using the Schr\"odinger equation,
\begin{eqnarray}
\label{TDSE_diabatic_basis}
i\hbar \frac{\partial}{\partial t} {c}_n = \sum_{m=1}^{N} H_{mn}[X_{mn}(t)] {c}_m.
\end{eqnarray}
Here $H_{mn}[X_{mn}(t)]$ is the matrix element $\bra{m}\hat{H}\ket{n}$ of the electronic Hamiltonian in \eref{Hamiltonian}.
We have used a similar model in \rref{ritesh_adiabatic_molagg} to show that thermal motion of molecules can enhance the transport of excitation in the presence of disorder, compared to the case where molecules are immobile. That research was motivated by earlier results proposing excitation transport due to adiabatic quantum state following in an ultra-cold atomic system \cite{wuster2010newton,mobius2011adiabatic}. However, in the more complex molecular setting, clearly tagging a contribution of adiabaticity to quantum transport is more challenging and shall be explored in the following. For these simulations and the following ones we have taken $\mu = 1.12$ a.u., and $M= 902330$ a.u. roughly matching carbonyl-bridged triaryl-amine (CBT) dyes \cite{saikin2017long}.
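The electronic Hamiltonian can be assembled directly from the monomer positions. The sketch below (illustrative Python; positions and site energies are made-up values, and the standard $1/R^3$ dipole-dipole scaling is assumed here) builds the single-exciton matrix and checks that it is Hermitian:

```python
import numpy as np

# Hedged sketch: single-exciton electronic Hamiltonian for an equally
# spaced chain. Positions and site energies are made-up values, and the
# standard 1/R^3 dipole-dipole scaling is assumed.
N = 5
mu = 1.12                 # transition dipole (a.u.), value quoted in the text
X = np.arange(N) * 10.0   # illustrative monomer positions (a.u.)
E = np.zeros(N)           # degenerate site energies for this sketch

H = np.diag(E)
for n in range(N):
    for m in range(N):
        if n != m:
            H[n, m] = mu**2 / abs(X[n] - X[m]) ** 3

print(np.allclose(H, H.T))   # the Hamiltonian is (real) symmetric
```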
\subsection{Simple Test cases}
\label{simple_tests}
We first investigate a few simple scenarios, intended to demonstrate that the measures in \sref{measure} give useful results in clear-cut cases, shown in
\fref{fig_trimer_cradles} for a trimer aggregate.
%
\begin{figure}[htb]
\includegraphics[width=0.99\columnwidth]{Adiabatic_transport_plot_v10}
\caption{Exciton transport in a trimer aggregate. The first row (1a-4a) shows the trajectories of individual molecules (white lines), with the excitation probabilities of each molecule shown by the color shading. The second row (1b-4b) shows the excitation probability for each molecule $p_n=|c_n|^2$ (diabatic populations) and the third row (1c-4c) that of each exciton state $\tilde{p}_n=|\tilde{c}_n|^2$ (adiabatic populations). The inset in panel (4c) shows a zoom onto one of these populations. Finally, the fourth row (1d-4d) shows the proposed adiabaticity measures \bref{totcrit_1} (solid blue) and \bref{totcrit_2} for target site $\#3$ (red dashed). The columns differ by initial state and parameters as discussed in the text.
\label{fig_trimer_cradles}}
\end{figure}
The first column (1a-1d) shows an immobile case with $M\rightarrow \infty$. At $t=0$ the excitation is initialized on site $\#1$. The state $\ket{1}$ is not an eigenstate, and population quickly reaches site $\#3$ through the resultant beating. Since in this case eigenstates do not evolve, our adiabaticity measures defined by \eref{totcrit_1} and \eref{totcrit_2} yield zero by construction, as can be seen in (1d).
In the second column (2a-2d), monomers are mobile and the excitation is initially shared between sites one and two, such that the initial electronic state is given by
%
\begin{eqnarray}\label{adiab_estate}
\ket{\psi(0)} = \frac{1}{\sqrt{2}} (\ket{n=1} + \ket{n=2}).
\end{eqnarray}
Our parameters are adjusted such that the excitation reaches the output site solely due to adiabatic quantum state following, as in \cite{wuster2010newton, mobius2011adiabatic}. This can be inferred from all population remaining constantly in the initially occupied eigenstate (2c). At the moment when the excitation has reached the output site with probability $p=1/2$, at about $t=0.5$ ps, the measures also reach $T_1=T_2=1/2$, indicating that transport has been entirely adiabatic.
For the third column (3a-3d), we give the second molecule a significant initial velocity, such that the quantum dynamics is no longer adiabatic. Hence we see in (3c) that the population in the initially occupied exciton state has dropped to $0.5$ by the time $t=0.01$ ps. The adiabaticity measures shown in (3d) are accordingly decreased by a factor of about $1/2$ compared to the ideal adiabatic transport in the second column.
For the examples discussed so far, the two measures $T_1$ and $T_2$ by and large agree. However, this is not the case in
the last column (4a-4d) of \fref{fig_trimer_cradles}. It shows transport where the molecules are allowed to oscillate around their equilibrium separation after being given random initial offsets and velocities from a thermal distribution at room temperature. The initial electronic state at $t=0$ is assumed to be the first site ($\#1$)
\begin{eqnarray}\label{single_site_inistate}
\ket{\psi(0)} =\ket{n=1}.
\end{eqnarray}
Adiabatic populations for this case show no significant net change over longer times, but exhibit fast small-amplitude oscillations, as seen in (4c) and its inset. Any change in site populations
due to motion must be periodic, due to the periodicity of the molecular trajectories shown in (4a). In this more involved case, our measure $T_1$ shows a slow steady increase, since both population increase and decrease on the target site contribute cumulatively. This problem is removed for measure $T_2$, as can be seen in (4d), which is thus more effective here in identifying long-term useful adiabatic contributions to transport.
\subsection{Thermal motion of molecules}
\label{complex_tests}
While the examples in \sref{simple_tests} were designed to demonstrate the basic functionalities of the measures introduced in \sref{measure} for simple cases, we now proceed to benchmark \bref{totcrit_1} and \bref{totcrit_2} in a more complex setting: energy transport in thermally agitated molecular aggregates.
For this, \fref{fig_five_cradles} shows the dynamics of excitation transport for molecular aggregates at temperature $T=300$ K and with increasing energy disorder from (1a) to (4a). Energy disorder arises due to the coupling of the monomers with the environment causing slightly different transition energy shifts as sketched in \fref{fig_molagg_geometry}. We assume the energy disorder is Gaussian distributed with a standard deviation $\sigma_E$,
\begin{eqnarray}\label{Energy_Distribution}
p_E(E_n - E_0) = \frac{1}{\sqrt{2\pi} \sigma_E} e^{-(E_n - E_0)^2/(2\sigma_E^2)},
\end{eqnarray}
where $E_n$ is the site energy of the $n$'th molecule defined in \eref{Hamiltonian} and $E_0$ is the unperturbed electronic transition energy of each molecule.
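Sampling site energies from this distribution is straightforward; the Python sketch below (with illustrative values) draws disordered energies and verifies the sample statistics:

```python
import numpy as np

# Illustrative sketch: draw disordered site energies E_n ~ N(E_0, sigma_E^2)
# and check the sample statistics; parameter values are arbitrary.
rng = np.random.default_rng(42)
E0 = 0.0          # unperturbed transition energy (units arbitrary here)
sigma_E = 300.0   # disorder strength, e.g. in cm^-1
E_n = rng.normal(E0, sigma_E, size=100000)

print(E_n.mean(), E_n.std())   # close to E0 and sigma_E
```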
\begin{figure}[htb]
\includegraphics[width=0.99\columnwidth]{Adiabatic_transport_plot_v9}
\caption{Exciton dynamics similar to \fref{fig_trimer_cradles} but for a larger system and with thermally induced motion.
The first row (1a-4a) shows the trajectories of individual molecules (white lines), with the excitation probabilities of each molecule shown by the color shading. The second row (1b-4b) shows the excitation probability on the output site $p_5=|c_5|^2$ (solid blue) and its cumulative maximum (red dashed). (1c-4c) Population of each adiabatic state $\tilde{p}_n=|\tilde{c}_n|^2$. (1d-4d) Adiabaticity measures \bref{totcrit_1} (solid blue) and \bref{totcrit_2} for target site $\#5$ (red dashed). The parameters ($\sigma_E$, $\alpha$) were ($150$ cm$^{-1}$, $0.5$ \AA$^{-1}$) for the first column, ($300$ cm$^{-1}$, $0.5$ \AA$^{-1}$) for the second, ($450$ cm$^{-1}$, $0.5$ \AA$^{-1}$) for the third and ($550$ cm$^{-1}$, $0.3$ \AA$^{-1}$) for the fourth.
\label{fig_five_cradles}}
\end{figure}
As in \sref{simple_tests}, the initial state of the excitation is given by \eref{single_site_inistate}. For the first column (1a-1d) in \fref{fig_five_cradles}, the disorder in energy is relatively small compared to the electronic dipole-dipole coupling. Due to the weak disorder, the excitation can reach the output site with high amplitude at early times, before motion has had a chance to impact the dynamics. In \rref{ritesh_adiabatic_molagg} we quantify transport efficiency through the maximum of the population on the output site (here 5) over the time of interest, shown as a red-dashed line in row (b). Probing the adiabaticity measures at the times where this maximum increases, only measure $T_2$ correctly reports a constantly low contribution from adiabaticity, while $T_1$ does not. The reason is as discussed for column 4 of \fref{fig_trimer_cradles}.
For slightly increased disorder, in column two, the adiabatic populations show some sharp changes indicating the onset of non-adiabaticity. However, this implies that the eigenstates are actually significantly changing in time, so the population that is adiabatically retained will contribute to adiabatic transport. This is now heralded by a significantly larger measure $T_2$
in \fref{fig_five_cradles} (2d) indicating that a fraction of transport to the site $\#5$ is adiabatic.
For the example in column 3, significant adiabatic transport can now be inferred directly from panels (3a) and (3c), since exciton populations remain fairly adiabatic while almost the complete site population is transferred from \#1 to \#3. This leads to stepwise increases in measure $T_1$, while not impacting $T_2$, since the latter is based on site \#5, which was not involved.
Finally, the fourth column shows a clear-cut case where excitation is transported from site 1 directly to 5, since energy disorder has rendered all intervening sites off-resonant, while the adiabatic population remains constant. Consequently, this shows up as a nearly identical step-like increase in both measures.
We have seen that both measures give adequate results in certain regions of parameter space; however, care has to be taken with $T_1(t)$ in \eref{totcrit_1} in cases where it accumulates fast in- and out-transfer of population among basis states that does not yield a significant net transition when averaged over longer times. This is alleviated by measure $T_2(t)$ in \eref{totcrit_2}, at the expense of being sensitive only to transitions into one specific state.
\section{Conclusions}
\label{conclusions}
We have constructed a measure that is able to quantify the extent to which adiabatic following of the eigenstates of a quantum system is the root cause of quantum transitions in a basis other than the eigenbasis. The basic functionality of the measure was first demonstrated with a few simple examples where adiabaticity is either not at all related to transitions or completely responsible for them. We then explored its behaviour in more complex settings, the main feature of which was that transitions due to adiabatic quantum state following and due to beating between eigenstates happen concurrently. These examples demonstrate that the proposed measures can at least give a relative indication of the importance of adiabaticity. This can then be useful to assess whether adiabaticity is significantly impacting some quantum dynamics of interest in a desirable way, in which case known results regarding adiabaticity can be applied in order to enhance its effect.
\acknowledgements
We thank the Max-Planck society for financial support under the MPG-IISER partner group program. The support and the resources provided by Centre for Development of Advanced Computing (C-DAC) and the National Supercomputing Mission (NSM), Government of India are gratefully acknowledged.
RP is grateful to the Council of Scientific and Industrial Research (CSIR), India, for a Shyama Prasad Mukherjee (SPM) fellowship for pursuing the Ph.D (File No. SPM-07/1020(0304)/2019-EMR-I).
\providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section{Introduction}\label{introduction}}
The emerging field of Graph Signal Processing (GSP) aims to bridge the
gap between signal processing and spectral graph theory. One of the
objectives is to generalize fundamental analysis operations from regular
grid signals to irregular structures in the form of graphs. There is an
abundant literature on GSP, in particular we refer the reader to
\citet{shuman2013emerging} and \citet{ortega2018graph} for an
introduction to this field and an overview of recent developments,
challenges and applications. GSP has also given rise to numerous
applications in machine/deep learning: convolutional neural networks
(CNN) on graphs \citet{bruna2013spectral}, \citet{henaff2015deep},
\citet{defferrard2016convolutional}, semi-supervised classification with
graph CNN \citet{kipf2016semi}, \citet{hamilton2017inductive}, community
detection \citet{tremblay2014graph}, to name just a few.
Different software programs exist for processing signals on graphs, in
different languages. The Graph Signal Processing toolbox (GSPbox) is an
easy-to-use Matlab toolbox that performs a wide variety of operations on
graphs. This toolbox was ported to Python as the PyGSP
\citet{perraudin2014gspbox}. There is also another Matlab toolbox, the
Spectral Graph Wavelet Transform (SGWT) toolbox, dedicated to the
implementation of the SGWT developed in \citet{hammond2011wavelets}.
However, to our knowledge, there are not yet any tools dedicated to GSP
in \proglang{R}. A first version of the \pkg{gasper} package is
available online\protect\rmarkdownfootnote{https://github.com/fabnavarro/gasper}. In
particular, it includes the methodology and
codes\protect\rmarkdownfootnote{https://github.com/fabnavarro/SGWT-SURE} developed in
\citet{de2019data} and provides an interface to the SuiteSparse Matrix
Collection \citet{davis2011university}.
\hypertarget{graphs-collection-and-visualization}{%
\section{Graphs Collection and
Visualization}\label{graphs-collection-and-visualization}}
A certain number of graphs are present in the package. They are stored
as an Rdata file which contains a list consisting of the graph's weight
matrix \(W\) (in the form of a sparse matrix denoted by \texttt{sA}) and
the coordinates associated with the graph (if it has any).
An interface is also provided. It allows retrieving the matrices
related to many problems provided by the SuiteSparse Matrix Collection
(formerly known as the University of Florida Sparse Matrix Collection)
\citet{davis2011university}. This collection is a large and actively
growing set of sparse matrices that arise in real applications (as
structural engineering, computational fluid dynamics, computer
graphics/vision, optimization, economic and financial modeling,
mathematics and statistics, to name just a few). For more details see
\url{https://sparse.tamu.edu/}.
The \texttt{download\_graph} function allows downloading a graph from
this collection, based on the name of the graph and the name of the
group that provides it. An example is given below:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{matrixname <-}\StringTok{ "grid1"}
\NormalTok{groupname <-}\StringTok{ "AG-Monien"}
\KeywordTok{download_graph}\NormalTok{(matrixname, groupname)}
\KeywordTok{attributes}\NormalTok{(grid1)}
\CommentTok{#> $names}
\CommentTok{#> [1] "sA" "xy" "dim" "info"}
\end{Highlighting}
\end{Shaded}
The output is stored (in a temporary folder) as a list composed of:
\begin{itemize}
\tightlist
\item
``sA'' the corresponding sparse matrix (in compressed sparse column
format);
\end{itemize}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{str}\NormalTok{(grid1}\OperatorTok{$}\NormalTok{sA)}
\CommentTok{#> Formal class 'dsCMatrix' [package "Matrix"] with 7 slots}
\CommentTok{#> ..@ i : int [1:476] 173 174 176 70 71 74 74 75 77 77 ...}
\CommentTok{#> ..@ p : int [1:253] 0 3 6 9 12 15 18 21 24 27 ...}
\CommentTok{#> ..@ Dim : int [1:2] 252 252}
\CommentTok{#> ..@ Dimnames:List of 2}
\CommentTok{#> .. ..$ : NULL}
\CommentTok{#> .. ..$ : NULL}
\CommentTok{#> ..@ x : num [1:476] 1 1 1 1 1 1 1 1 1 1 ...}
\CommentTok{#> ..@ uplo : chr "L"}
\CommentTok{#> ..@ factors : list()}
\end{Highlighting}
\end{Shaded}
\begin{itemize}
\tightlist
\item
possibly coordinates ``xy'' (stored in a \texttt{data.frame});
\end{itemize}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{head}\NormalTok{(grid1}\OperatorTok{$}\NormalTok{xy, }\DecValTok{3}\NormalTok{)}
\CommentTok{#> x y}
\CommentTok{#> [1,] 0.00000 0.00000}
\CommentTok{#> [2,] 2.88763 3.85355}
\CommentTok{#> [3,] 3.14645 4.11237}
\end{Highlighting}
\end{Shaded}
\begin{itemize}
\tightlist
\item
``dim'' the numbers of rows, columns and numerically nonzero elements
and
\end{itemize}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{grid1}\OperatorTok{$}\NormalTok{dim}
\CommentTok{#> NumRows NumCols NonZeros}
\CommentTok{#> 1 252 252 476}
\end{Highlighting}
\end{Shaded}
\begin{itemize}
\tightlist
\item
``info'' information about the matrix, which can be displayed via
\texttt{file.show(grid1\$info)} for example, or in the console:
\end{itemize}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{cat}\NormalTok{(}\KeywordTok{readLines}\NormalTok{(grid1}\OperatorTok{$}\NormalTok{info, }\DataTypeTok{n=}\DecValTok{14}\NormalTok{), }\DataTypeTok{sep =} \StringTok{"}\CharTok{\textbackslash{}n}\StringTok{"}\NormalTok{)}
\CommentTok{#> [matrix description output omitted]}
\end{Highlighting}
\end{Shaded}
The package also allows plotting a (planar) graph using the function
\texttt{plot\_graph}. It also contains a function, \texttt{plot\_signal},
to plot signals defined on top of the graph.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{f <-}\StringTok{ }\KeywordTok{rnorm}\NormalTok{(}\KeywordTok{nrow}\NormalTok{(grid1}\OperatorTok{$}\NormalTok{sA))}
\KeywordTok{plot_graph}\NormalTok{(grid1)}
\KeywordTok{plot_signal}\NormalTok{(grid1, f, }\DataTypeTok{size =} \DecValTok{2}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{center}\includegraphics{gasper_vignette_files/figure-latex/unnamed-chunk-6-1} \includegraphics{gasper_vignette_files/figure-latex/unnamed-chunk-6-2} \end{center}
\hypertarget{example-of-application-to-denoising}{%
\section{Example of application to
denoising}\label{example-of-application-to-denoising}}
We give an example of an application to the denoising of a
noisy signal \(f\in\mathbb{R}^V\) defined on a graph \(G\) with set of
vertices \(V\). More precisely, the (unnormalized) graph Laplacian
matrix \(\mathcal L\in\mathbb{R}^{V\times V}\) associated with \(G\) is the symmetric
matrix defined as \(\mathcal L=D - W\), where \(W\) is the matrix of weights
with coefficients \((w_{ij})_{i,j\in V}\), and \(D\) the diagonal
matrix with diagonal coefficients \(D_{ii}= \sum_{j\in V} w_{ij}\). A
signal \(f\) on the graph \(G\) is a function \(f:V\rightarrow \mathbb{R}\).
The degradation model can be written as \[
\tilde f = f + \xi,
\] where \(\xi\sim\mathcal{N}(0,\sigma^2)\). The purpose of denoising is
to build an estimator \(\hat f\) of \(f\) that depends only on
\(\tilde f\).
A simple way to construct an effective non-linear estimator is obtained
by thresholding the SGWT coefficients of \(\tilde f\) on a frame (see
\citet{hammond2011wavelets} for details about the SGWT).
A general thresholding operator \(\tau\) with threshold parameter
\(t\geq 0\) applied to some signal \(f\) is defined as
\begin{equation}\label{eq:tau}
\tau(x,t)=x\max \{ 1-t^{\beta}|x|^{-\beta},0 \},
\end{equation} with \(\beta \geq 1\). The most popular choices are the
soft thresholding (\(\beta=1\)), the James-Stein thresholding
(\(\beta=2\)) and the hard thresholding (\(\beta=\infty\)).
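For illustration, this thresholding family is easy to transcribe; the following Python sketch (mirroring the role of the package's \texttt{beta\_thresh}, but written independently for this illustration) implements the soft, James-Stein and hard cases:

```python
import numpy as np

# Python transcription (illustrative, independent of the package code) of
# the thresholding family tau(x, t): beta = 1 soft, beta = 2 James-Stein,
# beta = inf hard. Finite beta assumes nonzero inputs x.
def beta_thresh(x, t, beta):
    x = np.asarray(x, dtype=float)
    if np.isinf(beta):
        return np.where(np.abs(x) > t, x, 0.0)                  # hard
    return x * np.maximum(1.0 - (t / np.abs(x)) ** beta, 0.0)

x = np.array([-3.0, -0.5, 0.5, 3.0])
print(beta_thresh(x, 1.0, 1))        # soft thresholding
print(beta_thresh(x, 1.0, 2))        # James-Stein
print(beta_thresh(x, 1.0, np.inf))   # hard thresholding
```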
Given the Laplacian and a given frame, denoising in this framework can
be summarized as follows:
\begin{itemize}
\item
Analysis: compute the SGWT transform \(\mathcal{W} \tilde f\);
\item
Thresholding: apply a given thresholding operator to the coefficients
\(\mathcal{W} \tilde f\);
\item
Synthesis: apply the inverse SGWT transform to obtain an estimation
\(\hat f\) of the original signal.
\end{itemize}
Each of these steps can be performed via one of the functions
\texttt{analysis}, \texttt{synthesis}, \texttt{beta\_thresh}. The Laplacian
is given by the function \texttt{laplacian\_mat}. The
\texttt{tight\_frame} function allows the construction of a tight frame
based on \citet{gobel2018construction} and \citet{coulhon2012heat}. In
order to select a threshold value, we consider the method developed in
\citet{de2019data} which consists in determining the threshold that
minimizes the Stein unbiased risk estimator (SURE) in a graph setting
(see \citet{de2019data} for more details).
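To convey the idea behind the SURE selection, the following Python sketch evaluates an unbiased risk estimate on a threshold grid and picks its minimizer. It is hedged: this is the classical orthonormal-case SURE formula for soft thresholding, not the graph-frame generalization implemented in the package, and all values are illustrative.

```python
import numpy as np

# Hedged sketch: classical SURE for soft thresholding of coefficients
# y_i = theta_i + N(0, sigma^2) in the orthonormal case (the package
# implements a graph/frame generalization; this only conveys the idea).
def sure_soft(y, t, sigma):
    n = y.size
    return (-n * sigma**2
            + np.sum(np.minimum(np.abs(y), t) ** 2)
            + 2 * sigma**2 * np.sum(np.abs(y) > t))

rng = np.random.default_rng(1)
sigma = 1.0
theta = np.concatenate([np.full(20, 5.0), np.zeros(980)])  # sparse signal
y = theta + rng.normal(0.0, sigma, theta.size)

ts = np.linspace(0.0, 5.0, 201)
risks = np.array([sure_soft(y, t, sigma) for t in ts])
t_star = ts[np.argmin(risks)]
print(t_star)   # a moderate threshold separating signal from noise
```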
We give an illustrative example on the \texttt{grid1} graph from the
previous section. We start by computing the Laplacian matrix (from
the adjacency matrix), its eigendecomposition and the frame
coefficients.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{A <-}\StringTok{ }\NormalTok{grid1}\OperatorTok{$}\NormalTok{sA}
\NormalTok{L <-}\StringTok{ }\KeywordTok{laplacian_mat}\NormalTok{(A)}
\NormalTok{val1 <-}\StringTok{ }\KeywordTok{eigensort}\NormalTok{(L)}
\NormalTok{evalues <-}\StringTok{ }\NormalTok{val1}\OperatorTok{$}\NormalTok{evalues}
\NormalTok{evectors <-}\StringTok{ }\NormalTok{val1}\OperatorTok{$}\NormalTok{evectors}
\CommentTok{#- largest eigenvalue}
\NormalTok{lmax <-}\StringTok{ }\KeywordTok{max}\NormalTok{(evalues)}
\CommentTok{#- parameter that controls the scale number}
\NormalTok{b <-}\StringTok{ }\DecValTok{2}
\NormalTok{tf <-}\StringTok{ }\KeywordTok{tight_frame}\NormalTok{(evalues, evectors, }\DataTypeTok{b=}\NormalTok{b)}
\end{Highlighting}
\end{Shaded}
Wavelet frames can be seen as special filter banks. The tight-frame
considered here is a finite collection \((\psi_j)_{j=0, \ldots,J}\)
forming a finite partition of unity on the compact \([0,\lambda_1]\),
where \(\lambda_1\) is the largest eigenvalue of the Laplacian spectrum
\(\mathrm{sp}(\mathcal L)\). This partition is defined as follows: let
\(\omega : \mathbb R^+ \rightarrow [0,1]\) be some function with support
in \([0,1]\), satisfying \(\omega \equiv 1\) on \([0,b^{-1}]\), for some
\(b>1\), and set \begin{equation*}
\psi_0(x)=\omega(x)~~\textrm{and}~~\psi_j(x)=\omega(b^{-j}x)-\omega(b^{-j+1}x)~~\textrm{for}~~j=1, \ldots, J,~~\textrm{where}~~J= \left \lfloor \frac{\log \lambda_1}{\log b} \right \rfloor + 2.
\end{equation*} Thanks to Parseval's identity, the following set of
vectors is a tight frame: \[
\mathfrak F = \left \{ \sqrt{\psi_j}(\mathcal L)\delta_i, j=0, \ldots, J, i \in V \right \}.
\] The \texttt{plot\_filter} function plots the elements (filters) of
this partition.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{plot_filter}\NormalTok{(lmax,b)}
\end{Highlighting}
\end{Shaded}
\begin{center}\includegraphics{gasper_vignette_files/figure-latex/unnamed-chunk-8-1} \end{center}
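To make the construction concrete, the following sketch (an illustration only; it assumes a piecewise-linear window \(\omega\), which need not match the one used by \texttt{tight\_frame}) checks numerically that the \(\psi_j\) form a partition of unity on \([0,\lambda_1]\):

```python
import numpy as np

b, lmax = 2.0, 4.0  # scale parameter and an assumed largest eigenvalue
J = int(np.floor(np.log(lmax) / np.log(b))) + 2

def omega(x):
    """Assumed window: 1 on [0, 1/b], linear decay to 0 on [1/b, 1], 0 beyond."""
    x = np.asarray(x, dtype=float)
    return np.clip((1.0 - x) / (1.0 - 1.0 / b), 0.0, 1.0)

def psi(j, x):
    # psi_0(x) = omega(x); psi_j(x) = omega(b^-j x) - omega(b^-(j-1) x)
    return omega(x) if j == 0 else omega(x / b**j) - omega(x / b**(j - 1))

x = np.linspace(0.0, lmax, 201)
total = sum(psi(j, x) for j in range(J + 1))
# the sum telescopes to omega(b^-J x), which equals 1 on all of [0, lmax]
```

The telescoping structure is the point: the choice \(J= \lfloor \log \lambda_1 / \log b \rfloor + 2\) guarantees \(b^{-J}\lambda_1 \le b^{-1}\), so the sum is identically one on the spectrum.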
The SGWT of a signal \(f \in \mathbb R^V\) is given by \[
\mathcal{W} f = \left ( \sqrt{\psi_0}(\mathcal L)f^{T},\ldots,\sqrt{\psi_J}(\mathcal L)f^{T} \right )^{T} \in \mathbb R^{n(J+1)}.
\] The adjoint linear transformation \(\mathcal{W}^\ast\) of \(\mathcal{W}\) is: \[
\mathcal{W}^\ast \left (\eta_{0}^{T}, \eta_{1}^{T}, \ldots, \eta_{J}^T \right )^{T} = \sum_{j\geq 0} \sqrt{\psi_j}(\mathcal L)\eta_{j}.
\] The tightness of the underlying frame implies that
\(\mathcal{W}^\ast \mathcal{W}=\mathrm{Id}_{\mathbb R^V}\) so that a signal
\(f \in \mathbb R^V\) can be recovered by applying \(\mathcal{W}^\ast\) to its
wavelet coefficients
\(((\mathcal{W} f)_i)_{i=1, \ldots, n(J+1)} \in \mathbb R^{n(J+1)}\).
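The tightness property can be checked numerically. The sketch below (our own illustration, not the package's implementation, using the same assumed piecewise-linear window \(\omega\)) builds the filters \(\sqrt{\psi_j}(\mathcal L)\) for a small path graph and verifies that the adjoint inverts the analysis operator:

```python
import numpy as np

# Path graph on 4 nodes: adjacency matrix and combinatorial Laplacian
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)              # eigenvalues ascending, orthonormal U
lmax, b = lam[-1], 2.0
J = int(np.floor(np.log(lmax) / np.log(b))) + 2

def omega(x):
    return np.clip((1.0 - np.asarray(x, float)) / (1.0 - 1.0 / b), 0.0, 1.0)

def psi(j, x):
    return omega(x) if j == 0 else omega(x / b**j) - omega(x / b**(j - 1))

# sqrt(psi_j)(L) acts by filtering on the Laplacian spectrum
filt = [U @ np.diag(np.sqrt(np.maximum(psi(j, lam), 0.0))) @ U.T
        for j in range(J + 1)]

def analysis(f):          # W f, blocks stacked over scales: shape (n * (J + 1),)
    return np.concatenate([M @ f for M in filt])

def synthesis(wc):        # adjoint W*: sum of filtered coefficient blocks
    n = L.shape[0]
    return sum(M @ wc[j * n:(j + 1) * n] for j, M in enumerate(filt))

f = np.array([1.0, -2.0, 0.5, 3.0])
# tightness: W* W = Id, so synthesis(analysis(f)) recovers f exactly
```

Since \(\sum_j \psi_j \equiv 1\) on the spectrum, \(\sum_j \sqrt{\psi_j}(\mathcal L)^2 = \mathrm{Id}\), which is exactly the identity \(\mathcal{W}^\ast \mathcal{W}=\mathrm{Id}\) stated above.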
Then, noisy observations \(\tilde f\) are generated from a random signal
\(f\).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{n <-}\StringTok{ }\KeywordTok{nrow}\NormalTok{(L)}
\NormalTok{f <-}\StringTok{ }\KeywordTok{randsignal}\NormalTok{(}\FloatTok{0.01}\NormalTok{, }\DecValTok{3}\NormalTok{, A)}
\NormalTok{sigma <-}\StringTok{ }\FloatTok{0.01}
\NormalTok{noise <-}\StringTok{ }\KeywordTok{rnorm}\NormalTok{(n, }\DataTypeTok{sd =}\NormalTok{ sigma)}
\NormalTok{tilde_f <-}\StringTok{ }\NormalTok{f }\OperatorTok{+}\StringTok{ }\NormalTok{noise}
\end{Highlighting}
\end{Shaded}
Below is a graphical representation of the original signal and its noisy
version.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{plot_signal}\NormalTok{(grid1, f, }\DataTypeTok{size =} \DecValTok{2}\NormalTok{)}
\KeywordTok{plot_signal}\NormalTok{(grid1, tilde_f, }\DataTypeTok{size =} \DecValTok{2}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{center}\includegraphics{gasper_vignette_files/figure-latex/unnamed-chunk-10-1} \includegraphics{gasper_vignette_files/figure-latex/unnamed-chunk-10-2} \end{center}
We compute the SGWT transforms \(\mathcal{W} \tilde f\) and \(\mathcal{W} f\).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{wcn <-}\StringTok{ }\KeywordTok{analysis}\NormalTok{(tilde_f,tf)}
\NormalTok{wcf <-}\StringTok{ }\KeywordTok{analysis}\NormalTok{(f,tf)}
\end{Highlighting}
\end{Shaded}
An alternative that avoids the frame calculation is the
\texttt{forward\_sgwt} function, which provides a fast forward SGWT. For
example:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{wcf <-}\StringTok{ }\KeywordTok{forward_sgwt}\NormalTok{(f, evalues, evectors, }\DataTypeTok{b=}\NormalTok{b)}
\end{Highlighting}
\end{Shaded}
The optimal threshold is then determined by minimizing the SURE (using
Donoho and Johnstone's trick \citep{donoho1995adapting}, which remains
valid here; see \citet{de2019data}). More precisely, the SURE for a
general thresholding process \(h\) is given by the following identity:
\begin{equation}
\mathbf{SURE}(h)=-n \sigma^2 + \|h(\widetilde F)-\widetilde F\|^2 + 2 \sum_{i,j=1}^{n(J+1)} \gamma_{i,j}^2 \partial_j h_i(\widetilde F),
\end{equation} where \(\gamma_{i,j}^2=\sigma^2(\mathcal{W} \mathcal{W}^\ast)_{i,j}\),
which can be computed from the frame (or estimated via Monte-Carlo
simulation). The \texttt{SUREthresh} and \texttt{SURE\_MSEthresh}
functions evaluate the SURE (in a global fashion) for the general
thresholding operator \(\tau\) \eqref{eq:tau} (the parameter \texttt{b}
stands for \(\beta\) in the definition). These functions provide two
different ways of applying the threshold, ``uniform'' and ``dependent''
(\emph{i.e.}, the same threshold for each coefficient vs a threshold
normalized by the variance of each coefficient). The second approach
generally provides better results (especially when the weights have been
calculated via the frame). A comparative example of these two approaches
is given below (with \(\beta=2\) James-Stein attenuation threshold).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{diagWWt <-}\StringTok{ }\KeywordTok{colSums}\NormalTok{(}\KeywordTok{t}\NormalTok{(tf)}\OperatorTok{^}\DecValTok{2}\NormalTok{)}
\NormalTok{thresh <-}\StringTok{ }\KeywordTok{sort}\NormalTok{(}\KeywordTok{abs}\NormalTok{(wcn))}
\NormalTok{opt_thresh_d <-}\StringTok{ }\KeywordTok{SURE_MSEthresh}\NormalTok{(wcn, }
\NormalTok{ wcf, }
\NormalTok{ thresh, }
\NormalTok{ diagWWt, }
\DataTypeTok{b=}\DecValTok{2}\NormalTok{, }
\NormalTok{ sigma, }
\OtherTok{NA}\NormalTok{,}
\DataTypeTok{policy =} \StringTok{"dependent"}\NormalTok{,}
\DataTypeTok{keepwc =} \OtherTok{TRUE}\NormalTok{)}
\NormalTok{opt_thresh_u <-}\StringTok{ }\KeywordTok{SURE_MSEthresh}\NormalTok{(wcn, }
\NormalTok{ wcf, }
\NormalTok{ thresh, }
\NormalTok{ diagWWt, }
\DataTypeTok{b=}\DecValTok{2}\NormalTok{, }
\NormalTok{ sigma, }
\OtherTok{NA}\NormalTok{,}
\DataTypeTok{policy =} \StringTok{"uniform"}\NormalTok{,}
\DataTypeTok{keepwc =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
We can plot the MSE risks and their SURE estimates as a function of the
threshold parameter (assuming that \(\sigma\) is known).
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{plot}\NormalTok{(thresh, opt_thresh_u}\OperatorTok{$}\NormalTok{res}\OperatorTok{$}\NormalTok{MSE,}
\DataTypeTok{type=}\StringTok{"l"}\NormalTok{, }\DataTypeTok{xlab =} \StringTok{"t"}\NormalTok{, }\DataTypeTok{ylab =} \StringTok{"risk"}\NormalTok{, }\DataTypeTok{log=}\StringTok{"x"}\NormalTok{)}
\KeywordTok{lines}\NormalTok{(thresh, opt_thresh_u}\OperatorTok{$}\NormalTok{res}\OperatorTok{$}\NormalTok{SURE}\OperatorTok{-}\NormalTok{n}\OperatorTok{*}\NormalTok{sigma}\OperatorTok{^}\DecValTok{2}\NormalTok{, }\DataTypeTok{col=}\StringTok{"red"}\NormalTok{)}
\KeywordTok{lines}\NormalTok{(thresh, opt_thresh_d}\OperatorTok{$}\NormalTok{res}\OperatorTok{$}\NormalTok{MSE, }\DataTypeTok{lty=}\DecValTok{2}\NormalTok{)}
\KeywordTok{lines}\NormalTok{(thresh, opt_thresh_d}\OperatorTok{$}\NormalTok{res}\OperatorTok{$}\NormalTok{SURE}\OperatorTok{-}\NormalTok{n}\OperatorTok{*}\NormalTok{sigma}\OperatorTok{^}\DecValTok{2}\NormalTok{, }\DataTypeTok{col=}\StringTok{"red"}\NormalTok{, }\DataTypeTok{lty=}\DecValTok{2}\NormalTok{)}
\KeywordTok{legend}\NormalTok{(}\StringTok{"topleft"}\NormalTok{, }\DataTypeTok{legend=}\KeywordTok{c}\NormalTok{(}\StringTok{"MSE_u"}\NormalTok{, }\StringTok{"SURE_u"}\NormalTok{,}
\StringTok{"MSE_d"}\NormalTok{, }\StringTok{"SURE_d"}\NormalTok{),}
\DataTypeTok{col=}\KeywordTok{rep}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\StringTok{"black"}\NormalTok{, }\StringTok{"red"}\NormalTok{), }\DecValTok{2}\NormalTok{), }
\DataTypeTok{lty=}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{2}\NormalTok{), }\DataTypeTok{cex =} \DecValTok{1}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{center}\includegraphics{gasper_vignette_files/figure-latex/unnamed-chunk-14-1} \end{center}
Finally, the synthesis step yields the resulting estimators
of \(f\), \emph{i.e.}, the ones that minimize the unknown MSE risks and
the ones that minimize the SUREs.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{wc_oracle_u <-}\StringTok{ }\NormalTok{opt_thresh_u}\OperatorTok{$}\NormalTok{wc[, opt_thresh_u}\OperatorTok{$}\NormalTok{min[}\DecValTok{1}\NormalTok{]]}
\NormalTok{wc_oracle_d <-}\StringTok{ }\NormalTok{opt_thresh_d}\OperatorTok{$}\NormalTok{wc[, opt_thresh_d}\OperatorTok{$}\NormalTok{min[}\DecValTok{1}\NormalTok{]]}
\NormalTok{wc_SURE_u <-}\StringTok{ }\NormalTok{opt_thresh_u}\OperatorTok{$}\NormalTok{wc[, opt_thresh_u}\OperatorTok{$}\NormalTok{min[}\DecValTok{2}\NormalTok{]]}
\NormalTok{wc_SURE_d <-}\StringTok{ }\NormalTok{opt_thresh_d}\OperatorTok{$}\NormalTok{wc[, opt_thresh_d}\OperatorTok{$}\NormalTok{min[}\DecValTok{2}\NormalTok{]]}
\NormalTok{hatf_oracle_u <-}\StringTok{ }\KeywordTok{synthesis}\NormalTok{(wc_oracle_u, tf)}
\NormalTok{hatf_oracle_d <-}\StringTok{ }\KeywordTok{synthesis}\NormalTok{(wc_oracle_d, tf)}
\NormalTok{hatf_SURE_u <-}\StringTok{ }\KeywordTok{synthesis}\NormalTok{(wc_SURE_u, tf)}
\NormalTok{hatf_SURE_d <-}\StringTok{ }\KeywordTok{synthesis}\NormalTok{(wc_SURE_d, tf)}
\NormalTok{res <-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\StringTok{"Input_SNR"}\NormalTok{=}\KeywordTok{round}\NormalTok{(}\KeywordTok{SNR}\NormalTok{(f,tilde_f),}\DecValTok{2}\NormalTok{),}
\StringTok{"MSE_u"}\NormalTok{=}\KeywordTok{round}\NormalTok{(}\KeywordTok{SNR}\NormalTok{(f,hatf_oracle_u),}\DecValTok{2}\NormalTok{),}
\StringTok{"SURE_u"}\NormalTok{=}\KeywordTok{round}\NormalTok{(}\KeywordTok{SNR}\NormalTok{(f,hatf_SURE_u),}\DecValTok{2}\NormalTok{),}
\StringTok{"MSE_d"}\NormalTok{=}\KeywordTok{round}\NormalTok{(}\KeywordTok{SNR}\NormalTok{(f,hatf_oracle_d),}\DecValTok{2}\NormalTok{),}
\StringTok{"SURE_d"}\NormalTok{=}\KeywordTok{round}\NormalTok{(}\KeywordTok{SNR}\NormalTok{(f,hatf_SURE_d),}\DecValTok{2}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
\begin{longtable}[]{@{}rrrrr@{}}
\caption{Uniform vs Dependent}\tabularnewline
\toprule
Input\_SNR & MSE\_u & SURE\_u & MSE\_d & SURE\_d\tabularnewline
\midrule
\endfirsthead
\toprule
Input\_SNR & MSE\_u & SURE\_u & MSE\_d & SURE\_d\tabularnewline
\midrule
\endhead
8.24 & 12.55 & 12.15 & 14.38 & 14.38\tabularnewline
\bottomrule
\end{longtable}
It can be seen from Table 1 that in both cases SURE provides a good
estimator of the MSE, and therefore the resulting estimators have
performances close (in terms of SNR) to those obtained by minimizing the
unknown risk.
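The unbiasedness behind this observation can be illustrated outside the graph setting as well. The toy sketch below (our own example, assuming white noise so that \(\gamma_{i,j}^2=\sigma^2\delta_{ij}\), with soft thresholding) computes the MSE and SURE curves for a sparse signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 400, 1.0
f = np.where(rng.random(n) < 0.1, 5.0, 0.0)   # sparse "true" signal
y = f + sigma * rng.normal(size=n)            # noisy observations

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft(y, t, sigma):
    # SURE identity with gamma_ij^2 = sigma^2 * delta_ij:
    # -n sigma^2 + ||h(y) - y||^2 + 2 sigma^2 * #{|y_i| > t}
    h = soft(y, t)
    return (-n * sigma**2 + np.sum((h - y) ** 2)
            + 2 * sigma**2 * np.sum(np.abs(y) > t))

ts = np.linspace(0.0, 5.0, 101)
mse = [np.sum((soft(y, t) - f) ** 2) for t in ts]
sure = [sure_soft(y, t, sigma) for t in ts]
# SURE is an unbiased estimate of the MSE, so the two curves track each other
```

At \(t=0\) the estimator is the identity, and the SURE reduces to \(n\sigma^2\), the expected MSE of the raw observations.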
Equivalently, the estimators can be obtained via the inverse SGWT
given by the function \texttt{inverse\_sgwt}. For example:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{hatf_oracle_u <-}\StringTok{ }\KeywordTok{inverse_sgwt}\NormalTok{(wc_oracle_u,}
\NormalTok{ evalues, evectors, b)}
\end{Highlighting}
\end{Shaded}
Or, if the coefficients have not been stored for each threshold value
(\emph{i.e.}, with the argument ``keepwc=FALSE'' when calling
\texttt{SUREthresh}), they can be recomputed with the thresholding function
\texttt{betathresh}, \emph{e.g.},
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{wc_oracle_u <-}\StringTok{ }\KeywordTok{betathresh}\NormalTok{(wcn, }
\NormalTok{ thresh[opt_thresh_u}\OperatorTok{$}\NormalTok{min[[}\DecValTok{1}\NormalTok{]]], }\DecValTok{2}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\renewcommand\refname{Bibliography}
\section{Introduction}
Human pose estimation\cite{andriluka20142d} is a fundamental yet challenging computer vision problem, that aims to localize keypoints (human body joints or parts). It is the basis of other related tasks and various downstream vision applications, including video pose estimation\cite{xiaohan2015joint}, tracking\cite{cho2013adaptive,xiao2018simple} and human action recognition \cite{liang2014expressive,wang2013approach,yan2018spatial}.
This paper addresses 2D pose estimation, detecting the spatial location (i.e., the 2D coordinate) of keypoints for persons in a top-down manner.
Keypoint localization is a very challenging task, even for humans. It is difficult to locate keypoint coordinates precisely, since the variation of clothing, the occlusion between limbs, the deformation of human joints under different poses, and the complex unconstrained background all affect keypoint recognition and localization \cite{zhang2019distribution}.
\begin{figure} [t]
\centering
\includegraphics[height=0.5\textwidth]{figures/intro.pdf}
\caption{\textbf{Example of 2D pose estimation}. The green points and lines indicate keypoints and their connections that are correctly predicted, while the red ones indicate incorrect predictions.
We have observed two important characteristics of keypoint localization: 1) different features and processes are preferred for rough and accurate localization, 2) relationship between keypoints should be considered.}
\label{fig:intro}
\end{figure}
Most existing state-of-the-art methods use CNNs to get the heatmap of each keypoint
\cite{chen2014articulated,gkioxari2016chained,wei2016convolutional,chu2016crf,chu2017multi,yang2016end,chu2016structured,sun2017compositional,yang2017pyramid,xiao2018simple,ning2017tmm,tang2018deeply,chen2018cascaded,ke2018multi,liu2018cascaded,su2019multi,sun2019deep}. Then the heatmap will be directly decoded to keypoint coordinates.
However, these approaches do not take into account two important characteristics of human pose estimation: 1) different features and processes are preferred for rough and accurate localization, 2) relationship between keypoints should be considered.
First, humans perform keypoint localization in a two-step manner\cite{fieraru2018learning}, \cite{bulat2016human}, \cite{belagiannis2015robust}, \cite{moon2019posefix}. For example, for the blue point in Fig.~\ref{fig:intro}(a), we first perform a rough localization based on context information, including the fingers and arm shown in the blue circle, to determine whether there is a wrist keypoint in the nearby area. This step can be treated as a proposal process. After rough localization, we further observe the detailed structure of the wrist itself to determine the accurate location of the wrist keypoint, which can be seen as a refinement process.
We draw inspiration from object detection, where proposal and refinement are performed on two different feature maps produced by two separate subnets. We suggest that the proposal and refinement processes in keypoint localization should also be based on different feature maps; therefore, we apply two different subnets to obtain feature maps for proposal and refinement respectively. Moreover, the two-stage design is common in object detection and achieves excellent results in terms of both effectiveness and performance. A natural idea is to apply this two-stage design to the keypoint localization task, letting the first stage focus on the proposal process to improve keypoint recall, and the second stage focus on the refinement process to improve localization accuracy. Therefore, we introduce the concept of guided points. First, we select guided points based on the heatmap as rough proposals in the first stage. Then, in the second stage, based on the features at the selected guided points, we perform coordinate refinement for accurate keypoint regression.
Secondly, in the case of complicated clothing and occlusion, the relationship between keypoints is very important for judging their locations. For example, for the yellow keypoint in Fig.~\ref{fig:intro}(a), due to occlusion we cannot see its location directly; we can only infer it from the locations of other related keypoints. In addition, due to the structural constraints of the human body, keypoints mutually constrain one another. In the refinement process, considering the relationship between keypoints may help avoid and correct mispredictions. For example, in Fig.~\ref{fig:intro}(b), the keypoints in red are wrong predictions of the left leg. By considering the connections between them and other keypoints, we can find and correct these errors more easily.
However, in the traditional heatmap based method, we cannot know the location of keypoints before decoding the heatmap to coordinates.
This makes it difficult for us to build a pose graph that connects keypoint features at different locations.
After introducing guided points, we can know the rough locations of keypoints, such that we can build a pose graph between keypoints easily.
Therefore, we propose a graph pose refinement (GPR) module, which is an extension of graph convolutional network, to improve the accuracy of keypoint localization.
The main contributions of this paper include:
\begin{itemize}
\item This paper proposes a model-agnostic two-stage keypoint localization framework, Graph-PCNN, which can be used in any heatmap based keypoint localization method to bring significant improvement.
\item A graph pose refinement module is proposed to consider the relationship between keypoints at different locations, and further improve the localization accuracy.
\item Our method sets a new state-of-the-art on the COCO \texttt{test-dev} split.
\end{itemize}
\section{Related work}
The classical approach to human pose estimation uses the pictorial structures framework with pre-defined pose or part templates that do not depend on image data, which limits the expressiveness of the model \cite{yang2012articulated,pishchulin2013poselet}.
Convolution Neural Networks (CNNs) have dramatically changed the direction of pose estimation methods. Since the introduction of "DeepPose" \cite{toshev2014deeppose} by Toshev et al., most recent pose estimation systems have generally adopted CNNs as their backbone.
There are mainly two kinds of methods to obtain keypoint locations: regressing coordinates directly, or first estimating heatmaps of the keypoints and then decoding them to coordinates.
\pheadB{Coordinate based Methods}
Only a few methods regress coordinates of keypoints directly. DeepPose \cite{toshev2014deeppose} formulates pose estimation as a CNN-based regression problem directly towards body joints in a holistic fashion. Fan et al. \cite{fan2015combining} propose to integrate both the body part appearance and the holistic view of each local part for more accurate regression. A few other methods \cite{carreira2015human,sun2018integral} further improve performance, but there is still a gap compared with heatmap based methods.
\pheadB{Heatmap based Methods}
The heatmap representation was first introduced by Tompson et al. \cite{tompson2014joint},
and then quickly became the most popular solution in state-of-the-art methods.
Many research works improve network architectures to increase the effectiveness of heatmap regression~\cite{chen2014articulated,gkioxari2016chained,belagiannis2016recurrent,lifshitz2016human,newell2016stacked,wei2016convolutional,chu2016crf,chu2017multi,yang2016end,chu2016structured,sun2017compositional,yang2017pyramid} \cite{xiao2018simple,ning2017tmm,tang2018deeply,chen2018cascaded,ke2018multi,liu2018cascaded,su2019multi,sun2019deep}. For example, Hourglass \cite{newell2016stacked} and its follow-ups \cite{yang2017pyramid,chen2017adversarial,chu2017multi} consist of blocks of several pooling and upsampling layers, which look like an hourglass, to capture information at every scale.
SimpleBaseline \cite{xiao2018simple} adds several deconvolutional layers to enlarge the resolution of output feature maps, which is quite simple but performs better.
The HRNet \cite{sun2019deep} model has outperformed all existing methods on public dataset by maintaining a high-resolution representation through the whole process.
\pheadB{Hybrid Methods}
Some works speculate that heatmaps introduce a statistical error and try to combine heatmap estimation with coordinate offset regression for better localization accuracy \cite{papandreou2017towards,huang2019devil}. However, in these methods, heatmap estimation and coordinate regression are performed at the same time on the same feature map, without a refinement process to gradually improve accuracy.
\pheadB{Refinement Methods}
Many works focus on coordinate refinement to improve the accuracy of keypoint localization\cite{carreira2015human,bulat2016human,fieraru2018learning,moon2019posefix}. Instead of predicting absolute joint locations, Carreira et al. refine pose estimation by predicting error feedback at each iteration\cite{carreira2015human}, and Bulat et al. design a cascaded architecture for mining part relationships and spatial context\cite{bulat2016human}. Some other works use a human pose refinement network to exploit dependencies between input and output spaces \cite{fieraru2018learning,moon2019posefix}.
However, they cannot effectively combine heatmap estimation and coordinate regression, and the relationship between different keypoints is not considered during refinement.
Our method introduces the relationship between keypoints for more effective refinement.
%
Zhang et al. \cite{zhang2019human} build a pose graph directly on heatmaps and use a Graph Neural Network for refinement. However, this essentially only considers the relationship between heatmap weights at the same location, while the visual information of keypoints is completely ignored. In our framework, the pose graph is built on the visual features at the positions of the corresponding keypoints, which is more conducive to subsequent refinement.
\section{Two stage pose estimation framework}
\begin{figure} [t]
\centering
\includegraphics[width=1.0\textwidth]{figures/methodoverview.pdf}
\caption{\textbf{Overall architecture of two stage pose estimation framework}. In the first stage, heatmap regressor is applied to obtain a rough localization heatmap, and a set of guided points are sampled. In the second stage, guided points with corresponding localization features are constructed as pose graphs and then feed into a graph pose refinement (GPR) module to get refined results.}
\label{fig:method_overview}
\end{figure}
In top-down pose estimation methods, a single-person pose estimator aims to locate $K$ keypoints $\vvec{P} = \{\vvec{p}_1, \vvec{p}_2, ..., \vvec{p}_k\}$ from an image $\vvec{I}$ of size $W \times H \times 3$, where $\vvec{p}_k$ is a 2D coordinate. Heatmap based methods transform this problem into estimating $K$ heatmaps $\{ \vvec{H}_1, \vvec{H}_2, ... , \vvec{H}_k \} $ of size $W' \times H' \times K$, where each heatmap $\vvec{H}_k$ is decoded to the corresponding coordinates $\vvec{p}_k$ during the test phase.
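For reference, the standard argmax decoding of heatmaps to coordinates can be sketched as follows (an illustration only; production systems typically add sub-pixel refinement on top of this):

```python
import numpy as np

def decode_heatmaps(H):
    """Decode K heatmaps of shape (K, H', W') to integer 2D keypoint coordinates.

    Returns an array of (x, y) pairs, one per keypoint, by taking the
    location of the peak response of each heatmap.
    """
    K, h, w = H.shape
    flat = H.reshape(K, -1).argmax(axis=1)        # flat index of each peak
    ys, xs = np.unravel_index(flat, (h, w))       # back to row/column
    return np.stack([xs, ys], axis=1)

H = np.zeros((2, 4, 5))
H[0, 1, 3] = 1.0   # keypoint 0 peaks at (x=3, y=1)
H[1, 2, 0] = 0.7   # keypoint 1 peaks at (x=0, y=2)
coords = decode_heatmaps(H)
# coords -> [[3, 1], [0, 2]]
```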
In the first stage, our method simply follows the popular methods to generate the heatmap. A common pipeline first uses a deep convolutional network $\phi$ to extract visual features $\vvec{V}$ from image $\vvec{I}$,
\begin{equation}
\vvec{V} = \phi(\vvec{I}).
\end{equation}
A heatmap regressor $\phi_h$, typically ended with a $1 \times 1$ convolutional layer, is applied to estimate the heatmaps,
\begin{equation}
\{ \vvec{H}_1, \vvec{H}_2, ... , \vvec{H}_k \} = \phi_h(\vvec{V}).
\end{equation}
The refinement network is added after the heatmap regression, without any changes to the existing network architecture in the first stage. Therefore, our method can be applied to any heatmap based models easily. The overall architecture of our method is shown in Fig.~\ref{fig:method_overview}.
At first, we apply a localization subnet $\phi_l$ to transform the visual features to the same spatial scale as the heatmaps,
\begin{equation}
\vvec{F} = \phi_l(\vvec{V}),
\end{equation}
where the size of $\vvec{F}$ is $W' \times H' \times C$.
During training, $N$ guided points $\{ \vvec{s}^1_k, \vvec{s}^2_k, ..., \vvec{s}^N_k \}$ are sampled for each heatmap $\vvec{H}_k$, while the best guided point $\vvec{s}^*_k$ is selected for heatmap $\vvec{H}_k$ during testing. For simplicity, we omit the superscript in the following formulas. For any guided point $\vvec{s}_k$, the guided feature $\vvec{f}_k=\vvec{F} [\vvec{s}_k]$ at the corresponding location and its confidence score $h_k=\vvec{H_k} [\vvec{s}_k]$ can be extracted.
Subsequently, we build $N$ pose graphs for the $N \times K$ guided features, and introduce a graph pose refinement (GPR) module to refine the visual features by considering the relationship between keypoints.
\begin{equation}
\{\vvec{g}_1, \vvec{g}_2, ..., \vvec{g}_K \} = \GPR(\{ \vvec{f}_1, \vvec{f}_2, ..., \vvec{f}_K \}, \{ h_1, h_2, ..., h_K \} ).
\end{equation}
Finally, the refined classification result $\vvec{c}_k$ and offset regression result $\vvec{r}_k$ are obtained from the refined feature $\vvec{g}_k$. The refined coordinate of the keypoint is
\begin{equation}
\vvec{p}_k = \vvec{s}_k + \vvec{r}_k .
\end{equation}
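A minimal sketch of this two-stage pipeline (our own illustration: the bilinear sampling helper and the stubbed `offset_head` standing in for the learned regressor are hypothetical):

```python
import numpy as np

def bilinear_sample(F, x, y):
    """Bilinearly interpolate feature map F (H' x W' x C) at continuous (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, F.shape[1] - 1), min(y0 + 1, F.shape[0] - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * F[y0, x0] + dx * (1 - dy) * F[y0, x1]
            + (1 - dx) * dy * F[y1, x0] + dx * dy * F[y1, x1])

def refine_keypoint(H_k, F, offset_head):
    """Stage 1: guided point s_k = heatmap peak; stage 2: p_k = s_k + r_k."""
    y, x = np.unravel_index(H_k.argmax(), H_k.shape)
    s_k = np.array([x, y], dtype=float)
    f_k = bilinear_sample(F, *s_k)     # guided feature at s_k
    r_k = offset_head(f_k)             # offset regressed from the feature
    return s_k + r_k

# toy example: a constant "offset head" stands in for the learned regressor
H_k = np.zeros((4, 4)); H_k[2, 1] = 1.0
F = np.random.default_rng(1).normal(size=(4, 4, 8))
p_k = refine_keypoint(H_k, F, lambda f: np.array([0.25, -0.5]))
# p_k = [1, 2] + [0.25, -0.5] = [1.25, 1.5]
```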
In the following, we first describe the guided point sampling strategy in section \ref{sec:sampling}. Second, we show the detail structure of graph pose refinement module in section \ref{sec:gcn}. Third, we introduce the loss used for training in section \ref{sec:loss}. Finally, we show how to integrate our framework to existing backbones and elaborate the details of training and testing in section \ref{sec:detail}.
\subsection{Guided Point Sampling} \label{sec:sampling}
Locating human joints based on the peak of a heatmap is common in modern human pose estimators, which model the heatmap target by generating a Gaussian distribution around the ground truth. However, due to complex image context and human actions, the joint heat may not strictly satisfy a Gaussian distribution, which, together with the quantisation effect of image resolution downsampling, leads to insufficient precision of this localization method. Nevertheless, the peak of the heatmap is always close to the true joint location, which makes it an adequate starting point for regressing the true location.
\begin{figure} [t]
\centering
\includegraphics[width=1.0\textwidth]{figures/proposalsampling.pdf}
\caption{\textbf{Illustration of sampling region}. Taking right wrist as example, (a), (b), (c) show the three kinds of guided points respectively, points which are close to the ground truth keypoint, points which are far away from the ground truth keypoint and points which have high heat response, where the yellow circle points indicate sampled guided points and the yellow star points indicate ground truth of right wrist.}
\label{fig:proposalsampling}
\end{figure}
To obtain refined coordinates based on the heatmap peak, we sample several guided points and train coordinate refinement in the second stage. Concretely, we equally sample three kinds of guided points for training: (a) points close to the ground truth keypoint, (b) points far away from the ground truth keypoint, and (c) points with high heat response. The $k$th ground truth keypoint is denoted as $\vvec{t}_k$. As exhibited in Fig.~\ref{fig:proposalsampling}, (a) and (b) are randomly sampled within the red region and blue region, respectively, where the red region, centered at the ground truth, has a radius of $3\sigma$, with $\sigma$ the same as the standard deviation used to generate the Gaussian heatmap target. (c) is randomly sampled from the top $N$ highest response points of the heatmap.
Due to the different characteristics of different keypoints, we sample guided points for each keypoint individually, and the total number of the three kinds of guided points for each keypoint is set to $N$.
After the $N$ guided points $\{ \vvec{s}^1_k, \vvec{s}^2_k, ..., \vvec{s}^N_k \}$ are sampled, we divide them into two sets, positive set and negative set, denoted as
\begin{equation}
\begin{split}
& \mathcal{S}_k^+ = \{ \vvec{s}_k \ | \ ||\vvec{s}_k-\vvec{t}_k|| < 3\sigma \} \\
& \mathcal{S}_k^- = \{ \vvec{s}_k \ | \ ||\vvec{s}_k-\vvec{t}_k|| \geq 3\sigma \}
\end{split}
\end{equation}
%
and $N_k^+=|\mathcal{S}_k^+|$, $N_k^-=|\mathcal{S}_k^-|$. Then all of the corresponding guided features, extracted from $\vvec{F}$ by means of bilinear interpolation, are fed into the second stage for refinement, while only the guided points from the positive set contribute to the coordinate regression.
According to the above label assignment manner, (a) and (b) are definite positive and negative samples, and the influence of the proportion between them will be explored in Section~\ref{sec:ablation}. In contrast, (c) consists almost entirely of negative samples at the beginning of training and turns into positive samples as training progresses. We suppose that (c) not only accelerates feature learning at the beginning of training, since (c) provides hard negative samples for classification at that stage, but also contributes to the learning of regression once the classification status of the features is relatively stable, as (c) is then mostly positive. Furthermore, (c) is not necessarily positive even when the model has roughly converged, because of prediction errors in hard cases; in such circumstances, (c) can also be regarded as hard negative samples that help the model train better.
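The positive/negative split of sampled guided points by the $3\sigma$ rule can be sketched as (an illustration with made-up coordinates):

```python
import numpy as np

def split_guided_points(points, t_k, sigma):
    """Split sampled guided points into positive/negative sets by distance
    to the ground truth keypoint t_k, with threshold 3 * sigma."""
    d = np.linalg.norm(points - t_k, axis=1)
    return points[d < 3 * sigma], points[d >= 3 * sigma]

t_k = np.array([10.0, 10.0])
points = np.array([[10.0, 11.0],   # close to ground truth -> positive
                   [12.0, 14.0],   # far from ground truth -> negative
                   [10.5, 9.5]])   # e.g. a high-response heatmap point
pos, neg = split_guided_points(points, t_k, sigma=1.0)
# pos holds the first and third points, neg holds the second
```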
\subsection{Graph Pose Refinement} \label{sec:gcn}
In most previous works, many aspects of human pose estimation have been well studied, such as network structure, data preprocessing and postprocessing, and post refinement. However, in these works, the localization of human keypoints is conducted independently for each keypoint, while the relationship between different keypoints is ignored. Intuitively, the human keypoints construct a salient graph structure based on the pattern of the human body, with clear adjacency relations between them. We therefore consider that keypoint localization can be performed better with the help of the information hinted at by this relationship. For instance, in our framework, if we know that a guided point is the left elbow, then the positive guided points of the left wrist should tend to have a higher response on the left wrist a priori, as the left wrist is adjacent to the left elbow. In this way, more supervision can be imposed upon the features of these keypoints than when treating them independently.
\begin{figure} [t]
\centering
\includegraphics[width=1.0\textwidth]{figures/gcnhead.pdf}
\caption{\textbf{The structure of graph pose refinement module}. The relationship between keypoints is taken into account in contrast to the struct-agnostic module.}
\label{fig:gcnhead}
\end{figure}
To take advantage of the information implicit in the graph structure mentioned above, we propose a graph pose refinement module to model it and then refine the features of these keypoints. As shown in Fig.~\ref{fig:gcnhead}, we build a graph and conduct graph convolution for each keypoint. The output embedding feature can be computed by
\begin{equation}
\label{fml:computegcn}
\begin{split}
& \vvec{g}_k = \frac{1}{Z_k}\sum_{\vvec{s}_{k^{'}} \in \mathcal{N}(k)} \omega_{k^{'}} \mathcal{T}_{k^{'}k}(\vvec{f}_{k^{'}}) \\
& \omega_{k^{'}} = \left\{\begin{aligned}
& h_{k^{'}} \mathbbm{1}(R_{k^{'}}), & k^{'} \ne k \\
& 1, & k^{'} = k
\end{aligned}\right.
\end{split}
\end{equation}
%
where $\mathcal{N}(k)$ denotes a point set containing the guided point $\vvec{s}_k$ and its neighbours, $\mathcal{T}_{k^{'}k}$ the linear transformation from guided point $\vvec{s}_{k^{'}}$ to $\vvec{s}_k$, and $\mathbbm{1}$ the indicator function. $Z_k = \sum_{\vvec{s}_{k^{'}} \in \mathcal{N}(k)} \omega_{k^{'}}$ is used for normalization. $R_{k^{'}}$ is a Boolean parameter encoding the reliability of a guided point, used to filter out points of low quality; its definition is explained in detail in Section~\ref{sec:detail}.
Notably, as defined in (\ref{fml:computegcn}), this graph convolution extends the traditional graph convolution and is designed with the characteristics of the pose estimation problem in mind. Firstly, we add a weight to each message passed from $\vvec{s}_{k^{'}}$ to $\vvec{s}_k$, which controls the contribution of each message according to the intensity and reliability of $\vvec{s}_{k^{'}}$. With the constraint of these weights, the graph convolution can be trained more stably. Furthermore, we set $\omega_{k^{'}}=1$ when $k^{'} = k$. This makes the graph convolution degrade to a traditional linear transformation for $\vvec{s}_k$ when $\mathbbm{1}(R_{k^{'}})=0$ for all $\vvec{s}_{k^{'}} \in \mathcal{N}(k)$ with $k^{'} \ne k$, without being affected by the intensity and reliability of $\vvec{s}_k$ itself.
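For concreteness, the weighted graph convolution can be sketched in NumPy as follows. This is a simplified illustration, not the actual implementation: the function name, the dictionary-based containers, and the representation of each $\mathcal{T}_{k^{'}k}$ as a plain weight matrix are our own assumptions.

```python
import numpy as np

def graph_refine(features, heats, reliable, transforms, k, neighbors):
    """Sketch of the weighted graph convolution.

    features:   dict k' -> feature vector f_{k'} of guided point s_{k'}
    heats:      dict k' -> heat response h_{k'}
    reliable:   dict k' -> Boolean reliability R_{k'}
    transforms: dict (k', k) -> matrix implementing the linear map T_{k'k}
    neighbors:  iterable over N(k), which includes k itself
    """
    g, Z = 0.0, 0.0
    for kp in neighbors:
        if kp == k:
            w = 1.0                              # self message kept with weight 1
        else:
            w = heats[kp] * float(reliable[kp])  # h_{k'} * 1(R_{k'})
        g = g + w * (transforms[(kp, k)] @ features[kp])
        Z += w
    return g / Z                                 # normalized embedding g_k
```

When every neighbour is filtered out as unreliable, all cross-point weights vanish and the output degrades to the self transformation of $\vvec{s}_k$.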
\subsection{Loss Function} \label{sec:loss}
After the refinement module above, the embedded feature is sent to a module consisting of several fully connected layers and batch normalization layers, as illustrated in Fig.~\ref{fig:gcnhead}. Finally, two predictions are output, denoted as $\vvec{c}_k$ and $\vvec{r}_k$, for classification and regression, respectively. Given the ground-truth keypoint location $\vvec{t}_k$, the losses for these two branches are defined as
\begin{equation}
\begin{split}
L_k^{cls} &= \frac{1}{2} \left[ \frac{1}{N_k^+}\sum_{\vvec{s}_k^i \in \mathcal{S}_k^+} \alpha_k^i\mathcal{L}_{cls}(\vvec{c}_k^i, 1) + \frac{1}{N_k^-}\sum_{\vvec{s}_k^i \in \mathcal{S}_k^-}\mathcal{L}_{cls}(\vvec{c}_k^i, 0) \right] \\
%
& \hspace{2.3cm} \alpha_k^i = \exp\left(-\frac{(\vvec{s}_k^i-\vvec{t}_k)^2}{2\sigma^2}\right)
\end{split}
\end{equation}
%
and
%
\begin{equation}
L_k^{reg} = \frac{1}{N_k^+}\sum_{\vvec{s}_k^i \in \mathcal{S}_k^+}\mathcal{L}_{reg}(\vvec{r}_k^i, \vvec{t}_k-\vvec{s}_k^i),
\end{equation}
%
where $\mathcal{L}_{cls}$ and $\mathcal{L}_{reg}$ are the softmax cross-entropy loss and the L1 loss, respectively.
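The two losses can be sketched as follows for a single keypoint. This is a simplified NumPy illustration under our own naming; in particular, the classification term is written as binary cross-entropy on a scalar positive-class probability, a simplification of the softmax form.

```python
import numpy as np

def stage2_losses(pos_pts, pos_cls, neg_cls, pos_reg, target, sigma):
    """Sketch of the per-keypoint classification and regression losses.

    pos_pts: (N+, 2) coordinates of positive guided points s_k^i
    pos_cls: (N+,)   predicted positive-class probabilities c_k^i
    neg_cls: (N-,)   predicted positive-class probabilities of negatives
    pos_reg: (N+, 2) predicted offsets r_k^i
    target:  (2,)    ground-truth location t_k
    """
    eps = 1e-12
    # Gaussian soft weight alpha_i = exp(-||s_i - t||^2 / (2 sigma^2))
    d2 = np.sum((pos_pts - target) ** 2, axis=1)
    alpha = np.exp(-d2 / (2.0 * sigma ** 2))
    l_pos = np.mean(alpha * -np.log(pos_cls + eps))   # label-1 cross-entropy
    l_neg = np.mean(-np.log(1.0 - neg_cls + eps))     # label-0 cross-entropy
    l_cls = 0.5 * (l_pos + l_neg)
    # L1 loss between predicted offset and true offset t_k - s_k^i
    l_reg = np.mean(np.abs(pos_reg - (target - pos_pts)))
    return l_cls, l_reg
```

Note that the Gaussian weight $\alpha_k^i$ down-weights positive guided points far from the ground truth, so their classification labels contribute less.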
The total loss of stage2 can be expressed as
\begin{equation}
L^{s_2} = \frac{\sum_k \gamma_k(L_k^{cls} + \lambda L_k^{reg})}{\sum_k \gamma_k},
\end{equation}
%
where $\gamma_k$ is the target weight of keypoint $k$, and $\lambda$ is a loss weight that is constantly set to 16. The total loss of Graph-PCNN is
\begin{equation}
L = L^{s_1} + L^{s_2},
\end{equation}
%
where $L^{s_1}$ is the traditional heatmap regression loss for stage1.
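Putting the pieces together, the total loss is a target-weighted average over keypoints plus the stage1 loss. A plain-Python sketch (names are ours; $\lambda$ defaults to 16 as stated above):

```python
def total_loss(l_s1, l_cls, l_reg, gamma, lam=16.0):
    """Sketch of L = L^{s1} + L^{s2}, where L^{s2} is a gamma-weighted
    average of per-keypoint losses L_k^{cls} + lambda * L_k^{reg}."""
    s2 = sum(g * (c + lam * r) for g, c, r in zip(gamma, l_cls, l_reg)) / sum(gamma)
    return l_s1 + s2
```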
\subsection{Network Architecture} \label{sec:detail}
\subsubsection{Network Architecture}
In previous works such as \cite{papandreou2017towards} and \cite{huang2019devil}, there is also a coordinate refinement after heatmap decoding, and the coordinate refinement branch shares the same feature map with the heatmap prediction branch. However, rough and accurate localization always need different embedding features, and moreover, it is hard to conduct dedicated feature refinement for either of these two branches. To alleviate these problems, we copy the last stage of the backbone network to produce two different feature maps of the same size, followed by the heatmap regression convolution and the graph pose refinement module, respectively. With this modification, the network can learn more specialized features for the two different branches and easily conduct guided point sampling for further feature refinement.
\subsubsection{Training and Testing}
For the proposed two-stage pose estimation framework, several operations are specific to the training and testing phases.
Firstly, in order to train stage2 sufficiently, we sample multiple guided points for each keypoint during training following the strategy described in Section~\ref{sec:sampling}, and the number of guided points $N$ varies with the input size. During testing, only one guided point is generated by decoding the predicted heatmap, and its output score is taken as the corresponding heat response score from stage1. Following most previous works~\cite{xiao2018simple,sun2019deep}, when decoding this guided point from the heatmap, a quarter offset in the direction from the highest response to the second highest response is added to the position of the heatmap peak for higher precision.
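The test-time decoding of the single guided point can be sketched as follows. This is a simplified NumPy illustration with our own function name; real decoders typically restrict the second response to the immediate neighborhood of the peak, whereas the global argsort here is a simplification for a single-peak heatmap.

```python
import numpy as np

def decode_guided_point(heatmap):
    """Take the heatmap peak, shift a quarter pixel toward the second
    highest response, and return the refined point with its heat score."""
    flat = heatmap.ravel()
    order = np.argsort(flat)[::-1]
    p1 = np.array(np.unravel_index(order[0], heatmap.shape), dtype=float)
    p2 = np.array(np.unravel_index(order[1], heatmap.shape), dtype=float)
    shift = 0.25 * np.sign(p2 - p1)    # quarter offset toward 2nd response
    return p1 + shift, float(flat[order[0]])
```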
Secondly, the definition of the guided point reliability metric $R_{k^{'}}$ differs between training and testing, and is given by
\begin{equation}
R_{k^{'}} = \left\{\begin{aligned}
& ||\vvec{s}_{k^{'}}-\vvec{t}_{k^{'}}||<\delta & \text{in training phase} \\
& h_{k^{'}} > \xi & \text{in testing phase}
\end{aligned}\right.
\end{equation}
%
During training, the ground truth is available for measuring this reliability, and guided points that are close to their corresponding ground truth can be regarded as reliable; $\delta$ is a distance threshold controlling the degree of closeness, which equals $2\sigma$. During testing the ground truth is unknown, so for safety, only guided points whose heat responses are high enough are qualified to pass messages to their neighbour points; $\xi$ is a threshold gating the heat response, which is constantly set to 0.85.
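The two-phase reliability check can be sketched as follows (a NumPy illustration; the signature and keyword defaults are our own encoding of the description above):

```python
import numpy as np

def is_reliable(s, t=None, h=None, sigma=1.0, xi=0.85):
    """Sketch of R_{k'}: distance test against ground truth t during
    training (delta = 2*sigma), heat-response gating during testing."""
    if t is not None:  # training phase: ground truth available
        d = float(np.linalg.norm(np.asarray(s, float) - np.asarray(t, float)))
        return d < 2.0 * sigma
    return h > xi      # testing phase: gate on the heat response
```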
Finally, during training, we shuffle the guided points of each keypoint after guided point sampling in order to create more varied graph combinations, which makes the graph pose refinement module generalize better.
\section{Experiments}
\subsection{Dataset}
In this paper, we use the most popular human pose estimation dataset, COCO. The COCO keypoint dataset~\cite{lin2014microsoft} presents challenging images with multi-person poses of various body scales and occlusion patterns in unconstrained environments. It contains 200,000 images and 250,000 person samples, and each person instance is labelled with 17 joints. We train our models on \texttt{train2017} (57K images and 150K person instances) with no extra data, and conduct ablation studies on \texttt{val2017}. We then test our models on \texttt{test-dev} for comparison with state-of-the-art methods. In evaluation, we use the Object Keypoint Similarity (OKS) metric of COCO to report model performance.
\subsection{Implementation Details}
For fair comparison, we follow the same training configuration as \cite{xiao2018simple} and \cite{sun2019deep} for ResNet and HRNet respectively.
To construct the localization subnet, we copy the conv5 stage, whose spatial size is 1/32 of the input size, and the last three deconvolution layers for ResNet series networks, while copying stage4, which has three high-resolution modules, for HRNet series networks. For the ablation study, we also add a 128x96 input size in our experiments following \cite{zhang2019distribution}. We set $N$ to 48, 192, and 432 for the three input sizes of 128x96, 256x192, and 384x288 in all our experiments except the ablation study of $N$. During inference, we use person detectors of AP 56.4 and 60.9 for COCO \texttt{val2017} and \texttt{test-dev} respectively, while for pose estimation we evaluate a single model and only use flipping as the test-time augmentation strategy.
\subsection{Ablation Studies} \label{sec:ablation}
We use a ResNet-50 backbone to perform ablation studies on COCO
\texttt{val2017}.
\setlength{\tabcolsep}{4pt}
\begin{table} [t]
\centering
\caption{Ablation study on COCO
\texttt{val2017}
}
\label{table:ablation-self}
\begin{tabular}{cccc}
\hline\noalign{\smallskip}
Method & Size & stage1 AP & stage2 AP \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
SBN & 128x96 & 59.3 & - \\
Graph-PCNN & 128x96 & \textbf{61.1} & \textbf{64.6} \\
SBN & 256x192 & 70.4 & - \\
Graph-PCNN & 256x192 & \textbf{71.3} & \textbf{72.6} \\
SBN & 384x288 & 72.2 & - \\
Graph-PCNN & 384x288 & \textbf{72.7} & \textbf{73.6} \\
\hline
\end{tabular}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\pheadB{Two stage pose estimation framework.} Firstly, we evaluate the effectiveness of our proposed two-stage pose estimation framework. As Table~\ref{table:ablation-self} shows, the stage2 of Graph-PCNN gives 5.3\%, 2.2\%, and 1.4\% AP gains compared to the original simple baseline network (SBN) at the three input sizes, which demonstrates that our regression-based two-stage framework is more effective than decoding joint locations from heatmaps. Furthermore, we test the stage1 of Graph-PCNN, which shares the same network architecture with SBN. It should be noted that training with Graph-PCNN also boosts the performance of the heatmap, with 1.8\%, 0.9\%, and 0.5\% AP gains as shown. That means we can obtain considerable performance gains without any extra computing cost during inference by using only the stage1 of Graph-PCNN.
\begin{figure} [t]
\centering
\includegraphics[width=1.0\textwidth]{figures/ablation-sampling.pdf}
\caption{Influence of the proportion and total amount of guided points in sampling. (a) is the results on different proportions, while the x-axis represents the proportions between positive guided points and negative guided points. (b) is the results on different values of the total amount, while the x-axis represents the values of $N$.}
\label{fig:ablation-sampling}
\end{figure}
\pheadB{Sampling strategy.} Secondly, we study the influence of the proportion of different kinds of guided points and the total number of guided points $N$, based on ResNet-50 with a 128x96 input size. In order to avoid exploring the proportion among all three kinds of guided points, we simplify the proportion study by using only definite positive points and negative points, and set different proportions between them with $N$ unchanged. From the results shown in Fig.~\ref{fig:ablation-sampling} (a), we can conclude that proportions ranging from 1:2 to 2:1 are appropriate, and the sampling strategy proposed in Section~\ref{sec:sampling} can fit this proportion range in any situation.
In addition, we try different $N$ based on the strategy in Section~\ref{sec:sampling}, and finally select 48 as the value of $N$ according to the results reported in Fig.~\ref{fig:ablation-sampling} (b).
\pheadB{Graph pose refinement module.} Finally, we evaluate the contribution of the proposed graph pose refinement (GPR) module. In this study, we compare the proposed GPR with a struct-agnostic baseline module and several variants of GPR (GPR-va, GPR-vb, GPR-vc).
GPR-va sets $\omega_{k^{'}}=1$ for all $\{k^{'} | \vvec{s}_{k^{'}} \in \mathcal{N}(k)\}$ in (\ref{fml:computegcn}); GPR-vb sets $\omega_{k^{'}}=\mathbbm{1}(R_{k^{'}})$ for $\{k^{'} | \vvec{s}_{k^{'}} \in \mathcal{N}(k), k^{'} \ne k\}$, dropping the heat response factor; and GPR-vc drops the guided point shuffling operation mentioned in Section~\ref{sec:detail}. The comparison results are displayed in Table~\ref{table:ablation-graph_coordinate_refinement}. We can see that GPR boosts the stage1 AP and stage2 AP by 0.4\% and 0.8\% respectively, compared to the struct-agnostic baseline.
The performance of GPR is also better than all of its variants, which reveals the importance of the parameter $\omega_{k^{'}}$ and the guided point shuffling operation. In particular, the reliability factor $\mathbbm{1}(R_{k^{'}})$ affects the performance greatly.
%
Thus, we believe that GPR can refine the feature of a guided point by taking advantage of the supervision signal of a well-localized neighbouring keypoint, as we conjectured in Section~\ref{sec:gcn}.
\setlength{\tabcolsep}{4pt}
\begin{table} [t]
\centering
\caption{Effectiveness of the graph pose refinement (GPR) module.}
\label{table:ablation-graph_coordinate_refinement}
\begin{tabular}{cccc}
\hline\noalign{\smallskip}
Method & Size & stage1 AP & stage2 AP \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
struct-agnostic & 128x96 & 60.7 & 63.8 \\
GPR-va & 128x96 & 61.2 & 52.1 \\
GPR-vb & 128x96 & 61.1 & 64.5 \\
GPR-vc & 128x96 & 60.8 & 64.3 \\
GPR & 128x96 & 61.1 & \textbf{64.6} \\
\hline
\end{tabular}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsection{Comparison with Other Methods with Coordinate Refinement}
DARK\cite{zhang2019distribution} is a state-of-the-art method that improves traditional decoding with a more precise refinement based on Taylor expansion. We follow the training settings of DARK and compare our refinement results with it. From Table~\ref{table:ablation-dark} we can observe that our
Graph-PCNN
%
generally outperforms DARK over different network architectures and input sizes. This suggests that regression-based refinement predicts coordinates more precisely than analyzing the distribution of the response signal from the heatmap, as the response signal itself may not strictly satisfy a Gaussian distribution because of complex human poses and image context, while regression is free of this drawback.
\setlength{\tabcolsep}{4pt}
\begin{table} [t]
\centering
\caption{Comparison with distribution-aware coordinate representation of keypoint(DARK) on COCO \texttt{val2017}.}
\label{table:ablation-dark}
\begin{tabular}{ccccccccc}
\hline\noalign{\smallskip}
Method & Backbone & Size & \emph{AP} & \emph{$AP^{50}$} & \emph{$AP^{75}$} & \emph{$AP^M$} & \emph{$AP^L$} & \emph{AR} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
DARK & R50 & 128x96 & 62.6 & 86.1 & 70.4 & 60.4 & 67.9 & 69.5 \\
Graph-PCNN & R50 & 128x96 & \textbf{64.6} & \textbf{86.4} & \textbf{72.7} & \textbf{62.4} & \textbf{70.1} & \textbf{71.5} \\
\hline
\noalign{\smallskip}
DARK & R101 & 128x96 & 63.2 & 86.2 & 71.1 & 61.2 & 68.5 & 70.0 \\
Graph-PCNN & R101 & 128x96 & \textbf{64.8} & \textbf{86.6} & \textbf{73.1} & \textbf{62.6} & \textbf{70.3} & \textbf{71.7} \\
\hline
\noalign{\smallskip}
DARK & R152 & 128x96 & 63.1 & 86.2 & 71.6 & 61.3 & 68.1 & 70.0 \\
Graph-PCNN & R152 & 128x96 & \textbf{66.1} & \textbf{87.2} & \textbf{74.6} & \textbf{64.1} & \textbf{71.5} & \textbf{73.0} \\
\hline
\noalign{\smallskip}
DARK & HR32 & 128x96 & 70.7 & 88.9 & 78.4 & 67.9 & 76.6 & 76.7 \\
Graph-PCNN & HR32 & 128x96 & \textbf{71.5} & \textbf{89.0} & \textbf{79.0} & \textbf{68.4} & \textbf{77.6} & \textbf{77.3} \\
\hline
\noalign{\smallskip}
DARK & HR48 & 128x96 & 71.9 & 89.1 & 79.6 & 69.2 & 78.0 & 77.9 \\
Graph-PCNN & HR48 & 128x96 & \textbf{72.8} & \textbf{89.2} & \textbf{80.1} & \textbf{69.9} & \textbf{79.0} & \textbf{78.6} \\
\hline
\noalign{\smallskip}
\hline
\noalign{\smallskip}
DARK & HR32 & 256x192 & 75.6 & 90.5 & 82.1 & 71.8 & 82.8 & 80.8 \\
Graph-PCNN & HR32 & 256x192 & \textbf{76.2} & \textbf{90.3} & \textbf{82.6} & \textbf{72.5} & \textbf{83.2} & \textbf{81.2} \\
\hline
\noalign{\smallskip}
DARK & HR32 & 384x288 & 76.6 & 90.7 & 82.8 & 72.7 & 83.9 & 81.5 \\
Graph-PCNN & HR32 & 384x288 & \textbf{77.2} & \textbf{90.7} & \textbf{83.6} & \textbf{73.5} & \textbf{84.0} & \textbf{82.1} \\
\hline
\end{tabular}
\end{table}
\setlength{\tabcolsep}{1.4pt}
PoseFix\cite{moon2019posefix} is a model-agnostic method that refines an existing pose result from any other method with an independent model. A coarse-to-fine coordinate estimation schedule ending with coordinate calculation following the integral loss\cite{sun2018integral} is used to enhance precision. We compare with PoseFix using the same backbone and input size as its refinement-stage model, and the human detector performance for the two methods is comparable: AP 55.3 for PoseFix (using CPN) vs. 56.4 for our
Graph-PCNN.
%
As illustrated in Table~\ref{table:ablation-posefix}, we achieve a result competitive with PoseFix, but PoseFix takes input from CPN, which needs an extra R50 network, while our method only needs an extra R50 conv5 stage as the refinement branch.
\setlength{\tabcolsep}{4pt}
\begin{table} [t]
\centering
\caption{Comparison with model-agnostic human pose refinement network(PoseFix) on COCO \texttt{val2017}.}
\label{table:ablation-posefix}
\begin{tabular}{ccccccccc}
\hline\noalign{\smallskip}
Method & Backbone & Size & \emph{AP} & \emph{$AP^{50}$} & \emph{$AP^{75}$} & \emph{$AP^M$} & \emph{$AP^L$} & \emph{AR} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
PoseFix & R50 & 256x192 & 72.1 & 88.5 & 78.3 & 68.6 & 78.2 & - \\
Graph-PCNN & R50 & 256x192 & \textbf{72.6} & \textbf{89.1} & \textbf{79.3} & \textbf{69.1} & \textbf{79.7} & 78.1 \\
\hline
\end{tabular}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsection{Comparison to State of the Art}
We compare our Graph-PCNN with other top-performing methods on COCO \texttt{test-dev}. As Table~\ref{table:sota-test-dev} reports, our method with the HR48 backbone at an input size of 384x288 achieves the best AP (76.8), improving HR48 with the same input size (75.5) by a large margin (+1.3). Meanwhile, it also outperforms other competitors with the same backbone and input size settings, such as DARK (76.2), UDP (76.5), and PoseFix (76.7), which illustrates the advantages of our method.
\setlength{\tabcolsep}{4pt}
\begin{table} [t]
\centering
\caption{Comparison with the state-of-the-art methods on COCO \texttt{test-dev}.}
\label{table:sota-test-dev}
\resizebox{1.0\textwidth}{!}{
\begin{tabular}{ccccccccc}
\hline\noalign{\smallskip}
Method & Backbone & Size & \emph{AP} & \emph{$AP^{50}$} & \emph{$AP^{75}$} & \emph{$AP^M$} & \emph{$AP^L$} & \emph{AR} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
CMU-Pose\cite{cao2017realtime} & - & - & 61.8 & 84.9 & 67.5 & 57.1 & 68.2 & 66.5 \\
Mask-RCNN\cite{he2017mask} & R50-FPN & - & 63.1 & 87.3 & 68.7 & 57.8 & 71.4 & - \\
G-RMI\cite{papandreou2017towards} & R101 & 353x257 & 64.9 & 85.5 & 71.3 & 62.3 & 70.0 & 69.7 \\
AE\cite{newell2017associative} & - & 512x512 & 65.5 & 86.8 & 72.3 & 60.6 & 72.6 & 70.2 \\
Integral Pose\cite{sun2018integral} & R101 & 256x256 & 67.8 & 88.2 & 74.8 & 63.9 & 74.0 & - \\
CPN\cite{chen2018cascaded} & ResNet-Inception & 384x288 & 72.1 & 91.4 & 80.0 & 68.7 & 77.2 & 78.5 \\
RMPE\cite{fang2017rmpe} & PyraNet\cite{yang2017pyramid} & 320x256 & 72.3 & 89.2 & 79.1 & 68.0 & 78.6 & - \\
CFN\cite{huang2017coarse} & - & - & 72.6 & 86.1 & 69.7 & 78.3 & 64.1 & - \\
CPN(ensemble)\cite{chen2018cascaded} & ResNet-Inception & 384x288 & 73.0 & 91.7 & 80.9 & 69.5 & 78.1 & 79.0 \\
Posefix\cite{moon2019posefix} & R152+R152 & 384x288 & 73.6 & 90.8 & 81.0 & 70.3 & 79.8 & 79.0 \\
CSM+SCARB\cite{su2019multi} & R152 & 384x288 & 74.3 & 91.8 & 81.9 & 70.7 & 80.2 & 80.5 \\
CSANet\cite{yu2019context} & R152 & 384x288 & 74.5 & 91.7 & 82.1 & 71.2 & 80.2 & 80.7 \\
MSPN\cite{li2019rethinking} & MSPN & 384x288 & 76.1 & 93.4 & 83.8 & 72.3 & 81.5 & 81.6 \\
\hline
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Simple Base\cite{xiao2018simple} & R152 & 384x288 & 73.7 & 91.9 & 81.1 & 70.3 & 80.0 & 79.0 \\
UDP\cite{huang2019devil} & R152 & 384x288 & 74.7 & 91.8 & 82.1 & 71.5 & 80.8 & 80.0 \\
\textbf{Graph-PCNN} & R152 & 384x288 & \textbf{75.1} & \textbf{91.8} & \textbf{82.3} & \textbf{71.6} & \textbf{81.4} & \textbf{80.2} \\
\hline
\noalign{\smallskip}
HRNet\cite{sun2019deep} & HR32 & 384x288 & 74.9 & 92.5 & 82.8 & 71.3 & 80.9 & 80.1 \\
UDP\cite{huang2019devil} & HR32 & 384x288 & 76.1 & 92.5 & 83.5 & 72.8 & 82.0 & 81.3 \\
\textbf{Graph-PCNN} & HR32 & 384x288 & \textbf{76.4} & \textbf{92.5} & \textbf{83.8} & \textbf{72.9} & \textbf{82.4} & \textbf{81.3} \\
\hline
\noalign{\smallskip}
HRNet\cite{sun2019deep} & HR48 & 384x288 & 75.5 & 92.5 & 83.3 & 71.9 & 81.5 & 80.5 \\
DARK\cite{zhang2019distribution} & HR48 & 384x288 & 76.2 & 92.5 & 83.6 & 72.5 & 82.4 & 81.1 \\
UDP\cite{huang2019devil} & HR48 & 384x288 & 76.5 & \textbf{92.7} & 84.0 & 73.0 & 82.4 & 81.6 \\
PoseFix\cite{moon2019posefix} & HR48+R152 & 384x288 & 76.7 & 92.6 & 84.1 & 73.1 & 82.6 & 81.5 \\
\textbf{Graph-PCNN} & HR48 & 384x288 & \textbf{76.8} & 92.6 & \textbf{84.3} & \textbf{73.3} & \textbf{82.7} & \textbf{81.6} \\
\hline
\end{tabular}
}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\section{Conclusions}
In this paper, we propose a two-stage human pose estimator for the top-down pose estimation framework, which improves the overall localization performance by introducing different features for rough and accurate localization. Meanwhile, a graph pose refinement module is proposed to refine the features for pose regression by taking the relationship between keypoints into account, which further boosts the performance of our two-stage pose estimator. Our proposed method is model-agnostic and can be added to most mainstream backbones. Moreover, further improvement may be explored in the future by drawing on the successful experience of two-stage detection frameworks.
%
%
\bibliographystyle{splncs04}
\section{Introduction}
\label{sec:intro}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figs/teaser.pdf}
\caption{Our audio-visual video parsing model aims to parse a video into different audio (audible), visual (visible), and audio-visual (audi-visible) events with correct categories and boundaries. A dog in the video visually appears from the 2nd second to the 5th second and makes barking sounds from the 4th second to the 8th second. So, we have an audio event (4s-8s), a visual event (2s-5s), and an audio-visual event (4s-5s) for the \textit{Dog} event category.}
\label{fig:teaser}
\end{figure}
Human perception involves complex analyses of visual, auditory, tactile, gustatory, olfactory, and other sensory data.
Numerous psychological and brain cognitive studies~\cite{bulkin2006seeing,jacobs2019can,shams2008benefits,spence2003multisensory} show that combining different sensory data is crucial for human perception.
However, the vast majority of work~\cite{gaidon2013temporal,lin2014microsoft,shou2016temporal,yang2009video} in scene understanding, an essential perception task, focuses on visual-only methods ignoring other sensory modalities.
They are inherently limited. For example, when the object of interest is outside of the field-of-view (FoV), one would rely on audio cues for localization.
While there is little data on tactile, gustatory, or olfactory signals, we do have an abundance of multimodal audiovisual data, e.g., YouTube videos.
Utilizing and learning from both auditory and visual modalities is an emerging research topic. Recent years have seen progress in learning representations~\cite{Arandjelovic2017ICCV,aytar2016soundnet,hu2019deep,korbar2018cooperative,owens2018audio,owens2016ambient}, separating visually indicated sounds~\cite{ephrat2018looking,gao2018learning,gao20192,gao2019co,zhao2019sound,zhao2018sound,gan2020music,zhou2020sep}, spatially localizing visible sound sources~\cite{owens2018audio,senocak2018learning,tian2018audio}, and temporally localizing audio-visual synchronized segments~\cite{lin2019dual,tian2018audio,wu2019DAM}. However, past approaches usually assume audio and visual data are always correlated or even temporally aligned. In practice, when we analyze the video scene, many videos have audible sounds, which originate outside of the FoV, leaving no visual correspondences, but still contribute to the overall understanding, such as out-of-screen running cars and a narrating person.
Such examples are ubiquitous, which leads us to some basic questions: what video events are audible, visible, and ``audi-visible,'' where and when are these events inside of a video, and how can we effectively detect them?
To answer the above questions, we pose and try to tackle a fundamental problem: \textit{audio-visual video parsing}, which recognizes event categories bound to sensory modalities and, meanwhile, finds the temporal boundaries of when such an event starts and ends (see Fig.~\ref{fig:teaser}).
However, learning a fully supervised audio-visual video parsing model requires densely annotated event modality and category labels with corresponding event onsets and offsets, which will make the labeling process extremely expensive and time-consuming.
To avoid tedious labeling, we explore weakly-supervised learning for the task, which only requires sparse labeling on the presence or absence of video events.
The weak labels are easier to annotate and can be gathered in a large scale from web videos.
We formulate the weakly-supervised audio-visual video parsing as a Multimodal Multiple Instance Learning (MMIL) problem and propose a new framework to solve it.
Concretely, we use a new hybrid attention network (HAN) for leveraging unimodal and cross-modal temporal contexts simultaneously. We develop an attentive MMIL pooling method for adaptively aggregating useful audio and visual content from different temporal extent and modalities. Furthermore, we discover modality bias and noisy label issues and alleviate them with an individual-guided learning mechanism and label smoothing~\cite{reed2014training}, respectively.
To facilitate our investigations, we collect a \textit{Look, listen, and Parse} (LLP) dataset that has $11,849$ YouTube video clips from $25$ event categories. We label them with sparse video-level event labels for training.
For evaluation, we annotate a set of precise labels, including event modalities, event categories, and their temporal boundaries. Experimental results show that it is tractable to learn audio-visual video parsing even with video-level weak labels. Our proposed HAN model can effectively leverage multimodal temporal contexts. Furthermore, modality bias and noisy label problems can be addressed with the proposed individual learning strategy and label smoothing, respectively. Besides, we discuss potential applications enabled by audio-visual video parsing.
The contributions of our work include: (1) a new audio-visual video parsing task towards a unified multisensory perception; (2) a novel hybrid attention network to leverage unimodal and cross-modal temporal contexts simultaneously; (3) an effective attentive MMIL pooling to aggregate multimodal information adaptively; (4) a new individual guided learning approach to mitigate the modality bias in the MMIL problem and label smoothing to alleviate noisy labels; and (5) a newly collected large-scale video dataset, named LLP, for audio-visual video parsing. Dataset, code, and pre-trained models are publicly available in \url{https://github.com/YapengTian/AVVP-ECCV20}.
\section{Related Work}
\label{sec:related}
In this section, we discuss some related work on temporal action localization, sound event detection, and audio-visual learning.
\noindent\textbf{Temporal Action Localization.} Temporal action localization (TAL) methods usually use sliding windows as action candidates and address TAL as a classification problem~\cite{gaidon2013temporal,lin2018bsn,long2019gaussian,shou2017cdc,shou2016temporal,zhao2017temporal} learning from full supervisions. Recently, weakly-supervised approaches are proposed to solve the TAL. Wang \emph{et al.}~\cite{wang2017untrimmednets} present an UntrimmedNet with a classification module and a selection module to learn the action models and reason about the
temporal duration of action instances, respectively. Hide-and-seek~\cite{singh2017hide} randomly hides certain sequences while training to force the model to explore more discriminative content. Paul~\emph{et al.}~\cite{paul2018w} introduce a co-activity similarity loss to enforce instances in the same class to be similar in the feature space. Inspired by the class activation map method~\cite{zhou2016learning}, Nguyen~\emph{et al.}~\cite{nguyen2018weakly} propose a sparse temporal pooling network (STPN). Liu~\emph{et al.}~\cite{liu2019completeness} incorporate both action completeness modeling and action-context separation into a weakly-supervised TAL framework. Unlike actions in TAL, video events in audio-visual video parsing might contain motionless or even out-of-screen sound sources, and the events can be perceived by either audio or visual modalities. Nevertheless, we extend two recent weakly-supervised TAL methods, STPN~\cite{nguyen2018weakly} and CMCS~\cite{liu2019completeness}, to address visual event parsing and compare them with our model in Sec.~\ref{sec:comp}.
\noindent\textbf{Sound Event Detection.}
Sound event detection (SED) is the task of recognizing and locating audio events in acoustic environments. Early supervised approaches rely on machine learning models such as support vector machines~\cite{elizalde2016experiments}, Gaussian mixture models~\cite{heittola2013context}, and recurrent neural networks~\cite{parascandolo2016recurrent}. To bypass strongly labeled data, weakly-supervised SED methods have been developed~\cite{chou2018learning,kong2018audio,mcfee2018adaptive,wang2019comparison}. These methods only focus on audio events from constrained domains, such as urban sounds~\cite{salamon2014dataset} and domestic environments~\cite{mesaros2017dcase}, and visual information is ignored. In contrast, our audio-visual video parsing exploits both modalities to parse not only event categories and boundaries but also event-perceiving modalities, towards a unified multisensory perception for unconstrained videos.
\noindent\textbf{Audio-Visual Learning.} Benefiting from the natural synchronization between auditory and visual modalities, audio-visual learning has enabled a set of new problems and applications including representation learning~\cite{Arandjelovic2017ICCV,aytar2016soundnet,hu2019deep,korbar2018cooperative,ngiam2011multimodal,owens2018audio,owens2016ambient}, audio-visual sound separation~\cite{ephrat2018looking,gao2018learning,gao20192,gao2019co,zhao2019sound,zhao2018sound,gan2020music,zhou2020sep}, vision-infused audio inpainting~\cite{zhou2019vision}, sound source spatial localization~\cite{owens2018audio,senocak2018learning,tian2018audio}, sound-assisted action recognition~\cite{gao2019listentolook,kazakos2019epic,korbar2019scsampler}, audio-visual video captioning~\cite{rahman2019watch,tian2018attempt,Tian_2019_CVPR_Workshops,wang2018watch}, and audio-visual event localization~\cite{lin2019dual,tian2018audio,tian2019audio,wu2019DAM}. Most previous work assumes that temporally synchronized audio and visual content are always matched conveying the same semantic meanings. However, unconstrained videos can be very noisy: sound sources might not be visible (\emph{e.g.}, an out-of-screen running car and a narrating person) and not all visible objects are audible (\emph{e.g.}, a static motorcycle and people dancing with music).
Different from previous methods, we pose and seek to tackle a fundamental but unexplored problem: audio-visual video parsing, which parses unconstrained videos into a set of video events associated with event categories, boundaries, and modalities. Since existing methods cannot directly address our problem, we modify the recent weakly-supervised audio-visual event
localization methods AVE~\cite{tian2018audio} and AVSDN~\cite{lin2019dual} by adding additional audio and visual parsing branches as baselines.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figs/llp_new.pdf}
\caption{Some examples from the LLP dataset.}
\label{fig:data}
\end{figure}
\section{LLP: The Look, Listen and Parse Dataset}
To the best of our knowledge, there is no existing dataset that is suitable for our research.
Thus, we introduce a \textit{Look, Listen, and Parse} dataset for audio-visual video scene parsing, which contains 11,849 YouTube video clips spanning over 25 categories for
a total of 32.9 hours collected from AudioSet~\cite{gemmeke2017audio}.
A wide range of video events (\emph{e.g.}, human speaking, singing, baby crying, dog barking, violin playing, car running, and vacuum cleaning) from diverse domains (\emph{e.g.}, human activities, animal activities, music performances, vehicle sounds, and domestic environments) are included in the dataset. Some examples from the LLP dataset are shown in Fig.~\ref{fig:data}.
Videos in the LLP dataset have 11,849 video-level event annotations on the presence or absence of different video events, facilitating weakly-supervised learning.
Each video is 10$s$ long and contains at least 1$s$ of audio or visual events.
There are 7,202 videos that contain events from more than one event category, and each video has 1.64 different event categories on average. To evaluate audio-visual scene parsing performance, we annotate individual audio and visual events with second-wise temporal boundaries for 1,849 videos randomly selected from the LLP dataset. Note that the audio-visual event labels can be derived from the audio and visual event labels. In total, we have 6,626 event annotations, including 4,131 audio events and 2,495 visual events, for the 1,849 videos. Merging the individual audio and visual labels, we obtain 2,488 audio-visual event annotations.
For validation and testing, we split this annotated subset into a validation set with 649 videos and a testing set with 1,200 videos. Our weakly-supervised audio-visual video parsing network is trained using the remaining 10,000 videos with weak labels; models are developed on the fully annotated validation set and evaluated on the fully annotated testing set.
\section{Audio-Visual Video Parsing with Weak Labels}
\label{sec:avsp_problem}
We define \textit{Audio-Visual Video Parsing} as a task to {group video segments and parse a video into different temporal audio, visual, and audio-visual events associated with semantic labels}. Since event boundaries in the LLP dataset were annotated at the second level, video events are parsed at the scene level rather than the object/instance level in our experimental setting. Concretely, given a video sequence containing both audio and visual tracks, we divide it into $T$ non-overlapping audio and visual snippet pairs $\{V_t, A_t\}_{t=1}^{T}$, where each snippet is 1$s$ long and $V_t$ and $A_t$ denote visual and audio content in the same video snippet, respectively. Let $\textbf{\textit{y}}_t = \{(y_{t}^a, y_{t}^v, y_{t}^{av})|[y_{t}^{a}]_{c}, [y_{t}^{v}]_{c}, [y_{t}^{av}]_{c} \in \{0, 1\}, c = 1, ..., C\}$ be the event label set for the video snippet $\{V_t, A_t\}$, where $c$ refers to the $c$-th event category and $y_{t}^a$, $y_{t}^v$, and $y_{t}^{av}$ denote audio, visual, and audio-visual event labels, respectively. Here, we have the relation $y_{t}^{av} = y_{t}^{a}*y_{t}^{v}$, which means that an audio-visual event occurs only when audio and visual events of the same category exist at the same time.
In this work, we explore the audio-visual video parsing in a weakly-supervised manner.
We only have video-level labels for training, but must predict precise event label sets for all video snippets during testing, which makes weakly-supervised audio-visual video parsing a multi-modal multiple instance learning (MMIL) problem.
Let a video sequence with $T$ audio and visual snippet pairs be a bag.
Unlike previous audio-visual event localization~\cite{tian2018audio}, which is formulated as a MIL problem~\cite{maron1998framework} where an audio-visual snippet pair is regarded as one instance, in our MMIL problem each audio snippet and the corresponding visual snippet occurring at the same time are two individual instances.
So, a positive bag containing video events has at least one positive video snippet, and within that positive snippet at least one modality contains video events.
During training, we can only access bag labels.
During inference, we need to know not only which video snippets have video events but also which sensory modalities perceive the events. The temporal and multi-modal uncertainty in this MMIL problem makes it very challenging.
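As a toy illustration of the label relation $y_{t}^{av} = y_{t}^{a}*y_{t}^{v}$ (the values below are ours, not from the dataset):

```python
import numpy as np

# Snippet-level labels for T=4 snippets and C=2 categories (toy values).
y_a = np.array([[1, 0], [1, 0], [0, 1], [0, 0]])   # audio event labels
y_v = np.array([[1, 0], [0, 0], [0, 1], [0, 1]])   # visual event labels

# An audio-visual event requires the same category to be active in both
# modalities at the same time step: element-wise product of the labels.
y_av = y_a * y_v
```

Only the first and third snippets yield audio-visual events; the others are audio-only or visual-only.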
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figs/framework.pdf}
\caption{The proposed audio-visual video parsing framework. It uses pre-trained CNNs to extract snippet-level audio and visual features and leverages multimodal temporal contexts with the proposed hybrid attention network (HAN). For each snippet, we will predict both audio and visual event labels from the aggregated features by the HAN. Attentive MMIL pooling is utilized to adaptively predict video-level event labels for weakly-supervised learning (WSL) and individual guided learning is devised to mitigate the modality bias issue.}
\label{fig:framework}
\end{figure}
\section{Method}
\label{sec:method}
First, we present the overall framework that formulates the weakly-supervised audio-visual video parsing as an MMIL problem in Sec.~\ref{sec:overview}.
Built upon this framework, we propose a new multimodal temporal model, the hybrid attention network, in Sec.~\ref{sec:han}; attentive MMIL pooling in Sec.~\ref{sec:att_pool}; and techniques for addressing the modality bias and noisy label issues in Sec.~\ref{sec:GL_LN}.
\subsection{Audio-Visual Video Parsing Framework}
\label{sec:overview}
Our framework, as illustrated in Fig.~\ref{fig:framework}, has three main modules: audio and visual feature extraction, multimodal temporal modeling, and attentive MMIL pooling.
Given a video sequence with $T$ audio and visual snippet pairs $\{V_t, A_t\}_{t=1}^{T}$, we first use pre-trained visual and audio models to extract snippet-level visual features: $\{f_v^t\}_{t=1}^{T}$ and audio features: $\{f_a^t\}_{t=1}^{T}$, respectively.
Taking extracted audio and visual features as inputs, we use two hybrid attention networks as the multimodal temporal modeling module to leverage unimodal and cross-modal temporal contexts and obtain updated visual features $\{\hat{f}_v^t\}_{t=1}^{T}$ and audio features $\{\hat{f}_a^t\}_{t=1}^{T}$.
To predict audio and visual instance-level labels and make use of the video-level weak labels, we address the MMIL problem with a novel attentive MMIL pooling module outputting video-level labels.
\subsection{Hybrid Attention Network}
\label{sec:han}
Natural videos tend to contain continuous and repetitive rather than isolated audio and visual content. In particular, audio or visual events in a video usually recur many times, both within the same modality (unimodal temporal recurrence~\cite{naphade2002discovering,roma2013recurrence}) and across different modalities (audio-visual temporal synchronization~\cite{korbar2018cooperative} and asynchrony~\cite{vroomen2004recalibration}). This observation suggests jointly modeling temporal recurrence, co-occurrence, and asynchrony in a unified approach. However, existing audio-visual learning methods~\cite{lin2019dual,tian2018audio,wu2019DAM} usually ignore audio-visual temporal asynchrony and explore unimodal temporal recurrence using temporal models (\emph{e.g.}, LSTM~\cite{hochreiter1997long} and Transformer~\cite{vaswani2017attention}) and audio-visual temporal synchronization using multimodal fusion (\emph{e.g.}, feature fusion~\cite{tian2018audio} and prediction ensemble~\cite{kazakos2019epic}) in an isolated way. To simultaneously capture multimodal temporal contexts, we propose a new temporal model: the Hybrid Attention Network (HAN), which uses a self-attention network and a cross-attention network to adaptively learn, for each audio or visual snippet, which snippets within the same modality and across modalities to attend to.
At each time step $t$, a hybrid attention function $g$ in HAN will be learned from audio and visual features: $\{f_a^{t}, f_v^t\}_{t=1}^{T}$ to update $f_a^{t}$ and $f_v^{t}$, respectively. The updated audio feature $\hat{f}_a^{t}$ and visual feature $\hat{f}_v^{t}$ can be computed as:
\begin{align}
\hat{f}_a^{t} = g(f_a^{t}, \textbf{\textit{f}}_a, \textbf{\textit{f}}_v) = f_a^t + g_{sa}(f_a^{t}, \textbf{\textit{f}}_a) + g_{ca}(f_a^{t}, \textbf{\textit{f}}_v)\enspace,\\
\hat{f}_v^{t} = g(f_v^{t}, \textbf{\textit{f}}_a, \textbf{\textit{f}}_v) = f_v^t + g_{sa}(f_v^{t}, \textbf{\textit{f}}_v) + g_{ca}(f_v^{t}, \textbf{\textit{f}}_a)\enspace,
\end{align}
where $\textbf{\textit{f}}_a = [f_a^1;...;f_a^T]$ and $\textbf{\textit{f}}_v= [f_v^1;...;f_v^T]$; $g_{sa}$ and $g_{ca}$ are self-attention and cross-modal attention functions, respectively; skip-connections can help preserve the identity information from the input sequences. The two attention functions are formulated with the same computation mechanism. With $g_{sa}(f_a^{t}, \textbf{\textit{f}}_a)$ and $g_{ca}(f_a^{t}, \textbf{\textit{f}}_v)$ as examples, they are defined as:
\begin{align}
g_{sa}(f_a^{t}, \textbf{\textit{f}}_a) = \sum_{k=1}^{T}w_k^{sa}f_a^k = \textit{softmax}(\frac{f_a^t\textbf{\textit{f}}_a^{'}}{\sqrt{d}})\textbf{\textit{f}}_a\enspace,\\
g_{ca}(f_a^{t}, \textbf{\textit{f}}_v) = \sum_{k=1}^{T}w_k^{ca}f_v^k = \textit{softmax}(\frac{f_a^t\textbf{\textit{f}}_v^{'}}{\sqrt{d}})\textbf{\textit{f}}_v\enspace,
\end{align}
where the scaling factor $d$ equals the audio/visual feature dimension and $(\cdot)^{'}$ denotes the transpose operator. The self-attention and cross-modal attention functions in HAN assign large weights to snippets that are similar to the query snippet and contain the same video events, both within the same modality and across different modalities. The experimental results show that HAN, by modeling unimodal temporal recurrence, multimodal temporal co-occurrence, and audio-visual temporal asynchrony, can capture unimodal and cross-modal temporal contexts well and improves audio-visual video parsing performance.
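A shape-level NumPy sketch of the HAN update in the equations above; the learned projections of a full attention layer are omitted for brevity, which is our simplification:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query_seq, key_value_seq):
    # Scaled dot-product attention shared by g_sa and g_ca: every query
    # snippet attends over all T snippets of the key/value sequence.
    d = query_seq.shape[-1]
    w = softmax(query_seq @ key_value_seq.T / np.sqrt(d))  # (T, T) weights
    return w @ key_value_seq

def han_update(f_a, f_v):
    # Updated feature = identity skip + self-attention + cross-modal attention.
    f_a_hat = f_a + attend(f_a, f_a) + attend(f_a, f_v)
    f_v_hat = f_v + attend(f_v, f_v) + attend(f_v, f_a)
    return f_a_hat, f_v_hat
```

The skip connection preserves the identity of each snippet while the two attention terms mix in temporally recurrent and cross-modal context.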
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figs/attentive.pdf}
\caption{Attentive MMIL Pooling. For event category $c$, temporal and audio-visual attention mechanisms adaptively select informative event predictions across the temporal and modality axes, respectively, for predicting whether there is an event of that category. }
\label{fig:attentive}
\end{figure}
\subsection{Attentive MMIL Pooling}
\label{sec:att_pool}
To achieve audio-visual video parsing, we predict all event labels for audio and visual snippets from the temporally aggregated features $\{\hat{f}_a^{t}, \hat{f}_v^t\}_{t=1}^{T}$. We use a shared fully-connected (FC) layer to project audio and visual features into the event label space and adopt a sigmoid function to output a probability for each event category:
\begin{align}
p_a^t &= sigmoid(FC(\hat{f}_a^{t}))\enspace, \\ p_v^t &= sigmoid(FC(\hat{f}_v^{t}))\enspace,
\end{align}
where $p_a^t$ and $p_v^t$ are predicted audio and visual event probabilities at timestep $t$, respectively. Here, the shared FC layer can implicitly enforce audio and visual features into a similar latent space. The reason to use a sigmoid to output an event probability for each category, rather than a softmax to predict a probability distribution over all categories, is that a single snippet may contain multiple events rather than only a single event as assumed in Tian \emph{et al.}~\cite{tian2018audio}.
Since audio-visual events only occur when sound sources are visible and their sounds are audible, the audio-visual event probability $ p_{av}^{t}$ can be derived from individual audio and visual predictions: $p_{av}^{t} = p_a^t * p_v^t$.
If we had direct supervision for all audio and visual snippets at different time steps, we could simply learn the audio-visual video parsing network in a fully-supervised manner. However, in this MMIL problem, we can only access a video-level weak label $\bar{\textbf{\textit{{y}}}}$ for all audio and visual snippets $\{A_t, V_t\}_{t=1}^{T}$ from a video. To learn our network with weak labels, as illustrated in Fig.~\ref{fig:attentive}, we propose an attentive MMIL pooling method to predict the video-level event probability $\bar{\textbf{\textit{p}}}$ from $\{{p}_a^{t}, {p}_v^t\}_{t=1}^{T}$.
Concretely, the $\bar{\textbf{\textit{p}}}$ is computed by:
\begin{align}
\bar{\textbf{\textit{p}}} = \sum_{t=1}^{T}\sum_{m=1}^{M} (W_{tp}\odot W_{av}\odot P)[t, m, :]\enspace,
\end{align}
where $\odot$ denotes element-wise multiplication; $m$ is a modality index and $M$ = $2$ refers to the audio and visual modalities; $W_{tp}$ and $W_{av}$ are temporal attention and audio-visual attention tensors predicted from $\{\hat{f}_a^{t}, \hat{f}_v^t\}_{t=1}^{T}$, respectively; and $P$ is the probability tensor built from $\{{p}_a^{t}, {p}_v^t\}_{t=1}^{T}$, where $P(t, 0, :) = p_{a}^{t}$ and $P(t, 1, :) = p_{v}^{t}$. To compute the two attention tensors, we first compose an input feature tensor $F$, where $F(t, 0, :) = \hat{f}_{a}^{t}$ and $F(t, 1, :) = \hat{f}_{v}^{t}$. Then, two different FC layers transform $F$ into two tensors, $F_{tp}$ and $F_{av}$, which have the same size as $P$. To adaptively select the most informative snippets for predicting probabilities of different event categories, we assign different weights to snippets at different time steps with a temporal attention mechanism:
\begin{align}
W_{tp}[:, m, c] = softmax(F_{tp}[:, m, c])\enspace,
\end{align}
where $m$ = $1, 2$ and $c$ = $1, \dots, C$. Accordingly, we can adaptively select the most informative modalities with the audio-visual attention tensor:
\begin{align}
W_{av}[t, :, c] = softmax(F_{av}[t, :, c])\enspace,
\end{align}
where $t$ = $1, \dots, T$ and $c$ = $1, \dots, C$. The snippets within a video from different temporal steps and different modalities may contain different video events. The proposed attentive MMIL pooling models this observation well with the tensorized temporal and multimodal attention mechanisms.
With the predicted video-level event probability $\bar{\textbf{\textit{p}}}$ and the ground truth label $\bar{\textbf{\textit{{y}}}}$, we can optimize the proposed weakly-supervised learning model with a binary cross-entropy loss function: $\mathcal{L}_{wsl} = CE(\bar{\textbf{\textit{p}}}, \bar{\textbf{\textit{y}}}) = -\sum_{c=1}^{C} \left(\bar{\textbf{\textit{y}}}[c]\log(\bar{\textbf{\textit{p}}}[c]) + (1-\bar{\textbf{\textit{y}}}[c])\log(1-\bar{\textbf{\textit{p}}}[c])\right)$.
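The attentive MMIL pooling above can be sketched in NumPy; random logits stand in for the FC outputs $F_{tp}$ and $F_{av}$ (an assumption for illustration only):

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, M, C = 10, 2, 25
rng = np.random.default_rng(0)
P = 1 / (1 + np.exp(-rng.standard_normal((T, M, C))))  # sigmoid probabilities
F_tp = rng.standard_normal((T, M, C))  # stand-in for temporal-attention logits
F_av = rng.standard_normal((T, M, C))  # stand-in for modality-attention logits

W_tp = softmax(F_tp, axis=0)  # temporal attention: normalize over time steps
W_av = softmax(F_av, axis=1)  # audio-visual attention: normalize over modalities

# Video-level event probabilities: weighted sum over time and modality.
p_bar = (W_tp * W_av * P).sum(axis=(0, 1))
```

Each attention tensor is a softmax along exactly one axis, so informative time steps and modalities are weighted up independently per event category.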
\subsection{Alleviating Modality Bias and Label Noise}
\label{sec:GL_LN}
The weakly-supervised audio-visual video parsing framework uses only coarse annotations, without requiring expensive dense labels of audio and visual events for all snippets. This advantage makes the weakly-supervised learning framework appealing. However, weak supervision usually drives models to identify only the most discriminative patterns in the training data, as observed in previous weakly-supervised MIL problems~\cite{singh2017hide,song2014weakly,zhou2016learning}. In our MMIL problem, the issue becomes even more complicated since there are multiple modalities and they might not contain equally discriminative information. With weakly-supervised learning, the model tends to use information only from the most discriminative modality and ignore the other, which may achieve good video classification performance but poor video parsing performance on events from the ignored modality and on audio-visual events. Since a video-level label contains all event categories from the audio and visual content within the video, to alleviate this modality bias in the MMIL, we propose explicit supervision on both modalities with a guided loss:
\begin{align}
\label{eq:gl}
\mathcal{L}_{g} = CE(\bar{\textbf{\textit{p}}}_a, \bar{\textbf{\textit{y}}}_a) + CE(\bar{\textbf{\textit{p}}}_v, \bar{\textbf{\textit{y}}}_v)\enspace,
\end{align}
where $\bar{\textbf{\textit{y}}}_a = \bar{\textbf{\textit{y}}}_v = \bar{\textbf{\textit{y}}}$, and $\bar{\textbf{\textit{p}}}_a = \sum_{t=1}^{T}(W_{tp}\odot P)[t, 0, :]$ and $\bar{\textbf{\textit{p}}}_v = \sum_{t=1}^{T}(W_{tp}\odot P)[t, 1, :]$ are video-level audio and visual event probabilities, respectively.
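Continuing the attentive-pooling notation, the per-modality video-level probabilities used by the guided loss can be sketched as follows; random values again stand in for network outputs (an illustrative assumption):

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, M, C = 10, 2, 25
rng = np.random.default_rng(1)
P = 1 / (1 + np.exp(-rng.standard_normal((T, M, C))))   # snippet probabilities
W_tp = softmax(rng.standard_normal((T, M, C)), axis=0)  # temporal attention

# The guided loss pools each modality separately (no modality attention),
# so each modality must explain the video-level label on its own.
p_bar_a = (W_tp * P)[:, 0, :].sum(axis=0)  # audio-only video-level probs
p_bar_v = (W_tp * P)[:, 1, :].sum(axis=0)  # visual-only video-level probs
```

Because the temporal weights sum to one per modality and category, each pooled probability stays in $(0, 1)$.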
However, not all video events are audio-visual events: an event occurring in one modality might not occur in the other, and then the corresponding event label becomes label noise for one of the two modalities. Thus, the guided loss $\mathcal{L}_{g}$ suffers from a noisy label issue. For the example shown in Fig.~\ref{fig:framework}, the video-level label is \{\textit{Speech}, \textit{Dog}\} but the video-level visual event label is only \{\textit{Dog}\}; \{\textit{Speech}\} is a noisy label for the visual guided loss.
To handle this problem, we use label smoothing~\cite{szegedy2016rethinking} to lower the confidence of positive labels by
smoothing $\bar{\textbf{\textit{y}}}$, generating smoothed labels $\bar{\textbf{\textit{y}}}_a$ and $\bar{\textbf{\textit{y}}}_v$. They are formulated as: $\bar{\textbf{\textit{y}}}_a = (1 - \epsilon_a) \bar{\textbf{\textit{y}}} + \frac{\epsilon_a}{K}$ and $\bar{\textbf{\textit{y}}}_v = (1 - \epsilon_v) \bar{\textbf{\textit{y}}} + \frac{\epsilon_v}{K}$,
where $\epsilon_a, \epsilon_v\in [0,1)$ are two confidence parameters to balance the event probability distribution and a uniform distribution: $u = \frac{1}{K}$ ($K > 1$).
For a noisy label at event category $c$, when $\bar{\textbf{\textit{y}}}[c] = 1$ but the real label should be $0$, we have $\bar{\textbf{\textit{y}}}[c] = (1 - \epsilon_a) \bar{\textbf{\textit{y}}}[c] + \epsilon_a > (1 - \epsilon_a) \bar{\textbf{\textit{y}}}[c] + \frac{\epsilon_a}{K}=\bar{\textbf{\textit{y}}}_a[c]$, so the smoothed label is closer to the real one and thus more reliable. The label smoothing technique is commonly adopted in many tasks, such as image classification~\cite{szegedy2016rethinking}, speech recognition~\cite{chorowski2016towards}, and machine translation~\cite{vaswani2017attention}, to reduce over-fitting and improve the generalization capability of deep models. Different from these past uses, we use smoothed labels to mitigate label noise occurring in the individual guided learning. Our final model is optimized with the two loss terms: $\mathcal{L} = \mathcal{L}_{wsl} + \mathcal{L}_{g}$.
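The smoothing step amounts to a couple of lines; the values of $\epsilon_a$ and $K$ below are placeholders, not the paper's choices:

```python
import numpy as np

eps_a, K = 0.2, 10                      # placeholder hyperparameters
y_bar = np.array([1.0, 0.0, 1.0])       # video-level weak label
y_a_smooth = (1 - eps_a) * y_bar + eps_a / K

# Positives drop from 1.0 to 0.82 and negatives rise slightly to 0.02,
# lowering the penalty when a positive entry is actually noise for audio.
```

The same formula with $\epsilon_v$ produces the visual-side targets.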
\section{Experiments}
\subsection{Experimental Settings}
\label{base&metric}
\noindent \textbf{Implementation Details.} For a 10-second-long video, we first sample video frames at 8~fps, and each video is divided into non-overlapping 1$s$ snippets of $8$ frames each. Given a visual snippet, we extract a $512$-D snippet-level feature by fusing features extracted from ResNet152~\cite{he2016deep} and 3D ResNet~\cite{tran2018closer}. In our experiments, the batch size and number of epochs are set to 16 and 40, respectively. The initial learning rate is $3\times10^{-4}$ and is multiplied by $0.1$ every 10 epochs. Our models are optimized by Adam and can be trained using one NVIDIA 1080Ti GPU.
\noindent \textbf{Baselines.} Since there are no existing methods to address the audio-visual video parsing, we design several baselines based on previous state-of-the-art weakly-supervised sound detection~\cite{kong2018audio,wang2019comparison}, temporal action localization~\cite{liu2019completeness,nguyen2018weakly}, and audio-visual event localization~\cite{lin2019dual,tian2018audio} methods to validate the proposed framework. To make \cite{lin2019dual,tian2018audio} possible to address audio-visual scene parsing, we add additional audio and visual branches to predict audio and visual event probabilities supervised with an additional guided loss as defined in Sec.~\ref{sec:GL_LN}. For fair comparisons, the compared approaches use the same audio and visual features as our method.
\noindent \textbf{Evaluation Metrics.} To comprehensively measure the performance of different methods, we evaluate them on parsing all types of events (individual audio, visual, and audio-visual events) under both segment-level and event-level metrics. To evaluate overall audio-visual scene parsing performance, we also compute aggregated results: Type@AV averages the audio, visual, and audio-visual event evaluation results, while Event@AV computes the F-score considering all audio and visual events for each sample rather than directly averaging results from different event types. We use both segment-level and event-level F-scores~\cite{mesaros2016metrics} as metrics. The segment-level metric evaluates snippet-wise event labeling performance. For event-level F-score results, we extract events by concatenating consecutive positive snippets of the same event category and compute the event-level F-score with mIoU $= 0.5$ as the threshold.
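A minimal sketch of the event-level evaluation step described above, grouping consecutive positive snippets into events and scoring temporal overlap (the helper names are ours):

```python
def extract_events(snippet_labels):
    # Concatenate consecutive positive 1s snippets into (onset, offset) events.
    events, start = [], None
    for t, v in enumerate(snippet_labels):
        if v and start is None:
            start = t
        elif not v and start is not None:
            events.append((start, t))
            start = None
    if start is not None:
        events.append((start, len(snippet_labels)))
    return events

def iou(a, b):
    # Temporal intersection-over-union of two (onset, offset) events.
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union
```

Under the event-level metric, a predicted event of a category counts as a match when its IoU with a ground-truth event of the same category reaches the 0.5 threshold.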
\setlength{\tabcolsep}{3pt}
\begin{table}[t]
\begin{center}
\caption{Audio-visual video parsing accuracy ($\%$) of different methods on the LLP test dataset. These methods all use the same audio and visual features as inputs for a fair comparison. The best results for each event type are highlighted in bold.}
\label{tbl:avsp}
\scalebox{1.0}{
\begin{tabular}{l| c| c c }
\toprule
Event type&Methods &Segment-level &Event-level \\
\midrule
\multirow{6}{*}{Audio}
&Kong \emph{et al.} 2018~\cite{kong2018audio}&39.6&29.1\\
&TALNet~\cite{wang2019comparison}&{50.0}&{41.7}\\
&AVE~\cite{tian2018audio}&47.2&40.4\\
&AVSDN~\cite{lin2019dual}&47.8&34.1\\
&Ours& \textbf{60.1}&\textbf{51.3}\\
\midrule
\multirow{5}{*}{Visual}
&STPN~\cite{nguyen2018weakly} &46.5&41.5\\
&CMCS~\cite{liu2019completeness}&48.1&45.1\\
&AVE~\cite{tian2018audio}&37.1&34.7\\
&AVSDN~\cite{lin2019dual}&52.0&46.3\\
&Ours&\textbf{52.9}&\textbf{48.9}\\
\midrule
\multirow{3}{*}{Audio-Visual}&AVE~\cite{tian2018audio}&35.4&31.6\\
&AVSDN~\cite{lin2019dual}&37.1&26.5\\
&Ours&\textbf{48.9}&\textbf{43.0}\\
\midrule
\multirow{3}{*}{Type@AV}&AVE~\cite{tian2018audio}&39.9&35.5\\
&AVSDN~\cite{lin2019dual}&45.7&35.6\\
&Ours&\textbf{54.0}&\textbf{47.7}\\
\midrule
\multirow{3}{*}{Event@AV}&AVE~\cite{tian2018audio}&41.6&36.5\\
&AVSDN~\cite{lin2019dual}&50.8&37.7\\
&Ours&\textbf{55.4}&\textbf{48.0}\\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
\subsection{Experimental Comparison}
\label{sec:comp}
To validate the effectiveness of the proposed audio-visual video parsing network, we compare it with the weakly-supervised sound event detection methods Kong \emph{et al.} 2018~\cite{kong2018audio} and TALNet~\cite{wang2019comparison} on audio event parsing, the weakly-supervised action localization methods STPN~\cite{nguyen2018weakly} and CMCS~\cite{liu2019completeness} on visual event parsing, and the modified audio-visual event localization methods AVE~\cite{tian2018audio} and AVSDN~\cite{lin2019dual} on audio, visual, and audio-visual event parsing. The quantitative results are shown in Tab.~\ref{tbl:avsp}. Our method outperforms the compared approaches on all audio-visual video parsing subtasks under both the segment-level and event-level metrics, which demonstrates that our network predicts more accurate snippet-wise event categories with more precise event onsets and offsets for testing videos.
\setlength{\tabcolsep}{2pt}
\begin{table}[t]
\begin{center}
\caption{Ablation study on learning mechanism, attentive MMIL pooling, hybrid attention network, and handling noisy labels. Segment-level audio-visual video parsing results are shown. The best results for each ablation study are highlighted.}
\label{tbl:ablation}
\scalebox{0.715}{
\begin{tabular}{c| c| c| c |c c c| c c}
\toprule
Loss &MMIL Pooling& Temporal Net &Handle Noisy Label &Audio &Visual &Audio-Visual&Type@AV &Event@AV\\
\midrule
\textcolor{blue}{$\mathcal{L}_{wsl}$}& Attentive&$\times$&$\times$&\textbf{56.9}& 16.4 &17.2 &30.2&43.3\\
\textcolor{blue}{$\mathcal{L}_g$}& Attentive&$\times$&$\times$& 42.3& 43.9& 34.5&40.3&42.0\\
\textcolor{blue}{$\mathcal{L}_{wsl} + \mathcal{L}_g$}& Attentive&$\times$&$\times$&45.1& \textbf{51.7} &\textbf{35.0} &\textbf{44.0} & \textbf{48.9}\\
\midrule
$\mathcal{L}_{wsl} + \mathcal{L}_g$& \textcolor{blue}{Max}&$\times$ &$\times$&31.6&43.6&22.5&32.6&39.1\\
$\mathcal{L}_{wsl} + \mathcal{L}_g$& \textcolor{blue}{Mean}&$\times$&$\times$& 40.2&43.2&\textbf{35.0}&39.5&39.7\\
$\mathcal{L}_{wsl} + \mathcal{L}_g$& \textcolor{blue}{Attentive}&$\times$&$\times$&\textbf{45.1}& \textbf{51.7} &\textbf{35.0} &\textbf{44.0} & \textbf{48.9}\\
\midrule
$\mathcal{L}_{wsl} + \mathcal{L}_g$& Attentive&\textcolor{blue}{$\times$}&{$\times$}&45.1& {51.7} &{35.0} &{44.0} & {48.9}\\
$\mathcal{L}_{wsl} + \mathcal{L}_g$& Attentive&\textcolor{blue}{GRU}~\cite{cho2014learning}&$\times$&52.0&49.4&39.0&46.8&51.0\\
$\mathcal{L}_{wsl} + \mathcal{L}_g$& Attentive&\textcolor{blue}{Transformer}~\cite{vaswani2017attention}&$\times$&53.4& \textbf{53.8}& 41.8&49.7&53.3\\
$\mathcal{L}_{wsl} + \mathcal{L}_g$& Attentive&\textcolor{blue}{HAN}&$\times$&\textbf{58.4}& {52.8}&\textbf{48.4}&\textbf{53.2}&\textbf{54.5}\\
\midrule
\textcolor{blue}{$\mathcal{L}_{wsl}$}& Attentive&HAN&$\times$&39.6&40.5&20.1&33.4&44.9\\
\textcolor{blue}{$\mathcal{L}_g$}& Attentive&HAN&$\times$& 57.5&52.5&47.4&52.5&53.8\\
\textcolor{blue}{$\mathcal{L}_{wsl} + \mathcal{L}_g$}& Attentive&HAN&$\times$&\textbf{58.4}& \textbf{52.8}&\textbf{48.4}&\textbf{53.2}&\textbf{54.5}\\
\midrule
$\mathcal{L}_{wsl} + \mathcal{L}_g$& \textcolor{blue}{Max}&HAN &$\times$&55.7&52.0&\textbf{48.6}&52.1&51.8\\
$\mathcal{L}_{wsl} + \mathcal{L}_g$& \textcolor{blue}{Mean}&HAN&$\times$& 56.0&51.9&46.3&51.4&52.9\\
$\mathcal{L}_{wsl} + \mathcal{L}_g$& \textcolor{blue}{Attentive}&HAN&$\times$&\textbf{58.4}& \textbf{52.8}&{48.4}&\textbf{53.2}&\textbf{54.5}\\
\midrule
$\mathcal{L}_{wsl} + \mathcal{L}_g$& Attentive&HAN&\textcolor{blue}{$\times$}&{58.4}& {52.8}&{48.4}&{53.2}&{54.5}\\
$\mathcal{L}_{wsl} + \mathcal{L}_g$& Attentive&HAN&\textcolor{blue}{Bootstrap}~\cite{reed2014training}&59.0&52.6&47.8&53.1&55.2\\
$\mathcal{L}_{wsl} + \mathcal{L}_g$& Attentive&HAN&\textcolor{blue}{Label Smoothing~\cite{szegedy2016rethinking}}&\textbf{60.1}& \textbf{52.9}&\textbf{48.9}&\textbf{54.0}&\textbf{55.4}\\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
\noindent \textbf{Individual Guided Learning.} From Tab.~\ref{tbl:ablation}, we observe that the model without individual guided learning achieves good performance on audio event parsing but very poor visual parsing results, leading to poor audio-visual event parsing; the model with only $\mathcal{L}_{g}$ achieves reasonable audio and visual event parsing results; and our model trained with both $\mathcal{L}_{wsl}$ and $\mathcal{L}_{g}$ outperforms the models trained with either loss alone. The results indicate that the model trained with only $\mathcal{L}_{wsl}$ finds discriminative information mostly from sounds, leaving visual information under-explored during training, and that the individual guided learning can effectively handle the modality bias issue. In addition, when the network is trained with only $\mathcal{L}_{g}$, it actually models audio and visual event parsing as two individual MIL problems in which only noisy labels are used. Our MMIL framework, which learns from clean weak labels with $\mathcal{L}_{wsl}$ and handles the modality bias with $\mathcal{L}_{g}$, achieves the best overall audio-visual video parsing performance.
Moreover, we note that the modality bias issue stems from the audio-visual data imbalance in the training videos, which originally come from an audio-oriented dataset: AudioSet. Since the issue appears after just one epoch of training, it is not over-fitting.
\noindent \textbf{Attentive MMIL Pooling.} To validate the proposed attentive MMIL pooling, we compare it with two commonly used methods: max pooling and mean pooling. Our attentive MMIL pooling (see Tab.~\ref{tbl:ablation}) is superior to both compared methods. Max MMIL pooling selects only the most discriminative snippet for each training video and thus cannot make full use of informative audio and visual content. Mean pooling does not distinguish the importance of different audio and visual snippets and aggregates instance scores equally, which obtains good audio-visual event parsing but poor individual audio and visual event parsing, since many audio-only and visual-only events are incorrectly parsed as audio-visual events.
Our attentive MMIL pooling assigns different weights to audio and visual snippets within a video bag for each event category and can thus adaptively discover useful snippets and modalities.
\noindent \textbf{Hybrid Attention Network.} We compare our HAN with two popular temporal networks, GRU and Transformer, and with a base model without temporal modeling in Tab.~\ref{tbl:ablation}. The models with GRU and Transformer are better than the base model, and our HAN outperforms both. The results demonstrate that temporal aggregation exploiting temporal recurrence is important for audio-visual video parsing, and that our HAN, which jointly explores unimodal temporal recurrence, multimodal temporal co-occurrence, and audio-visual temporal asynchrony, is more effective in leveraging multimodal temporal contexts. Another interesting finding is that HAN also tends to alleviate the modality bias by enforcing cross-modal modeling.
\noindent \textbf{Noisy Label.} Tab.~\ref{tbl:ablation} also shows results of our model without handling noisy labels, with Bootstrap~\cite{reed2014training}, and with the label smoothing-based method. We find that Bootstrap, which updates labels using event predictions, even decreases performance due to error propagation. The label smoothing-based method, which reduces confidence for potential false-positive labels, helps to learn a more robust model with improved audio-visual video parsing results.
\section{Limitation}
To mitigate the modality bias issue, the guided loss is introduced to enforce that each modality should also be able to make correct predictions on its own. This creates a new problem: the guided loss is not theoretically correct, because some events appear in only one modality and thus some of its labels are wrong. Finally, label smoothing is used to alleviate the label noise. Although the proposed methods work at each step, each also introduces a new problem, so it is worth designing a one-pass approach. One possible solution is to introduce a new learning strategy that addresses the modality bias problem without the guided loss. For example, we could perform modality dropout to force the model to explore both audio and visual information during training.
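One way such a modality dropout could look is sketched below; this is a hypothetical illustration of the suggested direction, not an implemented or evaluated component of this work:

```python
import numpy as np

def modality_dropout(f_a, f_v, p_drop=0.3, rng=None):
    # With probability p_drop, zero out one randomly chosen modality's
    # features so the model cannot rely on the dominant modality alone.
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < p_drop:
        if rng.random() < 0.5:
            f_a = np.zeros_like(f_a)
        else:
            f_v = np.zeros_like(f_v)
    return f_a, f_v
```

Applied per training video, this would force the network to produce correct video-level predictions from either modality, without adding a loss term whose labels can be noisy.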
\section{Conclusion and Future Work}
In this work, we investigate a fundamental audio-visual research problem: audio-visual video parsing in a weakly-supervised manner. We introduce baselines and propose novel algorithms to address the problem. Extensive experiments on the newly collected LLP dataset support our finding that audio-visual video parsing is tractable even when learning from cheap weak labels, and that the proposed model is capable of leveraging multimodal temporal contexts, dealing with modality bias, and mitigating label noise.
Accurate audio-visual video parsing opens the door to a wide spectrum of potential applications, as discussed below.
\begin{figure}[tb]%
\centering
\subfloat[Asynchronous Separation]{{\includegraphics[width=0.36\columnwidth]{figs/as_sep_new.pdf} }}%
\subfloat[Scene-Aware Video Understanding]{{\includegraphics[width=0.6\columnwidth]{figs/video_understanding.pdf} }}%
\caption{Potential applications of audio-visual video parsing. (a) Temporally asynchronous visual events detected by audio-visual video parsing highlighted in blue boxes can provide related visual information to separate \textit{Cello} sound from the audio mixture in the red box. (b) Parsed scenes can provide important cues for audio-visual scene-aware video dense captioning and question answering.}%
\label{fig:app}
\end{figure}
\noindent{\textbf{Asynchronous Audio-Visual Sound Separation.}} Audio-visual sound separation approaches use sound sources in videos as conditions to separate the visually indicated individual sounds from sound mixtures~\cite{ephrat2018looking,gao2018learning,gao20192,gao2019co,zhao2019sound,zhao2018sound}. The underlying assumption is that sound sources are visible. However, sounding objects can be occluded or not recorded in videos and the existing methods will fail to handle these cases. Our audio-visual video parsing model can find temporally asynchronous cross-modal events, which can help to alleviate the problem. For the example in Fig.~\ref{fig:app} (a), the existing audio-visual separation models will fail to separate the \textit{Cello} sound from the audio mixture at the time step $t$, since the sound source \textit{Cello} is not visible in the segment. However, our model can help to find temporally asynchronous visual events with the same semantic label as the audio event \textit{Cello} for separating the sound. In this way, we can improve the robustness of audio-visual sound separation by leveraging temporally asynchronous visual content identified by our audio-visual video parsing models.
\noindent{\textbf{Audio-Visual Scene-Aware Video Understanding.}} The current video understanding community usually focuses on the visual modality and treats information from sounds as a bonus, assuming that audio content is associated with the corresponding visual content. However, we argue that the auditory and visual modalities are equally important and that most natural videos contain numerous audio, visual, and audio-visual events rather than only visual and audio-visual events.
Our audio-visual scene parsing can achieve a unified multisensory perception; therefore, it has the potential to help us build an audio-visual scene-aware video understanding system covering all audio and visual events in videos (see Fig.~\ref{fig:app} (b)).
\noindent \textbf{Acknowledgment}
We thank the anonymous reviewers for the constructive feedback. This work was supported in part by NSF 1741472, 1813709, and 1909912. The article solely reflects the opinions and conclusions of its authors but not the funding agents.
\clearpage
\bibliographystyle{splncs04}
\hfuzz=1.5pt
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{array}{\begin{array}}
\def\end{array}{\end{array}}
\def\displaystyle{\displaystyle}
\def{\rm Tr}{{\rm Tr}}
\def\text{arccot}{\text{arccot}}
\def\alpha{\alpha}
\def\dot{\bar{\alpha}}{\dot{\bar{\alpha}}}
\def\dot{\bar{\beta}}{\dot{\bar{\beta}}}
\def\dot{\bar{\gamma}}{\dot{\bar{\gamma}}}
\def\tilde \epsilon{\tilde \epsilon}
\def\tilde\Delta{\tilde\Delta}
\newdimen\tableauside\tableauside=1.0ex
\newdimen\tableaurule\tableaurule=0.4pt
\newdimen\tableaustep
\def\phantomhrule#1{\hbox{\vbox to0pt{\hrule height\tableaurule
width#1\vss}}}
\def\phantomvrule#1{\vbox{\hbox to0pt{\vrule width\tableaurule
height#1\hss}}}
\def\sqr{\vbox{%
\phantomhrule\tableaustep
\hbox{\phantomvrule\tableaustep\kern\tableaustep\phantomvrule\tableaustep}%
\hbox{\vbox{\phantomhrule\tableauside}\kern-\tableaurule}}}
\def\squares#1{\hbox{\count0=#1\noindent\loop\sqr
\advance\count0 by-1 \ifnum\count0>0\repeat}}
\def\tableau#1{\vcenter{\offinterlineskip
\tableaustep=\tableauside\advance\tableaustep by-\tableaurule
\kern\normallineskip\hbox
{\kern\normallineskip\vbox
{\gettableau#1 0 }%
\kern\normallineskip\kern\tableaurule}%
\kern\normallineskip\kern\tableaurule}}
\def\gettableau#1 {\ifnum#1=0\let\next=\null\else
\squares{#1}\let\next=\gettableau\fi\next}
\tableauside=1.5ex
\tableaurule=0.8pt
\def\mathcal{A}{\mathcal{A}}
\def\mathcal{B}{\mathcal{B}}
\def\mathcal{C}{\mathcal{C}}
\def\mathcal{D}{\mathcal{D}}
\def\mathcal{E}{\mathcal{E}}
\def\mathcal{F}{\mathcal{F}}
\def\mathcal{G}{\mathcal{G}}
\def\mathcal{H}{\mathcal{H}}
\def\mathcal{I}{\mathcal{I}}
\def\mathcal{J}{\mathcal{J}}
\def\mathcal{K}{\mathcal{K}}
\def\mathcal{L}{\mathcal{L}}
\def\mathcal{M}{\mathcal{M}}
\def\mathcal{N}{\mathcal{N}}
\def\mathcal{O}{\mathcal{O}}
\def\mathcal{P}{\mathcal{P}}
\def\mathcal{Q}{\mathcal{Q}}
\def\mathcal{R}{\mathcal{R}}
\def\mathcal{S}{\mathcal{S}}
\def\mathcal{T}{\mathcal{T}}
\def\mathcal{U}{\mathcal{U}}
\def\mathcal{V}{\mathcal{V}}
\def\mathcal{W}{\mathcal{W}}
\def\mathcal{X}{\mathcal{X}}
\def\mathcal{Y}{\mathcal{Y}}
\def\mathcal{Z}{\mathcal{Z}}
\numberwithin{equation}{section} \makeatletter
\@addtoreset{equation}{section}
\hfuzz=1.5pt
\defAdS_{3}{AdS_{3}}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{array}{\begin{array}}
\def\end{array}{\end{array}}
\def\partial{\partial}
\def\displaystyle{\displaystyle}
\defa_{\alpha}\frac{\partial}{\partial a_{\beta}}{a_{\alpha}\frac{\partial}{\partial a_{\beta}}}
\defb_{\alpha}\frac{\partial}{\partial b_{\beta}}{b_{\alpha}\frac{\partial}{\partial b_{\beta}}}
\def\bar{\psi}{\bar{\psi}}
\newcommand{$cu(1,0|8)\,\,$}{$cu(1,0|8)\,\,$}
\def{\cal T}{{\cal T}}
\defs_1+s_2-1{s_1+s_2-1}
\defs_1-s_2-1{s_1-s_2-1}
\def\label{\label}
\def|0\rangle{|0\rangle}
\def\langle 0|{\langle 0|}
\def\langle \varphi|{\langle \varphi|}
\def|\varphi\rangle{|\varphi\rangle}
\def\langle \xi|{\langle \xi|}
\def|\xi\rangle{|\xi\rangle}
\def\label{\label}
\def\lambda{\lambda}
\def|S\rangle{|S\rangle}
\def\langle S|{\langle S|}
\defc \rightarrow \infty{c \rightarrow \infty}
\def \bar z{\bar z}
\defc\to\infty{c\to\infty}
\def\Delta{\Delta}
\def\psi{\psi}
\def\bf\Phi{\bf\Phi}
\defsl(2,\mathbb{R}){sl(2,\mathbb{R})}
\defsl(2,\mathbb{C}){sl(2,\mathbb{C})}
\def3$j${3$j$}
\def\varkappa_{_{j_p,j_1}}{\varkappa_{_{j_p,j_1}}}
\def\Omega{\Omega}
\defI{I}
\def\centerarc[#1](#2)(#3:#4:#5);%
{
\draw[#1]([shift=(#3:#5)]#2) arc (#3:#4:#5);
}
\usepackage{jheppub}
\makeatletter
\def\@fpheader{\vspace{-.1cm}}
\makeatother
\title{More on Wilson toroidal networks and torus blocks}
\author[a,b]{Konstantin Alkalaev}
\author[c\,\dagger]{and Vladimir Belavin}
\note[$\dagger$]{On leave from Lebedev Physical Institute and Institute for Information Transmission Problems, Moscow, Russia.}
\affiliation[a]{I.E. Tamm Department of Theoretical Physics, \\P.N. Lebedev Physical
Institute,\\ Leninsky ave. 53, 119991 Moscow, Russia}
\affiliation[b]{Department of General and Applied Physics, \\
Moscow Institute of Physics and Technology, \\
7 Institutskiy per., Dolgoprudnyi, 141700 Moscow region, Russia}
\affiliation[c]{Physics Department, Ariel University, Ariel 40700, Israel}
\emailAdd{alkalaev@lpi.ru}
\emailAdd{vlbelavin@gmail.com}
\abstract{We consider the Wilson line networks of the Chern-Simons $3d$ gravity theory with toroidal boundary conditions which calculate global conformal blocks of degenerate quasi-primary operators in torus $2d$ CFT. After a general discussion that summarizes and further extends results known in the literature, we explicitly obtain the one-point torus block and two-point torus blocks through particular matrix elements of toroidal Wilson network operators in irreducible finite-dimensional representations of $sl(2,\mathbb{R})$ algebra. The resulting expressions are given in two alternative forms using different ways to treat multiple tensor products of $sl(2,\mathbb{R})$ representations: (1) $3mj$ Wigner symbols and intertwiners of higher valence, (2) totally symmetric tensor products of the fundamental $sl(2,\mathbb{R})$ representation.
}
\begin{document}
\maketitle
\flushbottom
\section{Introduction}
\label{sec:intro}
Conformal blocks are basic ingredients of conformal field theory correlation functions, they also play crucial role in the conformal bootstrap program~\cite{Belavin:1984vu,Poland:2018epd}. Recently, CFT$_d$ conformal blocks were interpreted in the AdS${}_{d+1}$/CFT${}_{d}$ correspondence as geodesic (Witten) networks stretched in the asymptotically AdS$_{d+1}$ spaces~\cite{Hartman:2013mia,Fitzpatrick:2014vua,Hijano:2015rla,Fitzpatrick:2015zha,Alkalaev:2015wia,Hijano:2015qja,Hijano:2015zsa,Alkalaev:2015lca,Alkalaev:2015fbw,Banerjee:2016qca,Gobeil:2018fzy,Hung:2018mcn,Alekseev:2019gkl}. The alternative description of conformal blocks in terms of Wilson lines was extensively studied in \cite{deBoer:2013vca,Ammon:2013hba,deBoer:2014sna,Hegde:2015dqh,Melnikov:2016eun,Bhatta:2016hpz,Besken:2017fsj,Hikida:2017ehf,Hikida:2018eih,Hikida:2018dxe,Besken:2018zro,Bhatta:2018gjb,DHoker:2019clx,Castro:2018srf,Kraus:2018zrn,Hulik:2018dpl,Castro:2020smu,Chen:2020nlj}.\footnote{See also further extensive developments of the block/network correspondence in different context like black holes \cite{Anous:2016kss,Chen:2017yze,Chen:2018qzm}, heavy-light approximations and other backgrounds \cite{Belavin:2017atm,Kusuki:2018wcv,Kusuki:2018nms,Hijano:2018nhq,Anous:2019yku,Alkalaev:2019zhs,Chen:2019hdv,Alkalaev:2020kxz,Cardona:2020cfy}, supersymmetric extensions \cite{Chen:2016cms,Alkalaev:2018qaz}, higher-point blocks \cite{Hulik:2016ifr,Rosenhaus:2018zqn,Alkalaev:2018nik,Fortin:2019zkm,Parikh:2019ygo,Jepsen:2019svc,Anous:2020vtw}, torus (thermal) CFT \cite{Alkalaev:2016ptm,Kraus:2017ezw,Alkalaev:2017bzx,Gobeil:2018fzy}, etc. }
On the other hand, there is an intriguing relation, noticed long ago \cite{Witten:1988hf,Verlinde:1989ua,Labastida:1989wt}, between the space of quantum states of the three-dimensional Chern-Simons theory in the presence of Wilson lines and the space of conformal blocks in two-dimensional conformal field theory. Since the $SO(2,2)$ Chern-Simons theory describes $3d$ gravity with the cosmological term, this relation acquires a new meaning in the context of the AdS$_3$/CFT$_2$ correspondence \cite{Bhatta:2016hpz,Besken:2016ooo,Fitzpatrick:2016mtp,Kraus:2017ezw,Bhatta:2018gjb}.
The Wilson line networks under consideration are typical examples of Penrose's spin networks \cite{Penrose,Baez:1994hx}. Formally, such a network is a graph in AdS space with a number of boundary endpoints, edges associated with $sl(2,\mathbb{R})$ representations, and vertices given by 3-valent intertwiners. For a fixed background gravitational connection the Wilson line network is a gauge covariant functional of the associated representations. To gain the conformal block interpretation one calculates the matrix element of the network operator between specific boundary states which are highest(lowest)-weight vectors in the respective $sl(2,\mathbb{R})$ representations.\footnote{More generally, one can consider arbitrary matrix elements that we call {\it vertex } functions. In Section \bref{sec:further} we show that these are related to correlation functions of descendant operators. }
In this paper we revisit the holographic relation between Wilson line networks and conformal blocks, focusing on the case of finite-dimensional $sl(2,\mathbb{R})$ representations. Our primary interest is in toroidal Wilson networks in the thermal AdS$_3$ space and the corresponding torus blocks. We formulate and calculate one-point and two-point Wilson network functionals which are dual to one-point and two-point torus conformal blocks for degenerate quasi-primary operators. The paper is organised as follows:
-- in Section \bref{sec:wilson} we review what is known about Wilson networks and how they compute conformal blocks. Here, we briefly recall some necessary background about Chern-Simons description of $3d$ gravity with the cosmological term.
Then, on the basis of the findings of Refs. \cite{Bhatta:2016hpz,Besken:2016ooo,Kraus:2017ezw}, we attempt to rethink the whole approach focusing on key elements that would allow one to study higher-point conformal blocks of (quasi-)primary and secondary operators as well as extension to toroidal Wilson networks which are dual to torus conformal blocks.
-- in Section \bref{sec:toroidal} we define toroidal Wilson network operators with one and two boundary attachments. They are the basis for explicit calculations of one-point blocks and two-point blocks in two OPE channels in the following sections.
-- in Section \bref{sec:one-point} we consider torus conformal blocks for degenerate quasi-primary operators which are dual to the Wilson networks carrying finite-dimensional representations of the gauge algebra.
-- Section \bref{sec:gauge} contains an explicit calculation of the one-point toroidal Wilson network operator in two different representations, using 3$j$ Wigner symbols and the symmetric tensor product representation. In particular, in the latter representation we find the character decomposition of the one-point torus block for degenerate operators.
-- Section \bref{sec:Two-point} considers explicit calculations of two-point Wilson toroidal networks. In Sections \bref{sec:2s} and \bref{sec:2t} we formulate the symmetric tensor product representation of the toroidal Wilson networks. Explicit demonstration that the corresponding network operators calculate 2-point blocks is given by one simple example (unit conformal weights) for each OPE channel contained in Appendix \bref{app:ex}. In Sections \bref{S2pt-proof} and \bref{T2pt-proof} we explicitly calculate the $s$-channel and $t$-channel toroidal networks for general conformal weights using 3$j$ Wigner symbols and show that the resulting functions coincide with 2-point $s$-channel and $t$-channel torus blocks.
-- concluding remarks and future perspectives are shortly discussed in Section \bref{sec:concl}. Technical details are collected in Appendices \bref{sec:sl2}--\bref{app:ex}.
\section{Wilson networks vs conformal blocks}
\label{sec:wilson}
In this section we mainly review the Wilson line approach to conformal blocks proposed and studied in different contexts in \cite{Bhatta:2016hpz,Besken:2016ooo,Fitzpatrick:2016mtp,Kraus:2017ezw,Bhatta:2018gjb}. Here, we rephrase the whole construction in very general terms, underlining the key elements that finally allow direct passage from the concrete calculations of sphere blocks in the above references to the calculation of torus blocks. We will discuss only the case of (non-unitary) finite-dimensional $sl(2,\mathbb{R})$ representations (see Appendix \bref{sec:sl2}). The Wilson networks carrying (unitary) infinite-dimensional representations and the corresponding global sphere blocks are considered in \cite{Bhatta:2016hpz,Fitzpatrick:2016mtp}.
\subsection{Brief review of $3d$ Chern-Simons gravity theory}
\label{sec:CS}
The Chern-Simons formulation of $3d$ gravity with the cosmological term is obtained by combining the dreibein and spin-connection into the $o(2,2)$-connection $\mathcal{A}$ \cite{Achucarro:1987vz,Witten:1988hc} (for extensive review see e.g. \cite{Banados:1998gg,Ammon:2013hba}). Decomposing the gauge algebra as $o(2,2) \approx sl(2,\mathbb{R})\oplus sl(2,\mathbb{R})$ one introduces associated (anti)-chiral connections $A, \bar A$ in each simple factor $sl(2,\mathbb{R})$ with basis elements $J_{0,\pm1}$,
see the Appendix \bref{sec:sl2} for more details. Then, the 3d gravity action is given by the $o(2,2)$ Chern-Simons action
\begin{equation}
\label{action}
S[\mathcal{A}]=\frac{k}{4\pi} \int_{\mathcal{M}^3} {\rm Tr}\big(\mathcal{A}\wedge d\mathcal{A} +\frac{2}{3}\, \mathcal{A}\wedge \mathcal{A}\wedge \mathcal{A}\big) \;,
\end{equation}
where $k$ is related to the $3$-dimensional Newton constant $G_3$ through $k = l/(4G_3)$ and $l$ is the AdS$_3$ radius, and ${\rm Tr}$ stands for the Killing invariant form. Equivalently, the action can be factorized as $S[\mathcal{A}] = S[A]-S[\bar{A}]$, where each chiral component is the $sl(2,\mathbb{R})$ Chern-Simons action. The convenient choice of local coordinates is given by $x^\mu=(\rho,z, \bar z)$ with radial $\rho\geq 0$ and (anti)holomorphic $z, \bar z \in \mathbb{C}$.
The equations of motion that follow from the CS action \eqref{action} are generally solved by flat $o(2,2)$-connections $\mathcal{A}$. After imposing appropriate boundary conditions the solutions yielding flat boundary metric can be written as the gauge transformed (chiral) connection $A = U^{-1} \Omega U+ U^{-1} dU$ with the gauge group element $U(x) = \exp{\rho J_0}$ \cite{Banados:1994tn} and the holomorphic gravitational connection given by
\begin{equation}
\label{con_a}
\Omega=\bigg(J_1-2\pi\frac{6T(z)}{c} J_{-1}\bigg) dz\;,
\end{equation}
where $T(z)$ is the holomorphic boundary stress tensor, the central charge $c$ is defined through the Brown-Henneaux relation $c = 3l/(2G_3)$ \cite{Brown:1986nw}. The same anti-holomorphic connection $\bar \Omega = \bar \Omega({\bar z})$ arises in the anti-chiral $sl(2,\mathbb{R})$ sector.
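As a quick consistency check, the two relations above combine into the familiar identification $c = 6k$. A minimal SymPy sketch (the symbol names are ours):

```python
from sympy import symbols, simplify

l, G3 = symbols('l G_3', positive=True)
k = l / (4 * G3)       # Chern-Simons level, k = l/(4 G_3)
c = 3 * l / (2 * G3)   # Brown-Henneaux central charge, c = 3l/(2 G_3)

# The two definitions imply c = 6k
assert simplify(c - 6 * k) == 0
```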
Considering a path $L$ connecting two points $x_1, x_2\in\mathcal{M}_3$ we can associate to $L$ the following chiral Wilson line operators
\begin{equation}
\label{wilson}
W_R[L] = \mathcal{P} \exp{\left(-\int_L \Omega\right)}\;,
\end{equation}
where the chiral $sl(2,\mathbb{R})$ connection is given by \eqref{con_a} in some representation $R$. Similarly, one can consider $\overline{W}_R[L]$ in the anti-chiral sector. Under the gauge group, the Wilson operator transforms homogeneously as $W_R[L] \to U_R(x_2) W_R[L] U_{R}^{-1}(x_1)$, where the gauge group elements are $U_R = \exp \epsilon J_R$ with generators $J_R$ in the representation $R$. As we deal with the flat connections, the Wilson line operators depend only on the path $L$ endpoints and on the topology of the base manifold $\mathcal{M}_3$. (Anti-)chiral Wilson operators \eqref{wilson} are instrumental when discussing (anti-)holomorphic conformal blocks in the boundary conformal theory.
\subsection{General construction}
\label{sec:general}
The Euclidean AdS$_3$ space metric can be obtained from \eqref{con_a} by taking a constant boundary stress tensor. In what follows we discuss spaces with both periodic and non-periodic time directions. In the non-periodic case, the stress tensor can be chosen as $T(z) = 0$ so that the chiral gravitational connection \eqref{con_a} takes the form
\begin{equation}
\label{con-glob-12}
\Omega= J_1 dz\;,
\end{equation}
and the corresponding AdS$_3$ metric is given in the Poincare coordinates. In the periodic case (thermal AdS$_3$), the stress tensor is $T(z) = -c/48\pi$ so that the chiral connection is given by
\begin{equation}
\label{con-glob-13}
\Omega=\bigg(J_1+\frac{1}{4} J_{-1}\bigg) dw\;,
\end{equation}
along with the standard identifications $w \sim w+2\pi$ and $w \sim w + 2\pi \tau$, where $i\tau\in \mathbb{R}_-$. The boundary (rectangular) torus is defined by the modular parameter $\tau$, while the conformal symmetry algebra in the large-$c$ limit is contracted to the finite-dimensional $sl(2,\mathbb{R})\oplus sl(2,\mathbb{R})$.
In the chiral sector, the Wilson line \eqref{wilson} for the connections \eqref{con-glob-12} or \eqref{con-glob-13} is the holonomy of the chiral gauge field along the path $L$ with endpoints $x_1$ and $x_2$:
\begin{equation}
\label{chiral_wilson}
W_{a}[x_1,x_2]=\mathcal{P}\exp \bigg(-\int_{x_1}^{x_2} \Omega \bigg) = \exp \left(x_{12}\, \Omega\right)\;,
\end{equation}
where $x_{mn} = x_m-x_n$, and $a$ labels a finite-dimensional spin-$j_a$ representation $\mathcal{D}_a$ of the chiral gauge algebra $sl(2,\mathbb{R})$. Recall that the Wilson line operators have the transition property
\begin{equation}
\label{trans}
W_{a}[x_1,x_2] = W_{a}[x_1,x] W_{a}[x,x_2]\;,
\end{equation}
where $x$ is some intermediate point. The relation \eqref{trans} is obvious for coordinate independent connections like \eqref{con-glob-12} and \eqref{con-glob-13}.
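The transition property \eqref{trans} can be checked numerically for a constant connection such as \eqref{con-glob-13}, for which $W_a[x_1,x_2]=\exp(x_{12}\,\Omega)$. Below is a sketch in the fundamental ($2\times2$) representation; the sign conventions for $J_{0,\pm1}$ are our own choice satisfying $[J_m,J_n]=(m-n)J_{m+n}$ and may differ from those of Appendix \bref{sec:sl2}:

```python
import numpy as np
from scipy.linalg import expm

# A 2x2 realization of sl(2,R) with [J_m, J_n] = (m - n) J_{m+n}.
# Sign conventions are an assumption, not necessarily the paper's.
J0 = np.array([[-0.5, 0.0], [0.0, 0.5]])
J1 = np.array([[0.0, 1.0], [0.0, 0.0]])
Jm1 = np.array([[0.0, 0.0], [-1.0, 0.0]])

def comm(a, b):
    return a @ b - b @ a

assert np.allclose(comm(J0, J1), -J1)      # (0 - 1) J_1
assert np.allclose(comm(J0, Jm1), Jm1)     # (0 + 1) J_{-1}
assert np.allclose(comm(J1, Jm1), 2 * J0)  # (1 - (-1)) J_0

# Thermal connection Omega = J_1 + (1/4) J_{-1}; for a constant
# connection the Wilson line is W[x1, x2] = exp(x12 * Omega).
Omega = J1 + 0.25 * Jm1

def W(x1, x2):
    return expm((x1 - x2) * Omega)

# Transition property W[x1, x2] = W[x1, x] W[x, x2] for any midpoint x
x1, x2, x = 0.3, 1.7, -0.9
assert np.allclose(W(x1, x2), W(x1, x) @ W(x, x2))
```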
\vspace{2mm}
\noindent In order to realize conformal blocks through the Wilson networks we need the following ingredients.
\begin{itemize}
\item[1)] The Wilson line $W_{a}[z,x]$ in a spin-$j_a$ representation $\mathcal{D}_a$ of $sl(2,\mathbb{R})$ algebra, connecting the external operator $\mathcal{O}_{\Delta_a}(z, \bar z)$ on the boundary with some point $x$ in the bulk. The conformal dimension of the boundary operator is $\Delta_a = -j_a$.
\item[2)]The Wilson line $W_{a}[x,y]$ connecting two bulk points $x$ and $y$. In the thermal AdS$_3$, the thermal cycle yields the Wilson loop $W_{\alpha}[x,x+2\pi \tau]$.
\item[3)] The trivalent vertex in the bulk point $x_b$ connects three Wilson line operators associated with three representations $\mathcal{D}_{b}, \mathcal{D}_{c}$, and $\mathcal{D}_{a}$ by means of the 3-valent intertwiner operator
\begin{equation}
\label{inter1}
I_{a; b, c}:\qquad \mathcal{D}_b \otimes \mathcal{D}_c \to \mathcal{D}_a\;,
\end{equation}
which satisfies the defining $sl(2,\mathbb{R})$ invariance property
\begin{equation}
\label{inter2}
I_{a; b,c}\, U_b\, U_c = U_a\, I_{a; b,c}\;,
\end{equation}
where $U_{\alpha}$ labelled by $\alpha=a, b,c$ are linear operators acting in the respective representation spaces. In other words, the intertwiner spans the one-dimensional space of $sl(2,\mathbb{R})$ invariants Inv$(\mathcal{D}^*_a \otimes \mathcal{D}_b \otimes \mathcal{D}_c)$, where $*$ denotes a contragredient representation.
\item[4)] The Wilson line attached to the boundary acts on a particular state $|a\rangle \in \mathcal{D}_a$.
\end{itemize}
In general, $n$-point global conformal blocks $\mathcal{F}(\Delta, \tilde\Delta|{\bf q},{\bf z})$ on a Riemann surface of genus $g$ with modular parameters ${\bf q} = (q_1,..., q_{g})$, and with external and intermediate conformal dimensions $\Delta$ and $\tilde \Delta$, can be calculated as the following matrix element
\begin{equation}
\label{Phi}
\mathcal{F}(\Delta_i, \tilde\Delta_j|{\bf q},{\bf z}) = \langle\!\langle\,\Phi\left[W_{a}, I_{b;c, d}|{\bf q},{\bf z}\right]\,\rangle\!\rangle\;.
\end{equation}
Here, the {\it Wilson network operator} $\Phi[W_{a}, I_{b;c, d}]$ is built of Wilson line operators $W_{a}$ associated to particular (bulk-to-bulk or bulk-to-boundary) segments, joined together by 3-valent intertwiners $I_{b;c, d}$ to form a network with the boundary endpoints ${\bf z}=(z_1,...,z_n)$. The double brackets mean that one calculates a particular matrix element of the operator $\Phi$ between specific vectors of the associated $sl(2,\mathbb{R})$ representations, in such a way that the resulting quantity is an $sl(2,\mathbb{R})$ gauge algebra singlet. Using general arguments one may show that the matrix element \eqref{Phi}: (a) does not depend on the positions of the bulk vertex points due to the gauge covariance of the Wilson operators, (b) transforms under $sl(2)$ conformal boundary transformations as an $n$-point correlation function.
\subsection{Vertex functions}
In what follows we discuss examples of the operator \eqref{Phi}: 2-point, 3-point and 4-point Wilson networks in the AdS$_3$ space with the spherical (plane) boundary. Let us consider first the trivalent vertex consisting of three boundary anchored Wilson lines meeting in the bulk point $x$. Let $|a\rangle$ be some vector in the spin-$j_a$ representation $\mathcal{D}_a$ that we call a boundary vector. Acting with the bulk-to-boundary Wilson line $W_a[x,z]$ we can obtain the following bra and ket vectors
\begin{equation}
\label{a_tilde}
\begin{array}{c}
|\tilde a\rangle = W_a[x,z]|a\rangle\;,
\\
\\
\langle\tilde a| = \langle a| W_a[z,x]\;,
\end{array}
\end{equation}
to be associated with some quasi-primary or secondary boundary operator $\mathcal{O}(z,\bar z)$ belonging to the conformal family $[\mathcal{O}_{\Delta_a}]$ of dimension $\Delta_a = -j_a$.
Bra and ket vectors \eqref{a_tilde} are the only elements of the Wilson network operator \eqref{Phi} which depend on positions of boundary operators. Thus, it is their properties that completely define how the resulting CFT correlation function (block) transforms with respect to the global conformal symmetry algebra. One can show that depending on the choice of particular $W_a$ and $|a\rangle \in \mathcal{D}_a$, the conformal invariance of the correlation function of quasi-primary operators is guaranteed by the following basic property \cite{Besken:2016ooo,Kraus:2017ezw,Fitzpatrick:2016mtp}
\begin{equation}
\label{conf_trans}
\left(\mathcal{L}_n-C_n{}^m J_m\right)|\tilde a\rangle = 0\;,
\qquad n=0,\pm1\;,
\end{equation}
where $[C_n{}^m]$ is some $[3\times3]$ constant matrix.
It states that the holomorphic conformal transformations are generated by a combination of the chiral $sl(2,\mathbb{R})$ gauge transformations. Here, $J_n$ are (chiral) $sl(2,\mathbb{R})$ gauge algebra generators taken in the representation $\mathcal{D}_a$ and $\mathcal{L}_n$ are the boundary conformal generators represented by differential operators in the coordinate $z$, satisfying the (holomorphic) $sl(2,\mathbb{R})$ conformal algebra commutation relations $[\mathcal{L}_m,\mathcal{L}_n]=(m-n)\mathcal{L}_{m+n}$. The explicit form of $C_n{}^m$ is fixed by the particular choice of the gravitational connection defining $W_a$ and of the boundary vectors $|a\rangle$ (see below).\footnote{Technically, in this paper, the matrix $C$ is calculated case by case and, moreover, just for the two background gravitational connections \eqref{con-glob-12} and \eqref{con-glob-13}, both with constant coefficients. It would be important to formalize its possible properties like unitarity, etc. (The only obvious property now is that $C$ is invertible.) On the other hand, conceptually, it is obvious that the matrix $C$ is a derived object. Its exact definition and properties can be rigorously obtained from the holographic Ward identities of the dual $3d$ Chern-Simons theory and CFT$_2$ along the lines discussed in Appendix A of Ref. \cite{Fitzpatrick:2016mtp}. }
\paragraph{3-point vertex function.} Following the general definition of the Wilson network operator \eqref{Phi} we use the intertwiner \eqref{inter1} and introduce a trivalent {\it vertex} function (see Fig. \bref{34net}) as the following matrix element
\begin{equation}
\label{tri1}
V_{a, b, c}({\bf z}) = \langle\tilde a| \,I_{a; b, c}\, |\tilde b\rangle \otimes |\tilde c\rangle = \langle a| W_a[z_1,x] \,I_{a; b, c}\, W_b[x,z_2] W_c[x,z_3] | b\rangle \otimes | c\rangle\;,
\end{equation}
where ${\bf z} = (z_1, z_2, z_3)$ stands for positions of boundary conformal operators, $| a\rangle, | b\rangle, | c\rangle$ are arbitrary boundary vectors. Using the invariance condition \eqref{inter2} in the form
\begin{equation}
\label{tri20}
I_{a; b, c}\, W_b[x,z_2] = W_a[x,z_2]I_{a; b, c}\, W^{-1}_c[x,z_2] = W_a[x,z_2]I_{a; b, c}\, W_c[z_2,x]\;,
\end{equation}
along with the transition property \eqref{trans} the trivalent vertex function can be represented as
\begin{equation}
\label{tri2}
V_{a, b, c}({\bf z}) = \langle a| \, W_a[z_1,z_2] \,I_{a; b, c}\, W_c[z_2,z_3] \,| b\rangle \otimes | c\rangle\;.
\end{equation}
This expression can be equivalently obtained by choosing the bulk vertex point $x = z_2$, yielding $W_b[z_2,z_2]=\mathbb{1}$. This is legitimate since, as noted earlier, the resulting Wilson network does not depend on the location of bulk vertices. On the other hand, this freedom in choosing the vertex point is encoded in the intertwiner transformation property.
\begin{figure}[H]
\centering
\begin{tikzpicture}[line width=1pt,scale=0.60]
\draw (-20,-1) -- (-18,-1);
\draw (-18,-1) -- (-17,-2);
\draw (-18,-1) -- (-17,0);
\draw (-20.4,-1.0) node {$c$};
\draw (-16.7,.3) node {$a$};
\draw (-16.7,-2.1) node {$b$};
\draw (-18.1,-1.5) node {$x$};
\draw (-10,-1) -- (-8,-1);
\draw (-10,-1) -- (-11,-2);
\draw (-10,-1) -- (-11,0);
\draw (-8,-1) -- (-7,-2);
\draw (-8,-1) -- (-7,0);
\draw (-11.3,0.2) node {$b$};
\draw (-11.3,-2.2) node {$a$};
\draw (-6.8,-2.2) node {$d$};
\draw (-6.8,0.2) node {$c$};
\draw (-9.0,-0.6) node {$e$};
\draw (-10.0,-1.5) node {$x$};
\draw (-8.1,-1.5) node {$y$};
\end{tikzpicture}
\caption{Wilson networks: trivalent vertex (left) and four-valent vertex given by two trivalent vertices joined by an edge (right). }
\label{34net}
\end{figure}
Two comments are in order. First, in order to have a non-trivial intertwiner with the property \eqref{inter2}, the weights of the three representations must satisfy the triangle inequality. Indeed, tensoring two irreps as in \eqref{inter1} we find the Clebsch-Gordan series
\begin{equation}
\label{fusion}
\mathcal{D}_{a} \otimes \mathcal{D}_{b}= \bigoplus_{j_c=|j_a-j_b|}^{j_a+j_b} \mathcal{D}_{c}\;.
\end{equation}
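A quick sanity check of the Clebsch-Gordan series \eqref{fusion} is that dimensions match on both sides, $\dim\mathcal{D}_a\cdot\dim\mathcal{D}_b=\sum_{j_c}\dim\mathcal{D}_c$ with $\dim\mathcal{D}_j = 2j+1$. A small Python sketch:

```python
from fractions import Fraction

def dim(j):
    # dimension of the spin-j module D_j
    return int(2 * j + 1)

def fused_dims(ja, jb):
    # sum of dim(D_c) over j_c = |ja - jb|, ..., ja + jb in integer steps
    jc = abs(ja - jb)
    total = 0
    while jc <= ja + jb:
        total += dim(jc)
        jc += 1
    return total

# dimension counting holds for integer and half-integer spins
for ja in (Fraction(1, 2), 1, Fraction(3, 2), 2):
    for jb in (Fraction(1, 2), 1, 3):
        assert fused_dims(ja, jb) == dim(ja) * dim(jb)
```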
If a representation $\mathcal{D}_c$ of a given spin $j_c$ arises in the Clebsch-Gordan series, then the intertwiner is just a projector; otherwise it is zero. Equivalently, $j_a+j_b- j_c \geq 0$. Second, one may rewrite \eqref{tri2} through the matrix elements in the standard basis \eqref{standard} by inserting resolutions of identities,
\begin{equation}
\label{resolve}
\mathbb{1} = \sum_{m=-j}^j |j,m\rangle \langle j, m|
\end{equation}
to obtain
\begin{equation}
\label{tri3}
V_{a, b, c}({\bf z}) = \sum_m \sum_k\sum_n\, \Big( \langle j_a,m|\,I_{a; b, c}\, | j_b,k\rangle \otimes |j_c,n\rangle \Big)\langle \tilde a|j_a, m\rangle \langle j_b,k| \tilde b\rangle \langle j_c, n| \tilde c\rangle \;.
\end{equation}
In this form the trivalent vertex function is represented as a product of four matrix elements which can be drastically simplified when $|a\rangle, | b\rangle$, and $|c\rangle$ are chosen in some canonical way like lowest-weight or highest-weight vectors. The last three factors are matrix elements of the Wilson operators, or, equivalently, coordinates of tilded vectors in the standard basis. The first factor is the matrix element of the intertwiner which in fact is the Wigner 3$j$ symbol.\footnote{Strictly speaking, we consider here $SL(2, \mathbb{R})\approx SU(1,1)$ Wigner $3mj$ symbols which are generally different from $SU(2)$ Wigner $3mj$ symbols for arbitrary (unitary or non-unitary, finite- or infinite-dimensional) representations. However, in this paper we deal only with finite-dimensional representations for which these two types of symbols are identical \cite{HOLMAN19661}. Note also that if we consider Wilson networks in Euclidean dS$_3$ gravity \cite{Castro:2020smu} where the spacetime isometry group is $SO(4) \approx SU(2)\times SU(2)$, then we can directly apply the standard Wigner 3$j$ symbol calculus. } Indeed, let us denote the matrix element of the intertwiner and 3$j$ symbol as
\begin{equation}
\label{mat_int}
[I_{a;b,c}]^m{}_{kn} = \langle j_a,m|\,I_{a; b, c}\, | j_b,k\rangle \otimes |j_c,n\rangle\;,
\;\;\qquad
[W_{a,b,c}]_{mkn} =
\begin{pmatrix}
j_a & j_b & j_c \\
m & k & n
\end{pmatrix}\;.
\end{equation}
Here, each of the magnetic numbers $m,n,k$ runs over its domain. Then, the two tensors are related as
\begin{equation}
\label{tri3-1}
[I_{a;b,c}]^m{}_{kn} = \sum_l\epsilon^{(a)}{}^{ml} [W_{a,b,c}]_{lkn}\;,
\end{equation}
where $\epsilon^{(a)}{}^{ml}$ is the Levi-Civita tensor in the $\mathcal{D}_a$ representation. Obviously, both tensors are $sl(2,\mathbb{R})$ invariant, while the 3$j$ symbol spans Inv$(\mathcal{D}_a \otimes \mathcal{D}_b \otimes \mathcal{D}_c)$. The Levi-Civita tensor in $\mathcal{D}_a$ is given by
\begin{equation}
\label{levi_civita}
\epsilon^{(a)}{}^{mn} = (-)^{j_a-m}\delta_{m,-n} =
\sqrt{2j_a+1}\begin{pmatrix}
j_a & j_a & 0 \\
m & n & 0
\end{pmatrix}=
\begin{pmatrix}
j_a \\
mn
\end{pmatrix}\;.
\end{equation}
The last equality introduces the 1$jm$ Wigner symbol which is considered as an invariant metric relating the standard and contragredient standard bases. In particular, this object allows introducing the 2-point vertex function as
\begin{equation}
\label{2pt-vertex}
V_{a, b}({\bf z}) = \langle\tilde a| \,I_{a; b}\, |\tilde b\rangle
=
\delta_{ab} \langle a| I_{a; a}\, W_a[z_1,z_2] | a\rangle \;,
\end{equation}
where $I_{a;a}$ is the 2-valent intertwiner belonging to Inv$(\mathcal{D}_a^* \otimes \mathcal{D}_a)$, whose definition directly follows from \eqref{inter1}, \eqref{inter2} at $\mathcal{D}_c = \mathbb{1}$. Thus,
\begin{equation}
\label{valent2}
[I_{a; a}]_{m}{}^n = \begin{pmatrix}
j_a \\
mn
\end{pmatrix}\;.
\end{equation}
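The relation \eqref{levi_civita} between the Levi-Civita tensor and the 3$j$ symbol is the standard identity $\big(\begin{smallmatrix} j & j & 0 \\ m & -m & 0\end{smallmatrix}\big) = (-1)^{j-m}/\sqrt{2j+1}$, which can be verified with SymPy's Wigner utilities (a sketch; function names below are ours):

```python
from sympy import Rational, sqrt, simplify, Integer
from sympy.physics.wigner import wigner_3j

def levi_civita(j, m, n):
    # epsilon^{mn} = (-1)^(j-m) delta_{m,-n} in the spin-j module
    return Integer(-1) ** int(j - m) if m == -n else Integer(0)

# Check epsilon^{m,-m} = sqrt(2j+1) * (j j 0; m -m 0) for a few spins
for j in (Rational(1, 2), 1, Rational(3, 2), 2):
    m = -j
    while m <= j:
        lhs = levi_civita(j, m, -m)
        rhs = sqrt(2 * j + 1) * wigner_3j(j, j, 0, m, -m, 0)
        assert simplify(lhs - rhs) == 0
        m += 1
```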
Coming back to the 3-point vertex functions, one may explicitly check that choosing the boundary vectors as highest-weight elements of the respective spin-$j_\eta$ representations, $\eta=a,b,c$ (see Appendix \bref{sec:sl2})
\begin{equation}
\label{HW_i}
|\eta\rangle =|\mathbb{hw}\rangle_\eta\;:
\qquad
J_{-1}|\mathbb{hw}\rangle_\eta = 0\;,
\qquad
J_{0}|\mathbb{hw}\rangle_\eta = j_\eta |\mathbb{hw}\rangle_\eta\;,
\end{equation}
along with the Wilson line operator in Euclidean AdS$_3$ space defined by the connection \eqref{con-glob-12},
one reproduces the 3-point function of quasi-primary operators on the plane \cite{Bhatta:2016hpz,Besken:2016ooo}:
\begin{equation}
\label{rel_3}
V_{a, b, c}({\bf z}) = \langle \mathcal{O}_{\Delta_a}(z_1, \bar z_1)\mathcal{O}_{\Delta_b}(z_2, \bar z_2)\mathcal{O}_{\Delta_c}(z_3, \bar z_3) \rangle\;.
\end{equation}
One can show that the basic transformation property \eqref{conf_trans} guaranteeing the conformal invariance of \eqref{rel_3} is defined by the backward identity matrix $C_{m}{}^{n}$, i.e.,
\begin{equation}
\label{backward}
\begin{aligned}
&\big(J_1+\mathcal{L}_{-1}\big)W|\mathbb{hw}\rangle=0\;,\\[1pt]
&\;\;\big(J_0+\mathcal{L}_{0}\big)W|\mathbb{hw}\rangle=0\;,\\[1pt]
&\big(J_{-1}+\mathcal{L}_{1}\big)W|\mathbb{hw}\rangle=0\;.
\end{aligned}
\end{equation}
\paragraph{4-point vertex function.} Further, we may consider the 4-point vertex function between four representations $\mathcal{D}_a, \mathcal{D}_b, \mathcal{D}_c, \mathcal{D}_d$, built as two trivalent vertices attached to each other through an intermediate bulk-to-bulk Wilson line $W_e \equiv W_e[y,x]$ carrying the representation $\mathcal{D}_e$ (see Fig. \bref{34net}), namely,
\begin{equation}
\label{four1}
V_{a,b,c,d|e}({\bf z}) = \langle\tilde d| \,I_{d; c, e} \, W_e\,I_{e; a, b}\, |\tilde a\rangle \otimes |\tilde b\rangle \otimes |\tilde c\rangle \;.
\end{equation}
Using the transition property we can represent $W_e[y,x] = W_e[y,0]W_e[0,x]$ and then: (1) for the left factor we repeat the arguments around \eqref{tri20} to remove the dependence on $y$; (2) for the right factor we use the intertwiner transformation property to remove the dependence on $x$. As a result, the positions $x,y$ drop out of \eqref{four1}. Effectively, this means that we set $x=y=0$, so that the intermediate Wilson line operator trivializes, $W_e[x,x] = \mathbb{1}$. All in all, we find that the vertex function can be cast into the form
\begin{equation}
\label{four2}
V_{a,b,c,d|e}({\bf z}) = \langle d| W_d[z_4,0] \,I_{d; c, e}\, I_{e; a, b}\, W_a[0,z_1]\,W_b[0,z_2]\,W_c[0,z_3]\,|a\rangle \otimes |b\rangle \otimes |c\rangle\;.
\end{equation}
Similarly to the previous consideration of the trivalent function, one may reshuffle the Wilson operators using the intertwiner transformation property and, by inserting resolutions of the identity, represent the final expression as a contraction of six matrix elements. Choosing $|a\rangle, | b\rangle, | c\rangle$, and $|d\rangle$ to be highest-weight vectors in their representations one directly finds the 4-point conformal block on the sphere \cite{Bhatta:2016hpz,Besken:2016ooo}.
For our further purposes, the 3-point function \eqref{tri1} or \eqref{tri2} along with the 4-point function \eqref{four1} will prove convenient to build conformal blocks on the torus
(see Section~\bref{sec:toroidal}). Building the operator $\Phi$ \eqref{Phi} for Wilson networks with more endpoints and edges is expected to give higher-point conformal blocks on the sphere, though this has not been checked explicitly (except for the 5-point sphere block in the comb channel \cite{Bhatta:2016hpz}). In the next section we discuss $\Phi$ in terms of $n$-valent intertwiners.
\subsection{Further developments}
\label{sec:further}
Here we extend the general discussion in the previous section by considering some novel features of the Wilson network vertex functions.
\paragraph{Descendants.} Let us demonstrate that choosing the boundary vectors as descendants of the highest-weight vectors we reproduce the 3-point function of any three (holomorphic) secondary operators
\begin{equation}
\mathcal{O}_{\Delta}^{(l)}(z,\bar z) = (\mathcal{L}_{-1})^l \mathcal{O}_\Delta(z,\bar z)\;,
\end{equation}
where $\mathcal{L}_{-1}$ is one of the three conformal generators on the plane, $\mathcal{L}_n = z^{n+1}\partial+(n+1)\Delta z^n$, $n = 0, \pm 1$. Taking descendants as
\begin{equation}
\label{descendants}
|\eta\rangle =|j_\eta, k\rangle = (J_{1})^k|\mathbb{hw}\rangle_\eta\;,
\qquad \eta = a,b,c\;,
\end{equation}
and using (1) that the gravitational connection is given by \eqref{con-glob-12}, so that $[W_a[x,y], J_1] = 0$, and (2) the property $J_1 \sim \mathcal{L}_{-1}$ \eqref{backward}, we find that the respective (holomorphic) 3-point correlation function is given by
\begin{equation}
\begin{array}{l}
V_{a, b, c}({\bf z}) = \langle \mathcal{O}^{(k)}_{\Delta_a}(z_1, \bar z_1)\mathcal{O}^{(l)}_{\Delta_b}(z_2, \bar z_2)\mathcal{O}^{(m)}_{\Delta_c}(z_3, \bar z_3) \rangle
\\
\\
\hspace{30mm}=\left(\mathcal{L}^{(1)}_{-1}\right)^k \left(\mathcal{L}^{(2)}_{-1}\right)^l \left(\mathcal{L}_{-1}^{(3)}\right)^m \langle \mathcal{O}_{\Delta_a}(z_1, \bar z_1)\mathcal{O}_{\Delta_b}(z_2, \bar z_2)\mathcal{O}_{\Delta_c}(z_3, \bar z_3) \rangle\;,
\end{array}
\end{equation}
where superscript $(i)$ in the last line refers to $z_i$ coordinates.
Similarly, 4-point functions of secondary conformal operators can be obtained by choosing the boundary states to be descendant vectors in the respective representations. Indeed, the 4-point correlation function of quasi-primary conformal operators decomposes as
\begin{equation}
\langle \mathcal{O}_{\Delta_a}(z_1, \bar z_1)\mathcal{O}_{\Delta_b}(z_2, \bar z_2)\mathcal{O}_{\Delta_c}(z_3, \bar z_3)\mathcal{O}_{\Delta_d}(z_4, \bar z_4) \rangle = \sum_{e, \tilde e} C_{ab,e\tilde e}\, C_{e\tilde e, cd} \,V_{a,b,c,d|e}({\bf z})\,\bar V_{a,b,c,d|\tilde e}\,({\bf \bar z})\;,
\end{equation}
where $C_{ab,e\tilde e}$ and $C_{e\tilde e, cd}$ are structure constants, $e, \tilde e$ stand for intermediate representations $\mathcal{D}_e$ and $\mathcal{D}_{\tilde e}$ in (anti)holomorphic sectors, and $|a\rangle,|b\rangle,|c\rangle,|d\rangle$ inside the vertex functions are boundary highest-weight vectors \eqref{HW_i}, as discussed below the 4-point vertex function \eqref{four2}. Then, applying all the foregoing arguments we obtain the 4-point correlation function of secondary operators.
\paragraph{Higher-valent intertwiners.} The 4-point vertex function \eqref{four2} is basically defined by the contraction of two intertwiners over one index. The resulting $sl(2)$ invariant tensor is a 4-valent intertwiner,
\begin{equation}
I^{(1)}_{ab,cd|e} = I_{d; c, e}\, I_{e; a, b}\;.
\end{equation}
Similar to \eqref{mat_int}, using the definition of the Levi-Civita tensor \eqref{levi_civita} we can calculate the 4-valent intertwiner in the standard basis as
\begin{equation}
\left[I^{(1)}_{ab,cd|e}\right]_{n_1n_2n_3}{}^{n_4} = (-)^{j_d-n_4}\sum_m (-)^{j_e-m}
\begin{pmatrix}
j_a & j_b & j_e \\
n_1 & n_2 & m
\end{pmatrix}
\begin{pmatrix}
j_e & j_c & j_d \\
-m & n_3 & -n_4
\end{pmatrix}\;.
\end{equation}
Fixing the order of $a,b,c,d$ one can introduce one more 4-valent intertwiner with shuffled edges
\begin{equation}
I^{(2)}_{ac,bd|e} = I_{d; b, e}\, I_{e; a, c}\;,
\end{equation}
or, in components,
\begin{equation}
\left[I^{(2)}_{ac,bd|e}\right]_{n_1n_2n_3}{}^{n_4} = (-)^{j_d-n_4}\sum_m (-)^{j_e-m}
\begin{pmatrix}
j_a & j_c & j_e \\
n_1 & n_3 & m
\end{pmatrix}
\begin{pmatrix}
j_e & j_b & j_d \\
-m & n_2 & -n_4
\end{pmatrix}\;.
\end{equation}
The two intertwiners provide two bases in Inv$(\mathcal{D}^*_d\otimes \mathcal{D}_a\otimes \mathcal{D}_b \otimes \mathcal{D}_c)$. In the standard basis, one intertwiner is expressed in terms of the other by the following relation
\begin{equation}
\label{crossing}
I^{(2)}_{ac,bd|e} = \sum_{j_k} (-)^{j_b+j_c+j_e+j_k}\left(2j_k+1\right)\,
\begin{Bmatrix}
j_a & j_b & j_k \\
j_d & j_c & j_e
\end{Bmatrix} \,
I^{(1)}_{ab,cd|k}\;,
\end{equation}
where, by definition, the expansion coefficients are given by the 6$j$ Wigner symbol. In terms of the conformal block decomposition of the 4-point correlation function $\langle \mathcal{O}_{\Delta_a}\mathcal{O}_{\Delta_b}\mathcal{O}_{\Delta_c}\mathcal{O}_{\Delta_d} \rangle$, the two bases correspond to exchanges in two OPE channels, while the change of basis \eqref{crossing} is the crossing relation. We see that the crossing matrix is given by the 6$j$ Wigner symbol, which, in turn, can be expressed from \eqref{crossing} as a contraction of two distinct 4-valent intertwiners or, equivalently, four 3-valent intertwiners.\footnote{The Wigner 6$j$ symbols for the conformal group $o(d-1,2)$ have attracted some interest recently for their role in the crossing (kernel) equations and for CFT$_d$ 4-point functions, see e.g. \cite{Ponsot:1999uf,Gadde:2017sjg,Liu:2018jhs,Meltzer:2019nbs,Sleight:2018epi,Albayrak:2020rxh}.}
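The invertibility of this change of basis is reflected in the standard orthogonality relation of the 6$j$ symbols, $\sum_x (2x+1)\left\{\begin{smallmatrix} a & b & x \\ c & d & p\end{smallmatrix}\right\}\left\{\begin{smallmatrix} a & b & x \\ c & d & q\end{smallmatrix}\right\} = \delta_{pq}/(2p+1)$. A minimal numerical sketch with sympy (its standard Wigner conventions are assumed; all external spins are set to 1, so the intermediate spins run over $0,1,2$):

```python
# Sketch: 6j-symbol orthogonality underlying invertibility of the
# crossing matrix (sympy's standard Wigner conventions assumed;
# external spins all set to 1, intermediate spins x, p, q = 0, 1, 2).
from sympy import Rational, simplify
from sympy.physics.wigner import wigner_6j

a = b = c = d = 1

def overlap(p, q):
    # sum_x (2x+1) {a b x; c d p} {a b x; c d q} = delta_{pq} / (2p+1)
    return sum((2 * x + 1)
               * wigner_6j(a, b, x, c, d, p)
               * wigner_6j(a, b, x, c, d, q)
               for x in range(3))
```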
Intertwiners of arbitrary higher valence can be introduced in the same manner to build $n$-point conformal blocks. Fixing the order of representations, an $n$-valent intertwiner can be defined by contracting $n-2$ copies of the 3-valent intertwiner through $n-3$ intermediate representations, with the representations ordered in different ways,
\begin{equation}
\label{n_valent}
I_{a_1,a_2,...,a_n |e_1,...,e_{n-3}} = I_{a_1; a_2, e_1}\, I_{e_1; a_3, e_2} \cdots I_{e_{n-3};a_{n-1},a_n}\;.
\end{equation}
Each of the possible contractions defines a basis in Inv$(\mathcal{D}^*_{j_1}\otimes \cdots \otimes \mathcal{D}_{j_n})$, and one basis can be changed into another by an appropriate Wigner $3(n-2)j$ symbol. E.g. in the five-point case the crossing matrices are given by the Wigner 9$j$ symbols, etc.
The corresponding $n$-point blocks of conformal (quasi-primary/secondary) operators in $[\mathcal{O}_{\Delta_i}]$ with dimensions $\Delta_i = -j_{a_i}$ are built by acting with a given $n$-valent intertwiner on $n$ boundary states $|\tilde a_i\rangle=W_{i}[0,z_i]|a_i\rangle$, $i=1,...,n$, see \eqref{a_tilde}, as
\begin{equation}
\label{Phi_s}
F(\Delta_i, \tilde\Delta_j|{\bf z}) = \langle \tilde a_1|
I_{a_1,a_2,...,a_n |e_1,...,e_{n-3}}|\tilde a_2 \rangle \otimes \cdots \otimes |\tilde a_n\rangle\;,
\end{equation}
where the intertwiner is built with a particular ordering of the representations that corresponds to given exchange channels in CFT$_2$ with dimensions $\tilde\Delta_l = -j_{e_l}$, $l=1,...,n-3$. In this way, we explicitly obtain the Wilson network operator on the sphere \eqref{Phi}.
\section{Toroidal Wilson networks}
\label{sec:toroidal}
As discussed in Section \bref{sec:general}, due to the gauge covariance, the Wilson networks do not depend on the positions of the vertex points in the bulk. It follows that bulk-to-bulk Wilson lines are effectively shrunk to points, so that all diagrams with exchange channels expanded in trees and loops reduce to contact diagrams only. However, on non-trivial topologies like the (rigid) torus discussed in this paper there are non-contractible cycles. The associated Wilson networks then contain non-contractible loops given by non-trivial holonomies.
The general idea is that we can build torus blocks from the Wilson networks described in the sphere topology case simply by gluing together any two extra edges modulo $2\pi \tau$ (see Fig. \bref{12net}) and identifying the corresponding representations. Taking a trace in this representation retains the overall $sl(2,\mathbb{R})$ gauge covariance. More concretely, one takes the $(n+2)$-point sphere function \eqref{Phi_s} with $n+2$ boundary states in $\mathcal{D}_{j_l}$, $l=1,...,n+2$, any two of which belong to the same representation, say $\mathcal{D}_{j_{1}} \approx \mathcal{D}_{j_{k}}$ for some $k$. Then, taking a trace over $\mathcal{D}_{j_{1}}$ naturally singles out a part of the original $(n+2)$-valent intertwiner involving two Wilson line operators and a number of constituent $3$-valent intertwiners ($k$, for the above choice). By means of the intertwiner invariance property, the two Wilson operators can be pulled through the intertwiners to form a single Wilson loop-like operator, schematically, ${\rm Tr}_{j_1}\Big(W_{j_1}[0,2\pi\tau] \,I_{j_1;a,b } \ldots I_{c;d,j_1} \Big)$. A true Wilson loop is obtained only when one starts from the 2-point sphere function, in which case we get the $sl(2,\mathbb{R})$ character (see below), while for higher-point functions the operator under the trace necessarily contains at least one intertwiner.
\begin{figure}[H]
\centering
\begin{tikzpicture}[line width=1pt,scale=0.50]
\draw (-20,0) -- (-18,0);
\draw (-18,0) -- (-17,-1);
\draw (-18,0) -- (-17,1);
\draw (-18,-.5) node {$x$};
\draw (-20.5,0) node {$c$};
\draw (-17,1.5) node {$a$};
\draw (-17,-1.5) node {$b$};
\draw[blue, dashed] (-17,1) .. controls (-14,3) and (-14,-3) .. (-17,-1);
\draw (-9,-1) -- (-7,-1);
\draw (-9,-1) -- (-10,-2);
\draw (-9,-1) -- (-10,0);
\draw (-7,-1) -- (-6,-2);
\draw (-7,-1) -- (-6,0);
\draw (-9,-1.5) node {$x$};
\draw (-7,-1.5) node {$y$};
\draw (-8,-.5) node {$e$};
\draw (-10.5,-2.1) node {$a$};
\draw (-5.6,-2.1) node {$c$};
\draw (-10.5,.2) node {$b$};
\draw (-5.6,.2) node {$d$};
\draw[blue, dashed] (-10,0) .. controls (-12,3) and (-4,3) .. (-6,0);
\draw (0,0) -- (2,0);
\draw (0,0) -- (-1,-1);
\draw (0,0) -- (-1,1);
\draw (2,0) -- (3,-1);
\draw (2,0) -- (3,1);
\draw (0,-.5) node {$x$};
\draw (2,-.5) node {$y$};
\draw (-1.3,1.2) node {$b$};
\draw (3.0,1.4) node {$d$};
\draw (-1.3,-1.2) node {$a$};
\draw (3.0,-1.4) node {$c$};
\draw (1.0,.5) node {$e$};
\draw[blue, dashed] (3,1) .. controls (6,3) and (6,-3) .. (3,-1);
\end{tikzpicture}
\caption{Wilson networks with loops around non-contractible cycles on the (rigid) torus. Topologically different identifications of endpoints on the second and third graphs yield 2-point blocks in two OPE channels. }
\label{12net}
\end{figure}
Let us demonstrate how this works for the trivalent function \eqref{tri2} on the torus $\mathbb{T}^2$ with local coordinates $(w, \bar w)$ giving rise to a toroidal one-point Wilson network. The Wilson operators here are built using the respective background gravitational connection \eqref{con-glob-13}. We identify any two endpoints of the trivalent graph on Fig. \bref{12net}, which means that points $w_1 = -2\pi \tau $ and $w_2= 0$ lie on the thermal cycle. Identifying $\mathcal{D}_a \cong \mathcal{D}_b$, then choosing $|a\rangle = |b\rangle = | j_a,m\rangle$ and summing up over all basis states in $\mathcal{D}_a$ (recall that the standard basis is orthonormal) we find from \eqref{tri2} that
\begin{equation}
\label{tri3}
\begin{array}{c}
\displaystyle
\stackrel{\circ}{V}_{a|c}(\tau, {\bf w}) = \sum_m \Big(\langle j_a,m| \, W_a[-2\pi \tau,0] \,I_{a; a, c}\, |j_a,m\rangle\Big)\, W_c[0,w] | c\rangle\;
\\
\\
= {\rm Tr}_a \Big(W_a[0,2\pi\tau] \,I_{a; a, c} \Big)W_c[0,w] |c\rangle \;,
\end{array}
\end{equation}
where by $\stackrel{\circ}{V}$ we denote the resulting toroidal vertex function with some $|c\rangle \in \mathcal{D}_c$. If $\mathcal{D}_c$ is the trivial representation (i.e. $j_c=0$), then the Wilson line operator is $W_c = \mathbb{1}_c$ and the intertwiner is $I_{a;a,0} = \mathbb{1}_a$, so that \eqref{tri3} reduces to the Wilson loop operator,
\begin{equation}
\label{tri4}
\begin{array}{c}
\displaystyle
\stackrel{\circ}{V}_{a|0}(\tau)
= {\rm Tr}_a \Big(W_a[0,2\pi \tau]\Big)\;,
\end{array}
\end{equation}
which is known to be a character of the representation $\mathcal{D}_a$ \cite{Witten:1988hf}. For non-trivial representations $\mathcal{D}_c$ we can choose $| c\rangle$ to be a lowest-weight vector in $\mathcal{D}_c$ and obtain the expression conjectured in \cite{Kraus:2017ezw}. In Section \bref{sec:gauge} we explicitly check that the expression \eqref{tri3} reproduces the 1-point torus block \eqref{glob_poly}.
Let us now turn to the two-point toroidal Wilson networks and consider the rightmost graph on Fig. \bref{12net}. Here, the representations labelled by $a,b,c,d$ are associated with endpoints ordered as $w_1,w_2,w_3,w_4$. In terms of the vertex function \eqref{four2} we identify representations $\mathcal{D}_d \cong \mathcal{D}_c$ and respective endpoints $w_4 = -2\pi \tau$ and $w_3 = 0$. Now, choosing $|d\rangle = |c\rangle = |j_c,m\rangle$ and summing up over all $m$ to produce a trace over $\mathcal{D}_c$, from \eqref{four2} we directly obtain
\begin{equation}
\label{four_t}
\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (t)} \, c,e|a,b}(\tau, {\bf w}) = {\rm Tr}_c\Big( W_c[0,2\pi\tau] \,I_{c; c, e}\Big)I_{e; a, b} W_a[0,w_1] W_b[0,w_2]\, | a\rangle \otimes | b\rangle\;.
\end{equation}
The other possible toroidal two-point Wilson network corresponds to the middle graph on Fig. \bref{12net}. We fix endpoints as $w_4 = -2\pi \tau$ and $w_2 = 0$. Identifying representations $\mathcal{D}_d \cong \mathcal{D}_b$ and then summing up over states $|d\rangle = |b\rangle = |j_b,m\rangle$ we find
\begin{equation}
\label{four_s}
\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (s)} \,b,e|a,c}(\tau, {\bf w})
={\rm Tr}_b\Big(W_b[0,2\pi\tau] \,I_{b; c, e}\, I_{e; a, b}\Big) W_a[0,w_1]\,W_c[0,w_3] \, |a\rangle \otimes |c\rangle \;.
\end{equation}
Using the crossing equations \eqref{crossing} we see that the two-point toroidal vertex functions are related by means of the Wigner $6j$ symbols. In the next sections we check that the vertex functions \eqref{four_s} and \eqref{four_t} with $|a\rangle, |b\rangle$ chosen as lowest-weight vectors calculate the two-point global torus conformal blocks in the $s$-channel and $t$-channel, respectively. Finally, let us note that both functions \eqref{four_t} and \eqref{four_s} consistently reduce to \eqref{tri3} if one of the external spins vanishes. E.g. we can set $j_a=0$, in which case $I_{e; 0, b}\sim \delta_{eb}\mathbb{1}_e$ \eqref{valent2} and $W_a[0,w_1] =\mathbb{1}_a$. The same is true at $j_b=0$. In other words, the two-point vertex functions reproduce the one-point vertex functions provided one of the external spins is zero. The respective torus conformal blocks share the same property.
\section{Global torus blocks}
\label{sec:one-point}
In this section we review one-point and two-point global torus blocks and find their explicit form when the quasi-primary operators are degenerate, i.e., via the operator-state correspondence, described by finite-dimensional representations of the global conformal algebra.\footnote{\label{fn6}Global blocks are associated with the $sl(2,\mathbb{C})$ subalgebra of the Virasoro algebra $Vir$ which can be obtained by the \.In\"{o}n\"{u}-Wigner contraction at $1/c \to 0$. Various other limiting blocks can be obtained from $Vir$ by different types of contractions, for details see~\cite{Alkalaev:2016fok} and references therein. Global torus (thermal) blocks in CFT$_d$ and their dual AdS$_{d+1}$ interpretation in terms of the bulk (Witten) geodesic networks were studied in \cite{Alkalaev:2016ptm,Kraus:2017ezw,Alkalaev:2017bzx,Gobeil:2018fzy}.} In what follows we work in the planar coordinates $(z,\bar z)\in \mathbb{C}$ related to the cylindrical coordinates $(w, \bar w) \in \mathbb{T}^2$ on the torus by the conformal mapping $w = i \log z$.
Prior to describing global blocks let us briefly discuss the origin and relevance of degenerate $sl(2)$ operators. Since $sl(2)\subset Vir$, conformal dimensions of degenerate quasi-primaries can be seen as the large-$c$ limit of the Kac degenerate dimensions $h_{r,s}$ with integer $r,s\geq 1$ \cite{DiFrancesco:1997nk}. Expanding around $c=\infty$ we get
\begin{equation}
\label{kac}
h_{r,s} = c\,\frac{1-r^2}{24} - \frac{s-1}{2} -\frac{(1-r)(13r-12s+13)}{24} + \mathcal{O}(1/c)\;.
\end{equation}
It follows that in the large-$c$ regime one can distinguish between light ($h \sim \mathcal{O}(c^0)$) and heavy ($h \sim \mathcal{O}(c^1)$) degenerate operators. Moreover, those with $h_{1,s}$ are always light,
\begin{equation}
\label{sl_deg}
h_{1,s} = - \frac{s-1}{2} + \mathcal{O}(1/c)\;,
\end{equation}
while heavy operators have $h_{r,s}$ with $r>1$. The paradigmatic examples here are the lowest-dimension operators: the light operator with $h_{1,2}$ and the heavy operator with $h_{2,1}$.\footnote{These are operational in the so-called monodromy method of calculating the large-$c$ (classical) $n$-point conformal blocks via $(n+1)$-point blocks with one light degenerate insertion $h_{1,2}$ \cite{Zamolodchikov1986}. It is intriguing that the monodromy method has a direct AdS dual interpretation as a worldline formulation of particle configurations \cite{Hijano:2015rla,Alkalaev:2015lca,Banerjee:2016qca,Alkalaev:2016rjl}.} On the other hand, the $sl(2)$ subalgebra with all its representations can be viewed as the \.In\"{o}n\"{u}-Wigner contraction of the Virasoro algebra (see the footnote \bref{fn6}). Then, the formula \eqref{sl_deg} treats $1/c$ as a small contraction parameter, and in the large-$c$ limit the degenerate Virasoro primaries with $h_{1,s}$ go to degenerate $sl(2)$ operators corresponding to finite-dimensional non-unitary $sl(2)$ modules with (half-)integer spins $j = (s-1)/2$, where $s=1,2,3,...\,$.
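The expansion \eqref{kac} can be checked symbolically. In one standard parametrization (an assumption on our part, chosen to match the conventions of \eqref{kac}), $c = 1 + 6(b+1/b)^2$ and $h_{r,s} = \frac{1}{4}\big[(b+1/b)^2 - (rb+s/b)^2\big]$, and the difference between $h_{r,s}$ and the truncated expansion above is exactly $(r^2-s^2)/(4b^2) = \mathcal{O}(1/c)$:

```python
# Sketch: verify the large-c expansion of the Kac dimensions h_{r,s},
# in the parametrization c = 1 + 6(b + 1/b)^2 (an assumption chosen
# to match the conventions of the expansion quoted in the text).
import sympy as sp

b, r, s = sp.symbols('b r s', positive=True)
c = 1 + 6 * (b + 1 / b)**2                         # central charge
h = ((b + 1 / b)**2 - (r * b + s / b)**2) / 4      # Kac dimension h_{r,s}

# Truncated large-c expansion quoted in the text, with 1/c ~ 1/(6 b^2):
expansion = c * (1 - r**2) / 24 - (s - 1) / 2 \
    - (1 - r) * (13 * r - 12 * s + 13) / 24

# The remainder is exactly (r^2 - s^2)/(4 b^2), i.e. O(1/c):
remainder = sp.simplify(h - expansion - (r**2 - s**2) / (4 * b**2))
```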
From the general physical perspective, degenerate conformal operators constitute the spectra of minimal $\mathcal{W}_N$ models, and, in particular, of the Virasoro $Vir=\mathcal{W}_2$ minimal models relevant in our case. Moreover, a specific class of minimal coset CFT$_2$ models was conjectured to be dual to $3d$ higher-spin gravity \cite{Gaberdiel:2010pz,Prokushkin:1998bq}. Complementary to the standard 't Hooft limit in such models one may consider the large-$c$ limit \cite{Gaberdiel:2012ku,Perlmutter:2012ds}. Although the boundary theory becomes non-unitary (the conformal dimensions are negative, as discussed above), such a regime is interesting since the bulk gravity is semiclassical according to the Brown-Henneaux relation $G_3 \sim 1/c$ \cite{Brown:1986nw}. Moreover, a gravity dual theory can be formulated as the Chern-Simons theory, which brings us back to the study of Wilson lines and conformal blocks, now for such non-unitary finite-dimensional representations \cite{Fitzpatrick:2016mtp,Besken:2017fsj,Hikida:2017ehf,Hikida:2018eih,Hikida:2018dxe}.\footnote{Yet another related direction where non-unitary large-$c$ Virasoro blocks are used is the study of the black hole thermodynamics and information loss in AdS$_3$/CFT$_2$ as a consequence of Virasoro symmetry both in the $c=\infty$ limit and with $1/c$ corrections \cite{Fitzpatrick:2016ive}.}
\subsection{One-point blocks}
The global one-point block in the torus CFT$_2$ is defined as the holomorphic contribution to the one-point correlation function of a given (quasi-)primary operator,
\begin{equation}
\label{1pt}
\langle \mathcal{O}_{\Delta, \bar \Delta}(z,\bar z)\rangle = {\rm Tr}\left(q^{L_0} \bar q^{\bar L_0} \mathcal{O}_{\Delta, \bar \Delta}(z,\bar z) \right) = \sum_{^{\tilde \Delta, \bar{\tilde \Delta}}} C^{^{\Delta \bar \Delta}}_{^{\tilde \Delta \bar{\tilde \Delta}}}\;\mathcal{F}_{^{\tilde\Delta,\Delta}}(z,q)\,\mathcal{F}_{^{\bar{\tilde \Delta}, \bar \Delta }}(\bar z,\bar q)\;,
\end{equation}
where the trace ${\rm Tr}$ is taken over the space of states, and the right-hand side defines the OPE expansion into (anti)holomorphic torus blocks; the expansion coefficients are the 3-point structure constants with two dimensions identified, which corresponds to creating a loop (see Fig. \bref{1-point-fig}). Here $q = \exp{2\pi i \tau}$, where $\tau\in \mathbb{H}$ is the torus modulus, while the (holomorphic) conformal dimensions $\Delta, \tilde \Delta$ parameterize the external (quasi-)primary operator and the OPE channel, respectively. A convenient representation of the torus block is given in terms of the hypergeometric function~\cite{Hadasz:2009db}
\begin{equation}
\label{glob1pt}
\begin{aligned}
&\mathcal{F}_{^{\tilde\Delta,\Delta}}(q) = \frac{\;\;q^{ \tilde \Delta}}{1-q} \,\;{}_2 F_{1}(\Delta, 1-\Delta, 2\tilde \Delta\, |\, \frac{q}{q-1}) =
\\
& = 1+\Big[1+\frac{(\Delta - 1)\Delta}{2\tilde \Delta}\Big] q + \Big[1+\frac{(\Delta -1) \Delta }{2 \tilde\Delta }+\frac{(\Delta -1)
\Delta (\Delta ^2-\Delta +4 \tilde\Delta)}{4 \tilde \Delta (2 \tilde\Delta +1)}\Big]q^2+ ... \;.
\end{aligned}
\end{equation}
The one-point block function is $z$-independent due to the global $u(1)$ translational invariance of the torus CFT$_2$. Note that at $\Delta=0$ the one-point block becomes the $sl(2,\mathbb{R})$ character
\begin{equation}
\label{sl2_char}
\mathcal{F}_{^{\tilde\Delta, 0}}(q) = \frac{\;\;q^{ \tilde \Delta}}{1-q} = q^{ \tilde \Delta}\left(1+q+q^2+q^3+\ldots\right)\,,
\end{equation}
showing that for generic dimension $\tilde\Delta$ there is one state on each level of the corresponding Verma module. The block function \eqref{glob1pt} can be shown to have poles at
$\tilde \Delta = -n/2$, where $n\in \mathbb{N}_0$, which is most manifest when the block is represented as the Legendre function (see Appendix \bref{appC}).
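The first two expansion coefficients in \eqref{glob1pt} can be reproduced symbolically by truncating the defining series of ${}_2F_1$ at the argument $q/(q-1)$; since $q/(q-1)=\mathcal{O}(q)$, keeping three terms suffices through order $q^2$. A sympy sketch:

```python
# Sketch: expand q^{-tDelta} * F_{tDelta,Delta}(q) in q and compare
# with the coefficients quoted in the text.  The 2F1 series is
# truncated by hand, since its argument x = q/(q-1) is O(q).
import sympy as sp

q, D, T = sp.symbols('q Delta tDelta')
x = q / (q - 1)

# 2F1(Delta, 1-Delta; 2 tDelta; x), truncated at order x^2:
F = sum(sp.rf(D, k) * sp.rf(1 - D, k)
        / (sp.rf(2 * T, k) * sp.factorial(k)) * x**k
        for k in range(3))

# Strip the q^{tDelta} prefactor and expand to order q^2:
block = sp.expand(sp.series(F / (1 - q), q, 0, 3).removeO())

c1 = sp.simplify(block.coeff(q, 1) - (1 + (D - 1) * D / (2 * T)))
c2 = sp.simplify(block.coeff(q, 2)
                 - (1 + (D - 1) * D / (2 * T)
                    + (D - 1) * D * (D**2 - D + 4 * T)
                    / (4 * T * (2 * T + 1))))
```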
\begin{figure}[H]
\centering
\begin{tikzpicture}[line width=1pt]
\draw[black] (-9,0) circle (1.2cm);
\draw[black] (-10.2,0) -- (-12,0);
\fill[black] (-10.2,0) circle (0.7mm);
\fill[black] (-12,0) circle (0.7mm);
\draw (-8.,1.2) node {$\tilde\Delta_1$};
\draw (-11.2,.3) node {$\Delta_1$};
\draw (-11.2,-.3) node {$W_{j_1}$};
\draw (-7.9,-1.2) node {$W_{j_p}$};
\draw (-12.5,0) node {$z_1$};
\draw (-10.4,-0.3) node {$x$};
\draw (-9.4,-0.) node {$I_{j_p;j_p,j_1}$};
\centerarc[dashed,blue](-9,0)(155:205:3cm);
\end{tikzpicture}
\caption{One-point conformal block on a torus. $\Delta$ and $\tilde\Delta$ are the external and intermediate conformal dimensions. The same graph shows the Wilson network with the Wilson line $W_{j_1}$ and loop $W_{j_p}$, along with the intertwiner $I_{j_p;j_p,j_1}$ at the bulk point $x$. The dashed line is the boundary of the thermal AdS$_3$. }
\label{1-point-fig}
\end{figure}
In general, conformal dimensions $\Delta,\tilde\Delta$ are assumed to be arbitrary. The corresponding operators are related to (non-)unitary infinite-dimensional $sl(2,\mathbb{C})$ representations. For negative integer dimensions\footnote{See Appendix \bref{sec:sl2}. In this paper we consider only bosonic (integer spin) representations.}
\begin{equation}
\label{deg_del}
\Delta=-j_1\quad\text{and}\quad \tilde\Delta=-j_p\;,
\qquad
j_1, j_p \in \mathbb{N}_0\;,
\end{equation}
these representations contain singular submodules arising at level $2j+1$ with conformal dimension $\Delta' = -\Delta+1$, so that one may consider the quotient representations $\mathcal{D}_j$, which are finite-dimensional non-unitary spin-$j$ representations of dimension $2j+1$.
Note that the degenerate dimension $\tilde\Delta$ of the loop channel \eqref{deg_del} defines the poles of the block function. It follows that in order to find torus blocks for $\mathcal{D}_j$ one may proceed in two equivalent ways. The first one is to use the BPZ procedure \cite{Belavin:1984vu} to impose the singular vector decoupling condition on correlation functions. E.g., for the zero-point blocks, which are the characters \eqref{sl2_char}, the BPZ condition is solved by subtracting the singular submodule character $\mathcal{F}_{^{\tilde\Delta',0}}$ from the original character $\mathcal{F}_{^{\tilde\Delta,0}}$, yielding the well-known expression for the finite-dimensional $sl(2,\mathbb{C})$ character
\begin{equation}
\label{fin_char}
\chi_p \equiv \mathcal{F}_{^{\tilde\Delta,0}}(q) - \mathcal{F}_{^{\tilde\Delta',0}}(q)
= q^{-j_p}\left(1 + q + q^2 +... + q^{2j_p}\right)
= q^{-j_p} \frac{\;\;1-q^{2j_p+1}}{1-q}\;.
\end{equation}
Similarly, one may formulate and solve the respective BPZ condition for one-point blocks.
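In the Wilson loop picture, \eqref{fin_char} is just the trace of $q^{L_0}$ over the $(2j_p+1)$-dimensional module, with weights running from $-j_p$ to $j_p$; the same expression follows from the BPZ subtraction of two Verma characters. A minimal sympy sketch (with $j_p=3$ as an arbitrary example):

```python
# Sketch: the finite-dimensional character as the trace of q^{L_0}
# over the spin-j_p module with weights m = -j_p, ..., j_p, and as
# the BPZ subtraction of two Verma-module characters.
import sympy as sp

q = sp.symbols('q')
jp = 3  # any non-negative integer spin

chi_trace = sum(q**m for m in range(-jp, jp + 1))
chi_closed = q**(-jp) * (1 - q**(2 * jp + 1)) / (1 - q)

# BPZ subtraction of Verma characters with tDelta = -j_p and
# tDelta' = -tDelta + 1 = j_p + 1:
verma = lambda d: q**d / (1 - q)
chi_bpz = verma(-jp) - verma(jp + 1)
```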
The alternative way is to define the torus block as (anti)holomorphic constituents of the correlation functions built by evaluating the trace already in the finite-dimensional quotient modules. In this case, the holomorphic one-point block is simply
\begin{equation}
\label{fin_1pt}
\mathcal{F}_{{j_p, j_1}}(q) = {\rm Tr}_{j_p} \left(q^{L_0}\mathcal{O}_{j_1}\right)\;,
\end{equation}
where the quasi-primary operator $\mathcal{O}_{j_1}$ corresponds to $\mathcal{D}_{j_1}$, and the trace is taken over $\mathcal{D}_{j_p}$, cf. \eqref{1pt}. It is $q^{-j_p}$ times an order-$2j_p$ polynomial in the modular parameter,
\begin{equation}
\label{glob_poly}
\mathcal{F}_{{j_p, j_1}}(q)=q^{-j_p}\left(f_{0} + q f_{1} + q^2 f_{2} +... + q^{2j_p} f_{2j_p}\right)\;,
\end{equation}
where the coefficients are given by
\begin{equation}
\label{1ptcoefGLOBAL}
f_{n} = \, _3F_2(-j_1,j_1+1,-n;1,-2 j_p;1)=
\sum_{m=0}^n
\,\frac{(-)^m n!}{(n-m)!(m!)^2}\,\frac{(j_1+m)_{2m}}{(2j_p)_m}
\end{equation}
and $(x)_n = x(x-1)\ldots (x-n+1)$ is the falling factorial. At $j_1 =0$ we reproduce \eqref{fin_char}. Note that imposing the BPZ condition forces the conformal dimensions to satisfy the fusion rule
\begin{equation}
\label{restrict}
0 \leq j_1 \leq 2j_p\;.
\end{equation}
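The explicit sum in \eqref{1ptcoefGLOBAL} is straightforward to evaluate; the sketch below implements it with sympy's falling factorial `ff` and checks that at $j_1=0$ all $f_n=1$, so that the block reduces to the character, consistently with the fusion rule above.

```python
# Sketch: the one-point block coefficients f_n, implemented via the
# explicit sum with falling factorials ff(x, k) = x(x-1)...(x-k+1).
import sympy as sp

def f(n, j1, jp):
    # f_n = sum_m (-)^m n! / ((n-m)! (m!)^2) * ff(j1+m, 2m) / ff(2jp, m)
    return sum(sp.Integer(-1)**m * sp.factorial(n)
               / (sp.factorial(n - m) * sp.factorial(m)**2)
               * sp.ff(j1 + m, 2 * m) / sp.ff(2 * jp, m)
               for m in range(n + 1))

# At j1 = 0 only the m = 0 term survives, since ff(m, 2m) contains a
# vanishing factor for every m >= 1; hence f_n = 1 and the block
# collapses to the finite-dimensional character.
```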
\subsection{Two-point blocks }
The global two-point torus correlation functions can be expanded in two OPE channels that below are referred to as $s$-channel and $t$-channel, see Fig. \bref{2-point-fig}.
\paragraph{$s$-channel.} The two-point $s$-channel global block is given by \cite{Alkalaev:2017bzx}:
\begin{equation}
\label{glob-s}
\begin{aligned}
&\mathcal{F}_s^{^{\Delta_{1,2}, \tilde \Delta_{1,2}}}(q, z_{1,2}) =\\
&\quad=z_1^{-\Delta_1+\tilde\Delta_1-\tilde\Delta_2} z_2^{-\Delta_2-\tilde\Delta_1+\tilde\Delta_2} \sum_{m,n=0}^\infty \frac{\tau_{n,m}(\tilde\Delta_1, \Delta_1, \tilde\Delta_2)\tau_{m,n}(\tilde\Delta_2, \Delta_2, \tilde \Delta_1)}{m!\, n!\,(2\tilde\Delta_1)_n (2\tilde\Delta_2)_m}\, q^{\tilde \Delta_1+n}\, \left(\frac{z_1}{z_2}\right)^{n-m}\;,
\end{aligned}
\end{equation}
where the coefficients $\tau_{m,n} = \tau_{m,n}(\Delta_i, \Delta_j, \Delta_k)$, defining the $sl(2)$ 3-point function of a primary operator $\Delta_j$ and descendant operators $\Delta_{i,k}$ on levels $n,m$, are~\cite{Alkalaev:2015fbw}
\begin{equation}
\label{A-tau}
\tau_{n,m}(\Delta_{i,j,k}) = \sum_{p = 0}^{\min[n,m]}
\binom{n}{p}
(2\Delta_k +m-1)^{(p)}\, m^{(p)}\,
(\Delta_k+\Delta_j - \Delta_i)_{m-p}(\Delta_i + \Delta_j -\Delta_k+p-m)_{n-p}\;,
\end{equation}
where $(x)_l = x(x+1)\ldots(x+l-1)$ and $(x)^{(l)} = x(x-1)\ldots (x-l+1)$ are the rising and falling factorials, respectively.
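For concrete checks, \eqref{A-tau} is easy to implement; the sketch below (sympy, with `rf`/`ff` for the rising and falling factorials) reproduces, e.g., $\tau_{0,0}=1$ and $\tau_{1,0} = \Delta_i+\Delta_j-\Delta_k$, the familiar level-one 3-point coefficient.

```python
# Sketch: the coefficients tau_{n,m}(Delta_i, Delta_j, Delta_k) of the
# formula above, with rf/ff the rising and falling factorials.
import sympy as sp

def tau(n, m, di, dj, dk):
    return sum(sp.binomial(n, p)
               * sp.ff(2 * dk + m - 1, p) * sp.ff(m, p)
               * sp.rf(dk + dj - di, m - p)
               * sp.rf(di + dj - dk + p - m, n - p)
               for p in range(min(n, m) + 1))

di, dj, dk = sp.symbols('Delta_i Delta_j Delta_k')
```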
The conformal dimensions of the degenerate (external and internal) quasi-primary operators read
\begin{equation}
\label{degen}
\Delta_a=-j_a\quad\text{and}\quad \tilde\Delta_b=-j_{p_b}\;,
\qquad
a,b=1,2\;,
\qquad
j_a, j_{p_b} \in \mathbb{N}_0\;.
\end{equation}
These values of the intermediate channel dimensions define poles of the block function. It follows that
in order to use~\eqref{glob-s} for the degenerate operators the summation is to be restricted to the region $n< -2\tilde\Delta_1+1$ and $m< -2\tilde\Delta_2+1$, which implements the BPZ decoupling condition.
\begin{figure}[H]
\centering
\begin{tikzpicture}[line width=1pt]
\def8{8}
\def-.5{-.5}
\draw[black] (-9,0) circle (1.2cm);
\draw[black] (-10.2,0) -- (-11.7,0);
\draw[black] (-9,-1.2) -- (-9,-2.7);
\draw (-10.1,-1.) node {$\tilde\Delta_2$};
\draw (-8.,1.18) node {$\tilde\Delta_1$};
\draw (-8.6-0.7,-2.4) node {$\Delta_2$};
\draw (-11.3,.25) node {$\Delta_1$};
\draw[black] (-9+8,0+-.5) circle (1.2cm);
\draw[black] (-10.2+8,0+-.5) -- (-11.9+8,0+-.5);
\draw[black] (-12.5+8,1.3+-.5) -- (-11.9+8,0+-.5);
\draw[black] (-12.5+8,-1.3+-.5) -- (-11.9+8,0+-.5);
\draw (-8+8,1.2+-.5) node {$\tilde\Delta_1$};
\draw (-11.1+8,.3+-.5) node {$\tilde\Delta_2$};
\draw (-12.1+8,1.4+-.5) node {$\Delta_1$};
\draw (-12.1+8,-1.4+-.5) node {$\Delta_2$};
\draw (-9,2) node {$s\text{-channel}$};
\draw (-2.5,2) node {$t\text{-channel}$};
\draw (-12.1,0) node {$z_1$};
\draw (-9.,-3) node {$z_2$};
\fill[black] (-10.2,0) circle (0.7mm);
\fill[black] (-11.7,0) circle (0.7mm);
\fill[black] (-9,-2.7) circle (0.7mm);
\fill[black] (-9,-1.2) circle (0.7mm);
\fill[black] (-10.2+8,0+-.5) circle (0.7mm);
\fill[black] (-11.9+8,0+-.5) circle (0.7mm);
\fill[black] (-12.5+8,1.3+-.5) circle (0.7mm);
\fill[black] (-12.5+8,-1.3+-.5) circle (0.7mm);
\draw (-12.1+8-.8,1.4+-.5) node {$z_1$};
\draw (-12.1+8-.8,-1.4+-.5) node {$z_2$};
\draw (-9.4,-0.) node {$I_{j_{p_1};j_1,j_{p_2}}$};
\draw (-8.2,-1.45) node {$I_{j_{p_2};j_2,j_{p_1}}$};
\centerarc[dashed,blue](-9,0)(150:295:2.7cm);
\draw (-9.25+8,-0.+-.5) node {$I_{j_{p_1};j_{p_1},j_{p_2}}$};
\draw (-11.2+8,-0.+-.5-.3) node {$I_{j_{p_2};j_{1},j_{2}}$};
\centerarc[dashed,blue](-10.2+8,0+-.5)(135:230:2.65cm);
\end{tikzpicture}
\caption{Two-point conformal blocks in $s$-channel and $t$-channel. The same graphs show the Wilson networks, with dashed lines representing the boundary of the thermal AdS$_3$. }
\label{2-point-fig}
\end{figure}
\paragraph{$t$-channel.} The two-point global block in the $t$-channel can be calculated either by solving the $sl(2, \mathbb{C})$ Casimir equation \cite{Kraus:2017ezw} or by summing over 3-point functions of three descendant operators \cite{Alkalaev:2017bzx}:
\begin{equation}
\label{glob-t}
\begin{aligned}
&\mathcal{F}_{\, t}^{^{\Delta_{1,2}, \tilde \Delta_{1,2}}}(q, z_{1,2}) =\\
&\quad=(z_1-z_2)^{\tilde\Delta_2-\Delta_1-\Delta_2} z_2^{-\tilde\Delta_2}
\sum_{m,n=0}^\infty \frac{\sigma_{m}(\Delta_{1}, \Delta_{2}, \tilde\Delta_2)\tau_{n,n}(\tilde\Delta_1, \tilde\Delta_2, \tilde\Delta_1)}{m!\,n!\,(2\tilde\Delta_1)_n(2\tilde\Delta_2)_m} \, q^{\tilde \Delta_1+n} \, \left(\frac{z_1-z_2}{z_2}\right)^m,
\end{aligned}
\end{equation}
where the $\tau$-function is given by \eqref{A-tau} and $\sigma_{m}(\Delta_{1}, \Delta_{2}, \tilde\Delta_2) = (-)^m (\tilde\Delta_2+\Delta_1-\Delta_2)_{m}(\tilde \Delta_2)_{m}$. The $t$-channel block for the degenerate quasi-primary operators is obtained by applying the arguments given around \eqref{degen}.
\section{One-point toroidal Wilson networks}
\label{sec:gauge}
In this section we explicitly calculate the one-point toroidal vertex function \eqref{tri3} in the thermal AdS$_3$ \eqref{con-glob-13}, which equals the one-point block function \eqref{glob_poly}--\eqref{1ptcoefGLOBAL}.
\subsection{Diagonal gauge}
\label{sec:diagonal}
Let us choose the boundary state as the lowest-weight vector in the respective spin-$j_a$ representation $\mathcal{D}_a$ (see Appendix \bref{sec:sl2})
\begin{equation}
\label{LW_i}
|a\rangle =|\mathbb{lw}\rangle_a\in \mathcal{D}_a\;:
\qquad
J_{1}|\mathbb{lw}\rangle_a = 0\;,
\qquad
J_{0}|\mathbb{lw}\rangle_a = -j_a |\mathbb{lw}\rangle_a\;,
\end{equation}
along with the Wilson line operator in Euclidean thermal AdS$_3$ space defined by the gravitational connection \eqref{con-glob-13}. Performing a gauge transformation, $\Omega=U \,\tilde{\Omega} \, U^{-1} +UdU^{-1}$, where the $SL(2,\mathbb{R})$ gauge group element is $U = \exp{\frac{i}{2}J_{-1}}\exp{-iJ_1}\exp{-\frac{i \pi}{2}J_{0}}$, the chiral connection~\eqref{con-glob-13} can be cast into the {\it diagonal} form
$\tilde{\Omega}= -i J_0\, dw$ as compared to the original {\it off-diagonal} form.\footnote{See \cite{Castro:2011fm} for more about the diagonal gauge in the $sl(N, \mathbb{R})$ higher-spin gravity case.} In the diagonal gauge, the chiral Wilson line operators \eqref{chiral_wilson} take the following general form
\begin{equation}
W_a[x_1,x_2] = \exp{(-ix_{12}J_0)}\;.
\end{equation}
The Wilson loop traversing the thermal circle can now be given in terms of the modular parameter as
\begin{equation}
\label{Wilson_loop}
W_{a}[0,2\pi \tau]=\exp{(2\pi i \tau J_0)} \equiv q^{J_0}\;,
\qquad
q = \exp{2\pi i\tau}\;,
\end{equation}
where $J_0$ is taken in the representation $\mathcal{D}_a$. Due to the intertwiner transformation property, using the Wilson operators in the diagonal gauge does not change the toroidal vertex functions of Section \bref{sec:toroidal}, except that the boundary lowest-weight vectors \eqref{LW_i} are to be transformed accordingly,
\begin{equation}
\label{U_trans}
|\mathbb{lw}\rangle_a \to |\hat{\mathbb{lw}\,}\rangle_a = U^{-1}_a |\mathbb{lw}\rangle_a \in \mathcal{D}_a\;,
\end{equation}
with the gauge group (constant) element $U_a$ given above and evaluated in the representation $\mathcal{D}_a$. In the diagonal gauge, the conformal invariance of the toroidal vertex functions is guaranteed by the property \eqref{conf_trans}, now realized with the identity matrix $C_m{}^n = \delta_m^n$ and the holomorphic conformal algebra generators $\mathcal{L}_n = -\exp{(inw)}(i \partial_w-n\Delta)$ in the cylindrical coordinates on $\mathbb{T}^2$.
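As a quick numerical aside (an illustration, not taken from the text), one can check that tracing the Wilson loop \eqref{Wilson_loop} over a spin-$j$ module, where $J_0$ has eigenvalues $-j, \ldots, j$, reproduces the finite-dimensional character; the closed form below is the standard textbook expression, assumed to coincide with \eqref{fin_char}:

```python
# Trace of the Wilson loop q^{J_0} over the spin-j module: eigenvalues of J_0 are
# -j, ..., j, so the trace is the finite-dimensional character (standard closed form assumed).

q = 0.53  # generic sample value of the modular parameter

for twoj in range(9):                                    # 2j = 0, 1, ..., 8
    j = twoj / 2
    trace = sum(q ** (-j + m) for m in range(twoj + 1))  # sum of q^m over m = -j, ..., j
    closed = (q ** (j + 0.5) - q ** (-j - 0.5)) / (q ** 0.5 - q ** -0.5)
    assert abs(trace - closed) < 1e-9
```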
The transformed boundary vector \eqref{U_trans} obeys the following linear relations \cite{Kraus:2017ezw}
\begin{equation}
\begin{aligned}
\label{special-state-1}
\left(J_1+J_{-1}-2J_0\right) |\hat{\mathbb{lw}\,}\rangle =0\;,\\
\left(J_1 - J_{-1} +2j\right) |\hat{\mathbb{lw}\,}\rangle=0\;,
\end{aligned}
\end{equation}
which are the two transformed lowest-weight conditions \eqref{LW_i}. Representing $|\hat{\mathbb{lw}\,}\rangle \in \mathcal{D}_j$ in the standard basis as
\begin{equation}
\label{trans1}
|\hat{\mathbb{lw}\,}\rangle=\beta_{j,j}|j,j\rangle+\beta_{j,j-1} |j,j-1\rangle+ \cdots+ \beta_{j,-j} |j,-j\rangle\;,
\end{equation}
acting on this state with $J_1$ or $J_{-1}$ and then using the defining relations \eqref{special-state-1} we obtain the recurrence
relations
\begin{equation}
\label{rec_vec}
\begin{aligned}
&\beta_{j,k} (h_j+k)=M_-(j,k+1) \beta_{j,k+1}\;,\\
&M_+(j,k-1) \beta_{j,k-1}=(k-h_j)\beta_{j,k}\;,
\end{aligned}
\end{equation}
where $k=-j,-j+1, ...,j-1, j$ and $M_{\pm}(j,k)$ are defined in~\eqref{standard}. Fixing the overall normalization as $\beta_{j,j} = 1$ and solving~\eqref{rec_vec} we find
\begin{equation}
\label{trans2}
\beta_{j,k}=- \prod _{r=0}^{j-k-1} \frac{M_-(j,j-r)}{r+1}
=
\prod _{r=0}^{j-k-1} \frac{2j-r}{M_+(j,j-r-1)}\;.
\end{equation}
In what follows we will need the transformed boundary state in the fundamental (two-dimensional) representation $\mathcal{D}_{\half}$ which is read off from \eqref{trans1}, \eqref{trans2} as
\begin{equation}
\label{fund-state-tilde}
|\hat{\mathbb{lw}\,}\rangle=|\half,\half\rangle+|\half,-\half\rangle\;.
\end{equation}
\subsection{Wigner 3$j$ symbol representation}
\label{sec:wigner}
Let us consider the one-point toroidal Wilson network in the diagonal gauge. To this end, using the translation invariance we set $w=0$ so that $W_c = \mathbb{1}_c$, and then insert the resolutions of identity \eqref{resolve}, which allows us to represent the vertex function \eqref{tri3} in the form
\begin{equation}
\label{tri_m}
\begin{array}{c}
\displaystyle
\stackrel{\circ}{V}_{a|c}(\tau) = \sum_{m,n = -j_a}^{j_a} \sum_{l=-j_c}^{j_c} \Big(\langle j_a,m| \, W_a[0,2\pi \tau] \,|j_a,n\rangle\Big) \Big(\langle j_a,n|I_{a; a, c}\, |j_a,m\rangle\otimes |j_c, l\rangle\Big)\langle j_c, l|\hat{\mathbb{lw}\,}\rangle_c\;,
\end{array}
\end{equation}
where the Wilson operator is given by \eqref{Wilson_loop} and $|\hat{\mathbb{lw}\,}\rangle_c$ is the transformed boundary vector \eqref{U_trans}. The first factor in \eqref{tri_m} is the Wigner D-matrix for the $SU(1,1)$ group element \eqref{Wilson_loop},
\begin{equation}\label{D-matrix}
D^{(j_a)}{}^m{}_n \equiv \langle j_a,m| \, q^{J_0} \,|j_a,n\rangle = \delta^{m}_n \,q^n\;,
\qquad
m,n = -j_a, -j_a+1, ... , j_a\;,
\end{equation}
where the last equality is obtained by using the standard basis \eqref{standard}.\footnote{Note that taking the trace of the D-matrix \eqref{D-matrix} we directly obtain the $su(1,1)$ character \eqref{fin_char}.} The last factor is the $l$-th coordinate of the transformed boundary vector in the standard basis,
\begin{equation}
V_{(j_c)}^l \equiv \langle j_c,l|\hat{\mathbb{lw}\,}\rangle_c\;,
\qquad
l = -j_c, -j_c+1, ... , j_c\;.
\end{equation}
It is given by \eqref{trans1}, \eqref{trans2}. Finally, the second factor is the matrix element of the intertwiner which is related to the Wigner 3$j$ symbol by \eqref{tri3-1}. Gathering all matrix elements together we obtain
\begin{equation}
\label{tri_mat}
\begin{array}{c}
\displaystyle
\stackrel{\circ}{V}_{a|c}(q) = \sum_{m,n = -j_a}^{j_a} \sum_{l=-j_c}^{j_c}
D^{(j_a)}{}^m{}_n [I_{a;a,c}]^n{}_{ml} V_{(j_c)}^l
=
(-)^{j_a}V_{(j_c)}^0\sum_{m = -j_a}^{j_a} (-)^m q^m
\begin{pmatrix}
j_c & j_a &\, j_a \\
0 & m & \,-m
\end{pmatrix}\;,
\end{array}
\end{equation}
where, in obtaining the second equality, we used: (1) the relations \eqref{mat_int}, \eqref{tri3-1}; (2) the property $m+n+k=0$ of the 3$j$ symbol $[W_{a,b,c}]_{mkn}$; (3) the diagonality of the D-matrix $D^{(j_a)}{}^m{}_n$, i.e. $m-n=0$. The last two properties reduce the three sums to a single one. It also follows that only the middle (zero magnetic number) component of the transformed boundary vector contributes, producing an overall factor.
Now, we adjust the representations $\mathcal{D}_a\approx \mathcal{D}_{j_p}$ and $\mathcal{D}_c\approx \mathcal{D}_{j_1}$ to the loop and the external leg. In this case, the sum on the right-hand side of \eqref{tri_mat} can be cast into the form
\begin{equation}
\label{tri_mat2}
\sum_{m=-j_p}^{j_p} (-)^m
\begin{pmatrix}
j_1 & j_p &\, j_p \\
0 & m & \,-m
\end{pmatrix}\,q^m =
q^{-j_p}\sum_{n=0}^{2j_p} (-)^{n-j_p}
\begin{pmatrix}
j_1 & j_p & j_p \\
0 & n-j_p & j_p-n
\end{pmatrix}\,q^n\;,
\end{equation}
which is obtained by the change of summation variable $n=m+j_p$.
On the other hand, the expansion coefficients of the one-point block~\eqref{glob_poly} are given by the hypergeometric function \eqref{1ptcoefGLOBAL}. Hence, in order to verify the representation~\eqref{tri_mat} we need to check the identity
\begin{equation}
\label{two-hyper-geoms}
\, _3F_2(-j_1,j_1+1,-n;1,-2 j_p;1)=\varkappa\, (-)^{n-j_p}
\begin{pmatrix}
j_1 & j_p & j_p \\
0 & n-j_p & j_p-n
\end{pmatrix}\,,
\end{equation}
where $\varkappa$ is an $n$-independent factor. This relation holds for
\begin{equation}
\varkappa=(-)^{j_p}
\begin{pmatrix}
j_1 & j_p &\, j_p \\
0 & j_p &\, -j_p
\end{pmatrix}^{-1}=
(-)^{j_p}\, \frac{(2 j_p+1)\sqrt{ \Gamma (1+2 j_p-j_1) \Gamma (2+2 j_p+j_1)}}{\Gamma (2 j_p+2)}\,.
\end{equation}
To see this we use the explicit representation for the Wigner 3$j$ symbol~\cite{varshalovich}
\begin{equation}
\label{3j-explicit}
\begin{aligned}
&\begin{pmatrix}
j_1 & j_2 & j_3 \\
m_1 & m_2 & m_3
\end{pmatrix} =\delta _{m_1+m_2+m_3,0}\,
\frac{\sqrt{\left(j_3-j_1+j_2\right)!} \sqrt{\left(-j_3+j_1+j_2\right)!} \sqrt{\left(j_3+j_1+j_2+1\right)!}}{\Gamma \left(j_3+j_1+j_2+2\right)\sqrt{\left(j_3+j_1-j_2\right)!}}\times\\
&\times\frac{\sqrt{(j_3-m_3)!} \sqrt{\left(j_1-m_1\right)!}}{\sqrt{(j_3+m_3)!} \sqrt{\left(j_1+m_1\right)!} \sqrt{\left(j_2-m_2\right)!} \sqrt{\left(j_2+m_2\right)!}}\,
\frac{(-)^{j_1+m_2-m_3}(2 j_3)! \left(j_3+j_2+m_1\right)!}{\left(j_3-j_1+j_2\right)! (j_3-m_3)!}\\
&\times \, _3F_2\left(-j_3+m_3,-j_3-j_1-j_2-1,-j_3+j_1-j_2;-2 j_3,-j_3-j_2-m_1;1\right)\;,
\end{aligned}
\end{equation}
which gives for the right-hand side of~\eqref{two-hyper-geoms}
\begin{equation}
\varkappa \, (-)^{n-j_p}
\begin{pmatrix}
j_1 & j_p & j_p \\
0 & n-j_p & j_p-n
\end{pmatrix} =
\frac{(2 j_p)! (-1)^{n-j_1} \, _3F_2(-j_1-2 j_p-1,j_1-2 j_p,-n;-2 j_p,-2 j_p;1)}{n! (2 j_p-n)!}\;.
\end{equation}
The right-hand side here can be transformed by making use of the (Euler-type) transformation for the generalized hypergeometric function (see e.g.~\cite{prudnikov1986integrals}),
\begin{equation}\label{Euler-transform}
\frac{\Gamma (d) \Gamma (-a-b-c+d+e) \, _3F_2(e-a,e-b,c;-a-b+d+e,e;1)}{\Gamma (d-c) \Gamma (-a-b+d+e)}= \, _3F_2(a,b,c;d,e;1)\;,
\end{equation}
so that we obtain the equality~\eqref{two-hyper-geoms}, which proves the representation~\eqref{tri_mat} for the one-point torus block~\eqref{glob_poly}.
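The identity \eqref{two-hyper-geoms} can also be cross-checked numerically. The sketch below is an illustration, not part of the derivation; the helper functions are ours. It evaluates the 3$j$ symbol by the standard Racah sum formula and the terminating $_3F_2$ series term by term, with $\varkappa$ computed from the $n=2j_p$ entry as above; even $j_1$ is used, since for odd $j_1$ a convention-dependent overall sign of the 3$j$ symbol may enter.

```python
from math import factorial, sqrt

def w3j(j1, j2, j3, m1, m2, m3):
    """Wigner 3j symbol via the standard Racah sum formula (integer spins only)."""
    if m1 + m2 + m3 != 0:
        return 0.0
    tmin = max(0, j2 - j3 - m1, j1 - j3 + m2)
    tmax = min(j1 + j2 - j3, j1 - m1, j2 + m2)
    s = sum((-1) ** t / (factorial(t)
                         * factorial(t - (j2 - j3 - m1))
                         * factorial(t - (j1 - j3 + m2))
                         * factorial(j1 + j2 - j3 - t)
                         * factorial(j1 - m1 - t)
                         * factorial(j2 + m2 - t))
            for t in range(tmin, tmax + 1))
    delta = (factorial(j1 + j2 - j3) * factorial(j1 - j2 + j3)
             * factorial(-j1 + j2 + j3) / factorial(j1 + j2 + j3 + 1))
    norm = sqrt(delta * factorial(j1 + m1) * factorial(j1 - m1)
                * factorial(j2 + m2) * factorial(j2 - m2)
                * factorial(j3 + m3) * factorial(j3 - m3))
    return (-1) ** (j1 - j2 - m3) * norm * s

def poch(a, i):
    """Pochhammer symbol (a)_i."""
    r = 1.0
    for t in range(i):
        r *= a + t
    return r

def f32(a, b, c, d, e):
    """Terminating 3F2(a,b,c;d,e;1), with c a non-positive integer."""
    return sum(poch(a, i) * poch(b, i) * poch(c, i)
               / (poch(d, i) * poch(e, i) * factorial(i)) for i in range(-c + 1))

j1, jp = 2, 3                                       # sample even external spin
kappa = (-1) ** jp / w3j(j1, jp, jp, 0, jp, -jp)    # as in the definition of varkappa
max_err = 0.0
for n in range(2 * jp + 1):
    lhs = f32(-j1, j1 + 1, -n, 1, -2 * jp)
    rhs = kappa * (-1) ** (n - jp) * w3j(j1, jp, jp, 0, n - jp, jp - n)
    max_err = max(max_err, abs(lhs - rhs))
assert max_err < 1e-9
```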
\subsection{Symmetric tensor product representation }
\label{sec:one_point}
Yet another possible realization of the intertwiners arises when the finite-dimensional $sl(2,\mathbb{R})$ representations $\mathcal{D}_j$ are realized as components of symmetric tensor products of the fundamental (spinor) representation $\mathcal{D}_{\half}$ (for notation and conventions, see Appendix \bref{sec:multi}). This multispinor technique was used in \cite{Besken:2016ooo} to calculate three-point and four-point sphere blocks. In what follows we explicitly calculate the one-point torus block for degenerate operators using the toroidal vertex function \eqref{tri3} realized via multispinors. In particular, this realization of the Wilson network formulation brings to light an interesting decomposition of the torus block function in terms of $sl(2,\mathbb{R})$ characters (see Section \bref{sec:character}).
We start with the toroidal one-point vertex function given in the form~\eqref{tri_m} or \eqref{tri_mat}, which is a product of three matrix elements, now to be calculated using the multispinor approach.
\paragraph{(I)} \hspace{-3mm} The first matrix element in \eqref{tri_m} is the Wigner $D$-matrix,
\begin{equation}
\label{M1}
D_{\alpha_1\cdots\alpha_{\lambda_a}}^{\beta_1\cdots\,\beta_{\lambda_a}}
= \frac{1}{\lambda_a!} \,D_{(\alpha_1}^{\beta_1}\cdots D_{\alpha_{\lambda_a})}^{\beta_{\lambda_a}} \;,
\end{equation}
where $\lambda_a = 2j_a$ and $D_\alpha^\beta$ is the Wigner $D$-matrix of the Wilson line wrapping the thermal cycle in the fundamental representation,
\begin{equation}
\label{Q-tilde-a}
D_\alpha^\beta=\langle e_\alpha|q^{J_0}|e_\beta\rangle=\left(
\begin{array}{cc}
q^{\half} & 0 \\
0 & q^{-\half} \\
\end{array}
\right)\;,
\end{equation}
where $|e_\alpha\rangle = |\half, (-)^{1+\alpha}\half\rangle$ denote the standard basis elements.\footnote{Note that taking the trace of the D-matrix \eqref{M1} one can obtain the $su(1,1)$ character \eqref{fin_char} \cite{Kraus:2017ezw}.}
\paragraph{(II)} \hspace{-3mm} The third matrix element in \eqref{tri_m} is given by the coordinates of the transformed boundary state $|c\rangle = |\hat{\mathbb{lw}\,}\rangle_c$ defined by \eqref{special-state-1}. Then, using the product formulas for spinors and representing $|\hat{\mathbb{lw}\,}\rangle_c$ as a product of elements \eqref{fund-state-tilde}, we find
\begin{equation}
\label{M2}
\langle j_c,l|\hat{\mathbb{lw}\,}\rangle_c \sim V_{\gamma_1\cdots\gamma_{\lambda_c}}=V_{\gamma_1}\cdots V_{\gamma_{\lambda_c}}\;,
\end{equation}
where the spinor $V_\gamma$ now collects the coordinates of the transformed boundary vector \eqref{fund-state-tilde} in the fundamental representation,
\begin{equation}
\label{V-tilde-a}
V_\gamma=\langle e_\gamma|\Big(|e_1\rangle+|e_2\rangle\Big)=\delta_{\gamma,1}+\delta_{\gamma,2}\;.
\end{equation}
\paragraph{(III)} \hspace{-3mm} The second matrix element in \eqref{tri_m} is the intertwiner which in the spinor form is just the projector \eqref{proj} with $\lambda_1 = \lambda_3 = \lambda_a$ and $\lambda_2 = \lambda_c$, so that from \eqref{k} we have $k = \lambda_c/2$.
\vspace{3mm}
Gathering all matrix elements together we assemble the following Wilson network matrix element
\begin{equation}
\begin{array}{l}
\displaystyle
W^{\rho_1 ... \rho_{\lambda_a}}_{\gamma_1 ... \gamma_{\lambda_a}}=
\\
\\
\displaystyle
\hspace{20mm}= \epsilon^{\alpha_{1}\beta_1}\cdots \epsilon^{\alpha_{\frac{\lambda_c}{2}}\beta_{\frac{\lambda_c}{2}}}\;
D^{\rho_1 \ldots \rho_{\frac{\lambda_c}{2}} \rho_{\frac{\lambda_c}{2}+1} \ldots \rho_{\lambda_a}}_{\alpha_1 \ldots \alpha_{\frac{\lambda_c}{2}} (\gamma_{1} \ldots \gamma_{\lambda_a-\frac{\lambda_c}{2}}}
\;V_{\gamma_{\lambda_a-\frac{\lambda_c}{2}+1}\ldots \gamma_{\lambda_a}) \beta_1 \ldots \beta_{\frac{\lambda_c}{2}}}\;,
\end{array}
\end{equation}
so that the vertex function \eqref{tri_m} is given by
\begin{equation}
\stackrel{\circ}{V}_{a|c} = W_{\gamma_1\cdots\gamma_{\lambda_a}}^{\gamma_1\cdots\gamma_{\lambda_a}} \;.
\end{equation}
In the rest of this section we show that the vertex function calculates the one-point torus block with degenerate operators as
\begin{equation}\label{one-point-tensor}
\mathcal{F}_{{j_p, j_1}}(q) = W_{\gamma_1\cdots\gamma_{2j_p}}^{\gamma_1\cdots\gamma_{2j_p}}\;.
\end{equation}
Substituting \eqref{M1} and \eqref{M2} into \eqref{one-point-tensor} we obtain
\begin{equation}
\label{Fpk}
\begin{array}{l}
\displaystyle
\mathcal{F}_{{j_p, j_1}}(q) = \frac{1}{(2j_p)!}\,\epsilon^{\alpha_1 \beta_1}\, \cdots\, \epsilon^{\alpha_{j_1} \beta_{j_1}}
\,
D^{\gamma_1}_{\alpha_1}\, \cdots\, D^{\gamma_{j_1}}_{\alpha_{j_1}}
\,
V_{\beta_1} \, \cdots\, V_{\beta_{j_1}}\, \times
\\
\\
\displaystyle
\hspace{60mm}\times\, D^{\gamma_{{j_1}+1}}_{(\gamma_1} \, \cdots\, D^{\gamma_{2j_p}}_{\gamma_{{2j_p}-{j_1}}}
\,
V_{\gamma_{{2j_p}-{j_1}+1}} \, \cdots\, V_{\gamma_{2j_p})}\,.
\end{array}
\end{equation}
In order to calculate \eqref{Fpk} we parameterize $2j_p = p$ and $j_1 = k$ along with the fusion condition $k\leq p$ \eqref{restrict}. This expression can be simplified by introducing new spinor and scalar functions,
\begin{equation}
E^\gamma = \epsilon^{\alpha\beta} D_\alpha^\gamma V_\beta = (q^{\half}, -q^{-\half})
\end{equation}
and
\begin{equation}
\label{mathCn}
\mathbb{C}_n = E^{\gamma}\,\left( D^{\alpha}_{\beta_1}D^{\beta_1}_{\beta_2} D^{\beta_2}_{\beta_3} \cdots D^{\beta_{n-1}}_{\gamma}\right)\, V_\alpha \equiv
E^{\gamma}\,\left(D^{n}\right)^\alpha_\gamma\, V_\alpha = q^{\frac{n+1}{2}} - q^{-\frac{n+1}{2}}\;,
\end{equation}
which are calculated using the definitions \eqref{Q-tilde-a} and \eqref{V-tilde-a}.
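A direct numerical check of these two formulas (illustrative only) contracts the fundamental-representation data \eqref{Q-tilde-a}, \eqref{V-tilde-a} explicitly:

```python
# Contract eps, D, V explicitly: E^g = eps^{ab} D_a^g V_b and
# C_n = E (D^n) V, expected to equal q^{(n+1)/2} - q^{-(n+1)/2}.

q = 0.37  # generic sample value of the modular parameter

D = [[q ** 0.5, 0.0], [0.0, q ** -0.5]]  # Wigner D-matrix in the fundamental rep
V = [1.0, 1.0]                           # transformed boundary vector
eps = [[0.0, 1.0], [-1.0, 0.0]]          # eps^{12} = 1 = -eps^{21}

E = [sum(eps[a][b] * D[a][g] * V[b] for a in range(2) for b in range(2))
     for g in range(2)]                  # comes out as (q^{1/2}, -q^{-1/2})

def C(n):
    # contract E with the n-th power of the (diagonal) D-matrix and V
    return sum(E[g] * D[g][g] ** n * V[g] for g in range(2))

for n in range(6):
    assert abs(C(n) - (q ** ((n + 1) / 2) - q ** (-(n + 1) / 2))) < 1e-9
```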
Then, \eqref{Fpk} can be cast into the form
\begin{equation}
\label{Fpk1}
\mathcal{F}_{{j_p, j_1}}(q) \equiv F_{_{p,k}} = \frac{1}{p!}\,E^{\gamma_1} \cdots E^{\gamma_k}\,
D^{\gamma_{k+1}}_{(\gamma_1} \, \cdots\, D^{\gamma_p}_{\gamma_{p-k}}
\,
V_{\gamma_{p-k+1}} \, \cdots\, V_{\gamma_p)}\;.
\end{equation}
Let us introduce the following matrix element,
\begin{equation}
\label{Inm}
[\mathbb{T}^{(n,m)}]^{\alpha_1 ... \alpha_m}_{\beta_1 ... \beta_m} =\frac{1}{n!} \,D^{\alpha_1}_{(\beta_1} \cdots D^{\alpha_m}_{\beta_m} D^{\gamma_{m+1}}_{\gamma_{m+1}} \cdots D^{\gamma_{n}}_{\gamma_{n})}
\end{equation}
at $m=0,...,n$. E.g., at $m=0$ we get the character $[\mathbb{T}^{(n,0)}] \equiv \chi_n$ of the spin-$n/2$ representation \cite{Kraus:2017ezw}, while at higher $m$ these elements serve as building blocks of the matrix element \eqref{Fpk1}. To calculate \eqref{Inm} it is convenient to classify all contractions in terms of cycles of the symmetric group $S_n$ acting on the lower indices. Noting that a length-$s$ cycle is given by the $s$-th power of the Wigner D-matrix $D^\alpha_\beta$, we find
\begin{equation}
\label{square}
[\mathbb{T}^{(n,m)}]^{\alpha_1 ... \alpha_m}_{\beta_1 ... \beta_m} = \frac{1}{n!}\; \sum_{m \leq s_1 + ... +s_m \leq n} b_{s_1, ... , s_m} (D^{s_1})^{\alpha_1}_{(\beta_1} \cdots (D^{s_m})^{\alpha_m}_{\beta_m)}\, [\mathbb{T}^{(n - s_1- ... -s_m,0)}]\;,
\end{equation}
with the coefficients
\begin{equation}
\label{5.33}
b_{s_1, ... , s_m} =
(n - s_1- ... -s_m)!\,\prod_{i=1}^m A_{n-m-s_1-...-s_{i-1}+i-1}^{s_i-1} = (n-m)!\;,
\end{equation}
where
\begin{equation}
A_s^t = \frac{s!}{(s-t)!}
\end{equation}
denotes the number of partial permutations (sequences without repetitions). In \eqref{5.33} the first factorial corresponds to the number of terms in $[\mathbb{T}^{(n - s_1- ... -s_m,0)}]$, while each factor in the product counts the number of independent cycles of length $s_i$ in the original symmetrization of $n$ indices. Remarkably, the result does not depend on the $s_i$.
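The collapse of the product in \eqref{5.33} to $(n-m)!$ can be verified by brute force (an illustrative check with our own helper names, ranging over all admissible cycle-length tuples at a sample value of $n$):

```python
from math import factorial
from itertools import product

def A(s, t):
    return factorial(s) // factorial(s - t)  # partial permutations A_s^t

def b(n, m, s):
    # the product formula for b_{s_1,...,s_m} from the text
    val = factorial(n - sum(s))
    for i in range(1, m + 1):
        val *= A(n - m - sum(s[:i - 1]) + i - 1, s[i - 1] - 1)
    return val

n = 6
for m in range(1, 4):
    for s in product(range(1, n + 1), repeat=m):
        if m <= sum(s) <= n:                 # admissible cycle lengths
            assert b(n, m, s) == factorial(n - m)
```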
Using the matrix elements \eqref{Inm} the original expression \eqref{Fpk1} can be represented as
\begin{equation}
F_{p,k}
=\sum_{l=0}^{k} a_{l}\, (\mathbb{C}_0)^{l}
\,E^{\beta_1} \cdots E^{\beta_{k-l}}
\;[\mathbb{T}^{(p-k,k-l)}]^{\alpha_1\cdots \alpha_{k-l}}_{\beta_1 \cdots \beta_{k-l}}
\;V_{\alpha_1} \cdots V_{\alpha_{k-l}}\;,
\end{equation}
where coefficients are given by the triangle sequence
\begin{equation}
\label{al}
a_l = \frac{(p-k)! }{p!}\,\left[ C_k^l \, A_k^l \, A_{p-k}^{k-l} \right]\;,
\end{equation}
where $C_k^l = \binom{k}{l}$ are binomial coefficients. We omit the combinatorial considerations leading to this formula. As a consistency check, one can show that the coefficients satisfy the natural condition $\sum_{l=0}^k a_l = 1$, meaning that we have enumerated all possible permutations of the originally symmetrized indices by re-organizing them in terms of the matrix elements \eqref{Inm}.\footnote{This can be directly seen by expressing $A^m_n = m!\, C^m_n$ and using the relation $ \sum_{i=0}^s C_{n}^i C_l^{s-i} = C_{n+l}^s$.}
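The normalization condition $\sum_{l=0}^k a_l = 1$ is also easy to confirm numerically (an illustrative check; $A_s^t$ is taken to vanish for $t>s$):

```python
from math import comb, factorial

def A(s, t):
    # partial permutations A_s^t, taken to vanish for t > s
    return factorial(s) // factorial(s - t) if 0 <= t <= s else 0

for p in range(1, 10):
    for k in range(p + 1):
        # sum_l C_k^l A_k^l A_{p-k}^{k-l} must equal p!/(p-k)!, i.e. sum_l a_l = 1
        total = sum(comb(k, l) * A(k, l) * A(p - k, k - l) for l in range(k + 1))
        assert total * factorial(p - k) == factorial(p)
```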
Now, using explicit form of the matrix elements \eqref{Inm} we find (up to an overall normalization)
\begin{equation}\label{BlockintemrsCm}
F_{p,k} = \sum_{s=0}^{k}\;\;\; C_k^{s}\;\; (\mathbb{C}_0)^{k-s-1}
\sum_{s \leq m_1 + ... +m_{s} \leq p-k}
\mathbb{C}_{m_1}\mathbb{C}_{m_2} \cdots \mathbb{C}_{m_{s}}\mathbb{C}_{p-k-m_1-...-m_{s}}\;,
\end{equation}
where the factors $\mathbb{C}_n$ are given by \eqref{mathCn}. Expressing $\mathbb{C}_n$ in terms of the modular parameter $q$, the multiple summation can be reduced to just four sums. To this end, we split the multiple sum into two parts, corresponding to the two terms in the last factor $\mathbb{C}_{p-k-m_1-...-m_{s}}$,
\begin{equation}
F_{p,k} =q^{-\frac{p}{2}}\sum_{n=0}^{k}\;\;\; C_k^{n}\;\; (q-1)^{k-n-1}\bigg( q^{p-k+1} J_2(n,q)-J_1(n,q) \bigg)\;.
\end{equation}
Here,
\begin{equation}
\begin{aligned}
&J_1(n,q)=\sum_{n \leq m_1 + ... +m_{n} \leq p-k} (q^{m_1+1}-1)\cdots (q^{m_n+1}-1)=\sum_{r=0}^n (-)^{n+r} \,C^r_n \,q^r \tilde{J}_1(n,r,q) \,,\\
&J_2(n,q)=\!\!\!\!\sum_{n \leq m_1 + ... +m_{n} \leq p-k} \!\!\!\!q^{-\sum_{j=1}^n m_j}(q^{m_1+1}-1)\cdots (q^{m_n+1}-1)=\sum_{r=0}^n (-)^{n+r} \, C^r_n\,q^r \tilde{J}_2(n,r,q) \;,
\end{aligned}
\end{equation}
and
\begin{equation}
\tilde{J}_1(n,r,q)=\!\!\!\sum_{n \leq m_1 + ... +m_{n} \leq p-k} \!\!\!q^{-\sum_{j=1}^r m_j}\;,\qquad
\tilde{J}_2(n,r,q)=\!\!\!\sum_{n \leq m_1 + ... +m_{n} \leq p-k} \!\!\!q^{-\sum_{j=1}^n m_j-\sum_{j=1}^r m_j}\;.
\end{equation}
We find
\begin{equation}
\tilde{J}_1(n,r,q)=\sum _{i=n-r}^{(p-k) (n-r)} C_{i-1}^{n-r-1} \sum _{j=r}^{p-k-i} C_{j-1}^{r-1}\,q^j \;,
\end{equation}
where the first binomial coefficient accounts for the different ways to choose the set of $(n-r)$ elements $m_j$ not appearing in the summand, and the second coefficient
counts the restricted compositions (see e.g. \cite{restr-comps}) of the remaining $r$ elements $m_j$. We note that the evaluation of $\tilde{J}_2(n,r,q)$ differs only by
interchanging the two sets and simultaneously replacing $q$ by $1/q$:
\begin{equation}
\tilde{J}_2(n,r,q)=\tilde{J}_1(n,n-r,1/q)\;.
\end{equation}
This gives
\begin{equation}
\begin{array}{l}
\displaystyle
F_{p,k} = q^{-\frac{p}{2}} \sum _{n=0}^k \,\sum _{r=0}^n\; (-1)^{r+k-1}\; C_k^n\, C^r_n\, (1-q)^{k-n-1}\,\times
\\
\\
\displaystyle
\hspace{10mm}\times\,\bigg( q^{p-k+1}\sum _{i=r}^{r (p-k)} C_{i-1}^{r-1} \sum _{j=n-r}^{p-k-i} C_{j-1}^{n-r-1}\,q^{r-j} -\sum _{i=n-r}^{(p-k) (n-r)} C_{i-1}^{n-r-1} \sum _{j=r}^{p-k-i} C_{j-1}^{r-1}\, q^{r+j}\bigg)\;,
\end{array}\end{equation}
which can be directly manipulated and, after a somewhat tedious but straightforward re-summation, yields the conformal block function \eqref{glob_poly}--\eqref{1ptcoefGLOBAL}.
Let us note that the representation \eqref{BlockintemrsCm} has a triangular degree of complexity, in the sense that it is simplest when $k=0$ ($j_1=0$, the 0-point function, i.e. the character \eqref{fin_char}) or $k=p$ ($j_1 = 2j_p$, the maximal admissible value of the external dimension \eqref{restrict})
\begin{equation}
F_{_{p,0}}= \mathbb{C}_0^{-1} \mathbb{C}_p = q^{-\frac{p}{2}} \frac{1-q^{p+1}}{1-q}\;,
\qquad
F_{_{p,p}} = (\mathbb{C}_0)^{p}= (-)^{p} q^{-\frac{p}{2}} (1-q)^{p}\;,
\end{equation}
while the most complicated function arises at $k=p/2$, when all multiple sums contribute.
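Both boundary cases are easy to confirm numerically (an illustrative check of the closed forms above against the $\mathbb{C}_n$ expressions):

```python
# Boundary cases of the block: F_{p,0} = C_p / C_0 and F_{p,p} = (C_0)^p,
# with C_n = q^{(n+1)/2} - q^{-(n+1)/2}.

q = 0.41  # generic sample value of the modular parameter

def C(n):
    return q ** ((n + 1) / 2) - q ** (-(n + 1) / 2)

for p in range(1, 8):
    assert abs(C(p) / C(0) - q ** (-p / 2) * (1 - q ** (p + 1)) / (1 - q)) < 1e-9
    assert abs(C(0) ** p - (-1) ** p * q ** (-p / 2) * (1 - q) ** p) < 1e-9
```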
\subsection{Character decomposition}
\label{sec:character}
The one-point block in the form \eqref{BlockintemrsCm} can be represented as a combination of zero-point blocks, i.e. characters, of various dimensions. To this end, we note that the character \eqref{fin_char} can be expressed in terms of variables $\mathbb{C}_n$ as
\begin{equation}
\chi_n = \frac{\mathbb{C}_n}{\mathbb{C}_0}\;,
\end{equation}
where, in its turn, $\mathbb{C}_0$ can be interpreted as minus the inverse character of the weight $\Delta=1/2$ representation,
\begin{equation}
\mathbb{C}_0 = -\frac{1}{\hat\chi_{\half}}\;,
\qquad\text{where}\qquad
\hat\chi_{\half} = \frac{q^{\half}}{1-q}\;,
\end{equation}
see \eqref{sl2_char}. Then, rewriting \eqref{BlockintemrsCm} in terms of the characters we arrive at the following representation of the one-point block (up to a prefactor, recall that $p=2j_p$ and $k=j_1$)
\begin{equation}
\label{final}
F_{p,k} = \frac{1}{\left(\hat\chi_{\half}\right)^{k}}\,\sum_{s=0}^{k}\; \binom{k}{s}\;\;
\sum_{s \leq m_1 + ... +m_{s} \leq p-k}
\chi_{p-k-m_1-...-m_{s}} \prod_{i=1}^s \chi_{m_i}\;.
\end{equation}
This form suggests that one can alternatively evaluate this expression by using the Clebsch-Gordan rule for characters. As an example, let the external dimension take the minimal admissible (bosonic) value, $j_1=k=1$. In this case, from \eqref{final} we obtain the relation (recall that $p$ is even)
\begin{equation}
\label{Fp1_char}
F_{p,1} = \frac{1}{\hat\chi_{\half}}\,\sum_{m=0}^{\frac{p}{2}-1}\chi_m\chi_{p-m-1}\;.
\end{equation}
Now we recall the Clebsch-Gordan series \eqref{fusion} in terms of the characters
\begin{equation}
\chi_m \chi_{p-m-1} = \sum_{i=0}^m \chi_{p-2m-1+2i}\;.
\end{equation}
Substituting this expression into \eqref{Fp1_char} we find
\begin{equation}
\label{Fp1_char1}
F_{p,1} = \frac{1}{\hat\chi_{\half}}\, \sum_{n=0}^{\frac{p}{2}-1} \left(\frac{p}{2}-n\right)\chi_{p-2n-1} \;,
\end{equation}
which after substituting the explicit form of characters in terms of $q$ gives back the one-point torus block function.
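Both steps, the Clebsch-Gordan multiplication of characters and the resummation leading to \eqref{Fp1_char1}, are easy to verify numerically (an illustrative sketch; the character is taken in its standard closed form, assumed to coincide with \eqref{fin_char}):

```python
# chi(n): finite-dimensional character of the spin-n/2 representation (standard form).

q = 0.29  # generic sample value of the modular parameter

def chi(n):
    return (q ** ((n + 1) / 2) - q ** (-(n + 1) / 2)) / (q ** 0.5 - q ** -0.5)

p = 8  # even, as in the text
for m in range(p // 2):
    # Clebsch-Gordan step: chi_m chi_{p-m-1} = sum_i chi_{p-2m-1+2i}
    lhs = chi(m) * chi(p - m - 1)
    rhs = sum(chi(p - 2 * m - 1 + 2 * i) for i in range(m + 1))
    assert abs(lhs - rhs) < 1e-9

# resummation: the double sum collapses to (p/2 - n) chi_{p-2n-1}
direct = sum(chi(m) * chi(p - m - 1) for m in range(p // 2))
resummed = sum((p / 2 - n) * chi(p - 2 * n - 1) for n in range(p // 2))
assert abs(direct - resummed) < 1e-9
```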
To perform this calculation for general values of the external dimension $j_1 = k$ one needs to know the Clebsch-Gordan series for tensoring any number $n$ of irreps of weights ${\bf j} = \{j_i = m_i/2\}$. It essentially reduces to knowing the Clebsch-Gordan numbers $N_j({\bf j})$, which are the multiplicities with which a spin-$j$ irrep occurs in the range $[j_{min}, j_{max}]$, where the min(max) weights are simply defined as $2j_{max} = \sum_{i=1}^n m_i$ and $2j_{min} = \sum_{i=1}^n (-)^{n+i-1}m_i$.\footnote{Alternatively, in order to evaluate the $n$-fold tensor product one can apply the Clebsch-Gordan procedure to each pair of irreps to eliminate all tensor products in favour of direct sums.} However, a closed formula for the general Clebsch-Gordan numbers is an unsolved mathematical problem (for recent developments see e.g. \cite{Louck_2008}).
This consideration leads to the following representation\footnote{Note that a similar character decomposition is used to calculate partition functions and correlators in $SU(N)$ lattice gauge theories in the strong coupling regime, see e.g. the review \cite{Caselle:2000tn}.}
\begin{equation}
\label{final_char}
\mathcal{F}_{{j_p, j_1}}(q) = \frac{1}{\left(\hat\chi_{\half}\right)^{j_1}}\,\sum_{m\in D(j_p,j_1) }\; d_m\; \chi_{m}(q)\;,
\end{equation}
which realizes the one-point block as a linear combination of characters. Here, the constant coefficients $d_m$ and the summation range $D(j_p,j_1)$ depend on the Clebsch-Gordan numbers $N_j({\bf j})$ for strings of characters and on the factorial coefficients arising when re-summing the multiple sums in the original formula \eqref{final}. An example is given by \eqref{Fp1_char1}.
\section{Two-point toroidal Wilson networks}
\label{sec:Two-point}
In what follows we represent two-point toroidal vertex functions of Section \bref{sec:toroidal} in terms of the symmetric tensor products along the lines of Section \bref{sec:one_point} and using the 3$j$ Wigner symbols as in Section \bref{sec:wigner}.
\subsection{$s$-channel toroidal network}
\label{sec:2s}
Let us consider the toroidal vertex function \eqref{four_s} and insert the resolutions of identity to express all the ingredients as matrix elements in the standard basis (see Appendix \bref{sec:sl2})
\begin{equation}
\label{four_s1}
\begin{array}{l}
\displaystyle
\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (s)} \,b,e|a,c}(\tau, {\bf z})
=\sum_m\sum_n\sum_k\sum_l\sum_r
\,\Big(\langle j_b,m| \,W_b[0,2\pi\tau] \,|j_b, n\rangle\Big)\,\times
\\
\\
\displaystyle
\hspace{12mm}\times \,\Big(\langle j_b, n|\,I_{b; c, e}\,\,|j_e, k\rangle\otimes |j_c, l\rangle \Big)\,
\Big(\langle j_e,k| I_{e; a, b}\,|j_b, m\rangle \otimes |j_a,r\rangle\Big)
\,\langle j_c,l|\tilde c\rangle
\,\langle j_a, r|\tilde a\rangle\;,
\end{array}
\end{equation}
where the last two matrix elements are given by coordinates of the tilded boundary vectors,
\begin{equation}
\label{four_s2}
\begin{array}{c}
\displaystyle
\langle j_c,l|\tilde c\rangle = \langle j_c,l| W_c[0,w_1] |\hat{\mathbb{lw}\,}\rangle_c \;,
\\
\\
\displaystyle
\langle j_a,r|\tilde a\rangle = \langle j_a,r| W_a[0,w_2] |\hat{\mathbb{lw}\,}\rangle_a \;,
\end{array}
\end{equation}
where the transformed boundary vectors are defined by the expressions \eqref{trans1}, \eqref{trans2} in the respective representations. Now, we identify the representations on the two internal edges as $\mathcal{D}_b \approx \mathcal{D}_{j_{p_1}}$ and $\mathcal{D}_e \approx \mathcal{D}_{j_{p_2}}$, and on the two external edges as $\mathcal{D}_c \approx \mathcal{D}_{j_{1}}$ and $\mathcal{D}_a \approx \mathcal{D}_{j_{2}}$. The direct computation of the two-point torus blocks in the $s$-channel from Wilson line networks according to \eqref{four_s1}--\eqref{four_s2} is given in Section~\bref{S2pt-proof}.
Following the discussion in Section \bref{sec:one_point} we can also explicitly calculate each of the matrix elements entering \eqref{four_s1}--\eqref{four_s2}. The Wigner $D$-matrix \eqref{M1} in the present case reads as
\begin{equation}
\label{twoM1}
D_{\alpha_1\cdots\alpha_{\lambda_{p_1}}}^{\beta_1\cdots\beta_{\lambda_{p_1}}}=\frac{1}{\lambda_{p_1}!}\, D_{(\alpha_1}^{\beta_1}\cdots D_{\alpha_{\lambda_{p_1})}}^{\beta_{\lambda_{p_1}}} \;,
\end{equation}
where $\lambda_{p_1} = 2j_{p_1}$, while the projections of the boundary states are calculated as
\begin{equation}
\label{M1-2-3}
\tilde V^{(m)}_{\gamma_1\cdots\gamma_{\lambda_m}}=\tilde V^{(m)}_{\gamma_1}\cdots \tilde V^{(m)}_{\gamma_{\lambda_m}}\;,
\qquad
m=1,2\;,
\end{equation}
where $\tilde V^{(1,2)}_\gamma$ are the coordinates of the tilded transformed boundary vectors \eqref{four_s2} in the basis of the fundamental representation,
\begin{equation}
\label{V12-tilde-a}
\tilde V^{(m)}_\gamma=\delta_{\gamma,1}\exp{\frac{i w_m}{2}}+ \delta_{\gamma,2}\exp{-\frac{i w_m}{2}}\;,\qquad m=1,2\;,
\end{equation}
cf. \eqref{V-tilde-a}. Now, we gather the above matrix elements together, contracting them by means of the two intertwiners \eqref{proj}--\eqref{k}. Using the first intertwiner we obtain
\begin{equation}
\begin{array}{l}
\displaystyle
(W_1)^{\rho_1 ... \rho_{\lambda_{p_1}}}_{\gamma_1 ... \gamma_{\lambda_{p_2}}}=
\epsilon^{\alpha_{1}\beta_1}\cdots \epsilon^{\alpha_{\delta}\beta_{\delta}}\;
D^{\rho_1 \ldots \rho_{\delta} \, \rho_{\delta+1} \ldots \rho_{\lambda_{p_1}}}_{\alpha_1 \ldots \alpha_{\delta} (\gamma_{1} \ldots \gamma_{\lambda_{p_1}-\delta}}
\;\tilde V^{(1)}_{\gamma_{\lambda_{p_1}-\delta+1}\ldots \gamma_{\lambda_{p_2}}) \beta_1 \ldots \beta_{\delta}}\;,
\end{array}
\end{equation}
where $\delta=\frac{\lambda_{p_1}-\lambda_{p_2}+\lambda_1}{2}$. The second intertwiner gives
\begin{equation}
\label{M123}
\begin{array}{l}
\displaystyle
(W_2)^{\rho_1 ... \rho_{\lambda_{p_1}}}_{\gamma_1 ... \gamma_{\lambda_{p_1}}}=
\epsilon^{\alpha_{1}\beta_1}\cdots \epsilon^{\alpha_{\varkappa}\beta_{\varkappa}}\;
(W_1)^{\rho_1 \ldots \rho_{\varkappa} \rho_{\varkappa+1} \ldots \rho_{\lambda_{p_1}}}_{\alpha_1 \ldots \alpha_{\varkappa} (\gamma_{1} \ldots \gamma_{\lambda_{p_2}-\varkappa}}
\;\tilde V^{(2)}_{\gamma_{\lambda_{p_2}-\varkappa+1}\ldots \gamma_{\lambda_{p_1}}) \beta_1 \ldots \beta_{\varkappa}}\;,
\end{array}
\end{equation}
where $\varkappa=\frac{\lambda_{p_2}-\lambda_{p_1}+\lambda_2}{2}$. Finally, the overall contraction yields the two-point torus block in the $s$-channel as
\begin{equation}
\label{two-point-s-tensor}
\mathcal{F}_s^{^{\Delta_{1,2}, \tilde \Delta_{1,2}}}\left(q, w_{1,2}\right) = (W_{2})_{\gamma_1\cdots\gamma_{\lambda_{p_1}}}^{\gamma_1\cdots\gamma_{\lambda_{p_1}}}\;.
\end{equation}
The cylindrical coordinates on the torus are related to the planar coordinates by $w = i \log z$, so that the block function \eqref{glob-s} given in the planar coordinates is obtained from \eqref{two-point-s-tensor} by the standard conformal transformation for correlation functions.
Explicit calculation of this matrix representation along the lines of the one-point block analysis in Section \bref{sec:one_point} will be considered elsewhere. Here, we just give one simple example with spins $j_{p_1} = j_{p_2}=j_{1}=j_{2}=1$ demonstrating that the resulting function is indeed \eqref{glob-s}, see Appendix \bref{app:ex}.
\subsection{$t$-channel toroidal network}
\label{sec:2t}
Let us consider the toroidal vertex function \eqref{four_t} and insert the resolutions of identity to express all the ingredients as matrix elements in the standard basis
\begin{equation}
\label{four_t1}
\begin{array}{l}
\displaystyle
\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (t)} \, c,e|a,b}(\tau, {\bf z})
=\sum_m\sum_n\sum_k\sum_l\sum_r
\,\Big(\langle j_c,m| \,W_c[0,2\pi\tau] \,|j_c,n\rangle\Big)\,\times
\\
\\
\displaystyle
\hspace{12mm}\times \,\Big(\langle j_c,n|I_{c; c, e}\, |j_c,m\rangle\otimes|j_e,l\rangle\Big)\,
\Big(\langle j_e,l| I_{e; a, b}\,|j_a,r\rangle\otimes|j_b,k\rangle\,\Big)
\,\langle j_a,r|\tilde a\rangle
\,\langle j_b,k|\tilde b\rangle\;,
\end{array}
\end{equation}
where the last two matrix elements are given by
\begin{equation}
\label{four_t2}
\begin{array}{c}
\displaystyle
\langle j_a,r|\tilde a\rangle = \langle j_a,r| W_a[0,w_1] |\hat{\mathbb{lw}\,}\rangle_a\;,
\\
\\
\displaystyle
\langle j_b,k|\tilde b\rangle = \langle j_b,k| W_b[0,w_2] |\hat{\mathbb{lw}\,}\rangle_b \;.
\end{array}
\end{equation}
The representations are identified as $\mathcal{D}_c \approx \mathcal{D}_{j_{p_1}}$ and $\mathcal{D}_e \approx \mathcal{D}_{j_{p_2}}$ for the intermediate edges, and $\mathcal{D}_a \approx \mathcal{D}_{j_{1}}$ and $\mathcal{D}_b \approx \mathcal{D}_{j_{2}}$ for the external edges. Using the matrix elements \eqref{twoM1}, \eqref{M1-2-3}, and \eqref{V12-tilde-a}, and then evaluating the second intertwiner in \eqref{four_t1}, we obtain
\begin{equation}
\begin{array}{l}
\displaystyle
(W_1)_{\gamma_1 \ldots \gamma_{\lambda_{p_2}}}=
\epsilon^{\alpha_{1}\beta_1}\cdots \epsilon^{\alpha_{\delta}\beta_{\delta}}\;
\tilde V^{(1)}_{\alpha_1 \ldots \alpha_{\delta} (\gamma_{1} \ldots \gamma_{\lambda_{1}-\delta}}
\;\tilde V^{(2)}_{\gamma_{\lambda_{1}-\delta+1}\ldots \gamma_{\lambda_{p_2}}) \beta_1 \ldots \beta_{\delta}}\;,
\end{array}
\end{equation}
where $\delta=\frac{\lambda_{1}+\lambda_{2}-\lambda_{p_2}}{2}$. The first intertwiner gives
\begin{equation}
\begin{array}{l}
\displaystyle
(W_2)^{\rho_1 \ldots \rho_{\lambda_{p_1}}}_{\gamma_1 \ldots \gamma_{\lambda_{p_1}}}=
\epsilon^{\alpha_{1}\beta_1}\cdots \epsilon^{\alpha_{\varkappa}\beta_{\varkappa}}\;
D^{\rho_1 \ldots \rho_{\varkappa} \rho_{\varkappa+1} \ldots \rho_{\lambda_{p_1}}}_{\alpha_1 \ldots \alpha_{\varkappa} (\gamma_{1} \ldots \gamma_{\lambda_{p_1}-\varkappa}}
\;(W_1)_{\gamma_{\lambda_{p_1}-\varkappa+1}\ldots \gamma_{\lambda_{p_1}}) \beta_1 \ldots \beta_{\varkappa}}\;,
\end{array}
\end{equation}
where $\varkappa=\frac{\lambda_{p_2}}{2}$. Finally, the overall contraction yields the two-point torus block in the $t$-channel \eqref{glob-t} in the cylindrical coordinates as
\begin{equation}
\label{two-point-t-tensor}
\mathcal{F}_{t}^{_{\Delta_{1,2}, \tilde \Delta_{1,2}}}(q, w_{1,2}) = (W_2)_{\gamma_1\cdots\gamma_{\lambda_{p_1}}}^{\gamma_1\cdots\gamma_{\lambda_{p_1}}}\;.
\end{equation}
Just as for $s$-channel blocks, we leave aside the straightforward check of this matrix representation and explicitly calculate just the simplest example given by spins $j_{p_1} = j_{p_2}=j_{1}=j_{2}=1$ to demonstrate that the resulting function is indeed \eqref{glob-t}, see Appendix \bref{app:ex}.
\subsection{Wigner 3$j$ symbol representation of the $s$-channel block}
\label{S2pt-proof}
Similarly to the one-point block, the expansion coefficients of the two-point block in the $s$-channel~\eqref{glob-s} with degenerate dimensions can be written in terms of hypergeometric functions
\begin{equation}
\label{glob-s-1}
\begin{aligned}
&\mathcal{F}_s^{^{\Delta_{1,2}, \tilde \Delta_{1,2}}}(q, z_{1,2}) =
q^{-j_{p_1}} z_1^{j_1-j_{p_1}+j_{p_2}} z_2^{j_2+j_{p_1}-j_{p_2}} \sum_{k=0}^{2j_{p_1}}\,\sum_{m=0}^{2j_{p_2}} f_{k,m}(j_{1,2}|j_{p_{1,2}})\, q^k\, \left(\frac{z_1}{z_2}\right)^{k-m},
\end{aligned}
\end{equation}
where we used \eqref{degen} and\footnote{See eq. (2.13) in \cite{Alkalaev:2015fbw}.}
\begin{align}
\label{fmn}
&f_{k,m}(j_{1,2}|j_{p_{1,2}})=
\frac{\tau_{k,m}(-j_{p_1},-j_1,-j_{p_2})\tau_{m,k}(-j_{p_2},-j_2, -j_{p_1})}{k!\, m!\,(-2j_{p_1})_k (-2j_{p_2})_m}\;,
\end{align}
where the $\tau$-coefficients are defined in~\eqref{A-tau}.
It can be written more explicitly in terms of the generalized hypergeometric function $_3F_2$ as
\begin{align}
&f_{k,m}(j_{1,2}|j_{p_{1,2}})=\frac{(j_{p_1}-j_{p_2}-j_1)_m (j_{p_2}-j_{p_1}-j_2)_k (j_{p_2}-j_{p_1}-j_1-m)_k (j_{p_1}-j_{p_2}-j_2-k)_m}{k! \,m!\,(-2j_{p_1})_k (-2 j_{p_2})_m }\times\nonumber\\
&\hspace{10mm}\times\, _3F_2(2 j_{p_1}-k+1,\,-k,-m;\,j_{p_1}-j_{p_2}-j_2-k,\,j_{p_1}-j_{p_2}+j_2-k+1;\,1) \nonumber\\
&\hspace{10mm}\times \,_3F_2(2 j_{p_2}-m+1,-k,-m;j_{p_2}-j_{p_1}-j_1-m,j_{p_2}-j_{p_1}+j_1-m+1;1)\,\;.
\end{align}
On the other hand, the Wilson line network representation~\eqref{four_s1} reads
\begin{equation}
\label{four_s1_1}
\begin{array}{l}
\displaystyle
\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (s)} \,b,e|a,c}(\tau, {w_{1,2}})
=\sum_m\sum_n\sum_k\sum_l\sum_r
\,\Big(\langle j_b,m| \,W_b[0,2\pi\tau] \,|j_b, n\rangle\Big)\,\times
\\
\\
\displaystyle
\hspace{12mm}\times \,\Big(\langle j_b, n|\,I_{b; c, e}\,\,|j_e, k\rangle\otimes |j_c, l\rangle \Big)\,
\Big(\langle j_e,k| I_{e; a, b}\,|j_b, m\rangle \otimes |j_a,r\rangle\Big)
\,\langle j_c,l|\tilde c\rangle
\,\langle j_a, r|\tilde a\rangle\;,
\end{array}
\end{equation}
where $z_{1,2} = \exp{(-iw_{1,2})}$.
Or, using the notation introduced in Section~\ref{sec:wigner},
\begin{equation}
\label{four_s1_2}
\begin{array}{c}
\displaystyle
\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (s)} \,b,e|a,c}(\tau, {w_{1,2}})
=
\sum_m\sum_n\sum_k\sum_l\sum_r
D^{(j_b)}{}^m{}_n [I_{b;c,e}]^n{}_{kl} [I_{e;a,b}]^k{}_{mr}\,V_{(j_c)}^l
\,V_{(j_a)}^r\;.
\end{array}
\end{equation}
Substituting the Wigner D-matrix~\eqref{D-matrix}, the intertwiners in terms of the 3$j$ symbol \eqref{tri3-1}, and the two boundary vectors according to \eqref{four_s2},
\begin{equation}
\label{6.2_new}
V_{(j)}^l = \langle j,l| W[0,w] |\hat{\mathbb{lw}\,}\rangle = \beta_{j,l}\exp{(iwl)} \;,
\end{equation}
we find
\begin{align}
\label{four_s1_3}
&\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (s)} \,b,e|a,c}(q, {w_{1,2}})
=\nonumber\\
&=\sum_{m,k,l,r} q^m \epsilon^{(b)}{}^{m,-k-l} \epsilon^{(e)}{}^{k,-m-r}
\begin{pmatrix}
j_b & j_c &\, j_e \\
-k-l & l & \,k
\end{pmatrix}
\begin{pmatrix}
j_e & j_a &\, j_b \\
-m-r & r & \,m
\end{pmatrix}
\beta_{j_c,l} e^{iw_2 l}\,\beta_{j_a,r} e^{iw_1 r}\;,
\end{align}
where the $\beta$-coefficients are defined in~\eqref{trans2}, the sum over $n$ is removed by the $\delta_n^m$ factor in the D-matrix~\eqref{D-matrix}, and the property $m+n+k=0$ of the 3$j$ symbol $[W_{a,b,c}]_{mkn}$ is used. The Levi-Civita symbols $\epsilon^{(b)}{}^{m,-k-l}=(-)^{j_b-m}\delta_{m,k+l}$ and $\epsilon^{(e)}{}^{k,-m-r}=(-)^{j_e-k}\delta_{k,m+r}$ \eqref{levi_civita} remove the other two summations:
\begin{align}
\label{four_s1_4}
&\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (s)} \,b,e|a,c}(q, {w_{1,2}})
=\nonumber\\
&=\sum_{m, k} (-)^{j_b+j_e-m-k} q^m
\begin{pmatrix}
j_b & j_c &\, j_e \\
-m & m-k & \,k
\end{pmatrix}
\begin{pmatrix}
j_e & j_a &\, j_b \\
-k & k-m & \,m
\end{pmatrix}
\beta_{j_c,m-k} e^{iw_2 (m-k)}\,\beta_{j_a,k-m} e^{iw_1 (k-m)}\;.
\end{align}
In order to compare the Wilson line network representation~\eqref{four_s1_1} with the CFT result~\eqref{glob-s-1} we need to identify representation labels as $a=j_2,\, b=j_{p_1},\, c =j_1,\, e = j_{p_2}$, and take into account the Jacobian
of the transformation to the planar coordinates $z_{1,2} = \exp{(-iw_{1,2})}$, see~\eqref{two-point-s-tensor-1},
\begin{align}
\label{four_s1_5}
&\hspace{-5mm}\mathcal{F}_{s}^{_{\Delta_{1,2}, \tilde \Delta_{1,2}}}(q, z_{1,2})
=z_1^{j_1} z_2^{j_2} \sum _{k=-j_{p_1}}^{j_{p_1}} \sum _{m=-j_{p_2}}^{j_{p_2}} q^k (-)^{j_{p_1}+j_{p_2}-k-m} \left(\frac{z_1}{z_2}\right)^{k-m} \times\nonumber\\
&
\hspace{40mm}\times
\beta_{j_1,k-m} \beta_{j_2,m-k}
\begin{pmatrix}
\; j_{p_1} & j_1 & j_{p_2} \\
-k & k-m & m \\
\end{pmatrix}
\begin{pmatrix}
j_{p_2} & j_2 & j_{p_1} \\
-m & m-k & k \\
\end{pmatrix}.
\end{align}
Changing $k\rightarrow k+j_{p_1}$ and $m\rightarrow m+j_{p_2}$ we obtain
\begin{align}
\label{four_s1_6}
\mathcal{F}_{s}^{_{\Delta_{1,2}, \tilde \Delta_{1,2}}}(q, z_{1,2})
=
q^{-j_{p_1}} z_1^{j_1-j_{p_1}+j_{p_2}} z_2^{j_2+j_{p_1}-j_{p_2}} \sum _{k=0}^{2 j_{p_1}} \sum _{m=0}^{2 j_{p_2}} \tilde{f}_{k,m}(j_{1,2}|j_{p_{1,2}})\, q^k \left(\frac{z_1}{z_2}\right)^{k-m}\;,
\end{align}
where
\begin{align}
\label{ftmn}
&\hspace{-5mm}\tilde{f}_{k,m}(j_{1,2}|j_{p_{1,2}})=(-)^{-k-m} \beta_{j_1,j_{p_2}-j_{p_1}+k-m}\,\beta_{j_2,j_{p_1}-j_{p_2}-k+m}\,\times\nonumber
\\
&
\hspace{10mm}\begin{pmatrix}
j_{p_1} & j_1 & j_{p_2} \\
j_{p_1}-k\,\,\,\, &j_{p_2} -j_{p_1}+k-m \,\,\,\,& m-j_{p_2} \\
\end{pmatrix}
\begin{pmatrix}
j_{p_2} & j_2 & j_{p_1} \\
j_{p_2}-m \,\,\,\,& j_{p_1}-j_{p_2}-k+m \,\,\,\,& k-j_{p_1} \\
\end{pmatrix}
.
\end{align}
Now, comparing~\eqref{four_s1_6} and~\eqref{glob-s-1}, we see that in order to verify the representation~\eqref{four_s1} we need to check
\begin{equation}
\label{fmn-ftmn}
\tilde{f}_{k,m}(j_{1,2}|j_{p_{1,2}})=\varkappa_2 \, f_{k,m}(j_{1,2}|j_{p_{1,2}})\;,
\end{equation}
where the LHS is defined in~\eqref{ftmn}, the RHS in~\eqref{fmn}, and $\varkappa_2$ is a $(k,m)$-independent factor. Using the explicit representation~\eqref{3j-explicit} for the 3$j$ symbol in terms of the generalized hypergeometric function, as well as the following relation (see e.g.~\cite{prudnikov1986integrals}):
\begin{align}\label{Euler-transform-2}
&\, _3F_2(a,b,-k;c,d;1)=\nonumber\\
&(-)^k\, \frac{(d-a)_k (d-b)_k}{(c)_k(d)_k} \, _3F_2(1-d-k,a+b-c-d-k+1,-k;a-d-k+1,b-d-k+1;1)\,,
\end{align}
one can check that the parameter $\varkappa_2$ is given by
\begin{align}
&\varkappa_2 =
\frac{2 \Gamma (2 j_{p_1}+1) \Gamma (2 j_{p_2}+1) }{\Gamma (j_1-j_{p_1}+j_{p_2}+1) \Gamma (j_2+j_{p_1}-j_{p_2}+1)}\sqrt{\frac{j_1\, j_2 }{\Gamma (j_1+j_{p_1}+j_{p_2}+2) \Gamma (j_2+j_{p_1}+j_{p_2}+2)}}\,\times\nonumber\\
&\times\sqrt{\frac{\Gamma (2 j_1) \Gamma (2 j_2) \Gamma (j_1-j_{p_1}+j_{p_2}+1) \Gamma (j_2+j_{p_1}-j_{p_2}+1)}{\Gamma (j_1+j_{p_1}-j_{p_2}+1) \Gamma (-j_1+j_{p_1}+j_{p_2}+1) \Gamma (j_2-j_{p_1}+j_{p_2}+1) \Gamma (-j_2+j_{p_1}+j_{p_2}+1) }}
\;.
\end{align}
Thus, the relation~\eqref{fmn-ftmn} holds which proves the Wilson line network representation~\eqref{four_s1} for the two-point block in $s$-channel~\eqref{glob-s}.
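As an independent sanity check, the $_3F_2$ transformation~\eqref{Euler-transform-2} used in this argument can be tested numerically for terminating series. The sketch below is our own illustration (not part of the original derivation); it uses the \texttt{mpmath} library with arbitrarily chosen generic parameters $a,b,c,d$:

```python
# Numerical check of the 3F2 transformation for terminating series.
# The parameter values are arbitrary generic choices, not taken from the text.
from mpmath import mp, hyp3f2, rf

mp.dps = 30
a, b, c, d = mp.mpf("0.3"), mp.mpf("1.7"), mp.mpf("2.2"), mp.mpf("3.5")

for k in range(1, 6):
    lhs = hyp3f2(a, b, -k, c, d, 1)
    # prefactor (-1)^k (d-a)_k (d-b)_k / ((c)_k (d)_k), with (x)_k = rf(x, k)
    pref = (-1) ** k * rf(d - a, k) * rf(d - b, k) / (rf(c, k) * rf(d, k))
    rhs = pref * hyp3f2(1 - d - k, a + b - c - d - k + 1, -k,
                        a - d - k + 1, b - d - k + 1, 1)
    assert abs(lhs - rhs) < mp.mpf("1e-20"), (k, lhs, rhs)
```

Since the third upper parameter is a negative integer, both sides are finite sums, so the comparison is exact up to working precision.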
A few comments are in order. First, we note that the general idea behind the proof is to observe that the $\tau$-function defining the block coefficients \eqref{fmn} can be expressed via the hypergeometric function $_3F_2$. On the other hand, one of the convenient representations of the Wigner 3$j$ symbols is also given in terms of $_3F_2$. This ultimately allows comparing two dual representations of two-point configurations. Second, using this observation one can directly calculate the Wilson network representation for the $n$-point global torus block function in the $s$-channel (also known as a necklace channel) \cite{Alkalaev:2017bzx}.
\subsection{Wigner 3$j$ symbol representation of the $t$-channel block}
\label{T2pt-proof}
In this section we follow the general calculation pattern elaborated in Sections \ref{sec:wigner} and \ref{S2pt-proof}. To this end, we rewrite the two-point $t$-channel block~\eqref{glob-t}
as\footnote{It can be shown that this factorized form of the block function is reduced to the product of two hypergeometric functions which are 1-point torus block and 4-point sphere $t$-channel block \cite{Kraus:2017ezw}.}
\begin{align}
\label{global_t_p1}
\mathcal{F}_{t}^{_{\Delta_{1,2}, \tilde \Delta_{1,2}}}(q, z_{1,2})
=
z_2^{-\Delta_1-\Delta_2} \sum _{m=0}^{\infty} g_{m}(\tilde \Delta_{1,2})q^{m+ \tilde \Delta_{1}}\sum _{l=0}^{\infty} h_{l}(\Delta_{1,2}| \tilde\Delta_{2})\, \left(\frac{z_1}{z_2}\right)^{l}\;,
\end{align}
where the coefficients
\begin{equation}\label{g}
g_{m}(\tilde \Delta_{1,2})=\frac{\tau_{m,m}(\tilde\Delta_1, \tilde\Delta_2, \tilde\Delta_1)}{m!\,(2\tilde\Delta_1)_m}
\end{equation}
and
\begin{equation}
\label{h}
h_{l}(\Delta_{1,2}| \tilde\Delta_{2})=(-)^{\tilde\Delta_2-\Delta_1-\Delta_2+l}\!\!\!\;\;\sum_{s=0}^\infty (-)^{s}
\binom{\tilde\Delta_2-\Delta_1-\Delta_2+s}{l}\,\frac{\sigma_{s}(\Delta_{1}, \Delta_{2}, \tilde\Delta_2)}{s!\,(2\tilde\Delta_2)_s}\;,
\end{equation}
where, to simplify the summation domain over the parameter $s$ in the last formula, we have adopted the formal rule that $\binom{x}{y}=0$ if $x<y$. All conformal dimensions are kept arbitrary, and later on we choose the ones corresponding to the degenerate operators \eqref{degen}.
On the other hand, the Wilson line network representation~\eqref{four_t1} reads
\begin{equation}
\label{four_t1_new}
\begin{array}{l}
\displaystyle
\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (t)} \, c,e|a,b}(\tau, {\bf z})
=\sum_m\sum_n\sum_k\sum_l\sum_r
\,\Big(\langle j_c,m| \,W_c[0,2\pi\tau] \,|j_c,n\rangle\Big)\,\times
\\
\\
\displaystyle
\hspace{12mm}\times \,\Big(\langle j_c,n|I_{c; c, e}\, |j_c,m\rangle\otimes|j_e,l\rangle\Big)\,
\Big(\langle j_e,l| I_{e; a, b}\,|j_a,r\rangle\otimes|j_b,k\rangle\,\Big)
\,\langle j_a,r|\tilde a\rangle
\,\langle j_b,k|\tilde b\rangle\;.
\end{array}
\end{equation}
Using the notation introduced in Section~\ref{sec:wigner},
\begin{equation}
\label{four_t1_2}
\begin{array}{c}
\displaystyle
\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (t)} \, c,e|a,b}(\tau, {w_{1,2}})
=
\sum_m\sum_n\sum_l\sum_k\sum_r
D^{(j_c)}{}^m{}_n [I_{c;c,e}]^n{}_{ml} [I_{e;a,b}]^l{}_{rk}\,V_{(j_a)}^r
\,V_{(j_b)}^k\;.
\end{array}
\end{equation}
Substituting the Wigner D-matrix~\eqref{D-matrix}, the intertwiners in terms of the 3$j$ symbol \eqref{tri3-1}, and the two boundary vectors \eqref{6.2_new}, we find
\begin{align}
\label{four_t1_3}
&\stackrel{\circ}{V}_{\hspace{-1mm}{\rm (t)} \, c,e|a,b}(q, {w_{1,2}})
=\nonumber\\
&=\sum_{m,k,l,r} q^m \epsilon^{(c)}{}^{m,-m-l} \epsilon^{(e)}{}^{l,-r-k}
\begin{pmatrix}
j_c & j_c &\, j_e \\
-m-l & m & \,l
\end{pmatrix}
\begin{pmatrix}
j_e & j_a &\, j_b \\
-r-k & r & \,k
\end{pmatrix}
\beta_{j_a,r} e^{iw_1 r}\,\beta_{j_b,k} e^{iw_2 k}\;,
\end{align}
where the $\beta$-coefficients are defined in~\eqref{trans2}. Now, we identify the representation labels as $a=j_1$, $b=j_2$, $c =j_{p_1}$, $e=j_{p_2}$, and then use the Levi-Civita symbols and change variables to $z_{1,2} = \exp{(-iw_{1,2})}$,
\begin{align}
\label{four_t1_4}
\mathcal{F}_{t}^{_{\Delta_{1,2}, \tilde \Delta_{1,2}}}(q, z_{1,2})
=z_1^{j_1} z_2^{j_2}\sum_{m=-j_{p_1}}^{j_{p_1}} (-)^{j_{p_1}+j_{p_2}-m}
\begin{pmatrix}
j_{p_1} & j_{p_1} &\, j_{p_2} \\
-m & m & \,0
\end{pmatrix}q^m \;\times
\nonumber\\
\times \sum_{k=-j_2}^{j_2}
\beta_{j_1,-k} \,\beta_{j_2,k}\begin{pmatrix}
j_{p_2} & j_1 &\, j_2 \\
0 & -k & \,k
\end{pmatrix}
\left(\frac{z_1}{z_2}\right)^k\;.
\end{align}
Changing $m\to m+j_{p_1}$ and $k\to l-j_1$ and assuming, for definiteness, that $j_1>j_2$ we obtain the expression
\begin{align}
\label{four_t1_5}
\mathcal{F}_{t}^{_{\Delta_{1,2}, \tilde \Delta_{1,2}}}(q, z_{1,2})
=z_2^{j_1+j_2}
\left[\sum_{m=0}^{2j_{p_1}} (-)^{j_{p_2}-m}
\begin{pmatrix}
j_{p_1} & j_{p_1} &\, j_{p_2} \\
j_{p_1}-m & m-j_{p_1} & \,0
\end{pmatrix}q^{m-j_{p_1}} \right]\;\times
\nonumber\\
\times \left[\sum_{l=j_1-j_2}^{j_1+j_2}
\beta_{j_1,j_1-l} \,\beta_{j_2,l-j_1}\begin{pmatrix}
j_{p_2} & j_1 &\, j_2 \\
0 & j_1-l & \,l-j_1
\end{pmatrix}
\left(\frac{z_1}{z_2}\right)^l\right]\;,
\end{align}
that has the following structure
\begin{align}
\label{four_t1_6}
\mathcal{F}_{t}^{_{\Delta_{1,2}, \tilde \Delta_{1,2}}}(q, z_{1,2})
=
z_2^{j_1+j_2} \sum _{m=0}^{2 j_{p_1}} \tilde{g}_{m}(j_{p_{1,2}})q^{m-j_{p_1}}\sum _{l=j_1-j_2}^{j_1+j_2} \tilde{h}_{l}(j_{1,2}|j_{p_{2}})\, \left(\frac{z_1}{z_2}\right)^{l}\;.
\end{align}
Now, comparing the last relation~\eqref{four_t1_6} and the original block function~\eqref{global_t_p1} at integer negative weights \eqref{degen}, we see that in order to verify the representation~\eqref{four_t1} we need to check
\begin{equation}
\label{chi12}
\tilde{g}_{m}(j_{p_{1,2}})=\chi_1 \, g_{m}(-j_{p_{1,2}})\quad
\text{and}\quad
\tilde{h}_{l}(j_{1,2}|j_{p_2})=\chi_2 \, h_{l}(-j_{1,2}|-j_{p_2}) \;,
\end{equation}
where the coefficients $\chi_{1,2}$ should not depend on $m,l$ and the coefficient functions $g_m,h_l$ are defined in~\eqref{g} and \eqref{h}. Note that $h_{l}(-j_{1,2}|-j_{p_2}) = 0$ if $l \notin [j_1-j_2,j_1+j_2]$.
Using the results of Section~\ref{sec:wigner} we get
\begin{align}
\label{chi1}
\chi_1=
\frac{(-)^{j_{p_2}}(2j_{p_1})!}{\sqrt{(2j_{p_1}-j_{p_2})!\Gamma(2+2j_{p_1}+j_{p_2})}}
\;.
\end{align}
Similarly, using the explicit representation for the 3$j$ symbol \cite{varshalovich} one can find that the second equality in~\eqref{chi12} holds with
\begin{align}
\label{chi2}
\chi_2=\frac{(j_2-j_1-j_{p_2})_{j_{p_2}} \sqrt{\frac{\Gamma (2 j_2+1) \Gamma (j_1-j_2+1) \Gamma (j_1-j_2+j_{p_2}+1) \Gamma (j_{p_2}-j_1+j_2+1) \Gamma (j_1+j_2+j_{p_2}+2)}
{(-2 j_1)_{j_1+j_2} \Gamma (j_1+j_2-j_{p_2}+1)}}}{(-1)^{j_1+j_2+j_{p_2}} (-2 j_{p_2})_{j_{p_2}} \Gamma (2 j_{p_2}+1) \, _2F_1(-j_1-j_2-j_{p_2}-1,j_1-j_2-j_{p_2};-2 j_{p_2};1)}
\;.
\end{align}
Thus, we conclude that the Wilson toroidal network operator~\eqref{four_t1} does calculate the two-point torus block in the $t$-channel~\eqref{glob-t}.
\section{Concluding remarks}
\label{sec:concl}
In this work we discussed toroidal Wilson networks in the thermal AdS$_3$ and how they compute $sl(2,\mathbb{R})$ conformal blocks in torus CFT$_2$. We extensively discussed the general formulation of the Wilson line networks, which are in fact $SU(1,1)$ spin networks, paying particular attention to key features that allow interpreting these networks as invariant parts of the conformal correlation functions, i.e. conformal blocks, on different topologies. We explicitly formulated toroidal Wilson line networks in the thermal AdS$_3$ and built the corresponding vertex functions which calculate one-point and two-point torus conformal blocks with degenerate quasi-primary operators. In particular, in both the one-point and two-point cases we described two equivalent representations: the first is in terms of symmetric tensor products (multispinors), while the second involves 3$j$ Wigner symbols. The calculation based on the 3$j$ symbols turns out to be considerably shorter than the one based on multispinors; this is because the multispinor approach makes all combinatorial manipulations manifest, whereas the 3$j$ symbols package this combinatorics into known relations from the mathematical handbooks.
Our general direction for further research is to use the spin network approach, which is a quite developed area (for review see e.g. \cite{Baez:1994hx}), in order to generalize Wilson line network representation of the $sl(2,\mathbb{R})$ conformal blocks to the full Virasoro algebra $Vir$ conformal blocks. In this respect, recent papers \cite{Fitzpatrick:2016mtp,Besken:2017fsj,Hikida:2017ehf,Hikida:2018eih,Hikida:2018dxe} dealing with $1/c$ corrections to the sphere CFT$_2$ global blocks are interesting and promising. It would be tempting, for instance, to formulate the Wilson line network representation of quantum conformal blocks in $1/c$ perturbation theory for CFT$_2$ on general Riemann surfaces $\Sigma_g$.
Obviously, this problem is quite non-trivial already in the leading approximation since even global blocks on $\Sigma_g$ are unknown. In this respect one can mention the Casimir equation approach that characterizes global blocks as eigenfunctions of Casimir operators in OPE channels \cite{Dolan:2011dv}. As argued in \cite{Kraus:2017ezw} there are general group-theoretical arguments based on the gauge/conformal algebra relation \eqref{conf_trans} that force the Wilson network operators in the bulk to satisfy the Casimir equations on the boundary. It would be important to elaborate an exact procedure which identifies the Wilson network operators with solutions of the Casimir equations for arbitrary OPE channels thereby showing the Wilson/block correspondence explicitly.
Going beyond the global $c=\infty$ limit essentially leads to calculating multi-stress tensor correlators (as in the sphere CFT$_2$ case originally treated in \cite{Fitzpatrick:2016mtp} and \cite{Besken:2017fsj}). However, the multi-stress tensor correlators on higher genus Riemann surfaces, and, in particular, on the torus, are quite complicated. In part, this is due to non-trivial modular properties of double-periodic functions (in the torus case).
In general, the Wilson line approach may prove efficient for calculating block functions on arbitrary $\Sigma_g$ because the underlying spin networks are essentially the same as in the sphere topology case, except for loops corresponding to non-trivial holonomies of the bulk space. This would provide an alternative to the direct operator approach to calculating conformal blocks in CFT on Riemann surfaces.
\paragraph{Acknowledgements.} The work of K.A. was supported by the RFBR grant No 18-02-01024 and by the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”.
\section{Introduction}
Basketball is a global and growing sport with interest from fans of all ages. This growth has coincided with a rise in data availability and innovative methodology that has inspired fans to study basketball through a statistical lens. Many of the approaches in basketball analytics can be traced to pioneering work in baseball~\citep{schwartz_2013}, beginning with Bill James' publications of \emph{The Bill James Baseball Abstract} and the development of the field of ``sabermetrics''~\citep{james1984the-bill, james1987bill, james2010new}. James' sabermetric approach captivated the larger sports community when the 2002 Oakland Athletics used analytics to win a league-leading 102 regular season games despite a prohibitively small budget. Chronicled in Michael Lewis' \textit{Moneyball}, this story demonstrated the transformative value of analytics in sports~\citep{lewis2004moneyball}.
In basketball, Dean Oliver and John Hollinger were early innovators who argued for evaluating players on a per-minute basis rather than a per-game basis and developed measures of overall player value, like Hollinger's Player Efficiency Rating (PER)~\citep{oliver2004basketball, hollingerper, hollinger2004pro}. The field of basketball analytics has expanded tremendously in recent years, even extending into popular culture through books and articles by data-journalists like Nate Silver and Kirk Goldsberry, to name a few~\citep{silver2012signal, goldsberry2019sprawlball}. In academia, interest in basketball analytics transcends the game itself, due to its relevance in fields such as psychology \citep{gilovich1985hot, vaci2019large, price2010racial}, finance and gambling \citep{brown1993fundamentals, gandar1998informed}, economics (see, for example, the Journal of Sports Economics), and sports medicine and health \citep{drakos2010injury, difiori2018nba}.
Sports analytics also has immense value for statistical and mathematical pedagogy. For example, \citet{drazan2017sports} discuss how basketball can broaden the appeal of math and statistics across youth. At more advanced levels, there is also a long history of motivating statistical methods using examples from sports, dating back to techniques like shrinkage estimation \citep[e.g.][]{efron1975data} up to the emergence of modern sub-fields like deep imitation learning for multivariate spatio-temporal trajectories \citep{le2017data}. Adjusted plus-minus techniques (Section \ref{reg-section}) can be used to motivate important ideas like regression adjustment, multicollinearity, and regularization \citep{sill2010improved}.
\subsection{This review}
Our review builds on the early work of \citet{kubatko2007starting} in ``A Starting Point for Basketball Analytics,'' which aptly establishes the foundation for basketball analytics. In this review, we focus on modern statistical and machine learning methods for basketball analytics and highlight the many developments in the field since their publication nearly 15 years ago. Although we reference a broad array of techniques, methods, and advancements in basketball analytics, we focus primarily on understanding team and player performance in gameplay situations. We exclude important topics related to drafting players~\citep[e.g.][]{mccann2003illegal,groothuis2007early,berri2011college,arel2012NBA}, roster construction, win probability models, tournament prediction~\citep[e.g.][]{brown2012insights,gray2012comparing,lopez2015building, yuan2015mixture, ruiz2015generative, dutta2017identifying, neudorfer2018predicting}, and issues involving player health and fitness~\citep[e.g.][]{drakos2010injury,mccarthy2013injury}. We also note that much of the literature pertains to data from the National Basketball Association (NBA). Nevertheless, most of the methods that we discuss are relevant across all basketball leagues; where appropriate, we make note of analyses using non-NBA data.
We assume some basic knowledge of the game of basketball, but for newcomers, \url{NBA.com} provides a useful glossary of common NBA terms~\citep{nba_glossary}. We begin in Section~\ref{datatools} by summarizing the most prevalent types of data available in basketball analytics. The online supplementary material highlights various data sources and software packages. In Section~\ref{teamsection} we discuss methods for modeling team performance and strategy. Section~\ref{playersection} follows with a description of models and methods for understanding player ability. We conclude the paper with a brief discussion of our view of the future of basketball analytics.
\subsection{Data and tools}
\label{datatools}
\noindent \textbf{Box score data:} The most widely available data type is box score data. Box scores, which were introduced by Henry Chadwick in the 1900s~\citep{pesca_2009}, summarize games across many sports. In basketball, the box score includes summaries of discrete in-game events that are largely discernible by eye: shots attempted and made, points, turnovers, personal fouls, assists, rebounds, blocked shots, steals, and time spent on the court. Box scores are referenced often in post-game recaps.
\url{Basketball-reference.com}, the professional basketball subsidiary of \url{sports-reference.com}, contains preliminary box score information on the NBA and its precursors, the ABA, BAA, and NBL, dating back to the 1946-1947 season; rebounds first appear for every player in the 1959-60 NBA season \citep{nbaref}. There are also options for variants on traditional box score data, including statistics on a per 100-possession, per game, or per 36-minute basis, as well as an option for advanced box score statistics. Basketball-reference additionally provides data on the WNBA and numerous international leagues. Data on further aspects of the NBA are also available, including information on the NBA G League, NBA executives, referees, salaries, contracts, and payrolls as well as numerous international leagues. One can find similar college basketball information on the \url{sports-reference.com/cbb/} site, the college basketball subsidiary of \url{sports-reference.com}.
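To make the per-36-minute convention mentioned above concrete, the rescaling is a simple linear normalization of raw totals. The following sketch uses invented season totals rather than real player data:

```python
# Per-36-minute normalization of raw box-score totals (illustrative numbers,
# not real player data): stat_per36 = stat_total * 36 / minutes_played.
def per_36(totals, minutes):
    """Rescale raw season totals to a per-36-minute basis."""
    return {stat: val * 36.0 / minutes for stat, val in totals.items()}

raw = {"PTS": 1200, "REB": 450, "AST": 300}   # hypothetical season totals
stats36 = per_36(raw, minutes=1800)
print(stats36)  # -> {'PTS': 24.0, 'REB': 9.0, 'AST': 6.0}
```

Per-100-possession statistics follow the same pattern, with estimated possessions in place of minutes.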
For NBA data in particular, \url{NBA.com} contains a breadth of data beginning with the 1996-97 season~\citep{nbastats}. This includes a wide range of summary statistics, including those based on tracking information, a defensive dashboard, ``hustle''-based statistics, and other options. \url{NBA.com} also provides a variety of tools for comparing various lineups, examining on-off court statistics, and measuring individual and team defense segmented by shot type, location, etc. The tools provided include the ability to plot shot charts for any player on demand.
\hfill
\noindent \textbf{Tracking data}: Around 2010, the emergence of ``tracking data,'' which consists of spatial and temporally referenced player and game data, began to transform basketball analytics. Tracking data in basketball fall into three categories: player tracking, ball tracking, and data from wearable devices. Most of the basketball literature that pertains to tracking data has made use of optical tracking data from SportVU through Stats, LLC and Second Spectrum, the current data provider for the NBA. Optical data are derived from raw video footage from multiple cameras in basketball arenas, and typically include timestamped $(x, y)$ locations for all 10 players on the court as well as $(x, y, z)$ locations for the basketball at over 20 frames per second.\footnote{A sample of SportVU tracking data can currently be found on Github \citep{github-tracking}.} Many notable papers from the last decade use tracking data to solve a range of problems: evaluating defense \citep{franks2015characterizing}, constructing a ``dictionary'' of play types \citep{miller2017possession}, evaluating expected value of a possession \citep{cervonepointwise}, and constructing deep generative models of spatio-temporal trajectory data \citep{yu2010hidden, yue2014learning, le2017data}. See \citet{bornn2017studying} for a more in-depth introduction to methods for player tracking data.
Recently, high resolution technology has enabled $(x,y,z)$ tracking of the basketball to within one centimeter of accuracy. Researchers have used data from NOAH~\citep{noah} and RSPCT~\citep{rspct}, the two largest providers of basketball tracking data, to study several aspects of shooting performance~\citep{marty2018high, marty2017data, bornn2019using, shah2016applying, harmon2016predicting}, see Section \ref{sec:shot_efficiency}. Finally, we also note that many basketball teams and organizations are beginning to collect biometric data on their players via wearable technology. These data are generally unavailable to the public, but can help improve understanding of player fitness and motion~\citep{smith_2018}. Because there are few publications on wearable data in basketball to date, we do not discuss them further.
\hfill
\noindent \textbf{Data sources and tools:} For researchers interested in basketball, we have included two tables in the supplementary material. Table 1 contains a list of R and Python packages developed for scraping basketball data, and Table 2 enumerates a list of relevant basketball data repositories.
\section{Team performance and strategy}
\label{teamsection}
Sportswriters often discuss changes in team rebounding rate or assist rate after personnel or strategy changes, but these discussions are rarely accompanied by quantitative analyses of how these changes actually affect the team's likelihood of winning. Several researchers have attempted to address these questions by investigating which box score statistics are most predictive of team success, typically with regression models \citep{hofler2006efficiency, melnick2001relationship, malarranha2013dynamic, sampaio2010effects}. Unfortunately, the practical implications of such regression-based analyses remain unclear, due to two related difficulties in interpreting predictors for team success: 1) multicollinearity leads to high variance estimators of regression coefficients~\citep{ziv2010predicting} and 2) confounding and selection bias make it difficult to draw any causal conclusions. In particular, predictors that are correlated with success may not be causal when there are unobserved contextual factors or strategic effects that explain the association (see Figure \ref{fig:simpsons} for an interesting example). More recent approaches leverage spatio-temporal data to model team play within individual possessions. These approaches, which we summarize below, can lead to a better understanding of how teams achieve success.
\label{sec:team}
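The multicollinearity issue noted above can be illustrated with a small simulation: when two box-score predictors are nearly collinear, the sum of their coefficients is well determined but the individual split is unstable, and a ridge penalty stabilizes the estimates. This is a toy sketch with synthetic data, not a reanalysis of any cited study:

```python
# Toy illustration of multicollinearity in box-score regressions:
# two nearly collinear predictors (e.g. assists and passes) inflate the
# variance of individual OLS coefficients; ridge regularization stabilizes
# them. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)                      # e.g. standardized assist rate
x2 = x1 + 0.01 * rng.normal(size=n)          # nearly collinear second predictor
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 0.5 * x2 + rng.normal(scale=0.5, size=n)  # "team success"

def fit(X, y, lam):
    # Closed-form (ridge) least squares: (X'X + lam * I)^{-1} X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols = fit(X, y, lam=0.0)
beta_ridge = fit(X, y, lam=10.0)
print(beta_ols, beta_ridge)
```

Across simulated seasons the individual OLS coefficients swing wildly while their sum stays near the true value of 1.5; the ridge fit instead splits the credit roughly evenly between the two collinear predictors.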
\subsection{Network models}
One common approach to characterizing team play involves modeling the game as a network and/or modeling transition probabilities between discrete game states. For example, \citet{fewell2012basketball} define players as nodes and ball movement as edges and compute network statistics like degree and flow centrality across positions and teams. They differentiate teams based on the propensity of the offense to either move the ball to their primary shooters or distribute the ball unpredictably.~\citet{fewell2012basketball} suggest conducting these analyses over multiple seasons to determine if a team's ball distribution changes when faced with new defenses.~\citet{xin2017continuous} use a similar framework in which players are nodes and passes are transactions that occur on edges. They use more granular data than \citet{fewell2012basketball} and develop an inhomogeneous continuous-time Markov chain to accurately characterize players' contributions to team play.
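The player-passing network of \citet{fewell2012basketball} can be sketched in a few lines: players are nodes, directed edges carry pass counts, and weighted degree measures each player's involvement in ball movement. All counts below are hypothetical:

```python
# Minimal passing-network sketch in the spirit of Fewell et al. (2012):
# players are nodes, directed edges carry pass counts (counts are invented).
from collections import defaultdict

passes = {  # (passer, receiver): count -- hypothetical data
    ("PG", "SG"): 40, ("PG", "SF"): 25, ("SG", "PG"): 30,
    ("SF", "C"): 15, ("C", "PF"): 10, ("PF", "PG"): 20,
}

out_deg = defaultdict(int)
in_deg = defaultdict(int)
for (u, v), w in passes.items():
    out_deg[u] += w
    in_deg[v] += w

# Weighted total degree: passes thrown plus passes received per player.
total_deg = {p: in_deg[p] + out_deg[p]
             for p in set(in_deg) | set(out_deg)}
print(max(total_deg, key=total_deg.get))  # -> PG
```

Flow centrality and the other statistics used in that paper are refinements of the same node-and-edge representation.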
\citet{skinner2015method} motivate their model of basketball gameplay with a traffic network analogy, where possessions start at Point A, the in-bounds, and work their way to Point B, the basket. With a focus on understanding the efficiency of each pathway, Skinner proposes that taking the highest percentage shot in each possession may not lead to the most efficient possible game. He also proposes a mathematical justification of the ``Ewing Theory,'' which states that a team inexplicably plays better when its star player is injured or leaves the team~\citep{simmons}, by comparing it to a famous traffic congestion paradox~\citep{skinner2010price}. See \citet{skinner2015optimal} for a more thorough discussion of optimal strategy in basketball.
\subsection{Spatial perspectives}
Many studies of team play also focus on the importance of spacing and spatial context.~\citet{metulini2018modelling} try to identify spatial patterns that improve team performance on both the offensive and defensive ends of the court. The authors use a two-state Hidden Markov Model to model changes in the surface area of the convex hull formed by the five players on the court. The model describes how changes in the surface area are tied to team performance, on-court lineups, and strategy.~\citet{cervone2016NBA} explore a related problem of assessing the value of different court-regions by modeling ball movement over the course of possessions.
Their court-valuation framework can be used to identify teams that effectively suppress their opponents' ability to control high value regions.
Spacing also plays a crucial role in generating high-value shots.~\citet{lucey2014get} examine almost 20,000 3-point shot attempts from the 2012-2013 NBA season and find that defensive factors, including a ``role swap'' where players change roles, help generate open 3-point looks.
In related work, \citet{d2015move} stress the importance of ball movement in creating open shots in the NBA. They show that ball movement adds unpredictability into offenses, which can create better offensive outcomes. The work of D'Amour and Lucey could be reconciled by recognizing that unpredictable offenses are likely to lead to ``role swaps'', but this would require further research.~\citet{sandholtz2019measuring} also consider the spatial aspect of shot selection by quantifying a team's ``spatial allocative efficiency,'' a measure of how well teams determine shot selection. They use a Bayesian hierarchical model to estimate player FG\% at every location in the half court and compare the estimated FG\% with empirical field goal attempt rates. In particular, the authors identify a proposed optimum shot distribution for a given lineup and compare the true point total with the proposed optimum point total. Their metric, termed Lineup Points Lost (LPL), identifies which lineups and players have the most efficient shot allocation.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{figs/playbook}
\caption{Unsupervised learning for play discovery \citep{miller2017possession}. A) Individual player actions are clustered into a set of discrete actions. Cluster centers are modeled using Bezier curves. B) Each possession is reduced to a set of co-occurring actions. C) By analogy, a possession can be thought of as a ``document'' consisting of ``words.'' ``Words'' correspond to all pairs of co-occurring actions. A ``document'' is the possession, modeled using a bag-of-words model. D) Possessions are clustered using Latent Dirichlet Allocation (LDA). After clustering, each possession can be represented as a mixture of strategies or play types (e.g. a ``weave'' or ``hammer'' play).}
\label{fig:playbook}
\end{figure}
\subsection{Play evaluation and detection}
Finally, \citet{lamas2015modeling} examine the interplay between offensive actions, or space creation dynamics (SCDs), and defensive actions, or space protection dynamics (SPDs). In their video analysis of six Barcelona F.C. matches from Liga ACB, they find that setting a pick was the most frequent SCD used but it did not result in the highest probability of an open shot, since picks are most often used to initiate an offense, resulting in a new SCD. Instead, the SCD that led to the highest proportion of shots was off-ball player movement. They also found that the employed SPDs affected the success rate of the SCD, demonstrating that offense-defense interactions need to be considered when evaluating outcomes.
Lamas' analysis is limited by the need to watch games and manually label plays. Miller and Bornn address this common limitation by proposing a method for automatically clustering possessions using player trajectories computed from optical tracking data~\citep{miller2017possession}. First, they segment individual player trajectories around periods of little movement and use a functional clustering algorithm to cluster individual segments into one of over 200 discrete actions. They use a probabilistic method for clustering player trajectories into actions, where cluster centers are modeled using Bezier curves. These actions serve as inputs to a probabilistic clustering model at the possession level. For the possession-level clustering, they propose Latent Dirichlet Allocation (LDA), a common method in the topic modeling literature~\citep{blei2003latent}. LDA is traditionally used to represent a document as a mixture of topics, but in this application, each possession (``document'') can be represented as a mixture of strategies/plays (``topics''). Individual strategies consist of a set of co-occurring individual actions (``words''). The approach is summarized in Figure \ref{fig:playbook}. This approach for unsupervised learning from possession-level tracking data can be used to characterize plays or motifs which are commonly used by teams. As they note, this approach could be used to ``steal the opponent's playbook'' or automatically annotate and evaluate the efficiency of different team strategies. Deep learning models \citep[e.g.][]{le2017data, shah2016applying} and variational autoencoders could also be effective for clustering plays using spatio-temporal tracking data.
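To make the document analogy concrete, the construction of possession ``documents'' from clustered actions (steps B and C in Figure \ref{fig:playbook}) can be sketched in a few lines of Python; the action labels below are hypothetical stand-ins for cluster IDs:

```python
from collections import Counter
from itertools import combinations

def possession_to_words(actions):
    # "Words" are all unordered pairs of distinct co-occurring actions.
    return [tuple(sorted(pair)) for pair in combinations(sorted(set(actions)), 2)]

def build_corpus(possessions):
    # One bag-of-words per possession ("document"), ready for a topic
    # model such as LDA.
    return [Counter(possession_to_words(p)) for p in possessions]

# Hypothetical clustered actions (step A of the pipeline).
possessions = [
    ["pick_left_wing", "cut_baseline", "spot_up_corner"],
    ["pick_left_wing", "dribble_handoff"],
]
corpus = build_corpus(possessions)
```

Each resulting counter plays the role of a document's word counts; fitting LDA to the stacked counts then yields the possession-level mixtures of play types.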
It may also be informative to apply some of these techniques to quantify differences in strategies and styles around the world. For example, although the US and Europe are often described as exhibiting different styles~\citep{hughes_2017}, this has not yet been studied statistically. Similarly, though some lessons learned from NBA studies may apply to the EuroLeague, the aforementioned conclusions about team strategy and the importance of spacing may vary across leagues.
\begin{comment}
\citep{fichman2018three, fichman2019optimal}
\citep{kozar1994importance} Importance of free throws at different stages of games. \franks{better here than in the individual performance section?} \Terner{will take a look at the papers and see where they belong}
\citep{ervculj2015basketball} also use hierarchical multinomial logistic regression to explore differences in shot types across multiple levels of play.
\Terner{This paper looks like it would be a good inclusion in the paper (sandholtz and bornn) but not sure where it fits:}
\citep{sandholtz2018transition} Game-theoretic approach to strategy
\end{comment}
\section{Player performance}
\label{playersection}
In this section, we focus on methodologies aimed at characterizing and quantifying different aspects of individual performance. These include metrics which reflect both the overall added value of a player and specific skills like shot selection, shot making, and defensive ability.
When analyzing player performance, one must recognize that variability in metrics for player ability is driven by a combination of factors. This includes sampling variability, effects of player development, injury, aging, and changes in strategy (see Figure \ref{fig:player_variance}). Although measurement error is usually not a big concern in basketball analytics, scorekeepers and referees can introduce bias \citep{van2017adjusting, price2010racial}. We also emphasize that basketball is a team sport, and thus metrics for individual performance are impacted by the abilities of their teammates. Since observed metrics are influenced by many factors, when devising a method targeted at a specific quantity, the first step is to clearly distinguish the relevant sources of variability from the irrelevant nuisance variability.
To characterize the effect of these sources of variability on existing basketball metrics, \citet{franks2016meta} proposed a set of three ``meta-metrics'': 1) \emph{discrimination}, which quantifies the extent to which a metric actually reflects true differences in player skill rather than chance variation; 2) \emph{stability}, which characterizes how a player-metric evolves over time due to development and contextual changes; and 3) \emph{independence}, which describes redundancies in the information provided across multiple related metrics. Arguably, the most useful measures of player performance are metrics that are discriminative and reflect robust measurement of the same (possibly latent) attributes over time.
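As a rough illustration of the discrimination meta-metric (a simple variance-decomposition sketch, not necessarily the exact estimator used by \citet{franks2016meta}), one can compare the average sampling variance of a metric to its total variance across players; the numbers below are hypothetical:

```python
import numpy as np

def discrimination(observed, sampling_var):
    # Fraction of between-player variance not explained by chance:
    # 1 - (average sampling variance) / (total observed variance).
    total_var = np.var(observed, ddof=1)
    return 1.0 - np.mean(sampling_var) / total_var

# Hypothetical metric values and per-player sampling variances.
obs = np.array([0.30, 0.42, 0.35, 0.55, 0.48])
svar = np.array([0.002, 0.003, 0.002, 0.004, 0.003])
d = discrimination(obs, svar)  # values near 1 indicate a discriminative metric
```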
One of the most important tools for minimizing nuisance variability in characterizing player performance is shrinkage estimation via hierarchical modeling. In their seminal paper, \citet{efron1975data} provide a theoretical justification for hierarchical modeling as an approach for improving estimation in low sample size settings, and demonstrate the utility of shrinkage estimation for estimating batting averages in baseball. Similarly, in basketball, hierarchical modeling is used to leverage commonalities across players by imposing a shared prior on parameters associated with individual performance. We repeatedly return to these ideas about sources of variability and the importance of hierarchical modeling below.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{figs/sources_of_variance2}
\caption{Diagram of the sources of variance in basketball season metrics. Metrics reflect multiple latent player attributes but are also influenced by team ability, strategy, and chance variation. Depending on the question, we may be interested primarily in differences between players, differences within a player across seasons, and/or the dependence between metrics within a player/season. Player 2 in 2018-2019 has missing values (e.g. due to injury) which emphasizes the technical challenge associated with irregular observations and/or varying sample sizes.}
\label{fig:player_variance}
\end{figure}
\subsection{General skill}
\label{sec:general_skill}
One of the most common questions across all sports is ``who is the best player?'' This question takes many forms, ranging from who is the ``most valuable'' in MVP discussions, to who contributes the most to helping his or her team win, to who puts up the most impressive numbers. Some of the most popular metrics for quantifying player-value are constructed using only box score data. These include Hollinger's PER \citep{kubatko2007starting}, Wins Above Replacement Player (WARP)~\citep{pelton}, Berri's quantification of a player's win production~\citep{berri1999most}, Box Plus-Minus (BPM), and Value Over Replacement Player (VORP)~\citep{myers}. These metrics are particularly useful for evaluating historical player value for players who pre-dated play-by-play and tracking data. In this review, we focus our discussion on more modern approaches like the regression-based models for play-by-play data and metrics based on tracking data.
\subsubsection{Regression-based approaches}
\label{reg-section}
One of the first and simplest play-by-play metrics aimed at quantifying player value is known as ``plus-minus''. A player's plus-minus is computed by adding all of the points scored by the player's team and subtracting all the points scored against the player's team while that player was in the game. However, plus-minus is particularly sensitive to teammate contributions, since a less-skilled player may commonly share the floor with a more-skilled teammate, thus benefiting from the better teammate's effect on the game. Several regression approaches have been proposed to account for this problem. \citet{rosenbaum} was one of the first to propose a regression-based approach for quantifying overall player value, which he terms adjusted plus-minus (APM). In the APM model, Rosenbaum posits that
\begin{equation}
\label{eqn:pm}
D_i = \beta_0 + \sum_{p=1}^P\beta_p x_{ip} + \epsilon_i
\end{equation}
\noindent where $D_i$ is 100 times the difference in points between the home and away teams in stint $i$; $x_{ip} \in \lbrace 1, -1, 0 \rbrace $ indicates whether player $p$ is at home, away, or not playing, respectively; and $\epsilon_i$ is the residual. Each stint is a stretch of time without substitutions. Rosenbaum also develops statistical plus-minus and overall plus-minus, which reduce some of the noise in pure adjusted plus-minus~\citep{rosenbaum}. However, the major challenge with APM and related methods is multicollinearity: when groups of players are typically on the court at the same time, we do not have enough data to accurately distinguish their individual contributions using plus-minus data alone. As a consequence, inferred regression coefficients, $\hat \beta_p$, typically have very large variance and are not reliably informative about player value.
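A minimal sketch of the APM design matrix in Equation \ref{eqn:pm}, using hypothetical players and stints:

```python
import numpy as np

def stint_design_matrix(stints, players):
    # Encode each stint as a row: +1 for home players on the floor,
    # -1 for away players, and 0 for everyone else.
    idx = {p: j for j, p in enumerate(players)}
    X = np.zeros((len(stints), len(players)))
    for i, (home, away) in enumerate(stints):
        for p in home:
            X[i, idx[p]] = 1.0
        for p in away:
            X[i, idx[p]] = -1.0
    return X

# Hypothetical two-stint example with four players.
players = ["A", "B", "C", "D"]
stints = [({"A", "B"}, {"C", "D"}), ({"A", "C"}, {"B", "D"})]
X = stint_design_matrix(stints, players)
```

Regressing the per-stint point differential on these columns yields the adjusted plus-minus coefficients.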
APM can be improved by adding a penalty via ridge regression~\citep{sill2010improved}. The penalization framework, known as regularized APM, or RAPM, reduces the variance of resulting estimates by biasing the coefficients toward zero~\citep{jacobs_2017}. In RAPM, $\hat \beta$ is the vector which minimizes the following expression
\begin{equation}
\mathbf{\hat \beta} = \underset{\beta}{\argmin}\,(\mathbf{D} - \mathbf{X} \beta)^T (\mathbf{D} - \mathbf{X}\beta) + \lambda \beta^T \beta
\end{equation}
\noindent where the rows of the design matrix $\mathbf{X}$ correspond to stints (or possessions), $\mathbf{D}$ is the corresponding vector of point differentials, and $\beta$ is the vector of skill-coefficients for all players. $\lambda \beta^T \beta$ represents a penalty on the magnitude of the coefficients, with $\lambda$ controlling the strength of the penalty. The penalty ensures the existence of a unique solution and reduces the variance of the inferred coefficients. Under the ridge regression framework, $\hat \beta = (X^T X + \lambda I)^{-1}X^T D$ with $\lambda$ typically chosen via cross-validation. An alternative formulation uses the lasso penalty, $\lambda \sum_p |\beta_p|$, instead of the ridge penalty~\citep{omidiran2011pm}, which encourages many players to have an adjusted plus-minus of exactly zero.
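The closed-form RAPM solution can be sketched on simulated data as follows; in practice $\lambda$ would be chosen by cross-validation rather than fixed as below:

```python
import numpy as np

def rapm_ridge(X, D, lam):
    # Closed-form ridge solution: (X'X + lam*I)^{-1} X'D.
    P = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(P), X.T @ D)

# Simulated stint data: 500 stints, 20 players, entries in {-1, 0, 1}.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 0.0, 1.0], size=(500, 20))
beta_true = rng.normal(size=20)
D = X @ beta_true + rng.normal(scale=5.0, size=500)

beta_ols = np.linalg.lstsq(X, D, rcond=None)[0]  # unpenalized APM
beta_ridge = rapm_ridge(X, D, lam=50.0)          # RAPM: shrunk toward zero
```

The penalty biases the coefficients toward zero, so the ridge estimate has smaller norm (and lower variance) than the unpenalized fit.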
Regularization penalties can equivalently be viewed from the Bayesian perspective, where ridge regression estimates are equivalent to the posterior mode when assuming mean-zero Gaussian prior distributions on $\beta_p$ and lasso estimates are equivalent to the posterior mode when assuming mean-zero Laplace prior distributions. Although adding shrinkage priors ensures identifiability and reduces the variance of resulting estimates, regularization is not a panacea: the inferred value of players who often share the court is sensitive to the precise choice of regularization (or prior) used. As such, careful consideration should be placed on choosing appropriate priors, beyond common defaults like the mean-zero Gaussian or Laplace prior. More sophisticated informative priors could be used; for example, a prior with right skewness to reflect beliefs about the distribution of player value in the NBA, or player- and position-specific priors which incorporate expert knowledge. Since coaches give more minutes to players that are perceived to provide the most value, a prior on $\beta_p$ which is a function of playing time could provide less biased estimates than standard regularization techniques, which shrink all player coefficients in exactly the same way. APM estimates can also be improved by incorporating data across multiple seasons, and/or by separately inferring player's defensive and offensive contributions, as explored in \citet{fearnhead2011estimating}.
\begin{comment}
discuss regression in terms of apm. This starts with Rosenbaum's APM and regressions there, continues with Sill. Sill adds ridge regression; Omidiran adds lasso.
Fearnhead is similar to Rosenbaum, but "
The main difference compared with Rosenbaum (2004) is that the author estimates only a combined ability for each player. The model presented here further uses a structured approach to combining information from multiple seasons."
\end{comment}
Several variants and alternatives to the RAPM metrics exist. For example,~\citet{page2007using} use a hierarchical Bayesian regression model to identify a position's contribution to winning games, rather than for evaluating individual players.~\citet{deshpande2016estimating} propose a Bayesian model for estimating each player's effect on the team's chance of winning, where the response variable is the home team's win probability rather than the point spread. Models which explicitly incorporate the effect of teammate interactions are also needed. \citet{piette2011evaluating} propose one approach based on modeling players as nodes in a network, with edges between players that shared the court together. Edge weights correspond to a measure of performance for the lineup during their shared time on the court, and a measure of network centrality is used as a proxy for player importance. An additional review with more detail on possession-based player performance can be found in \citet{engelmann2017possession}.
\begin{comment}
Deshpande, Jensen: "We propose instead to regress the change in the home team’s win probability during a shift onto signed indicators corresponding to the five home team players and five away team players in order to estimate each player’s partial effect on his team’s chances of winning."
This paper usefully explains Rosenbaum's approach too:
"To compute Adjusted Plus-Minus, one first breaks the game into several “shifts,” periods of play between substitutions, and measures both the point differential and total number of possessions in each shift. One then regresses the point differential per 100 possessions from the shift onto indicators corresponding to the ten players on the court."
\end{comment}
\subsubsection{Expected Possession Value}
\label{sec:epv}
The purpose of the Expected Possession Value (EPV) framework, as developed by~\citet{cervone2014multiresolution}, is to infer the expected value of the possession at every moment in time. Ignoring free throws for simplicity, a possession can take on values $Z_i \in \{0, 2, 3\}$. The EPV at time $t$ in possession $i$ is defined as
\begin{equation}
\label{eqn:epv}
v_{it}=\mathbb{E}\left[Z_i | X_{i0}, ..., X_{it}\right]
\end{equation}
\noindent where $X_{i0}, ..., X_{it}$ contain all available covariate information about the game or possession for the first $t$ timestamps of possession $i$. The EPV framework is quite general and can be applied in a range of contexts, from evaluating strategies to constructing retrospectives on the key points or decisions in a possession. In this review, we focus on its use for player evaluation and provide a brief high-level description of the general framework.
\citet{cervone2014multiresolution} were the first to propose a tractable multiresolution approach for inferring EPV from optical tracking data in basketball. They model the possession at two separate levels of resolution. The \emph{micro} level includes all spatio-temporal data for the ball and players, as well as annotations of events, like a pass or shot, at all points in time throughout the possession. Transitions from one micro state to another are complex due to the high level of granularity in this representation. The \emph{macro} level represents a coarsening of the raw data into a finite collection of states. The macro state at time $t$, $C_t = C(X_t)$, is the coarsened state of the possession at time $t$ and can be classified into one of three state types: $\mathcal{C}_{poss}, \mathcal{C}_{trans},$ and $\mathcal{C}_{end}.$ The information used to define $C_t$ varies by state type. For example,
$\mathcal{C}_{poss}$ is defined by the ordered triple containing the ID of the player with the ball, the location of the ball in a discretized court region, and an indicator for whether the player has a defender within five feet of him or her. $\mathcal{C}_{trans}$ corresponds to ``transition states'' which are typically very brief in duration, as they include moments when the ball is in the air during a shot, pass, turnover, or immediately prior to a rebound: $\mathcal{C}_{trans} = $\{shot attempt from $c \in \mathcal{C}_{poss}$, pass from $c \in \mathcal{C}_{poss}$ to $c' \in \mathcal{C}_{poss}$, turnover in progress, rebound in progress\}. Finally, $\mathcal{C}_{end}$ corresponds to the end of the possession, and simply encodes how the possession ended and the associated value: a made field goal, worth two or three points, or a missed field goal or a turnover, worth zero points. Working with macrotransitions facilitates inference, since the macro states are assumed to be semi-Markov, which means the sequence of new states forms a homogeneous Markov chain~\citep{bornn2017studying}.
Let $C_t$ be the current state and $\delta_t > t$ be the time that the next non-transition state begins, so that $C_{\delta_t} \notin \mathcal{C}_{trans}$ is the next possession state or end state to occur after $C_t$. If we assume that coarse states after time $\delta_t$ do not depend on the data prior to $\delta_t$, that is
\begin{equation}
\textrm{for } s>\delta_{t}, \quad P\left(C_{s} \mid C_{\delta_{t}}, X_{0}, \ldots, X_{t}\right)=P\left(C_{s} \mid C_{\delta_{t}}\right),
\end{equation}
\noindent then EPV can be defined in terms of macro and micro factors as
\begin{equation}
v_{it}=\sum_{c} \mathbb{E}\left[Z_i | C_{\delta_{t}}=c\right] P\left(C_{\delta_{t}}=c | X_{i0}, \ldots, X_{it}\right)
\end{equation}
\noindent since the coarsened Markov chain is time-homogeneous. $\mathbb{E}\left[Z | C_{\delta_{t}}=c\right]$ is macro only, as it does not depend on the full-resolution spatio-temporal data. It can be inferred by estimating the transition probabilities between coarsened states and then applying standard Markov chain results to compute absorbing probabilities. Inferring macro transition probabilities could be as simple as counting the observed fraction of transitions between states, although model-based approaches would likely improve inference.
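The macro-level calculation reduces to a standard absorbing Markov chain computation. A toy sketch with two hypothetical transient possession states and two end states:

```python
import numpy as np

# Toy transition matrix over coarse states: two transient possession states
# and two absorbing end states (a made two-pointer worth 2, a turnover worth 0).
# Row/column order: [poss_1, poss_2, made_2pt, turnover].
T = np.array([
    [0.2, 0.4, 0.3, 0.1],
    [0.3, 0.1, 0.4, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
n_transient = 2
Q = T[:n_transient, :n_transient]   # transient -> transient
R = T[:n_transient, n_transient:]   # transient -> absorbing
# Absorbing probabilities B = (I - Q)^{-1} R (fundamental matrix result).
B = np.linalg.solve(np.eye(n_transient) - Q, R)
# Expected possession value from each transient state.
end_values = np.array([2.0, 0.0])
epv = B @ end_values
```

With estimated transition probabilities in place of this toy matrix, the resulting vector gives $\mathbb{E}[Z \mid C_{\delta_t}=c]$ for each transient state.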
The micro models for inferring the next non-transition state (e.g. shot outcome, new possession state, or turnover) given the full resolution data, $P(C_{\delta_{t}}=c | X_{i0}, \ldots, X_{it}),$ are more complex and vary depending on the state-type under consideration.~\citet{cervone2014multiresolution} use log-linear hazard models~\citep[see][]{prentice1979hazard} for modeling both the time of the next major event and the type of event (shot, pass to a new player, or turnover), given the locations of all players and the ball. \citet{sicilia2019deephoops} use a deep learning representation to model these transitions. The details of each transition model depend on the state type: models for the case in which $C_{\delta_t}$ is a shot attempt or shot outcome are discussed in Sections \ref{sec:shot_efficiency} and \ref{sec:shot_selection}. See~\citet{masheswaran2014three} for a discussion of factors relevant to modeling rebounding and the original EPV papers for a discussion of passing models~\citep{cervone2014multiresolution, bornn2017studying}.
\citet{cervone2014multiresolution} suggested two metrics for characterizing player ability that can be derived from EPV: Shot Satisfaction (described in Section \ref{sec:shot_selection}) and EPV Added (EPVA), a metric quantifying the overall contribution of a player. EPVA quantifies the value relative to the league average of an offensive player receiving the ball in a similar situation. A player $p$ who possesses the ball from time $t_s$ until time $t_e$ contributes value $v_{t_e} - v_{t_s}^{r(p)}$ over the league-average replacement player, $r(p)$. Thus, the EPVA for player $p$, or EPVA$(p)$, is calculated as the average value that this player brings over the course of all times that player possesses the ball:
\begin{equation}
\text{EPVA(p)} = \frac{1}{N_p}\sum_{\{t_s, t_e\} \in \mathcal{T}^{p}} v_{t_e} - v_{t_s}^{r(p)}
\end{equation}
\noindent where $N_p$ is the number of games played by $p$, and $\mathcal{T}^{p}$ is the set of starting and ending ball-possession times for $p$ across all games. Averaging over games, instead of by touches, rewards high-usage players. Other ways of normalizing EPVA, e.g. by dividing by $|\mathcal{T}^p|$, are also worth exploring.
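The EPVA average can be sketched directly from touch-level EPV values; the numbers below are hypothetical:

```python
def epva(touches, n_games):
    # Average over games of (EPV when the player gives up the ball)
    # minus (replacement-level EPV when the player received it).
    return sum(v_end - v_start_repl for v_start_repl, v_end in touches) / n_games

# Hypothetical (replacement EPV at start of touch, EPV at end of touch)
# pairs for three touches spread over two games.
touches = [(1.02, 1.10), (0.98, 1.25), (1.05, 0.95)]
value = epva(touches, n_games=2)
```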
Unlike RAPM-based methods, which only consider changes in the score and the identities of the players on the court, EPVA leverages the high resolution optical data to characterize the precise value of specific decisions made by the ball carrier throughout the possession. Although this approach is powerful, it still has some crucial limitations for evaluating overall player value. The first is that EPVA measures the value added by a player only when that player touches the ball. As such, specialists, like three point shooting experts, tend to have high EPVA because they most often receive the ball in situations in which they are uniquely suited to add value. However, many players around the NBA add significant value by setting screens or making cuts which draw defenders away from the ball. These actions are hard to measure and thus not included in the original EPVA metric proposed by \citet{cervone2014multiresolution}. In future work, some of these effects could be captured by identifying appropriate ways to measure a player's ``gravity''~\citep{visualizegravity} or through new tools which classify important off-ball actions. Finally, EPVA only represents contributions on the offensive side of the ball and ignores a player's defensive prowess; as noted in Section~\ref{defensive ability}, a defensive version of EPVA would also be valuable.
In contrast to EPVA, the effects of off-ball actions and defensive ability are implicitly incorporated into RAPM-based metrics. As such, RAPM remains one of the key metrics for quantifying overall player value. EPVA, on the other hand, may provide better contextual understanding of how players add value, but a less comprehensive summary of each player's total contribution. A more rigorous comparison between RAPM, EPVA and other metrics for overall ability would be worthwhile.
\subsection{Production curves}
\label{sec:production_curves}
A major component of quantifying player ability involves understanding how ability evolves over a player's career. To predict and describe player ability over time, several methods have been proposed for inferring the so-called ``production curve'' for a player\footnote{Production curves are also referred to as ``player aging curves'' in the literature, although we prefer ``production curves'' because it does not imply that changes in these metrics over time are driven exclusively by age-related factors.}. The goal of a production curve analysis is to provide predictions about the future trajectory of a current player's ability, as well as to characterize similarities in production trajectories across players. These two goals are intimately related, as the ability to forecast production is driven by assumptions about historical production from players with similar styles and abilities.
Commonly, a production curve analysis considers a continuous measurement of aggregate skill (e.g. RAPM or VORP), denoted $Y_{pt}$, for player $p$ at time $t$:
$$Y_{pt} = f_p(t) + \epsilon_{pt}$$
\noindent where $f_p$ describes player $p$'s ability as a function of time, $t$, and $\epsilon_{pt}$ reflects irreducible errors which are uncorrelated over time, e.g. due to unobserved factors like minor injury, illness and chance variation. Athletes not only exhibit different career trajectories, but their careers occur at different ages, can be interrupted by injuries, and include different amounts of playing time. As such, the statistical challenge in production curve analysis is to infer smooth trajectories $f_p(t)$ from sparse irregular observations of $Y_{pt}$ across players \citep{wakim2014functional}.
There are two common approaches to modeling production curves: 1) Bayesian hierarchical modeling and 2) methods based on functional data analysis and clustering. In the Bayesian hierarchical paradigm, \citet{berry1999bridging} developed a flexible hierarchical aging model to compare player abilities across different eras in three sports: hockey, golf, and baseball. Although not explored in their paper, their framework can be applied to basketball to account for player-specific development and age-related declines in performance. \citet{page2013effect} apply a similar hierarchical method based on Gaussian Process regressions to infer how production evolves across different basketball positions. They find that production varies across player type and show that point guards (i.e. agile ball-handlers) generally spend a longer fraction of their career improving than other player types. \citet{vaci2019large} also use Bayesian hierarchical modeling with distinct parametric curves to describe trajectories before and after peak performance. They assume pre-peak performance reflects development whereas post-peak performance is driven by aging. Their findings suggest that athletes who develop more quickly also exhibit slower age-related declines, an observation which does not appear to depend on position.
In contrast to hierarchical Bayesian models, \citet{wakim2014functional} discuss how the tools of functional data analysis can be used to model production curves. In particular, functional principal components metrics can be used in an unsupervised fashion to identify clusters of players with similar trajectories. Others have explicitly incorporated notions of player similarity into functional models of production. In this framework, the production curve for any player $p$ is then expressed as a linear combination of the production curves from a set of similar players: $f_p(t) \approx \sum_{k \neq p} \alpha_{pk} f_k(t)$. For example, in their RAPTOR player rating system, \url{fivethirtyeight.com} uses a nearest neighbor algorithm to characterize similarity between players~\citep{natesilver538_2015, natesilver538_2019}. The production curve for each player is an average of historical production curves from a distinct set of the most similar athletes. A related approach, proposed by \citet{vinue2019forecasting}, employs the method of archetypoids \citep{vinue2015archetypoids}. Loosely speaking, the archetypoids consist of a small set of players, $\mathcal{A}$, that represent the vertices in the convex hull of production curves. Different from the RAPTOR approach, each player's production curve is represented as a convex combination of curves from the \emph{same set} of archetypes, that is, $\alpha_{pk} = 0 \; \forall \ k \notin \mathcal{A}$.
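A nearest-neighbor forecast in the spirit of RAPTOR (a simplified sketch, not the actual algorithm used by \url{fivethirtyeight.com}) might average the full curves of the historical players closest to a young player's observed seasons:

```python
import numpy as np

def knn_production_forecast(target_partial, historical_curves, k=2):
    # Forecast a production curve by averaging the curves of the k
    # historical players most similar over the observed early-career window.
    w = len(target_partial)
    dists = [np.linalg.norm(curve[:w] - target_partial) for curve in historical_curves]
    nearest = np.argsort(dists)[:k]
    return np.mean([historical_curves[i] for i in nearest], axis=0)

# Hypothetical 6-season curves for three retired players and the first
# 3 seasons of a current player.
hist = [np.array([1.0, 2, 3, 3, 2, 1]),
        np.array([1.0, 2, 3, 4, 4, 3]),
        np.array([0.0, 0, 1, 1, 1, 0])]
target = np.array([1.1, 2.0, 3.1])
forecast = knn_production_forecast(target, hist, k=2)
```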
One often unaddressed challenge is that athlete playing time varies across games and seasons, which means sampling variability is non-constant. Whenever possible, this heteroskedasticity in the observed outcomes should be incorporated into the inference, either by appropriately controlling for minutes played or by using other relevant notions of exposure, like possessions or attempts.
Finally, although the precise goals of these production curve analyses differ, most current analyses focus on aggregate skill. More work is needed to capture what latent player attributes drive these observed changes in aggregate production over time. Models which jointly infer how distinct measures of athleticism and skill co-evolve, or models which account for changes in team quality and adjust for injury, could lead to further insight about player ability, development, and aging (see Figure \ref{fig:player_variance}). In the next sections we mostly ignore how performance evolves over time, but focus on quantifying some specific aspects of basketball ability, including shot making and defense.
\subsection{Shot modeling}
\label{sec:shooting}
Arguably the most salient aspect of player performance is the ability to score. There are two key factors which drive scoring ability: the ability to selectively identify the highest value scoring options (shot selection) and the ability to make a shot, conditioned on an attempt (shot efficiency). A player's shot attempts and his or her ability to make them are typically related. In \emph{Basketball on Paper}, Dean Oliver proposes the notion of a ``skill curve,'' which roughly reflects the inverse relationship between a player's shot volume and shot efficiency \citep{oliver2004basketball, skinner2010price, goldman2011allocative}. Goldsberry and others gain further insight into shooting behavior by visualizing how both player shot selection and efficiency vary spatially with a so-called ``shot chart.'' (See \citet{goldsberry2012courtvision} and \citet{goldsberry2019sprawlball} for examples.) Below, we discuss statistical models for inferring how both shot selection and shot efficiency vary across players, over space, and in defensive contexts.
\subsubsection{Shot efficiency}
\label{sec:shot_efficiency}
Raw FG\% is usually a poor measure for the shooting ability of an athlete because chance variability can obscure true differences between players. This is especially true when conditioning on additional contextual information like shot location or shot type, where sample sizes are especially small. For example, \citet{franks2016meta} show that the majority of observed differences in 3PT\% are due to sampling variability rather than true differences in ability, making raw 3PT\% a poor metric for discriminating between players. They demonstrate how these issues can be mitigated by using hierarchical models which shrink empirical estimates toward more reasonable prior means. These shrunken estimates are both more discriminative and more stable than the raw percentages.
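To make the shrinkage idea concrete, the sketch below implements a simple beta-binomial empirical-Bayes estimator. The prior mean, prior strength, and shot counts are illustrative choices, not values from \citet{franks2016meta}, whose hierarchical models are considerably richer.

```python
import numpy as np

def shrink_pct(makes, attempts, prior_mean=0.35, prior_strength=100):
    """Empirical-Bayes shrinkage of raw percentages toward a prior mean.

    Equivalent to a Beta(a, b) prior with a + b = prior_strength and
    a / (a + b) = prior_mean; the posterior mean pulls low-attempt
    players more strongly toward the prior.
    """
    a = prior_mean * prior_strength
    b = (1 - prior_mean) * prior_strength
    makes = np.asarray(makes, dtype=float)
    attempts = np.asarray(attempts, dtype=float)
    return (makes + a) / (attempts + a + b)

# Illustrative: a 10-for-20 shooter is shrunk far more toward the prior
# mean than a 180-for-450 shooter, even though the raw rates are 0.50 and 0.40.
raw = np.array([10, 180]) / np.array([20, 450])
shrunk = shrink_pct([10, 180], [20, 450])
```

The shrunken estimates trade a small amount of bias for a large reduction in variance, which is what makes them more stable and more discriminative in small samples.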
With the emergence of tracking data, hierarchical models have been developed which target increasingly context-specific estimands. \citet{franks2015characterizing} and \citet{cervone2014multiresolution} propose similar hierarchical logistic regression models for estimating the probability of making a shot given the shooter identity, defender distance, and shot location. In their models, they posit the logistic regression model
\begin{equation}
E[Y_{ip} \mid \ell_{ip}, X_{ip}] = \textrm{logit}^{-1} \big( \alpha_{\ell_{ip},p} + \sum_{j=1}^J \beta_{j} X_{ipj} \big)
\end{equation} where $Y_{ip}$ is the outcome of the $i$th shot by player $p$ given $J$ covariates $X_{ipj}$ (e.g. defender distance) and $\alpha_{\ell_{ip}, p}$ is a spatial random effect describing the baseline shot-making ability of player $p$ in location $\ell_{ip}$. As shown in Figure \ref{fig:simpsons}, accounting for spatial context is crucial for understanding defensive impact on shot making.
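A schematic version of this regression is easy to write down. The intercept and the single defender-distance coefficient below are hypothetical stand-ins, not fitted values from the cited papers.

```python
import numpy as np

def shot_make_prob(alpha, beta, x):
    """P(make) = logit^{-1}(alpha_{l,p} + sum_j beta_j * x_j)."""
    eta = alpha + np.dot(beta, x)
    return 1.0 / (1.0 + np.exp(-eta))

# Hypothetical coefficients: a baseline log-odds for one player/region pair,
# plus a positive per-foot effect of defender distance.
alpha_corner3 = -0.6            # baseline log-odds from, say, the corner three
beta = np.array([0.08])         # per-foot effect of defender distance
p_contested = shot_make_prob(alpha_corner3, beta, np.array([2.0]))  # 2 ft away
p_open = shot_make_prob(alpha_corner3, beta, np.array([8.0]))       # 8 ft away
```

With a positive distance coefficient, the model predicts a higher make probability for the open shot than for the contested one, holding the player and region fixed.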
Given high resolution data, more complex hierarchical models which capture similarities across players and space are needed to reduce the variance of resulting estimators. Franks et al. propose a conditional autoregressive (CAR) prior distribution for $\alpha_{\ell_i,p}$ to describe similarity in shot efficiencies between players. The CAR prior is simply a multivariate normal prior distribution over player coefficients with a structured covariance matrix. The prior covariance matrix is structured to shrink the coefficients of players with low attempts in a given region toward the FG\%s of players with similar styles and skills. The covariance is constructed from a nearest-neighbor similarity network on players with similar shooting preferences. These prior distributions improve out-of-sample predictions for shot outcomes, especially for players with fewer attempts. To model the spatial random effects, they represent a smoothed spatial field as a linear combination of functional bases following a matrix factorization approach proposed by \citet{miller2013icml} and discussed in more detail in Section \ref{sec:shot_selection}.
More recently, models which incorporate the full 3-dimensional trajectories of the ball have been proposed to further improve estimates of shot ability. Data from SportVU, Second Spectrum, NOAH, or RSPCT include the location of the ball in space as it approaches the hoop, including left/right accuracy and the depth of the ball once it enters the hoop.~\citet{marty2017data} and~\citet{marty2018high} use ball tracking data from over 20 million attempts taken by athletes ranging from high school to the NBA. From their analyses, \citet{marty2018high} and \citet{daly2019rao} show that the optimal entry location is about 2 inches beyond the center of the basket, at an entry angle of about $45^{\circ}$.
Importantly, this trajectory information can be used to improve estimates of shooter ability from a limited number of shots. \citet{daly2019rao} use trajectory data and a technique known as Rao-Blackwellization to generate lower error estimates of shooting skill. In this context, the Rao-Blackwell theorem implies that one can achieve lower variance estimates of the sample frequency of made shots by conditioning on sufficient statistics; here, the probability of making the shot. Instead of taking the field goal percentage as $\hat \theta_{FG} = \sum Y_{i} / n$, they infer the percentage as $\hat \theta_{FG\text{-}RB} = \sum p_{i} / n$, where $p_i = E[Y_i \mid X]$ is the inferred probability that shot $i$ goes in, as inferred from trajectory data $X$. The shot outcome is not a deterministic function of the observed trajectory information due to the limited precision of spatial data and the effect of unmeasured factors, like ball spin. They estimate the make probabilities, $p_i$, from the ball entry location and angle using a logistic regression.
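The variance reduction from Rao-Blackwellization can be illustrated with a small simulation, under the simplifying (and hypothetical) assumption that each shot's make probability $p_i$ is known exactly from trajectory data; all numerical settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

true_skill = 0.37             # player's long-run make probability
n_shots, n_seasons = 50, 20000

# Each shot has a make probability p_i varying around the skill level
# (standing in for trajectory-inferred probabilities); the outcome Y_i
# is then a coin flip with probability p_i.
p = np.clip(rng.normal(true_skill, 0.08, size=(n_seasons, n_shots)), 0.01, 0.99)
y = rng.random((n_seasons, n_shots)) < p

theta_fg = y.mean(axis=1)     # empirical make fraction: sum Y_i / n
theta_rb = p.mean(axis=1)     # Rao-Blackwellized estimate: sum p_i / n

var_fg = theta_fg.var()
var_rb = theta_rb.var()
```

Both estimators are (essentially) unbiased for the true skill, but the Rao-Blackwellized version has far smaller variance across simulated seasons because it removes the Bernoulli noise in the observed makes and misses.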
~\citet{daly2019rao} demonstrate that Rao-Blackwellized estimates are better at predicting end-of-season three point percentages from limited data than empirical make percentages. They also integrate the RB approach into a hierarchical model to achieve further variance reduction. In a follow-up paper, they focus on the effect that defenders have on shot trajectories~\citep{bornn2019using}. Unsurprisingly, they demonstrate an increase in the variance of shot depth, left-right location, and entry angle for highly contested shots, but they also show that players are typically biased toward short-arming when heavily defended.
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figs/court_colored.pdf}
\end{subfigure}
~~
\begin{subfigure}[b]{0.55 \textwidth}
\includegraphics[width=\textwidth]{figs/bball_simpsons.pdf}
\end{subfigure}
\caption{Left) The five highest-volume shot regions, inferred using the NMF method proposed by \citet{miller2013icml}. Right) Fitted values in a logistic regression of shot outcome given defender distance and NMF shot region from over 115,000 shot attempts in the 2014-2015 NBA season \citep{franks2015characterizing, simpsons_personal}. The make probability increases approximately linearly with increasing defender distance in all shot locations. The number of observed shots at each binned defender distance is indicated by the point size. Remarkably, when ignoring shot region, defender distance has a slightly \emph{negative} coefficient, indicating that the probability of making a shot increases slightly with the closeness of the defender (gray line). This effect, which occurs because defender distance is also dependent on shot region, is an example of a ``reversal paradox''~\citep{tu2008simpson} and highlights the importance of accounting for spatial context in basketball. It also demonstrates the danger of making causal interpretations without carefully considering the role of confounding variables. }
\label{fig:simpsons}
\end{figure}
\subsubsection{Shot selection}
\label{sec:shot_selection}
Where and how a player decides to shoot is also an important determinant of scoring ability. Player shot selection is driven by a variety of factors including individual ability, teammate ability, and strategy~\citep{goldman2013live}. For example, \citet{alferink2009generality} study the psychology of shot selection and how the positive ``reward'' of shot making affects the frequency of attempted shot types. The log relative frequency of two-point shot attempts to three-point shot attempts is approximately linear in the log relative frequency of the player's ability to make those shots, a relationship known to psychologists as the generalized matching law~\citep{poling2011matching}. \citet{neiman2011reinforcement} study this phenomenon from a reinforcement learning perspective and demonstrate that a previous made three point shot increases the probability of a future three point attempt. Situational factors matter as well: \citet{zuccolotto2018big} use nonparametric regression to infer how shot selection varies as a function of the shot clock and score differential, whereas \citet{goldsberry2019sprawlball} discusses the broader strategic shift toward high volume three point shooting in the NBA.
The availability of high-resolution spatial data has spurred the creation of new methods to describe shot selection.~\citet{miller2013icml} use a non-negative matrix factorization (NMF) of player-specific shot patterns across all players in the NBA to derive a low dimensional representation of a pre-specified number of approximately disjoint shot regions. These identified regions correspond to interpretable shot locations, including three-point shot types and mid-range shots, and can even reflect left/right bias due to handedness. See Figure \ref{fig:simpsons} for the results of a five-factor NMF decomposition. With the inferred representation, each player's shooting preferences can be approximated as a linear combination of the canonical shot ``bases.'' The player-specific coefficients from the NMF decomposition can be used as a lower dimensional characterization of the shooting style of that player \citep{bornn2017studying}.
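A toy version of this decomposition is sketched below, using generic multiplicative-update NMF on a synthetic player-by-location shot-count matrix. The two latent ``styles'' and all counts are fabricated for illustration; \citet{miller2013icml} work on a much finer spatial grid and fit the underlying intensity surfaces with a point process model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic player-by-court-cell shot-count matrix: two latent "styles"
# (e.g. rim attacks vs. three point shooting) mixed across 12 players.
basis_true = np.array([[5.0, 4.0, 0.0, 0.0, 0.0, 0.0],   # loads on cells 0-1
                       [0.0, 0.0, 0.0, 0.0, 4.0, 5.0]])  # loads on cells 4-5
weights_true = rng.uniform(0.2, 2.0, size=(12, 2))
X = rng.poisson(weights_true @ basis_true).astype(float)

def nmf(X, rank, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates: X ~ W @ H with W, H >= 0."""
    n, m = X.shape
    W = rng.uniform(0.1, 1.0, (n, rank))
    H = rng.uniform(0.1, 1.0, (rank, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(X, rank=2)                         # rows of H: shot "bases"
recon_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The rows of $H$ recover the canonical shot ``bases,'' and each row of $W$ gives a player's nonnegative loadings on those bases, i.e., the low dimensional characterization of shooting style discussed above.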
While the NMF approach can generate useful summaries of player shooting styles, it incorporates neither contextual information, like defender distance, nor hierarchical structure to reduce the variance of inferred shot selection estimates. As such, hierarchical spatial models for shot data, which allow for spatially varying effects of covariates, are warranted \citep{reich2006spatial, franks2015characterizing}. \citet{franks2015characterizing} use a hierarchical multinomial logistic regression to predict who will attempt a shot and where the attempt will occur given defensive matchup information. They consider a 26-outcome multinomial model, where the outcomes correspond to shot attempts by one of the five offensive players in any of five shot regions, with regions determined \textit{a priori} using NMF. The last outcome corresponds to a possession that does not lead to a shot attempt. Let $\mathcal{S}(p, b)$ be an indicator for a shot by player $p$ in region $b$. The shot attempt probabilities are modeled as
\begin{equation}
\label{eqn:shot_sel}
E[\mathcal{S}(p, b) \mid F] = \frac{\exp \left(\alpha_{p b}+\sum_{j=1}^{5} F(j, p) \beta_{j b}\right)}{1+\sum_{\tilde p,\tilde b} \exp \left(\alpha_{\tilde p \tilde b}+\sum_{j=1}^{5} F(j, \tilde p) \beta_{j \tilde b}\right)}
\end{equation}
\noindent where $\alpha_{pb}$ is the propensity of player $p$ to shoot from region $b$, $F(j, p)$ is the fraction of the possession during which player $p$ was guarded by defender $j$, and $\beta_{jb}$ captures the effect of defender $j$ on offensive players' shooting habits in region $b$ (see Section \ref{defensive ability}). Shrinkage priors based on player similarity are again used for the coefficients.
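Evaluating these multinomial probabilities is a straightforward softmax computation with a reference category. The sketch below uses randomly drawn stand-ins for the fitted coefficients $\alpha_{pb}$, $\beta_{jb}$ and matchup fractions $F(j,p)$; none of the values are from the cited model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear predictors eta[p, b] = alpha[p, b] + sum_j F[j, p] * beta[j, b]
# for 5 offensive players x 5 shot regions; the 26th outcome (possession
# ends with no shot attempt) has linear predictor fixed at 0, which
# matches the "1 +" term in the denominator of the model.
alpha = rng.normal(-1.0, 0.5, size=(5, 5))      # player-by-region propensities
beta = rng.normal(0.0, 0.2, size=(5, 5))        # defender-by-region effects
F = rng.dirichlet(np.ones(5), size=5).T         # F[j, p]: time p guarded by j

eta = alpha + F.T @ beta                        # eta[p, b]
logits = np.append(eta.ravel(), 0.0)            # 26th entry: no shot attempt
probs = np.exp(logits) / np.exp(logits).sum()
```

Because each matchup column of $F$ sums to one, the defender effects enter as a weighted average over whoever guarded the shooter during the possession.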
Beyond simply describing the shooting style of a player, we can also assess the degree to which players attempt high value shots. \citet{chang2014quantifying} define effective shot quality (ESQ) in terms of the league-average expected value of a shot given the shot location and defender distance.~\citet{shortridge2014creating} similarly characterize how expected points per shot (EPPS) varies spatially. These metrics are useful for determining whether a player is taking shots that are high or low value relative to some baseline, i.e., the league average player.
\citet{cervonepointwise} and \citet{cervone2014multiresolution} use the EPV framework (Section \ref{sec:epv}) to develop a more sophisticated measure of shot quality termed ``shot satisfaction''. Shot satisfaction incorporates both offensive and defensive contexts, including shooter identity and all player locations and abilities, at the moment of the shot. The ``satisfaction'' of a shot is defined as the conditional expectation of the possession value at the moment the shot is taken, $\nu_{it}$, minus the expected value of the possession conditional on a counterfactual in which the player did not shoot, but passed or dribbled instead. The shot satisfaction for player $p$ is then defined as the average satisfaction, averaging over all shots attempted by the player:
$$\textrm{Satis}(p)=\frac{1}{\left|\mathcal{T}_{\textrm{shot}}^{p}\right|} \sum_{(i, t) \in \mathcal{T}_{\textrm{shot}}^{p}} \left(\nu_{it}-\mathbb{E}\left[Z_i \mid X_{it}, C_{t} \textrm{ is a non-shooting state} \right]\right)$$
\noindent where $\mathcal{T}_{\textrm{shot}}^{p}$ is the set of all possessions and times at which player $p$ took a shot, $Z_i$ is the point value of possession $i$, $X_{it}$ corresponds to the state of the game at time $t$ (player locations, shot clock, etc.), and $C_t$ is a non-shooting macro-state. $\nu_{it}$ is the inferred EPV of possession $i$ at time $t$, as defined in Equation \ref{eqn:epv}. Satisfaction is low if the shooter has poor shooting ability, takes difficult shots, or has teammates who are better scorers. As such, unlike other metrics, shot satisfaction measures an individual's decision making and implicitly accounts for the shooting ability of both the shooter \emph{and} their teammates. However, since shot satisfaction only averages differential value over the set $\mathcal{T}_{\textrm{shot}}^{p}$, it does not account for situations in which the player passes up a high-value shot. Additionally, although shot satisfaction is aggregated over all shots, exploring spatial variability in shot satisfaction would be an interesting extension.
\subsubsection{The hot hand}
\label{hothand}
One of the most well-known and debated questions in basketball analytics is about the existence of the so-called ``hot hand''. At a high level, a player is said to have a ``hot hand'' if the conditional probability of making a shot increases given a prior sequence of makes. Alternatively, given $k$ previous shot makes, the hot hand effect is negligible if $E[Y_{p,t}|Y_{p, t-1}=1, ..., Y_{p, t-k}=1, X_t] \approx E[Y_{p,t}| X_t]$ where $Y_{p, t}$ is the outcome of the $t$th shot by player $p$ and $X_t$ represents contextual information at time $t$ (e.g. shot type or defender distance). In their seminal paper,~\citet{gilovich1985hot} argued that the hot hand effect is negligible. Instead, they claim streaks of made shots arising by chance are misinterpreted by fans and players \textit{ex post facto} as arising from a short-term improvement in ability. Extensive research following the original paper has found modest, but sometimes conflicting, evidence for the hot hand~\citep[e.g.][]{bar2006twenty, yaari2011hot,hothand93online}.
Amazingly, 30 years after the original paper,~\citet{miller2015surprised} demonstrated the existence of a bias in the estimators used in the original and most subsequent hot hand analyses. The bias, which attenuates estimates of the hot hand effect, arises due to the way in which shot sequences are selected and is closely related to the infamous Monty Hall problem~\citep{sciam, miller2017bridge}. After correcting for this bias, they estimate that there is an 11\% increase in the probability of making a three point shot given a streak of previous makes, a significantly larger hot-hand effect than had been previously reported.
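The selection bias identified by \citet{miller2015surprised} can be reproduced with a few lines of simulation: for short i.i.d. sequences of fair coin flips, the within-sequence frequency of a success immediately following a success, averaged across sequences, falls well below 1/2 (the sequence length and number of simulations below are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_after_streak(flips, k=1):
    """Within one sequence, the frequency of successes on flips that
    immediately follow k consecutive successes (None if no such flips)."""
    idx = [t for t in range(k, len(flips)) if flips[t - k:t].all()]
    if not idx:
        return None
    return flips[idx].mean()

# Averaging this conditional frequency across many short i.i.d. sequences
# is biased below 0.5 -- the Miller & Sanjurjo selection effect. For
# sequences of length 4 the expected value is about 0.405, not 0.5.
estimates = []
for _ in range(50000):
    seq = rng.random(4) < 0.5          # 4 fair coin flips
    m = mean_after_streak(seq, k=1)
    if m is not None:
        estimates.append(m)
bias_estimate = np.mean(estimates)
```

Intuitively, conditioning on a sequence containing a streak and then averaging the per-sequence conditional frequency overweights sequences where the streak ended quickly, which attenuates any true hot hand effect estimated this way.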
Relatedly,~\citet{stone2012measurement} describes the effects of a form of ``measurement error'' on hot hand estimates, arguing that it is more appropriate to condition on the \emph{probabilities} of previous makes, $E\left[Y_{p,t}|E[Y_{p, t-1}], ... E[Y_{p, t-k}], X_t\right]$, rather than observed makes and misses themselves -- a subtle but important distinction. From this perspective, the work of \citet{marty2018high} and \citet{daly2019rao} on the use of ball tracking data to improve estimates of shot ability could provide fruitful views on the hot hand phenomenon by exploring autocorrelation in shot trajectories rather than makes and misses. To our knowledge this has not yet been studied. For a more thorough review and discussion of the extensive work on statistical modeling of streak shooting, see \citet{lackritz2017probability}.
\subsection{Defensive ability}
\label{defensive ability}
Individual defensive ability is extremely difficult to quantify because 1) defense inherently involves team coordination and 2) there are relatively few box score statistics related to defense. Recently, this led Jackie MacMullan, a prominent NBA journalist, to proclaim that ``measuring defense effectively remains the last great frontier in analytics''~\citep{espnmac}. Early attempts at quantifying aggregate defensive impact include Defensive Rating (DRtg), Defensive Box Plus/Minus (DBPM), and Defensive Win Shares, each of which can be computed entirely from box score statistics \citep{oliver2004basketball, bbref_ratings}. DRtg is a metric meant to quantify the ``points allowed'' by an individual while on the court (per 100 possessions). Defensive Win Shares is a measure of the wins added by the player due to defensive play, and is derived from DRtg. However, all of these measures are particularly sensitive to teammate performance, and thus are not reliable measures of individual defensive ability.
Recent analyses have targeted more specific descriptions of defensive ability by leveraging tracking data, but still face some of the same difficulties. Understanding defense requires as much an understanding about what \emph{does not} happen as what does happen. What shots were not attempted and why? Who \emph{did not} shoot and who was guarding them? \citet{goldsberry2013dwight} were some of the first to use spatial data to characterize the absence of shot outcomes in different contexts. In one notable example from their work, they demonstrated that when Dwight Howard was on the court, the number of opponent shot attempts in the paint dropped by 10\% (``The Dwight Effect'').
More refined characterizations of defensive ability require some understanding of the defender's goals. \citet{franks2015characterizing} take a limited view on defenders' intent by focusing on inferring whom each defender is guarding. Using tracking data, they developed an unsupervised algorithm, i.e., without ground truth matchup data, to identify likely defensive matchups at each moment of a possession. They posited that a defender guarding offensive player $k$ at time $t$ would be normally distributed about the point $\mu_{t k}=\gamma_{o} O_{t k}+\gamma_{b} B_{t}+\gamma_{h} H$, where $O_{tk}$ is the location of offensive player $k$ at time $t$, $B_t$ is the location of the ball, and $H$ is the location of the hoop. They use a hidden Markov model to infer the weights $\mathbf{\gamma}$ and subsequently the evolution of defensive matchups over time. They find that the average defender location lies about 2/3 of the way along the segment connecting the hoop to the offensive player being guarded, shaded about 10\% of the way toward the ball.~\citet{keshri2019automatic} extend this model by allowing $\mathbf{\gamma}$ to depend on player identities and court locations for a more accurate characterization of defensive play that also accounts for the ``gravity'' of dominant offensive players.
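The mean defender location in this model is just a convex combination of three court positions. The weights below are illustrative stand-ins chosen to roughly mimic the reported behavior (about 2/3 of the way from hoop to offensive player, shaded toward the ball), not the inferred $\gamma$ values from the paper.

```python
import numpy as np

def expected_defender_location(off, ball, hoop,
                               g_off=0.62, g_ball=0.11, g_hoop=0.27):
    """Mean defender location mu = g_off*O + g_ball*B + g_hoop*H.

    The convex weights are hypothetical stand-ins for the gamma
    parameters inferred by the hidden Markov model.
    """
    off, ball, hoop = (np.asarray(v, dtype=float) for v in (off, ball, hoop))
    return g_off * off + g_ball * ball + g_hoop * hoop

hoop = np.array([0.0, 0.0])
off_player = np.array([24.0, 0.0])   # offensive player 24 ft from the hoop
ball = np.array([18.0, 10.0])        # ball off to one side
mu = expected_defender_location(off_player, ball, hoop)
```

Because the weights sum to one, the predicted defender position always lies inside the triangle formed by the offensive player, the ball, and the hoop; the HMM's job is to infer which offensive player anchors that triangle at each moment.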
Defensive matchup data, as derived from these algorithms, is essential for characterizing the effectiveness of individual defensive play. For example, \citet{franks2015characterizing} use matchup data to describe the ability of individual defenders to both suppress shot attempts and disrupt attempted shots at different locations. To do so, they include defender identities and defender distance in the shot outcome and shot attempt models described in Sections \ref{sec:shot_efficiency} and \ref{sec:shot_selection}. Inferred coefficients relate to the ability of a defensive player to either reduce the propensity to make a shot given that it is taken, or to reduce the likelihood that a player attempts a shot in the first place.
These coefficients can be summarized in different ways. For example,~\citet{franks2015characterizing} introduce the defensive analogue of the shot chart by visualizing where on the court defenders reduce shot attempts and affect shot efficiency. They found that in the 2013-2014 season, Kawhi Leonard reduced the percentage of opponent three point attempts more than any other perimeter defender; Roy Hibbert, a dominant interior defender that year, faced more shots in the paint than any other player, but also did the most to reduce his opponents' shooting efficiency. In~\citet{franks2015counterpoints}, matchup information is used to derive a notion of ``points against'' -- the number of points scored by offensive players when guarded by a specific defender. Such a metric can be useful in identifying the weak links in a team defense, although it is very sensitive to the skill of the offensive players being guarded.
Ultimately, the best matchup defenders are those who encourage the offensive player to make a low value decision. The EPVA metric discussed in Section \ref{sec:general_skill} characterizes the value of offensive decisions by the ball handler, but a similar defender-centric metric could be derived by focusing on changes in EPV when ball handlers are guarded by a specific defender. Such a metric could be a fruitful direction for future research and provide insight into defenders who affect the game in unique ways. Finally, we note that a truly comprehensive understanding of defensive ability must go beyond matchup defense and incorporate aspects of defensive team strategy, including strategies for zone defense. Without direct information from teams and coaches, this is an immensely challenging task. Perhaps some of the methods for characterizing team play discussed in Section \ref{sec:team} could be useful in this regard. An approach which incorporates more domain expertise about team defensive strategy could also improve upon existing methods.
\section{Discussion}
Basketball is a game with complex spatio-temporal dynamics and strategies. With the availability of new sources of data, increasing computational capability, and methodological innovation, our ability to characterize these dynamics with statistical and machine learning models is improving. In line with these trends, we believe that basketball analytics will continue to move away from a focus on box-score based metrics and towards models for inferring (latent) aspects of team and player performance from rich spatio-temporal data. Structured hierarchical models which incorporate more prior knowledge about basketball and leverage correlations across time and space will continue to be an essential part of disentangling player, team, and chance variation. In addition, deep learning approaches for modeling spatio-temporal and image data will continue to develop into major tools for modeling tracking data.
However, we caution that more data and new methods do not automatically imply more insight. Figure \ref{fig:simpsons} depicts just one example of the ways in which erroneous conclusions may arise when not controlling for confounding factors related to space, time, strategy, and other relevant contextual information. In that example, we are able to control for the relevant spatial confounder, but in many other cases, the relevant confounders may not be observed. In particular, strategic and game-theoretic considerations are of immense importance, but are typically unknown. As a related simple example, when estimating field goal percentage as a function of defender distance, defenders may strategically give more space to the poorest shooters. Without this contextual information, this would make it appear as if defender distance is \emph{negatively} correlated with the probability of making the shot.
As such, we believe that causal thinking will be an essential component of the future of basketball analytics, precisely because many of the most important questions in basketball are causal in nature. These questions involve a comparison between an observed outcome and a counterfactual outcome, or require reasoning about the effects of strategic intervention: ``What would have happened if the Houston Rockets had not adopted their three point shooting strategy?'' or ``How many games would the Bucks have won in 2018 if Giannis Antetokounmpo were replaced with an `average' player?'' Metrics like Wins Above Replacement Player are ostensibly aimed at answering the latter question, but are not given an explicitly causal treatment. Tools from causal inference should also help us reason more soundly about questions of extrapolation, identifiability, uncertainty, and confounding, which are all ubiquitous in basketball. Based on our literature review, this need for causal thinking in sports remains largely unmet: there were few works which explicitly focused on causal and/or game theoretic analyses, with the exception of a handful in basketball \citep{skinner2015optimal, sandholtz2018transition} and in sports more broadly \citep{lopez2016persuaded, yamlost, gauriot2018fooled}.
Finally, although new high-resolution data has enabled increasingly sophisticated methods to address previously unanswerable questions, many of the richest data sources are not openly available. Progress in statistical and machine learning methods for sports is hindered by the lack of publicly available data. We hope that data providers will consider publicly sharing some historical spatio-temporal tracking data in the near future. We also note that there is potential for enriching partnerships between data providers, professional leagues, and the analytics community. Existing contests hosted by professional leagues, such as the National Football League's ``Big Data Bowl''~\citep[open to all,][]{nfl_football_operations}, and the NBA Hackathon~\citep[by application only,][]{nbahack}, have been very popular. Additional hackathons and open data challenges in basketball would certainly be well-received.
\section*{DISCLOSURE}
Alexander Franks is a consultant for a basketball team in the National Basketball Association. This relationship did not affect the content of this review. Zachary Terner is not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
\section*{ACKNOWLEDGMENTS}
The authors thank Luke Bornn, Daniel Cervone, Alexander D'Amour, Michael Lopez, Andrew Miller, Nathan Sandholtz, Hal Stern, and an anonymous reviewer for their useful comments, feedback, and discussions.
\newpage
\section*{SUPPLEMENTARY MATERIAL}
\input{supplement_content.tex}
\section*{LITERATURE\ CITED}
\renewcommand{\section}[2]{}%
\bibliographystyle{ar-style1.bst}
\section*{Response to comments}
We thank the reviewer for their thoughtful and helpful comments. As per the reviewer's suggestions, our major revisions were to: 1) reduce the length of the sections suggested by the reviewer 2) clarify the discussion on EPV and the comparison to APM 3) update some of the text in the production curves sections. Our detailed responses are below.
\begin{itemize}
\item The article provides a comprehensive review of the fast-changing field of basketball analytics. It is very well written and addresses all of the main topics. I have identified a few areas where I think additional discussion (or perhaps rewriting) would be helpful. In addition, the paper as a whole is longer than we typically publish by 5-10\%. We would encourage the authors to examine with an eye towards areas that can be cut. Some suggestions for editing down include Sections 2.1, 2.2.1, 2.2.2, and 3.3.3.
\emph{We cut close to 500 words (approx 5\%) by reducing the text in the suggested sections. However, after editing sections related to the reviewer comments, this length reduction was partially offset. According to the latex editor we are using, there are now 8800 words in the document, less than the 9500 limit listed by the production editor. If the reviewer still finds the text too long, we will likely cut the hot hand discussion, limiting it to a few sentences in the section on shot efficiency. This will further reduce the length by about 350 words. Our preference, however, is to keep it.}
\item Vocabulary – I understand the authors' decision to not spend much time describing basketball in detail. I think there would be value on occasion though in spelling out some of the differences (e.g., when referring to types of players as ‘bigs’ or ‘wings’). I suppose another option is to use more generic terms like player type.
\emph{We have made an attempt to reduce basketball jargon and clarify where necessary.}
\item Page 8, para 4 – “minimizing nuisance variability in characterizing player performance”
\emph{We have corrected this.}
\item Page 8, para 4, line 4 – Should this be “low-sample size settings”?
\emph{We have corrected this.}
\item Page 10, line 7 – Should refer to the “ridge regression model” or “ridge regression framework”
\emph{We now refer to the "ridge regression framework"}
\item Page 11 – The EPV discussion was not clear to me. To start it is not clear what information is in $C_t$. It seems that the information in $C_t$ depends on which of the type we are in. Is that correct? So $C_t$ is a triple if we are in $C_{poss}$ but it is only a singleton if it is in $C_{trans}$ or $C_{end}?$ Then in the formula that defines EPV there is some confusion about $\delta_t$; is $\delta_t$ the time of the next action ends (after time $t$)? If so, that should be made clearer.
\emph{We agree this discussion may not have been totally clear. There are many details and we have worked hard to revise and simplify the discussion as much as possible. It is probably better to think about $C_t$ as a discrete state that is \emph{defined} by the full continuous data $X$, rather than as ``carrying information''. $C_{poss}$ is a state defined by the ball carrier and location, whereas $C_{end}$ is defined only by the outcome that ends the possession (a made or missed shot or a turnover, associated with a value of 0, 2, or 3). The game is only in $C_{trans}$ states for short periods of time, e.g. when the ball is traveling in the air during a pass or shot attempt. $\delta_t$ is the start time of the next non-transition state. The micromodels are used to predict both $\delta_t$ and the identity of the next non-transition state (e.g. who is possessing the ball next and where, or the ending outcome of the possession) given the full resolution tracking data. We have worked to clarify this in the text.}
\item Page 12 – The definition of EPVA is interesting. Is it obvious that this should be defined by dividing the total by the number of games? This weights games by the number of possessions. I’d say a bit more discussion would help.
\emph{This is a great point. Averaging over games implicitly rewards players who have high usage, even if their value added per touch might be low. This is mentioned in their original paper and we specifically highlight this choice in the text.}
\item Page 12 – End of Section 3.1 .... Is it possible to talk about how APM and EPV compare? Has anyone compared the results for a season? Seems like that would be very interesting.
\emph{We have added some discussion about EPVA and APM in the text, and also recommended that a more rigorous comparison would be worthwhile. In short, EPVA is limited by focusing on the ball-handler. Off-ball actions are not rewarded. This can be a significant source of value added, and it is implicitly included in APM. The original EPV authors also note that one-dimensional offensive players often accrue the most EPVA per touch since they only handle the ball when they are uniquely suited to scoring.}
\item Page 13 – Production curves – The discussion of $\epsilon_{pt}$ is a bit confusing. What do you mean by random effects? This is often characterized as variation due to other variables (e.g., physical condition). Also, a reference to Berry et al. (JASA, 1999) might be helpful here. They did not consider basketball but have some nice material on different aging functions.
\emph{We have changed this to read: ``$\epsilon_{pt}$ reflects irreducible error which is independent of time, e.g. due to unobserved factors like injury, illness and chance variation.'' The suggestion to include Berry et al is a great one. This is closely related to the work of Page et al and Vaci et al and is now included in the text.}
\item Page 13 – middle – There is a formula here defining $f_p(t)$ as an average over similar players. I think it should be made clear that you are summing over $k$ here.
\emph{We have made this explicit.}
\item Page 16 – Figure 3 – Great figure. Seems odd though to begin your discussion with the counterintuitive aggregate result. I’d recommend describing the more intuitive results in the caption. Perhaps you want to pull the counterintuitive out of the figure and mention it only in text (perhaps only in conclusion?).
\emph{The reviewer's point is well taken. We have restructured the caption to start with the more intuitive results. However, we kept the less intuitive result in the caption, since we wanted to highlight that incorporating spatial context is essential for making the right conclusions. }
\end{itemize}
\end{document}
\section{Tags}
Data Tags: \\
\#spatial \#tracking \#college \#nba \#intl \#boxscore \#pbp (play-by-play) \#longitudinal \#timeseries \\
Goal Tags: \\
\#playereval \#defense \#lineup \#team \#projection \#behavioral \#strategy \#rest \#health \#injury \#winprob \#prediction \\
Miscellaneous: \\
\#clustering \#coaching \#management \#refs \#gametheory \#intro \#background \\
\section{Summaries}
\subsection{Introduction}
Kubatko et al. “A starting point for analyzing basketball statistics.” \cite{kubatko}
\newline
Tags: \#intro \#background \#nba \#boxscore
\begin{enumerate}
\item Basics of the analysis of basketball. Provide a common starting point for future research in basketball
\item Define a general formulation for how to estimate the number of possessions.
\item Provide a common basis for future possession estimation
\item Also discuss other concepts and methods: per-minute statistics, pace, four factors, etc.
\item Contain other breakdowns such as rebound rate, plays, etc
\end{enumerate}
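The possession estimate in Kubatko et al. is simple enough to state in code. A minimal sketch of the commonly cited formula (the 0.44 free-throw coefficient is the standard published value; the box-score totals below are hypothetical):

```python
def estimate_possessions(fga, oreb, tov, fta, ft_weight=0.44):
    """Possessions ~ FGA - OREB + TOV + 0.44 * FTA.

    Offensive rebounds extend a possession rather than starting a new one,
    and roughly 44% of free-throw attempts end a possession.
    """
    return fga - oreb + tov + ft_weight * fta

# Hypothetical team box-score totals for one game:
print(estimate_possessions(fga=85, oreb=12, tov=14, fta=25))  # 98.0
```

Averaging the estimate over both teams, as the paper suggests, smooths out scorekeeping noise.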
John Hollinger \newline
Pro Basketball Forecast, 2005-06 \newline
\cite{hollinger2004pro}
This is Hollinger's annual published forecast of how NBA players will perform.
Can go in an intro section or in a forecasting section.
Tags: \#nba \#forecast \#prediction
Handbook of Statistical Methods and Analyses in Sports \newline
Albert, Jim and Glickman, Mark E and Swartz, Tim B and Koning, Ruud H \newline
\cite{albert2017handbook} \newline
Tags: \#intro \#background
\begin{enumerate}
\item This handbook will provide both overviews of statistical methods in sports and in-depth
treatment of critical problems and challenges confronting statistical research in sports. The
material in the handbook will be organized by major sport (baseball, football, hockey,
basketball, and soccer) followed by a section on other sports and general statistical design
and analysis issues that are common to all sports. This handbook has the potential to
become the standard reference for obtaining the necessary background to conduct serious...
\end{enumerate}
Basketball on paper: rules and tools for performance analysis \newline
Dean Oliver \newline
\cite{oliver2004basketball}
A useful historical reference for analyzing basketball performance.
Tags: \#intro \#background
\subsection{Networks/Player performance}
Evaluating Basketball Player Performance via Statistical Network Modeling
Piette, Pham, Anand
\cite{piette2011evaluating}
Tags: \#playereval \#lineup \#nba \#team
\begin{enumerate}
\item Players are nodes, edges are if they played together in the same five-man unit
\item Adapting a network-based algorithm to estimate centrality scores
\end{enumerate}
Darryl Blackport, Nylon Calculus / fansided \newline
\cite{threeStabilize} \newline
Discusses:
\begin{itemize}
\item Three-point shots are high-variance shots, leading television analysts to use the phrase “live by the three, die by the three”, so when a career 32\% three-point shooter suddenly shoots 38\% for a season, do we know that he has actually improved? How many attempts are enough to separate the signal from the noise? I decided to apply techniques used to calculate how long various baseball stats take to stabilize to see how long it takes for three-point shooting percentage to stabilize.
\item Tags: \#shooting \#playereval
\end{itemize}
Andrew Patton / Nylon Calculus / fansided \newline
\cite{visualizegravity} \newline
\begin{itemize}
\item Contains interesting plots to show the gravity of players at different locations on the court
\item Clever surface plots
\item Tags: \#tracking \#shooting \#visualization
\end{itemize}
The price of anarchy in basketball \\
Brian Skinner \\
\cite{skinner2010price} \\
\begin{enumerate}
\item Treats basketball like a network problem
\item each play represents a “pathway” through which the ball and players may move from origin (the inbounds pass) to goal (the basket). Effective field goal percentages from the resulting shot attempts can be used to characterize the efficiency of each pathway. Inspired by recent discussions of the “price of anarchy” in traffic networks, this paper makes a formal analogy between a basketball offense and a simplified traffic network.
\item The analysis suggests that there may be a significant difference between taking the highest-percentage shot each time down the court and playing the most efficient possible game. There may also be an analogue of Braess’s Paradox in basketball, such that removing a key player from a team can result in the improvement of the team’s offensive efficiency.
\item If such thinking [meaning that one should save their best plays for later in the game] is indeed already in the minds of coaches and players, then it should probably be in the minds of those who do quantitative analysis of sports as well. It is my hope that the introduction of ``price of anarchy'' concepts will constitute a small step towards formalizing this kind of reasoning, and in bringing the analysis of sports closer in line with the playing and coaching of sports.
\end{enumerate}
Basketball teams as strategic networks \\
Fewell, Jennifer H and Armbruster, Dieter and Ingraham, John and Petersen, Alexander and Waters, James S \\
\cite{fewell2012basketball} \\
Added late
\subsection{Team performance}
Tags: \#team \#boxscore \#longitudinal
Efficiency in the National Basketball Association: A Stochastic Frontier Approach with Panel Data
Richard A. Hofler and James E. Payne
\cite{hofler2006efficiency}
\begin{enumerate}
\item Shooting, rebounding, stealing, blocking shots help teams’ performance
\item Turnovers lower it
\item We also learn that better coaching and defensive prowess raise a team’s win efficiency.
\item Uses a stochastic production frontier model
\end{enumerate}
Predicting team rankings in basketball: The questionable use of on-court performance statistics \\
Ziv, Gal and Lidor, Ronnie and Arnon, Michal \\
\cite{ziv2010predicting} \\
\begin{enumerate}
\item Statistics on on-court performances (e.g. free-throw shots, 2-point shots, defensive and offensive rebounds, and assists) of basketball players during actual games are typically used by basketball coaches and sport journalists not only to assess the game performance of individual players and the entire team, but also to predict future success (i.e. the final rankings of the team). The purpose of this correlational study was to examine the relationships between 12 basketball on-court performance variables and the final rankings of professional basketball teams, using information gathered from seven consecutive seasons and controlling for multicollinearity.
\item Data analyses revealed that (a) some on-court performance statistics can predict team rankings at the end of a season; (b) on-court performance statistics can be highly correlated with one another (e.g. 2-point shots and 3-point shots); and (c) condensing the correlated variables (e.g. all types of shots as one category) can lead to more stable regressional models. It is recommended that basketball coaches limit the use of individual on-court statistics for predicting the final rankings of their teams. The prediction process may be more reliable if on-court performance variables are grouped into a large category of variables.
\end{enumerate}
Relationship between Team Assists and Win-Loss Record in the National Basketball Association \\
Merrill J Melnick \\
\cite{melnick2001relationship} \\
\begin{enumerate}
\item Using research methodology for analysis of secondary data, statistical data for five National Basketball Association (NBA) seasons (1993–1994 to 1997–1998) were examined to test for a relationship between team assists (a behavioral measure of teamwork) and win-loss record. Rank-difference correlation indicated a significant relationship between the two variables, the coefficients ranging from .42 to .71. Team assist totals produced higher correlations with win-loss record than assist totals for the five players receiving the most playing time (“the starters”).
\item A comparison of “assisted team points” and “unassisted team points” in relationship to win-loss record favored the former and strongly suggested that how a basketball team scores points is more important than the number of points it scores. These findings provide circumstantial support for the popular dictum in competitive team sports that “Teamwork Means Success—Work Together, Win Together.”
\end{enumerate}
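Melnick's rank-difference (Spearman) correlation is easy to reproduce. A self-contained sketch using the classic no-ties formula $\rho = 1 - 6\sum d_i^2 / (n(n^2-1))$; the assist and win totals below are invented for illustration:

```python
def spearman_rho(x, y):
    """Rank-difference (Spearman) correlation, assuming no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank_, i in enumerate(order, start=1):
            r[i] = rank_
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical season: team assist totals vs. wins
assists = [2010, 1850, 1920, 1700, 1780]
wins    = [55,   42,   50,   30,   38]
print(spearman_rho(assists, wins))  # 1.0 (identical orderings)
```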
\subsection{Shot selection}
Quantifying shot quality in the NBA
Chang, Yu-Han and Maheswaran, Rajiv and Su, Jeff and Kwok, Sheldon and Levy, Tal and Wexler, Adam and Squire, Kevin
Tags: \#playereval \#team \#shooting \#spatial \#nba
\cite{chang2014quantifying}
\begin{enumerate}
\item Separately characterize the difficulty of shots and the ability to make them
\item ESQ (Effective Shot Quality) and EFG+ (EFG - ESQ)
\item EFG+ is shooting ability above expectations
\item Addresses problem of confounding two separate attributes that EFG encounters
\begin{itemize}
\item quality of a shot and the ability to make that shot
\end{itemize}
\end{enumerate}
The problem of shot selection in basketball \\
Brian Skinner
\cite{skinner2012problem} \\
\begin{enumerate}
\item In this article, I explore the question of when a team should shoot and when they should pass up the shot by considering a simple theoretical model of the shot selection process, in which the quality of shot opportunities generated by the offense is assumed to fall randomly within a uniform distribution. Within this model I derive an answer to the question ‘‘how likely must the shot be to go in before the player should take it?’’
\item The theoretical prediction for the optimal shooting rate is compared to data from the National Basketball Association (NBA). The comparison highlights some limitations of the theoretical model, while also suggesting that NBA teams may be overly reluctant to shoot the ball early in the shot clock.
\end{enumerate}
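The flavor of the model can be captured in a few lines. A toy dynamic-programming version (our simplification, not Skinner's exact formulation): shot quality is Uniform(0,1), and with $n$ opportunities left a team should shoot iff the current look beats the value of continuing, $V_n = E[\max(q, V_{n-1})] = (1 + V_{n-1}^2)/2$:

```python
def continuation_values(n_max):
    """V[n] = expected value (in shot-quality units) with n chances left."""
    v = [0.0]
    for _ in range(n_max):
        v.append((1.0 + v[-1] ** 2) / 2.0)  # E[max(q, c)] for q ~ U(0, 1)
    return v

v = continuation_values(5)
# Shoot with n chances remaining only if quality exceeds v[n-1]:
print([round(x, 3) for x in v])  # [0.0, 0.5, 0.625, 0.695, 0.742, 0.775]
```

The rising thresholds early in the clock mirror the paper's finding that teams should pass up mediocre looks when many opportunities remain.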
Generality of the matching law as a descriptor of shot selection in basketball \newline
Alferink, Larry A and Critchfield, Thomas S and Hitt, Jennifer L and Higgins, William J \newline
\cite{alferink2009generality}
Studies how the matching law explains shot selection.
\begin{enumerate}
\item Based on a small sample of highly successful teams, past studies suggested that shot selection (two- vs. three-point field goals) in basketball corresponds to predictions of the generalized matching law. We examined the generality of this finding by evaluating shot selection of college (Study 1) and professional (Study 3) players. The matching law accounted for the majority of variance in shot selection, with undermatching and a bias for taking three-point shots.
\item Shot-selection matching varied systematically for players who (a) were members of successful versus unsuccessful teams, (b) competed at different levels of collegiate play, and (c) served as regulars versus substitutes (Study 2). These findings suggest that the matching law is a robust descriptor of basketball shot selection, although the mechanism that produces matching is unknown.
\end{enumerate}
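The generalized matching law these studies fit has a simple log-ratio form, $\log(B_1/B_2) = a\,\log(R_1/R_2) + \log b$, where $a < 1$ indicates undermatching and $\log b > 0$ a bias toward the first alternative (here, three-point attempts). A sketch on synthetic data (the slope and bias values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Log ratio of reinforcers: made threes vs. made twos (synthetic, 30 teams)
log_R = rng.uniform(-1.5, 0.5, size=30)
a_true, log_b_true = 0.7, 0.2          # undermatching, bias toward threes
log_B = a_true * log_R + log_b_true + rng.normal(0.0, 0.05, size=30)

# Least-squares fit of the matching line recovers slope and bias
a_hat, log_b_hat = np.polyfit(log_R, log_B, 1)
print(round(a_hat, 2), round(log_b_hat, 2))
```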
Basketball Shot Types and Shot Success in
Different Levels of Competitive Basketball \newline
Er{\v{c}}ulj, Frane and {\v{S}}trumbelj, Erik \newline
\cite{ervculj2015basketball} \newline
\begin{enumerate}
\item Relative frequencies of shot types in basketball
\item The purpose of our research was to investigate the relative frequencies of different types of basketball shots (above head, hook shot, layup, dunk, tip-in), some details about their technical execution (one-legged, two-legged, drive, cut, ...), and shot success in different levels of basketball competitions. We analysed video footage and categorized 5024 basketball shots from 40 basketball games and 5 different levels of competitive basketball (National Basketball Association (NBA), Euroleague, Slovenian 1st Division, and two Youth basketball competitions).
\item Statistical analysis with hierarchical multinomial logistic regression models reveals that there are substantial differences between competitions. However, most differences decrease or disappear entirely after we adjust for differences in situations that arise in different competitions (shot location, player type, and attacks in transition).
\item In the NBA, dunks are more frequent and hook shots are less frequent compared to European basketball, which can be attributed to better athleticism of NBA players. The effect situational variables have on shot types and shot success are found to be very similar for all competitions.
\item tags: \#nba \#shotselection
\end{enumerate}
A spatial analysis of basketball shot chart data \\
Reich, Brian J and Hodges, James S and Carlin, Bradley P and Reich, Adam M \\
\begin{enumerate}
\item Uses spatial methods (like CAR, etc) to understand shot chart of NBA player (Sam Cassell)
\item Has some shot charts and maps
\item Fits an actual spatial model to shot chart data
\item Basketball coaches at all levels use shot charts to study shot locations and outcomes for their own teams as well as upcoming opponents. Shot charts are simple plots of the location and result of each shot taken during a game. Although shot chart data are rapidly increasing in richness and availability, most coaches still use them purely as descriptive summaries. However, a team’s ability to defend a certain player could potentially be improved by using shot data to make inferences about the player’s tendencies and abilities.
\item This article develops hierarchical spatial models for shot-chart data, which allow for spatially varying effects of covariates. Our spatial models permit differential smoothing of the fitted surface in two spatial directions, which naturally correspond to polar coordinates: distance to the basket and angle from the line connecting the two baskets. We illustrate our approach using the 2003–2004 shot chart data for Minnesota Timberwolves guard Sam Cassell.
\end{enumerate}
Optimal shot selection strategies for the NBA \\
Fichman, Mark and O’Brien, John Robert \\
\cite{fichman2019optimal}
Three point shooting and efficient mixed strategies: A portfolio management approach \\
Fichman, Mark and O’Brien, John \\
\cite{fichman2018three}
Rao-Blackwellizing field goal percentage \\
Daniel Daly-Grafstein and Luke Bornn\\
\cite{daly2019rao}
How to get an open shot: Analyzing team movement in basketball using tracking data \\
Lucey, Patrick and Bialkowski, Alina and Carr, Peter and Yue, Yisong and Matthews, Iain \\
\cite{lucey2014get} \\
Big data analytics for modeling scoring probability in basketball: The effect of shooting under high-pressure conditions \\
Zuccolotto, Paola and Manisera, Marica and Sandri, Marco \\
\cite{zuccolotto2018big}
\subsection{Tracking}
Miller and Bornn \newline
Possession Sketches: Mapping NBA Strategies
\cite{miller2017possession}
Tags: \#spatial \#clustering \#nba \#coaching \#strategy
\begin{enumerate}
\item Use LDA to create a database of movements of two players
\item Hierarchical model to describe interactions between players
\item Group together possessions with similar offensive structure
\item Uses tracking data to study strategy
\item Depends on \cite{blei2003latent}
\end{enumerate}
Nazanin Mehrasa*, Yatao Zhong*, Frederick Tung, Luke Bornn, Greg Mori
Deep Learning of Player Trajectory Representations for Team Activity Analysis
\cite{mehrasa2018deep}
Tags: \#tracking \#nba \#player
\begin{enumerate}
\item Use deep learning to learn player trajectories
\item Can be used for event recognition and team classification
\item Uses convolutional neural networks
\end{enumerate}
A. Miller and L. Bornn and R. Adams and K. Goldsberry \newline
Factorized Point Process Intensities: A Spatial Analysis of Professional Basketball \newline
~\cite{miller2013icml}
Tags: \#tracking \#nba \#player
\begin{enumerate}
\item We develop a machine learning approach to represent and analyze the underlying spatial structure that governs shot selection among professional basketball players in the NBA. Typically, NBA players are discussed and compared in a heuristic, imprecise manner that relies on unmeasured intuitions about player behavior. This makes it difficult to draw comparisons between players and make accurate player-specific predictions.
\item Modeling shot attempt data as a point process, we create a low dimensional representation of offensive player types in the NBA. Using non-negative matrix factorization (NMF), an unsupervised dimensionality reduction technique, we show that a low-rank spatial decomposition summarizes the shooting habits of NBA players. The spatial representations discovered by the algorithm correspond to intuitive descriptions of NBA player types, and can be used to model other spatial effects, such as shooting accuracy
\end{enumerate}
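The NMF step is straightforward to sketch. Below is a tiny Lee--Seung multiplicative-update NMF (not the authors' point-process machinery) applied to a hypothetical player-by-court-region shot matrix with two planted archetypes; all numbers are made up:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0, eps=1e-9):
    """Minimal NMF via Lee-Seung multiplicative updates (Frobenius loss)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.1, 1.0, (V.shape[0], rank))
    H = rng.uniform(0.1, 1.0, (rank, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical shot-count matrix: 4 players x 4 court regions,
# mixing two archetypes (rim attackers, corner-three shooters).
basis = np.array([[8.0, 1.0, 0.0, 0.0],   # rim archetype
                  [0.0, 0.0, 5.0, 6.0]])  # corner-three archetype
load = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]])
V = load @ basis
W, H = nmf(V, rank=2)
rel_err = np.linalg.norm(W @ H - V) / np.linalg.norm(V)
print(round(rel_err, 3))
```

The rows of `H` play the role of the paper's spatial "bases", and each player's row of `W` is the low-dimensional summary of their shooting habits.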
Creating space to shoot: quantifying spatial relative field goal efficiency in basketball \newline
Shortridge, Ashton and Goldsberry, Kirk and Adams, Matthew \newline
\cite{shortridge2014creating} \newline
\begin{enumerate}
\item This paper addresses the
challenge of characterizing and visualizing relative spatial
shooting effectiveness in basketball by developing metrics
to assess spatial variability in shooting. Several global and
local measures are introduced and formal tests are proposed to enable the comparison of shooting effectiveness
between players, groups of players, or other collections of
shots. We propose an empirical Bayesian smoothing rate
estimate that uses a novel local spatial neighborhood tailored for basketball shooting. These measures are evaluated
using data from the 2011 to 2012 NBA basketball season in
three distinct ways.
\item First we contrast nonspatial and spatial
shooting metrics for two players from that season and then
extend the comparison to all players attempting at least 250
shots in that season, rating them in terms of shooting effectiveness.
\item Second, we identify players shooting significantly better than the NBA average for their shot constellation, and formally compare shooting effectiveness of different players.
\item Third, we demonstrate an approach to map spatial shooting effectiveness. In general, we conclude that these measures are relatively straightforward to calculate with the right input data, and they provide distinctive and useful information about relative shooting ability in basketball.
\item We expect that spatially explicit basketball metrics will be useful additions to the sports analysis toolbox
\item \#tracking \#shooting \#spatial \#efficiency
\end{enumerate}
Courtvision: New Visual and Spatial Analytics for the NBA \newline
Goldsberry, Kirk \newline
\cite{goldsberry2012courtvision}
\begin{enumerate}
\item This paper investigates spatial and visual analytics as means to enhance basketball expertise. We
introduce CourtVision, a new ensemble of analytical techniques designed to quantify, visualize, and
communicate spatial aspects of NBA performance with unprecedented precision and clarity. We propose
a new way to quantify the shooting range of NBA players and present original methods that measure,
chart, and reveal differences in NBA players’ shooting abilities.
\item We conduct a case study, which applies
these methods to 1) inspect spatially aware shot site performances for every player in the NBA, and 2) to
determine which players exhibit the most potent spatial shooting behaviors. We present evidence that
Steve Nash and Ray Allen have the best shooting range in the NBA.
\item We conclude by suggesting that
visual and spatial analysis represent vital new methodologies for NBA analysts.
\item \#tracking \#shooting \#spatial
\end{enumerate}
Cervone, Daniel and D'Amour, Alex and Bornn, Luke and Goldsberry, Kirk \newline
A Multiresolution Stochastic Process Model for Predicting Basketball Possession Outcomes \newline
\cite{cervone2014multiresolution}
\begin{enumerate}
\item In this article, we propose a framework for using optical player tracking data to estimate, in real time, the expected number of points obtained by the end of a possession. This quantity, called expected possession value (EPV),
derives from a stochastic process model for the evolution of a basketball possession.
\item We model this process
at multiple levels of resolution, differentiating between continuous, infinitesimal movements of players,
and discrete events such as shot attempts and turnovers. Transition kernels are estimated using hierarchical spatiotemporal models that share information across players while remaining computationally tractable
on very large data sets. In addition to estimating EPV, these models reveal novel insights on players’ decisionmaking tendencies as a function of their spatial strategy. In the supplementary material, we provide a data sample and R code for further exploration of our model and its results.
\item This article introduces a new quantity, EPV, which represents a paradigm shift in the possibilities for statistical inferences about basketball. Using high-resolution, optical tracking data, EPV
reveals the value in many of the schemes and motifs that characterize basketball offenses but are omitted in the box score.
\item \#tracking \#expectedvalue \#offense
\end{enumerate}
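A drastically simplified, discrete homage to the EPV idea (the actual model operates on continuous tracking data; every number below is hypothetical): treat coarse possession states as a Markov chain that either continues or ends each step, and solve the linear system $v = (1-p)\,e + p\,T v$ for expected end-of-possession points from each state:

```python
import numpy as np

states = ["perimeter", "post", "drive"]
# Transitions among transient states, given the possession continues:
T = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.4, 0.3],
              [0.2, 0.1, 0.7]])
cont = 0.6                      # chance the possession continues each step
e = np.array([0.9, 1.1, 1.2])   # expected points if it ends from each state

# EPV-like value v satisfies v = (1 - cont) * e + cont * T @ v
v = np.linalg.solve(np.eye(3) - cont * T, (1 - cont) * e)
for s, val in zip(states, v):
    print(f"value({s}) = {val:.2f}")
```

Each state's value is a convex combination of the terminal expectations, weighted by where the chain tends to be absorbed; the full model replaces this toy transition matrix with hierarchical spatiotemporal kernels.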
POINTWISE: Predicting Points and Valuing Decisions in Real Time with NBA Optical Tracking Data \newline
Cervone, Dan and D'Amour, Alexander and Bornn, Luke and Goldsberry, Kirk \newline
\cite{cervonepointwise} \newline
\begin{enumerate}
\item EPV paper
\item We propose a framework for using player-tracking data to assign a point value to each moment of a possession by computing how many points the offense is expected to score by the end of the possession, a quantity we call expected possession value (EPV).
\item EPV allows analysts to evaluate every decision made during a basketball game – whether it is to pass, dribble, or shoot – opening the door for a multitude of new metrics and analyses of basketball that quantify value in terms of points.
\item In this paper, we propose a modeling framework for estimating EPV, present results of EPV computations performed using playertracking data from the 2012-13 season, and provide several examples of EPV-derived metrics that answer real basketball questions.
\end{enumerate}
A method for using player tracking data in basketball to learn player skills and predict team performance \\
Brian Skinner and Stephen Guy\\
\cite{skinner2015method} \\
\begin{enumerate}
\item Player tracking data represents a revolutionary new data source for basketball analysis, in which essentially every aspect of a player’s performance is tracked and can be analyzed numerically. We suggest a way by which this data set, when coupled with a network-style model of the offense that relates players’ skills to the team’s success at running different plays, can be used to automatically learn players’ skills and predict the performance of untested 5-man lineups in a way that accounts for the interaction between players’ respective skill sets.
\item After developing a general analysis procedure, we present as an example a specific implementation of our method using a simplified network model. While player tracking data is not yet available in the public domain, we evaluate our model using simulated data and show that player skills can be accurately inferred by a simple statistical inference scheme.
\item Finally, we use the model to analyze games from the 2011 playoff series between the Memphis Grizzlies and the Oklahoma City Thunder and we show that, even with a very limited data set, the model can consistently describe a player’s interactions with a given lineup based only on his performance with a different lineup.
\item Tags: \#network \#playereval \#lineup
\end{enumerate}
Exploring game performance in the National Basketball Association using player tracking data \\
Sampaio, Jaime and McGarry, Tim and Calleja-Gonz{\'a}lez, Julio and S{\'a}iz, Sergio Jim{\'e}nez and i del Alc{\'a}zar, Xavi Schelling and Balciunas, Mindaugas \\
\cite{sampaio2015exploring}
\begin{enumerate}
\item Recent player tracking technology provides new information about basketball game performance. The aim of this study was to (i) compare the game performances of all-star and non all-star basketball players from the National Basketball Association (NBA), and (ii) describe the different basketball game performance profiles based on the different game roles. Archival data were obtained from all 2013-2014 regular season games (n = 1230). The variables analyzed included the points per game, minutes played and the game actions recorded by the player tracking system.
\item To accomplish the first aim, the performance per minute of play was analyzed using a descriptive discriminant analysis to identify which variables best predict the all-star and non all-star playing categories. The all-star players showed slower velocities in defense and performed better in elbow touches, defensive rebounds, close touches, close points and pull-up points, possibly due to optimized attention processes that are key for perceiving the required appropriate environmental information.
\item The second aim was addressed using a k-means cluster analysis, with the aim of creating maximal different performance profile groupings. Afterwards, a descriptive discriminant analysis identified which variables best predict the different playing clusters.
\item The results identified different playing profile of performers, particularly related to the game roles of scoring, passing, defensive and all-round game behavior. Coaching staffs may apply this information to different players, while accounting for individual differences and functional variability, to optimize practice planning and, consequently, the game performances of individuals and teams.
\end{enumerate}
Modelling the dynamic pattern of surface area in basketball and its effects on team performance \\
Metulini, Rodolfo and Manisera, Marica and Zuccolotto, Paola \\
\cite{metulini2018modelling} \\
Deconstructing the rebound with optical tracking data \\
Maheswaran, Rajiv and Chang, Yu-Han and Henehan, Aaron and Danesis, Samantha \\
\cite{maheswaran2012deconstructing}
NBA Court Realty \\
Cervone, Dan and Bornn, Luke and Goldsberry, Kirk \\
\cite{cervone2016nba} \\
Applying deep learning to basketball trajectories \\
Shah, Rajiv and Romijnders, Rob \\
\cite{shah2016applying} \\
Predicting shot making in basketball using convolutional neural networks learnt from adversarial multiagent trajectories \\
Harmon, Mark and Lucey, Patrick and Klabjan, Diego \\
\cite{harmon2016predicting}
\subsection{Defense}
Characterizing the spatial structure of defensive skill in professional basketball
Franks, Alexander and Miller, Andrew and Bornn, Luke and Goldsberry, Kirk and others
\cite{franks2015characterizing}
Tags: \#tracking \#nba \#playereval \#defense
\begin{enumerate}
\item This paper attempts to fill this void, combining spatial and spatio-temporal processes, matrix factorization techniques and hierarchical regression models with player tracking data to advance the state of defensive analytics in the NBA.
\item Our approach detects, characterizes and quantifies multiple aspects of defensive play in basketball, supporting some common understandings of defensive effectiveness, challenging others and opening up many new insights into the defensive elements of basketball.
\item Specifically, our approach allows us to characterize how players affect both shooting frequency and efficiency of the player they are guarding.
\item By using an NMF-based decomposition of the court, we find an efficient and data-driven characterization of common shot regions which naturally corresponds to common basketball intuition.
\item Additionally, we are able to use this spatial decomposition to simply characterize the spatial shot and shot-guarding tendencies of players, giving a natural low-dimensional representation of a player’s shot chart.
\end{enumerate}
Counterpoints: Advanced defensive metrics for nba basketball
Franks, Alexander and Miller, Andrew and Bornn, Luke and Goldsberry, Kirk
\cite{franks2015counterpoints}
Tags: \#tracking \#nba \#playereval \#defense
\begin{enumerate}
\item This paper bridges this gap, introducing a new suite of defensive metrics that aim to progress the field of basketball analytics by enriching the measurement of defensive play.
\item Our results demonstrate that the combination of player tracking, statistical modeling, and visualization enable a far richer characterization of defense than has previously been possible.
\item Our method, when combined with more traditional offensive statistics, provides a well-rounded summary of a player’s contribution to the final outcome of a game.
\item Using optical tracking data and a model to infer defensive matchups at every moment throughout, we are able to provide novel summaries of defensive performance, and report “counterpoints” - points scored against a particular defender.
\item One key takeaway is that defensive ability is difficult to quantify with a single value. Summaries of points scored against and shots attempted against can say more about the team’s defensive scheme (e.g. the Pacers funneling the ball toward Hibbert) than the individual player’s defensive ability.
\item However, we argue that these visual and statistical summaries provide a much richer set of measurements for a player’s performance, particularly those that give us some notion of shot suppression early in the possession.
\end{enumerate}
The Dwight effect: A new ensemble of interior defense analytics for the NBA
Goldsberry, Kirk and Weiss, Eric
Tags: \#tracking \#nba \#playereval \#defense
\cite{goldsberry2013dwight}
\begin{enumerate}
\item This paper introduces new spatial and visual analytics capable of assessing and characterizing the nature of interior defense in the NBA. We present two case studies that each focus on a different component of defensive play. Our results suggest that the integration of spatial approaches and player tracking data promise to improve the status quo of defensive analytics but also reveal some important challenges associated with evaluating defense.
\item Case study 1: basket proximity
\item Case study 2: shot proximity
\item Despite some relevant limitations, we contend that our results suggest that interior defensive abilities vary considerably across the league; simply stated, some players are more effective interior defenders than others. In terms of affecting shooting, we evaluated interior defense in 2 separate case studies.
\end{enumerate}
Automatic event detection in basketball using HMM with energy based defensive assignment \\
Keshri, Suraj and Oh, Min-hwan and Zhang, Sheng and Iyengar, Garud \\
\cite{keshri2019automatic}
\subsection{Player Eval}
Estimating an NBA player’s impact on his team’s chances of winning
Deshpande, Sameer K and Jensen, Shane T
\cite{deshpande2016estimating}
Tags: \#playereval \#winprob \#team \#lineup \#nba
\begin{enumerate}
\item We instead use a win probability framework for evaluating the impact NBA players have on their teams’ chances of winning. We propose a Bayesian linear regression model to estimate an individual player’s impact, after controlling for the other players on the court. We introduce several posterior summaries to derive rank-orderings of players within their team and across the league.
\item This allows us to identify highly paid players with low impact relative to their teammates, as well as players whose high impact is not captured by existing metrics.
\end{enumerate}
Who is ‘most valuable’? Measuring the player's production of wins in the National Basketball Association
David J. Berri
\cite{berri1999most}
Tags: \#playereval \#team \#nba
\begin{enumerate}
\item The purpose of this inquiry is to answer this question via an econometric model that links the player’s statistics in the National Basketball Association (NBA) to team wins.
\item The methods presented in this discourse take the given NBA data and provide an accurate answer to this question. As noted, if the NBA tabulated a wider range of statistics, this accuracy could be improved. Nevertheless, the picture painted by the presented methods does provide a fair evaluation of each player’s contribution to his team’s relative success or failure. Such evaluations can obviously be utilized with respect to free agent signings, player-for-player trades, the allocation of minutes, and also to determine the impact changes in coaching methods or strategy have had on an individual’s productivity.
\end{enumerate}
On estimating the ability of nba players
Fearnhead, Paul and Taylor, Benjamin Matthew
\cite{fearnhead2011estimating}
Tags: \#playereval \#nba \#lineup
\begin{enumerate}
\item This paper introduces a new model and methodology for estimating the ability of NBA players. The main idea is to directly measure how good a player is by comparing how their team performs when they are on the court as opposed to when they are off it. This is achieved in such a way as to control for the changing abilities of the other players on court at different times during a match. The new method uses multiple seasons’ data in a structured way to estimate player ability in an isolated season, measuring separately defensive and offensive merit as well as combining these to give an overall rating. The use of game statistics in predicting player ability is also considered.
\item To the knowledge of the authors, the model presented here is unique in providing a structured means of updating player abilities between years. One of the most important findings here is that whilst using player game statistics and a simple linear model to infer offensive ability may be okay, the very poor fit of the defensive ratings model suggests that defensive ability depends on some trait not measured by the current range of player game statistics.
\end{enumerate}
Add \cite{macdonald} as a reference? (Adj. PM for NHL players)
A New Look at Adjusted Plus/Minus for Basketball Analysis \newline
Tags: \#playereval \#nba \#lineups \newline
D. Omidiran \newline
\cite{omidiran2011pm}
\begin{enumerate}
\item We interpret the Adjusted Plus/Minus (APM) model as a special case of a general penalized regression problem indexed by the parameter $\lambda$. We provide a fast technique for solving this problem for general values of $\lambda$. We then use cross-validation to select the parameter $\lambda$ and demonstrate that this choice yields substantially better prediction performance than APM.
\item Paper uses cross-validation to choose $\lambda$ and shows they do better with this.
\end{enumerate}
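Omidiran’s penalized-regression view of APM can be sketched directly: a closed-form ridge solve plus a small $k$-fold loop that picks $\lambda$ by out-of-sample error. The stint matrix, the six hypothetical players, and the candidate $\lambda$ values below are synthetic and purely illustrative, not data or estimates from the paper.

```python
import numpy as np

def ridge_apm(X, y, lam):
    """Closed-form ridge solution: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_lambda(X, y, lambdas, k=5, seed=0):
    """Pick the penalty with the smallest k-fold out-of-sample squared error."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errors = []
    for lam in lambdas:
        err = 0.0
        for fold in folds:
            train = np.setdiff1d(idx, fold)
            beta = ridge_apm(X[train], y[train], lam)
            err += float(np.sum((y[fold] - X[fold] @ beta) ** 2))
        errors.append(err)
    return lambdas[int(np.argmin(errors))]

# Synthetic stint data: 200 stints, 6 hypothetical players.
# X[i, j] = +1 if player j is on court for the home team, -1 for the
# away team, 0 if off; y is the stint's point differential.
rng = np.random.default_rng(1)
X = rng.choice([-1, 0, 1], size=(200, 6)).astype(float)
true_beta = np.array([3.0, 1.0, 0.0, -1.0, -2.0, 0.5])
y = X @ true_beta + rng.normal(0.0, 4.0, size=200)

best_lam = cv_lambda(X, y, [0.1, 1.0, 10.0, 100.0])
beta_hat = ridge_apm(X, y, best_lam)
```

Sill’s regularized APM fits the same template, with the penalty chosen by prediction of held-out games rather than held-out stints.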
Improved NBA adjusted plus-minus using regularization and out-of-sample testing \newline
Joseph Sill \newline
\cite{adjustedpm}
\begin{enumerate}
\item Adjusted +/- (APM) has grown in popularity as an NBA player evaluation technique in recent years. This paper presents a framework for evaluating APM models and also describes an enhancement to APM which nearly doubles its accuracy. APM models are evaluated in terms of their ability to predict the outcome of future games not included in the model’s training data.
\item This evaluation framework provides a principled way to make choices about implementation details. The enhancement is a Bayesian technique called regularization (a.k.a. ridge regression) in which the data is combined with a priori beliefs regarding reasonable ranges for the parameters in order to
produce more accurate models.
\end{enumerate}
Measuring How NBA Players Help Their Teams Win \newline
Article by Rosenbaum on 82games.com \newline
\cite{rosenbaum} \newline
Effects of season period, team quality, and playing time on basketball players' game-related statistics \newline
Sampaio, Jaime and Drinkwater, Eric J and Leite, Nuno M \newline
\cite{sampaio2010effects}
Tags: \#playereval \#team
\begin{enumerate}
\item The aim of this study was to identify within-season differences in basketball players’ game-related statistics according to team quality and playing time. The sample comprised 5309 records from 198 players in the Spanish professional basketball league (2007-2008).
\item Factor analysis with principal components was applied to the game-related statistics gathered from the official box-scores, which limited the analysis to five factors (free-throws, 2-point field-goals, 3-point field-goals, passes, and errors) and two variables (defensive and offensive rebounds). A two-step cluster analysis classified the teams as stronger (69$\pm$8\% winning percentage), intermediate (43$\pm$5\%), and weaker teams (32$\pm$5\%); individual players were classified based on playing time as important players (28$\pm$4 min) or less important players (16$\pm$4 min). Seasonal variation was analysed monthly in eight periods.
\item A mixed linear model was applied to identify the effects of team quality and playing time within the months of the season on the previously identified factors and game-related statistics. No significant effect of season period was observed. A team quality effect was identified, with stronger teams being superior in terms of 2-point field-goals and passes. The weaker teams were the worst at defensive rebounding (stronger teams: 0.17$\pm$0.05; intermediate teams: 0.17$\pm$0.06; weaker teams: 0.15$\pm$0.03; $P<0.001$). While playing time was significant in almost all variables, errors were the most important factor when contrasting important and less important players, with fewer errors being made by important players. The trends identified can help coaches and players to create performance profiles according to team quality and playing time. However, these performance profiles appear to be independent of season period.
\item Identify effects of strength of team on players' performance
\item Conclusion: There appears to be no seasonal variation in high level basketball performances. Although offensive plays determine success in basketball, the results of the current study indicate that securing more defensive rebounds and committing fewer errors are also important. Furthermore, the identified trends allow the creation of performance profiles according to team quality and playing time during all seasonal periods. Therefore, basketball coaches (and players) will benefit from being aware of these results, particularly when designing game strategies and when taking tactical decisions.
\end{enumerate}
Forecasting NBA Player Performance using a Weibull-Gamma Statistical Timing Model \newline
Douglas Hwang \newline
\cite{hwang2012forecasting} \newline
\begin{enumerate}
\item Uses a Weibull-Gamma statistical timing model
\item Fits a player’s performance over time to a Weibull distribution, and accounts for unobserved heterogeneity by fitting the parameters of the Weibull distribution to a gamma distribution
\item This will help predict performance over the next season, estimate contract value, and the potential “aging” effects of a certain player
\item tags \#forecasting \#playereval \#performance
\end{enumerate}
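A minimal sketch of the Weibull-with-Gamma-heterogeneity idea: each player gets a Gamma-distributed rate, and durations are drawn from the corresponding Weibull by inverting its survival function. All parameter values below are made up for illustration and are not Hwang’s fitted estimates.

```python
import math
import random

def weibull_gamma_sample(weibull_shape, gamma_shape, gamma_rate, rng):
    """One duration draw: the Weibull rate parameter is itself
    Gamma-distributed across players (unobserved heterogeneity).
    Inverts the survival function S(t) = exp(-lam * t**weibull_shape)."""
    lam = rng.gammavariate(gamma_shape, 1.0 / gamma_rate)  # player-specific rate
    u = 1.0 - rng.random()                                 # uniform on (0, 1]
    return (-math.log(u) / lam) ** (1.0 / weibull_shape)

rng = random.Random(0)
draws = [weibull_gamma_sample(1.0, 3.0, 2.0, rng) for _ in range(5000)]
mean_draw = sum(draws) / len(draws)  # analytically 1.0 for these parameters
```

With `weibull_shape = 1` this reduces to an exponential mixed over a Gamma, which makes the mean easy to check by hand.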
\subsubsection{Position analysis?}
Using box-scores to determine a position's contribution to winning basketball games
Page, Garritt L and Fellingham, Gilbert W and Reese, C Shane
\cite{page2007using}
\begin{enumerate}
\item Tried to quantify importance of different positions in basketball
\item Used hierarchical Bayesian model at five positions
\item One result is that defensive rebounds from point and shooting guards are important
\item While it is generally recognized that the relative importance of different skills is not constant across different positions on a basketball team, quantification of the differences has not been well studied. 1163 box scores from games in the National Basketball Association during the 1996-97 season were used to study the relationship of skill performance by position and game outcome as measured by point differentials.
\item A hierarchical Bayesian model was fit with individual players viewed as a draw from a population of players playing a particular position: point guard, shooting guard, small forward, power forward, center, and bench. Posterior distributions for parameters describing position characteristics were examined to discover the relative importance of various skills as quantified in box scores across the positions.
\item Results were consistent with expectations, although defensive rebounds from both point and shooting guards were found to be quite important.
\item In summary, the point spread of a basketball game increases if all five positions secure more offensive rebounds, record more assists, shoot a better field-goal percentage, and commit fewer turnovers than their positional opponents. These results are certainly not surprising. Some trends that were somewhat more surprising were the importance of defensive rebounding by the guard positions and offensive rebounding by the point guard. These results also show the emphasis the NBA places on an all-around small forward.
\end{enumerate}
\subsubsection{Player production curves}
Effect of position, usage rate, and per game minutes played on NBA player production curves \\
Page, Garritt L and Barney, Bradley J and McGuire, Aaron T \\
\cite{page2013effect}
\begin{enumerate}
\item Production related to games played
\item Production curve modeled using GPs
\item Touches on deterioration of production relative to age
\item Learn how minutes played and usage rate affect production curves
\item One good nugget from conclusion: The general finding from this is that aging trends do impact natural talent evaluation of NBA players. Point guards appear to exhibit a different aging pattern than wings or bigs when it comes to Hollinger’s Game Score, with a slightly later peak and a much steeper decline once that peak is met. The fact that the average point guard spends more of his career improving his game score is not lost on talent evaluators in the NBA, and as such, point guards are given more time on average to perform the functions of a rotation player in the NBA.
\end{enumerate}
Functional data analysis of aging curves in sports \\
Alex Wakim and Jimmy Jin \\
\cite{wakim2014functional} \\
\begin{enumerate}
\item Uses FDA and FPCA to study aging curves in sports
\item Includes NBA and finds age patterns among NBA players with different scoring abilities
\item It is well known that athletic and physical condition is affected by age. Plotting an individual athlete's performance against age creates a graph commonly called the player's aging curve. Despite the obvious interest to coaches and managers, the analysis of aging curves so far has used fairly rudimentary techniques. In this paper, we introduce functional data analysis (FDA) to the study of aging curves in sports and argue that it is both more general and more flexible compared to the methods that have previously been used. We also illustrate the rich analysis that is possible by analyzing data for NBA and MLB players.
\item In the analysis of MLB data, we use functional principal components analysis (fPCA) to perform functional hypothesis testing and show differences in aging curves between potential power hitters and potential non-power hitters. The analysis of aging curves in NBA players illustrates the use of the PACE method. We show that there are three distinct aging patterns among NBA players and that player scoring ability differs across the patterns. We also show that aging pattern is independent of position.
\end{enumerate}
Large data and Bayesian modeling—aging curves of NBA players \\
Vaci, Nemanja and Coci{\'c}, Dijana and Gula, Bartosz and Bilali{\'c}, Merim \\
\cite{vaci2019large} \\
\begin{enumerate}
\item Researchers interested in changes that occur as people age are faced with a number of methodological problems, starting with the immense time scale they are trying to capture, which renders laboratory experiments useless and longitudinal studies rather rare. Fortunately, some people take part in particular activities and pastimes throughout their lives, and often these activities are systematically recorded. In this study, we use the wealth of data collected by the National Basketball Association to describe the aging curves of elite basketball players.
\item We have developed a new approach rooted in the Bayesian tradition in order to understand the factors behind the development and deterioration of a complex motor skill. The new model uses Bayesian structural modeling to extract two latent factors, those of development and aging. The interaction of these factors provides insight into the rates of development and deterioration of skill over the course of a player’s life. We show, for example, that elite athletes have different levels of decline in the later stages of their career, which is dependent on their skill acquisition phase.
\item The model goes beyond description of the aging function, in that it can accommodate the aging curves of subgroups (e.g., different positions played in the game), as well as other relevant factors (e.g., the number of minutes on court per game) that might play a role in skill changes. The flexibility and general nature of the new model make it a perfect candidate for use across different domains in lifespan psychology.
\end{enumerate}
\subsection{Predicting winners}
Modeling and forecasting the outcomes of NBA basketball games
Manner, Hans
Tags: \#hothand \#winprob \#team \#multivariate \#timeseries
\cite{manner2016modeling}
\begin{enumerate}
\item This paper treats the problem of modeling and forecasting the outcomes of NBA basketball games. First, it is shown how the benchmark model in the literature can be extended to allow for heteroscedasticity and treat the estimation and testing in this framework. Second, time-variation is introduced into the model by (i) testing for structural breaks in the model and (ii) introducing a dynamic state space model for team strengths.
\item The in-sample results based on eight seasons of NBA data provide some evidence for heteroscedasticity and a few structural breaks in team strength within seasons. However, there is no evidence for persistent time variation and therefore the hot hand belief cannot be confirmed. The models are used for forecasting a large number of regular season and playoff games, and the common finding in the literature that it is difficult to outperform the betting market is confirmed. Nevertheless, it turns out that a forecast combination of model-based forecasts with betting odds can outperform either approach individually in some situations.
\end{enumerate}
Basketball game-related statistics that discriminate between teams’ season-long success \newline
~\cite{ibanez2008basketball} \newline
Tags: \#prediction \#winner \#defense \newline
Ibanez, Sampaio, Feu, Lorenzo, Gomez, Ortega,
\begin{enumerate}
\item The aim of the present study was to identify the game-related statistics that discriminate between season-long successful and
unsuccessful basketball teams participating in the Spanish Basketball League (LEB1). The sample included all 145 average
records per season from the 870 games played between the 2000-2001 and the 2005-2006 regular seasons.
\item The following game-related statistics were gathered from the official box scores of the Spanish Basketball Federation: 2- and 3-point field-goal attempts (both successful and unsuccessful), free-throws (both successful and unsuccessful), defensive and offensive rebounds, assists, steals, turnovers, blocks (both made and received), and fouls (both committed and received). To control for season variability, all results were normalized to minutes played each season and then converted to z-scores.
\item The results allowed discrimination between best and worst teams’ performances through the following game-related statistics: assists (SC = 0.47), steals (SC = 0.34), and blocks (SC = 0.30). The function obtained correctly classified 82.4\% of the cases. In conclusion, season-long performance may be supported by players’ and teams’ passing skills and defensive preparation.
\item Our results suggest a number of differences between best and worst teams’ game-related statistics, but globally the offensive (assists) and defensive (steals and blocks) actions were the most powerful factors in discriminating between groups. Therefore, game winners and losers are discriminated by defensive rebounding and field-goal shooting, whereas season-long performance is discriminated by players’ and teams’ passing skills and defensive preparation. Players should be better informed about these results, and it is suggested that coaches pay attention to guards’ passing skills, to forwards’ stealing skills, and to centres’ blocking skills to build and prepare offensive communication and overall defensive pressure.
\end{enumerate}
Simulating a Basketball Match with a Homogeneous Markov Model and Forecasting the Outcome \newline
Strumbelj and Vracar \newline
\#prediction \#winner \newline
~\cite{vstrumbelj2012simulating}
\begin{enumerate}
\item We used a possession-based Markov model to model the progression of a basketball match. The model’s transition matrix was estimated directly from NBA play-by-play data and indirectly from the teams’ summary statistics. We evaluated both this approach and other commonly used forecasting approaches: logit regression of the outcome, a latent strength rating method, and bookmaker odds. We found that the Markov model approach is appropriate for modelling a basketball match and produces forecasts of a quality comparable to that of other statistical approaches, while giving more insight into basketball. Consistent with previous studies, bookmaker odds were the best probabilistic forecasts.
\item Using summary statistics to estimate Shirley’s Markov model for basketball produced a model for a match between two specific teams. The model was then used to simulate the match and produce outcome forecasts of a quality comparable to that of other statistical approaches, while giving more insights into basketball. Due to its homogeneity, the model is still limited with respect to what it can simulate, and a non-homogeneous model is required to deal with these issues. As far as basketball match simulation is concerned, more work has to be done, with an emphasis on making the transitional probabilities conditional on the point spread and the game time.
\end{enumerate}
Differences in game-related statistics of basketball performance by game location for men's winning and losing teams \\
G{\'o}mez, Miguel A and Lorenzo, Alberto and Barakat, Rub{\'e}n and Ortega, Enrique and Jos{\'e} M, Palao \\
\cite{gomez2008differences} \\
\#location \#winners \#boxscore
\begin{enumerate}
\item The aim of the present study was to identify game-related statistics that differentiate winning and losing teams according to game location. The sample included 306 games of the 2004–2005 regular season of the Spanish professional men's league (ACB League). The independent variables were game location (home or away) and game result (win or loss). The game-related statistics registered were free throws (successful and unsuccessful), 2- and 3-point field goals (successful and unsuccessful), offensive and defensive rebounds, blocks, assists, fouls, steals, and turnovers. Descriptive and inferential analyses were done (one-way analysis of variance and discriminate analysis).
\item The multivariate analysis showed that winning teams differ from losing teams in defensive rebounds (SC = .42) and in assists (SC = .38). Similarly, winning teams differ from losing teams when they play at home in defensive rebounds (SC = .40) and in assists (SC = .41). On the other hand, winning teams differ from losing teams when they play away in defensive rebounds (SC = .44), assists (SC = .30), successful 2-point field goals (SC = .31), and unsuccessful 3-point field goals (SC = –.35). Defensive rebounds and assists were the only game-related statistics common to all three analyses.
\end{enumerate}
\subsubsection{NCAA bracket}
Predicting the NCAA basketball tournament using isotonic least squares pairwise comparison model \\
Neudorfer, Ayala and Rosset, Saharon \\
\cite{neudorfer2018predicting}
Identifying NCAA tournament upsets using Balance Optimization Subset Selection \\
Dutta, Shouvik and Jacobson, Sheldon H and Sauppe, Jason J \\
\cite{dutta2017identifying}
Building an NCAA men’s basketball predictive model and quantifying its success
Lopez, Michael J and Matthews, Gregory J
\cite{lopez2015building}
Tags: \#ncaa \#winprob \#outcome \#prediction
\begin{enumerate}
\item This manuscript both describes our novel predictive models and quantifies the possible benefits, with respect to contest standings, of having a strong model. First, we describe our submission, building on themes first suggested by Carlin (1996) by merging information from the Las Vegas point spread with team-based possession metrics. The success of our entry reinforces longstanding themes of predictive modeling, including the benefits of combining multiple predictive tools and the importance of using the best possible data
\end{enumerate}
Introduction to the NCAA men’s basketball prediction methods issue
Glickman, Mark E and Sonas, Jeff \\
\cite{glickman2015introduction} \\
This is a whole issue of JQAS so not so important to include the intro cited here specifically \\
A generative model for predicting outcomes in college basketball
Ruiz, Francisco JR and Perez-Cruz, Fernando
\cite{ruiz2015generative}
Tags: \#ncaa \#prediction \#outcomes \#team
\begin{enumerate}
\item We show that a classical model for soccer can also provide competitive results in predicting basketball outcomes. We modify the classical model in two ways in order to capture both the specific behavior of each National Collegiate Athletic Association (NCAA) conference and different strategies of teams and conferences. Through simulated bets on six online betting houses, we show that this extension leads to better predictive performance in terms of the profit we make. We compare our estimates with the probabilities predicted by the winner of the recent Kaggle competition on the 2014 NCAA tournament, and conclude that our model tends to provide results that differ more from the implicit probabilities of the betting houses and, therefore, has the potential to provide higher benefits.
\item In this paper, we have extended a simple soccer model for college basketball. Outcomes at each game are modeled as independent Poisson random variables whose means depend on the attack and defense coefficients of teams and conferences. Our conference-specific coefficients account for the overall behavior of each conference, while the per-team coefficients provide more specific information about each team. Our vector-valued coefficients can capture different strategies of both teams and conferences. We have derived a variational inference algorithm to learn the attack and defense coefficients, and have applied this algorithm to four March Madness Tournaments. We compare our predictions for the 2014 Tournament to the recent Kaggle competition results and six online betting houses. Simulations show that our model identifies weaker but undervalued teams, which results in a positive mean profit in all the considered betting houses. We also outperform the Kaggle competition winner in terms of mean profit.
\end{enumerate}
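The soccer-style generative model can be sketched as independent Poisson scores whose means combine an attack and a defence coefficient. The multiplicative form, the 70-point baseline, and the coefficient values below are assumptions for illustration only, not the paper’s fitted vector-valued coefficients or its variational inference.

```python
import math
import random

def expected_points(attack, defence, base=70.0):
    """Mean score for an attack facing a defence; multiplicative form,
    as in classical soccer Poisson models (values are illustrative)."""
    return base * attack / defence

def poisson_draw(lam, rng):
    """Knuth's product-of-uniforms Poisson sampler; fine for lam < ~500."""
    threshold = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

def win_prob(att_a, def_a, att_b, def_b, n_sims=2000, seed=0):
    """P(team A outscores team B) under independent Poisson scores."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        a = poisson_draw(expected_points(att_a, def_b), rng)
        b = poisson_draw(expected_points(att_b, def_a), rng)
        wins += a > b
    return wins / n_sims
```

In the paper the coefficients are learned from data; here a stronger attack (1.1 vs. 0.9) simply shifts the Skellam-like score difference in team A’s favour.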
A new approach to bracket prediction in the NCAA Men’s Basketball Tournament based on a dual-proportion likelihood
Gupta, Ajay Andrew
\cite{gupta2015new}
Tags: \#ncaa \#prediction
\begin{enumerate}
\item This paper reviews relevant previous research, and then introduces a rating system for teams using game data from that season prior to the tournament. The ratings from this system are used within a novel, four-predictor probability model to produce sets of bracket predictions for each tournament from 2009 to 2014. This dual-proportion probability model is built around the constraint of two teams with a combined 100\% probability of winning a given game.
\item This paper also performs Monte Carlo simulation to investigate whether modifications are necessary from an expected value-based prediction system such as the one introduced in the paper, in order to have the maximum bracket score within a defined group. The findings are that selecting one high-probability ``upset'' team for one to three late-round games is likely to outperform other strategies, including one with no modifications to the expected value, as long as the upset choice overlaps a large minority of competing brackets while leaving the bracket some distinguishing characteristics in late rounds.
\end{enumerate}
Comparing Team Selection and Seeding for the 2011 NCAA Men's Basketball Tournament
Gray, Kathy L and Schwertman, Neil C
\cite{gray2012comparing}
Tags: \#ncaa \#seeding \#selection
\begin{enumerate}
\item In this paper, we propose an innovative heuristic measure of team success, and we investigate how well the NCAA committee seeding compares to the computer-based placements by Sagarin and the rating percentage index (RPI). For the 2011 tournament, the NCAA committee selection process performed better than those based solely on the computer methods in determining tournament success.
\item This analysis of 2011 tournament data shows that the Selection Committee in 2011 was quite effective in the seeding of the tournament and that basing the seeding entirely on computer ratings was not an advantage since, of the three placements, the NCAA seeding had the strongest correlation with team success points and the fewest upsets. The incorporation of some subjectivity into their seeding appears to be advantageous. The committee is able to make an adjustment, for example, if a key player is injured or unable to play for some reason. From this analysis of the data for the 2011 tournament, there is ample evidence that the NCAA selection committee was proficient in the selection and seeding of the tournament teams.
\end{enumerate}
Joel Sokol's group on LRMC method: \\
\cite{brown2010improved} \\
\cite{kvam2006logistic} \\
\cite{brown2012insights} \\
\subsection{Uncategorized}
Hal Stern in JASA on Brownian motion model for progress of sports scores: \\
\cite{stern1994brownian} \\
Random walk picture of basketball scoring \\
Gabel, Alan and Redner, Sidney \\
\cite{gabel2012random}\\
\begin{enumerate}
\item We present evidence, based on play-by-play data from all 6087 games from the 2006/07– 2009/10 seasons of the National Basketball Association (NBA), that basketball scoring is well described by a continuous-time anti-persistent random walk. The time intervals between successive scoring events follow an exponential distribution, with essentially no memory between different scoring intervals.
\item By including the heterogeneity of team strengths, we build a detailed computational random-walk model that accounts for a variety of statistical properties of scoring in basketball games, such as the distribution of the score difference between game opponents, the fraction of game time that one team is in the lead, the number of lead changes in each game, and the season win/loss records of each team.
\item In this work, we focus on the statistical properties of scoring during each basketball game. The scoring data are consistent with the scoring rate being described by a continuous-time Poisson process. Consequently, apparent scoring bursts or scoring droughts arise from Poisson statistics rather than from a temporally correlated process.
\item Our main hypothesis is that the evolution of the score difference between two competing teams can be accounted by a continuous-time random walk.
\item However, this competitive rat race largely eliminates systematic advantages between teams, so that all that remains, from a competitive standpoint, are small surges and ebbs in performance that arise from the underlying stochasticity of the game.
\end{enumerate}
Importance of Free-Throws at Various Stages of Basketball Games \\
\cite{kozar1994importance}
\begin{enumerate}
\item Basketball coaches often refer to their teams' success or failure as a product of their players' performances at the free-throw line. In the present study, play-by-play records of 490 NCAA Division I men's basketball games were analyzed to assess the percentage of points scored from free-throws at various stages of the games.
\item About 20\% of all points were scored from free-throws. Free-throws comprised a significantly higher percentage of total points scored during the last 5 minutes than the first 35 minutes of the game for both winning and losing teams. Also, in the last 5 minutes of 246 games decided by 9 points or less and 244 decided by 10 points or more, winners scored a significantly higher percentage of points from free-throws than did losers. Suggestions for structuring practice conditions are discussed.
\end{enumerate}
Dynamic modeling of performance in basketball \\
\cite{malarranha2013dynamic} \\
\begin{enumerate}
\item The aim of this study was to identify the intra-game variation from four performance indicators that determine the outcome of basketball games, controlling for quality of opposition. All seventy-four games of the Basketball World Championship (Turkey 2010) were analyzed to calculate the performance indicators in eight 5-minute periods. A repeated measures ANOVA was performed to identify differences in time and game outcome for each performance indicator. The quality of opposition was included in the models as covariable.
\item The effective field goal percentage ($F=14.0$, $p<.001$, $\eta^2=.09$) influenced the game outcome throughout the game, while the offensive rebounds percentage ($F=7.6$, $p<.05$, $\eta^2=.05$) had greater influence in the second half. The offensive ($F=6.3$, $p<.05$, $\eta^2=.04$) and defensive ($F=12.0$, $p<.001$, $\eta^2=.08$) ratings also influenced the outcome of the games. These results may allow coaches to have more accurate information aimed at preparing their teams for the competition.
\end{enumerate}
Modeling the offensive-defensive interaction and resulting outcomes in basketball \\
Lamas, Leonardo and Santana, Felipe and Heiner, Matthew and Ugrinowitsch, Carlos and Fellingham, Gilbert \\
\cite{lamas2015modeling} \\
\subsubsection{Sampaio section}
Discriminative power of basketball game-related statistics by level of competition and sex \\
\cite{sampaio2004discriminative} \\
Discriminative game-related statistics between basketball starters and nonstarters when related to team quality and game outcome \\
\cite{sampaio2006discriminative} \\
Discriminant analysis of game-related statistics between basketball guards, forwards and centres in three professional leagues \\
\cite{sampaio2006discriminant} \\
Statistical analyses of basketball team performance: understanding teams’ wins and losses according to a different index of ball possessions \\
\cite{sampaio2003statistical}
Game related statistics which discriminate between winning and losing under-16 male basketball games \\
\cite{lorenzo2010game} \\
Game location influences basketball players' performance across playing positions \\
\cite{sampaio2008game} \\
Identifying basketball performance indicators in regular season and playoff games \\
\cite{garcia2013identifying} \\
\subsection{General references}
Basketball reference: \cite{bballref} \newline
Moneyball: \cite{lewis2004moneyball} \newline
NBA website: \cite{nba_glossary} \newline
Spatial statistics: \cite{diggle2013statistical} and \cite{ripley2005spatial} \newline
Statistics for spatial data: \cite{cressie93} \newline
Efron and Morris on Stein's estimator? \cite{efron1975data} \\
Bill James?: \cite{james1984the-bill} and \cite{james2010the-new-bill}
I think Cleaning the Glass would be a good reference, especially the stats component of its site. The review article should mention somewhere that the growth of basketball analytics in academic articles (and its growth in industry) occurred simultaneously with its use on basketball stats and fan sites. Sites like Cleaning the Glass, HoopsHype, etc., which mention and popularize fancier statistics, contributed to the growth of analytics in basketball.
Not sure where or how best to include that in the paper, but it seemed at least worth writing down somewhere.
\subsection{Hot hand?}
\cite{hothand93online}
\bibliographystyle{plain}
\section{Tags}
Data Tags: \\
\#spatial \#tracking \#college \#nba \#intl \#boxscore \#pbp (play-by-play) \#longitudinal \#timeseries \\
Goal Tags: \\
\#playereval \#defense \#lineup \#team \#projection \#behavioral \#strategy \#rest \#health \#injury \#winprob \#prediction \\
Miscellaneous: \\
\#clustering \#coaching \#management \#refs \#gametheory \#intro \#background \\
\section{Summaries}
\subsection{Introduction}
Kubatko et al. “A starting point for analyzing basketball statistics.” \cite{kubatko}
\newline
\citefield{kubatko}{title}
Tags: \#intro \#background \#nba \#boxscore
\begin{enumerate}
\item Basics of the analysis of basketball. Provide a common starting point for future research in basketball
\item Define a general formulation for how to estimate the number of possessions.
\item Provide a common basis for future possession estimation
\item Also discuss other concepts and methods: per-minute statistics, pace, four factors, etc.
\item Contain other breakdowns such as rebound rate, plays, etc
\end{enumerate}
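For illustration, the possession estimate commonly associated with this line of work takes the form $\mathrm{POSS} \approx \mathrm{FGA} - \mathrm{ORB} + \mathrm{TOV} + 0.44\,\mathrm{FTA}$; the coefficient $0.44$ for free-throw trips is the conventional value, not a number taken from these notes. A minimal sketch:

```python
def possessions(fga, fta, orb, tov, ft_coef=0.44):
    """Box-score possession estimate: FGA - ORB + TOV + 0.44 * FTA.

    ft_coef = 0.44 is the conventional share of free throws that end a
    possession; Kubatko et al. discuss how such coefficients are calibrated.
    """
    return fga - orb + tov + ft_coef * fta
```

With 80 field-goal attempts, 25 free-throw attempts, 10 offensive rebounds, and 15 turnovers, this gives 96 estimated possessions.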
\subsection{Networks/Player performance}
Evaluating Basketball Player Performance via Statistical Network Modeling
Piette, Pham, Anand
\cite{piette2011evaluating}
Tags: \#playereval \#lineup \#nba \#team
\begin{enumerate}
\item Players are nodes, edges are if they played together in the same five-man unit
\item Adapting a network-based algorithm to estimate centrality scores
\end{enumerate}
Quantifying shot quality in the NBA
Chang, Yu-Han and Maheswaran, Rajiv and Su, Jeff and Kwok, Sheldon and Levy, Tal and Wexler, Adam and Squire, Kevin
Tags: \#playereval \#team \#shooting \#spatial \#nba
\cite{chang2014quantifying}
\begin{enumerate}
\item Separately characterize the difficulty of shots and the ability to make them
\item ESQ (Effective Shot Quality) and EFG+ (EFG - ESQ)
\item EFG+ is shooting ability above expectations
\item Addresses problem of confounding two separate attributes that EFG encounters
\begin{itemize}
\item quality of a shot and the ability to make that shot
\end{itemize}
\end{enumerate}
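A minimal sketch of the quantities involved: eFG\% is the standard $(\mathrm{FGM} + 0.5\cdot\mathrm{3PM})/\mathrm{FGA}$, and EFG+ as described above is shooting above expectation (the function names here are illustrative, not from the paper):

```python
def effective_fg(fgm, fg3m, fga):
    """Effective field-goal percentage: (FGM + 0.5 * 3PM) / FGA."""
    return (fgm + 0.5 * fg3m) / fga

def efg_plus(actual_efg, expected_shot_quality):
    """EFG+ as described in the summary: EFG minus ESQ, i.e. shooting
    ability above what the difficulty of the attempted shots predicts."""
    return actual_efg - expected_shot_quality
```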
\bibliographystyle{plain}
\section{INTRODUCTION}
Please begin the main text of your article here.
\section{FIRST-LEVEL HEADING}
This is dummy text.
\subsection{Second-Level Heading}
This is dummy text. This is dummy text. This is dummy text. This is dummy text.
\subsubsection{Third-Level Heading}
This is dummy text. This is dummy text. This is dummy text. This is dummy text.
\paragraph{Fourth-Level Heading} Fourth-level headings are placed as part of the paragraph.
\section{ELEMENTS\ OF\ THE\ MANUSCRIPT}
\subsection{Figures}Figures should be cited in the main text in chronological order. This is dummy text with a citation to the first figure (\textbf{Figure \ref{fig1}}). Citations to \textbf{Figure \ref{fig1}} (and other figures) will be bold.
\begin{figure}[h]
\includegraphics[width=3in]{SampleFigure}
\caption{Figure caption with descriptions of parts a and b}
\label{fig1}
\end{figure}
\subsection{Tables} Tables should also be cited in the main text in chronological order (\textbf {Table \ref{tab1}}).
\begin{table}[h]
\tabcolsep7.5pt
\caption{Table caption}
\label{tab1}
\begin{center}
\begin{tabular}{@{}l|c|c|c|c@{}}
\hline
Head 1 &&&&Head 5\\
{(}units)$^{\rm a}$ &Head 2 &Head 3 &Head 4 &{(}units)\\
\hline
Column 1 &Column 2 &Column3$^{\rm b}$ &Column4 &Column\\
Column 1 &Column 2 &Column3 &Column4 &Column\\
Column 1 &Column 2 &Column3 &Column4 &Column\\
Column 1 &Column 2 &Column3 &Column4 &Column\\
\hline
\end{tabular}
\end{center}
\begin{tabnote}
$^{\rm a}$Table footnote; $^{\rm b}$second table footnote.
\end{tabnote}
\end{table}
\subsection{Lists and Extracts} Here is an example of a numbered list:
\begin{enumerate}
\item List entry number 1,
\item List entry number 2,
\item List entry number 3,\item List entry number 4, and
\item List entry number 5.
\end{enumerate}
Here is an example of an extract.
\begin{extract}
This is an example text of quote or extract.
This is an example text of quote or extract.
\end{extract}
\subsection{Sidebars and Margin Notes}
\begin{marginnote}[]
\entry{Term A}{definition}
\entry{Term B}{definition}
\entry{Term C}{definition}
\end{marginnote}
\begin{textbox}[h]\section{SIDEBARS}
Sidebar text goes here.
\subsection{Sidebar Second-Level Heading}
More text goes here.\subsubsection{Sidebar third-level heading}
Text goes here.\end{textbox}
\subsection{Equations}
\begin{equation}
a = b \ {\rm ((Single\ Equation\ Numbered))}
\end{equation}
Equations can also be multiple lines as shown in Equations 2 and 3.
\begin{eqnarray}
c = 0 \ {\rm ((Multiple\ Lines, \ Numbered))}\\
ac = 0 \ {\rm ((Multiple \ Lines, \ Numbered))}
\end{eqnarray}
\begin{summary}[SUMMARY POINTS]
\begin{enumerate}
\item Summary point 1. These should be full sentences.
\item Summary point 2. These should be full sentences.
\item Summary point 3. These should be full sentences.
\item Summary point 4. These should be full sentences.
\end{enumerate}
\end{summary}
\begin{issues}[FUTURE ISSUES]
\begin{enumerate}
\item Future issue 1. These should be full sentences.
\item Future issue 2. These should be full sentences.
\item Future issue 3. These should be full sentences.
\item Future issue 4. These should be full sentences.
\end{enumerate}
\end{issues}
\section*{DISCLOSURE STATEMENT}
If the authors have nothing to disclose, the following statement will be used: The authors are not aware of any affiliations, memberships, funding, or financial holdings that
might be perceived as affecting the objectivity of this review.
\section*{ACKNOWLEDGMENTS}
Acknowledgements, general annotations, funding.
\section*{LITERATURE\ CITED}
\noindent
Please see the Style Guide document for instructions on preparing your Literature Cited.
The citations should be listed in alphabetical order, with titles. For example:
\begin{verbatim}
\section{Introduction}
\label{sec:intro}
Consider the following \emph{edge orientation} problem: we are given a set
$V$ of $n$ nodes, and undirected edges arrive online one-by-one. Upon
arrival of an edge $\{u,v\}$, it has to be oriented as either $u \to v$
or $v \to u$, immediately and irrevocably. The goal is to minimize the
\emph{discrepancy} of this orientation at any time $t\in [T]$ during the arrival
process, i.e., the maximum imbalance
between the in-degree and out-degree of any node. Formally, if we let
$\bm{\chi}^t$ denote the orientation at time $t$ and $\delta_t^-(v)$ (resp.\ $\delta_t^+(v)$) the number of in-edges (resp.\ out-edges) incident to $v$ in $\bm{\chi}^t$, then we want to minimize
\[ \max_t \text{disc}(\bm{\chi}^t) := \max_t \max_v | \delta_t^-(v) - \delta_t^+(v) |. \]
If the entire sequence of edges is known up-front, one can use a
simple cycle-and-path-peeling argument to show that any set of
edges admits an orientation with discrepancy at most $1$. The main focus of this work is understanding how much is lost due to uncertainty, since we have no knowledge of future arrivals when we irrevocably orient an edge.
This problem was proposed by Ajtai et al.~\cite{AANRSW-Journal98} as a special
case of the \emph{carpooling problem} where hyperedges arrive online,
each representing a carpool where one person must be designated as a
driver. The ``fair share'' of driving for person $i$ can be defined as
$\sum_{e: i \in e} 1/|e|$, and we would like each person to drive
approximately this many times. In the case of graphs where each
carpool is of size $|e| = 2$, this carpooling problem is easily
transformed into the edge-orientation problem.
Ajtai et al.\ showed that while deterministic algorithms cannot have
an $o(n)$ discrepancy, they gave a randomized ``local
greedy'' algorithm which has an expected discrepancy (for any $T \geq 1$) of $O(\sqrt{n \log n})$ for any online input sequence of $T$ arrivals. Indeed, note that the discrepancy bound is independent of the length of the sequence $T$, and depends only
on the number of nodes, thus giving a non-trivial improvement over the naive random assignment, which will incur a discrepancy of $O(\sqrt{T \log n})$. Intriguingly, the lower bound they show for online algorithms is only $\Omega((\log n)^{1/3})$---leaving a large gap between the upper and
lower bounds.
Given its apparent difficulty in the adversarial online model, Ajtai et al.\ proposed a stochastic model, where each edge is an
independent draw from some underlying probability distribution over
pairs of vertices. They considered the uniform distribution, which
is the same as presenting a uniformly random edge of the complete
graph at each time. In this special case, they showed that the greedy
algorithm (which orients each edge towards the endpoint with lower
in-degree minus out-degree) has expected discrepancy
$\Theta(\log\!\log n)$. Their analysis crucially relies on the
structure and symmetry of the complete graph.
In this paper, we consider this stochastic version of the problem for general graphs:
i.e., given an arbitrary simple graph $G$, the online input is a sequence of
edges chosen independently and uniformly at random (with replacement) from the edges of
this graph $G$\footnote{It is possible to extend our results, by losing a $\log T$ factor, to edge-weighted distributions where an edge is drawn i.i.d. with probability proportional to its weight. Since this extension
uses standard ideas like bucketing edges with similar weights, we restrict our attention to arrivals from a graph $G$ for simplicity.}.
Our main result is the following:
\begin{theorem}[Main Theorem] \label{thm:final}
There is an efficient algorithm for the edge-orientation problem that
maintains, w.h.p., a maximum discrepancy of $O(\poly\log (nT))$ on input sequences formed by i.i.d.\ draws from the edges of a given graph $G$.
\end{theorem}
\subsection{Our Techniques}
Let us fix some notation. Given a (multi)graph $G = (V,E)$ with
$|V| = n$, the algorithm is presented with a vector $v^t$ at each time
as follows. A uniformly random edge $(u,v) \in G$ is sampled, and the
associated characteristic vector $v^t = \mathbf{e}_u - \mathbf{e}_v$
is presented to the algorithm, where $\mathbf{e}_u \in \mathbb{R}^n$
has all zeros except index $u$ being $1$. The algorithm must
immediately sign $v^t$ with $\chi^t \in \{-1, 1\}$, to keep
the discrepancy bounded at all times $t$. Here the discrepancy
of node $u$ at time $t$ is the $u^{th}$ entry of the vector
$\sum_{s \leq t} \chi^{s} v^{s}$ (which could be negative), and the discrepancy of the
algorithm is the maximum absolute discrepancy over all vertices, i.e.,
$ \Big\| \sum_{s \leq t} \chi^{s} v^{s} \Big\|_{\infty}$ .
A natural algorithm
is to pick a uniformly random orientation for each arriving edge. This
maintains zero expected discrepancy at each node. However, the large
variance may cause the maximum discrepancy over nodes to be as large
as $\Omega(\sqrt{T})$, where $T$ is the total number of edges (which is
the same as the number of time-steps). For example, this
happens even on $T$ parallel edges between two nodes. In this case, however, the \emph{greedy
algorithm}, which orients each edge from the vertex of larger discrepancy to that of smaller discrepancy, works well. Indeed, it is not known to perform
badly on stochastic instances. (Since it is a
deterministic algorithm, it can perform poorly on adversarial inputs due to the known $o(n)$ lower bounds~\cite{AANRSW-Journal98}.)
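As a quick illustration of this contrast, one can simulate the two-node parallel-edge instance (a hypothetical sketch, not an experiment from the paper): random orientation lets the discrepancy drift like an unbiased random walk, while greedy keeps it at most $1$.

```python
import random

def simulate_two_nodes(T, greedy, seed=0):
    """Max discrepancy when orienting T parallel edges between nodes 0 and 1."""
    rng = random.Random(seed)
    d = [0, 0]                    # in-degree minus out-degree of each node
    worst = 0
    for _ in range(T):
        if greedy:
            tail = 0 if d[0] >= d[1] else 1   # orient away from larger disc.
        else:
            tail = rng.randrange(2)           # uniformly random orientation
        head = 1 - tail
        d[tail] -= 1
        d[head] += 1
        worst = max(worst, abs(d[0]), abs(d[1]))
    return worst
```

On this instance greedy alternates between the states $(0,0)$ and $(-1,+1)$, so its maximum discrepancy is $1$, whereas the random rule typically reaches a maximum on the order of $\sqrt{T}$.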
Building on the work of Ajtai et al., who consider stochastic arrivals on complete graphs, the first step towards our overall algorithm is to consider the problem on \emph{expander graphs}. At a
high level, one hurdle to achieving low discrepancy in the stochastic
case is that we reach states where both endpoints of a randomly-chosen edge
already have equally high discrepancy. Then, no matter how we orient the edge,
we increase the maximum discrepancy. But this should not happen in
expander graphs: if $S$ is the set of ``high'' discrepancy vertices,
then the expansion of the graph implies that $|\partial S|$ must be a
large fraction of the total number of edges incident to
$S$. Therefore, intuitively, we have a good chance of reducing the
discrepancy if we get edges that go from $S$ to low-discrepancy nodes.
To make this idea formal, we relate the greedy process on expander graphs
$G$ to the so-called \OPBP over an \emph{easier} arrival sequence, where the end-points of a new edge are chosen from a \emph{product distribution} in which the probability of choosing a vertex is proportional to its
degree in $G$. However, in the \OPBP\footnote{The name \OPBP stems from an analogous load-balancing (balls-and-bins) setting~\cite{PTW-Journal15}, where this process would be like the $(1+\beta)$-fractional version of the power-of-two-choices process.}, the algorithm orients a new edge greedily only with probability $\beta$, for some small value of $\beta$, and does a random orientation with the remaining probability $(1-\beta)$.
Indeed, we compare these two processes by showing that (a)~the expected increase of a
natural potential $\Phi := \sum_v \cosh(\lambda \, {\rm discrepancy}(v))$---which can
be thought of as a soft-max function---is lower for the greedy algorithm on expanders when compared to the \OPBP on the product distribution, and (b)~the same potential increases very slowly
(if at all) on the product distribution. A similar idea was used by Peres et al.~\cite{PTW-Journal15} for a
related stochastic load balancing problem; however, many of the technical details are different.
The second component of the algorithm is to decompose a general
graph into expanders. This uses the (by-now commonly used) idea of
expander decompositions. Loosely speaking, this says that the edges of
any graph can be decomposed into some number of smaller graphs (each
being defined on some subset of vertices), such that (a)~each of these
graphs is an expander, and (b)~each vertex appears in only a
poly-logarithmic number of these expanders. Our arguments for
expanders require certain weak-regularity properties---namely the
degrees of vertices should not be too small compared to the average
degree---and hence some care is required in obtaining decompositions
into such expanders. These details appear in \S\ref{sec:expand-decomp}.
Our overall algorithm can then be summarized in Algorithm~\ref{alg:expander-greedy}.
\begin{algorithm}
\caption{{\algo} (graph $G=(V,E)$)}
\label{alg:expander-greedy}
\begin{algorithmic}[1]
\State run the expander-decomposition algorithm in~\Cref{thm:regular-expanders} (in~\Cref{sec:tieup}) on $G$ to obtain a collection ${\cal P} = \{G_1, G_2, \ldots, G_k\}$ of edge-disjoint expander graphs.
\State initialize ${\cal H} = \{H_1, H_2, \ldots, H_k\}$ to be a collection of empty graphs, where $H_i$ will be the directed multi-graph consisting of all edges that have arrived corresponding to base graph $G_i$, along with the orientations assigned by the algorithm upon arrival.
\For {each new edge $e \equiv \{u,v\}$ that arrives at time-step $t$} \label{alg:arrival}
\State let $i$ denote the index such that $e \in G_i$ according to our decomposition. \label{alg:identify}
\State add $e$ to $H_i$, and orient $e$ in a greedy manner w.r.t.\ $H_i$, i.e., from $u$ to $v$ if ${\rm disc}_{H_i}(u) \geq {\rm disc}_{H_i}(v)$, where ${\rm disc}_{H_i}(w) = \delta_{H_i}^{{\rm in}}(w) - \delta_{H_i}^{{\rm out}}(w)$ is the in-degree minus out-degree of vertex $w$ in the current sub-graph $H_i$ maintained by the algorithm. \label{alg:greedy}
\EndFor
\end{algorithmic}
\end{algorithm}
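A compact Python sketch of Algorithm~\ref{alg:expander-greedy}, assuming the expander decomposition is already given as a map from each edge to the index of its piece (the decomposition routine itself is not shown):

```python
from collections import defaultdict

class ExpanderGreedy:
    """Sketch of the expander-greedy algorithm: greedy orientation within
    each expander piece of a given decomposition.

    `pieces` maps each undirected edge (as a frozenset) to the index i of
    the expander G_i containing it; producing this map is assumed done.
    """

    def __init__(self, pieces):
        self.pieces = pieces
        # per-piece discrepancies: i -> vertex -> (in-degree minus out-degree)
        self.disc = defaultdict(lambda: defaultdict(int))

    def orient(self, u, v):
        """Orient the arriving edge {u, v}; returns it as (tail, head)."""
        i = self.pieces[frozenset((u, v))]
        d = self.disc[i]
        tail, head = (u, v) if d[u] >= d[v] else (v, u)
        d[tail] -= 1                  # tail gains an out-edge
        d[head] += 1                  # head gains an in-edge
        return (tail, head)
```

For example, two consecutive arrivals of the edge $\{0,1\}$ are oriented in opposite directions, returning the piece's discrepancies to zero.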
\subsection{Related Work}
\label{sec:related-work}
The study of discrepancy problems has a long history; see the
books~\cite{Matousek-Book09,Chazelle-Book01} for details on the
classical work.
The problem of online discrepancy minimization was studied by
Spencer~\cite{Spencer77}, who showed an $\Omega(\sqrt{T})$ lower bound
for adaptive adversarial arrivals. More refined lower bounds were given
by B\'ar\'any~\cite{Barany79};
see~\cite{BJSS-STOC20} for many other references.
Much more recently, Bansal and
Spencer~\cite{BS-arXiv19} and Bansal et al.~\cite{BJSS-STOC20}
consider a more general vector-balancing problem, where each request
is a vector $v^t \in \mathbb{R}^n$ with $\|v^t \|_{\infty} \leq1$, and
the goal is to assign a sign $\chi^t \in \{-1,1\}$ to each vector to
minimize $\| \sum_t \chi^t v^t\|_\infty$, i.e., the largest coordinate of
the signed sum. Imagining each edge $e_t = \{u,v\}$ to be the vector
$\frac{1}{\sqrt{2}}(\mathbf{e}_u - \mathbf{e}_v)$ (where this initial
sign is chosen arbitrarily) captures the edge-orientation problem up
to constant factors. Bansal et al.\ gave an
$O(n^2 \log (nT))$-discrepancy algorithm for the natural stochastic
version of the problem under general distributions.
For some special geometric problems, they gave an algorithm that
maintains $\poly(s, \log T, \log n)$ discrepancy for sparse vectors that have only $s$ non-zero coordinates. These improve on the work of Jiang et
al.~\cite{JKS-arXiv19}, who give a sub-polynomial discrepancy coloring
for online arrivals of points on a line. A related variant of these
geometric problems
was also studied in Dwivedi et al.~\cite{DFGR19}.
Very recently, an independent and exciting work of Alweiss, Liu, and
Sawhney~\cite{ALS-arXiv20} gave a randomized algorithm that maintains
a discrepancy of $O(\log (nT)/\delta)$ for any input sequence chosen
by an oblivious adversary with probability $1-\delta$, even for the
more general vector-balancing problem for vectors of unit Euclidean
norm (the so-called Koml\'os setting). Instead of a potential-based
analysis like ours, they directly argue why a carefully chosen
randomized greedy algorithm ensures w.h.p. that the
discrepancy vector is always sub-Gaussian. A concurrent work of Bansal et al.~\cite{BJMSS-arXiv20} also obtains similar results for i.i.d. arrivals, but they use a very different potential than our expander-decomposition approach. It is an interesting open question to extend our approach to hypergraphs and re-derive their results.
\subsection{Notation} \label{sec:notation}
We now define some graph-theoretic terms that are useful for the remainder of the paper.
\begin{defn}[Volume and $\alpha$-expansion]
Given any graph $G=(V,E)$ and a set $S \subseteq V$, the \emph{volume} of $S$ is
defined to be $\vol(S) := \sum_{v \in S} \text{degree}(v)$. We say $G$ is an \emph{$\alpha$-expander} if
\[ \min_{\emptyset \neq S \subsetneq V} \frac{|E(S, V \setminus S)|}{\min\{ \vol(S), \vol(V \setminus S) \}} \geq \alpha.
\]
\end{defn}
We will also need the following definition of ``weakly-regular''
graphs, which are graphs where every vertex has degree \emph{at least} a constant factor of the average degree. Note that the \emph{maximum} degree can be arbitrarily larger than the average degree.
\begin{defn}[$\gamma$-weakly-regular] For $\gamma \in [0,1]$, a graph $G=(V,E)$ is called \emph{$\gamma$-weakly-regular} if every vertex $v\in V$ has degree at least $\gamma \cdot {\sum_{u\in V} \text{degree}(u)}/{|V|}$.
\end{defn}
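On small graphs, both quantities can be checked by brute force; the sketch below (exponential in $|V|$, intended only as a sanity check on the definitions) computes the expansion $\alpha$ and the largest $\gamma$ for which a graph is $\gamma$-weakly-regular:

```python
from itertools import combinations

def degrees(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def expansion(n, edges):
    """Brute-force alpha: min over nonempty proper subsets S of
    cut(S) / min(vol(S), vol(V \\ S))."""
    deg = degrees(n, edges)
    total = sum(deg)
    best = float("inf")
    for k in range(1, n):
        for S in combinations(range(n), k):
            S = set(S)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            vol_s = sum(deg[v] for v in S)
            denom = min(vol_s, total - vol_s)
            if denom:
                best = min(best, cut / denom)
    return best

def weak_regularity(n, edges):
    """Largest gamma such that the graph is gamma-weakly-regular:
    (minimum degree) / (average degree)."""
    deg = degrees(n, edges)
    return min(deg) / (sum(deg) / n)
```

For instance, $K_4$ has expansion $2/3$ (attained by balanced cuts) and is $1$-weakly-regular, while a star on four vertices is only $2/3$-weakly-regular.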
\begin{defn}[Discrepancy Vector]
Given any directed graph $H=(V,A)$ (representing all the oriented edges until any particular time-step), let $\bd \in \integers^{|V|}$ represent the discrepancy vector of the current graph, i.e., the $v^{{\rm th}}$ entry of $\bd$, denoted by $d_v$, is the difference between the number of in-edges incident at $v$ and the number of out-edges incident at $v$ in $H$.
\end{defn}
\section{The Greedy Algorithm on Expander Graphs}
In this section, we consider the special case when the
graph $G$ is an expander. More formally, we show that the greedy algorithm is actually good for such graphs.
\begin{defn}[Expander Greedy Process]
The greedy algorithm maintains a current discrepancy $d^t_v$ for each vertex $v$, which is the in-degree minus out-degree of that vertex among the previously arrived edges. Initially, $d^1_v = 0$ for every vertex $v$ at the beginning of time-step $1$. At each time~$t \geq 1$, a uniformly random edge $e \in G$ with end-points $\{u,v\}$ is presented to the algorithm; suppose w.l.o.g.\ $d^t_u \geq d^t_v$, i.e., $u$ has the larger discrepancy (ties broken arbitrarily). Then the algorithm orients the edge from $u$ to $v$. The discrepancies of $u$ and $v$ become $d^{t+1}_u = d^{t}_u -1$ and $d^{t+1}_v = d^{t}_v + 1$, and all other vertices' discrepancies are unchanged.
\end{defn}
\begin{theorem}
\label{thm:main}
Consider any $\gamma$-weakly-regular $\alpha$-expander $G$, and suppose edges arrive as independent samples from $G$ over a horizon of $T$ time-steps. Then the greedy algorithm maintains a discrepancy $d^t_v$ of $O(\log^5 (nT))$ for
every time $t$ in $[0\ldots T]$ and every vertex $v$, as long as
$\alpha \geq 6\lambda$ and $\gamma \geq \lambda^{1/4}$, where $\lambda = O(\log^{-4} (nT))$.
\end{theorem}
For the sake of concreteness, it might be instructive to assume $\alpha \approx \gamma \approx O(\frac{1}{\log n})$, which is roughly what we will obtain from our expander-decomposition process.
\subsection{Setting Up The Proof} \label{sec:setup}
Our main idea is to introduce \emph{another random process} called the \OPBP, show that it stochastically dominates the expander-greedy process in a certain manner, and then separately bound the behaviour of the \OPBP. Combining the two yields our overall analysis of the expander-greedy process.
To this end, we first define a random arrival sequence where the end-points of each new edge are actually sampled independently from a \emph{product distribution}.
\begin{defn}[Product Distribution]
Given a set $V$ of vertices with associated weights
$\{w_v \geq 0 \mid v \in V\}$, at each time $t$, we select two vertices $u,v$ as {\em two independent samples} from $V$, according to the distribution where any vertex $v \in V$
is chosen with probability $\frac{w_{v}}{\sum_{v' \in V} w_{v'}}$,
and the vector $v^t := \chi_u - \chi_v$ is presented to the
algorithm.
\end{defn}
We next define the \OPBP, which will be crucial for the analysis.
\begin{defn}[\OPBP on product distributions]
Consider a product distribution over a set of vertices $V$. When presented with a vector $v^t := \chi_u - \chi_v$ from this product distribution at time $t$, the \OPBP assigns a sign to the vector $v^t$ as follows:
with
probability $(1-\beta)$, it assigns it uniformly $\pm 1$, and only
with the remaining probability $\beta$ it uses the greedy algorithm
to sign this vector.
\end{defn}
Note that setting $\beta=1$ gives us back the greedy algorithm, and
$\beta=0$ gives an algorithm that assigns a random sign to each vector.
\begin{remark}
The original \OPBP was in fact introduced in~\cite{PTW-Journal15}, where Peres et al.\ analyzed a general load-balancing process over $n$ bins (corresponding to vertices), where balls arrive sequentially. Upon each arrival, the algorithm gets to sample a random edge from a $k$-regular expander\footnote{Their proof actually works for a slightly more general notion of expanders, which is still insufficient for our purpose.} $G$ over the bins, and places the ball in the lighter-loaded bin among the two end-points of the edge. They show that this process maintains a small maximum load, by relating it to an analogous \OPBP, where instead of sampling an edge from $G$, \emph{two bins} are chosen uniformly at random, and the algorithm places the ball into a random bin with probability $1-\beta$, and the lesser-loaded bin with probability $\beta$. Note that their analysis inherently assumed that the two vertices are sampled from
the uniform distribution where all weights $w_u$ are
equal. By considering arbitrary product
distributions, we are able to handle arbitrary graphs with
a non-trivial conductance, i.e., even those that \emph{do not} satisfy the $k$-regularity property.
This is crucial for us because the expander decomposition algorithms, which reduce general graphs to a
collection of expanders, do not output regular expanders.
\end{remark}
Our analysis will also involve a potential function (intuitively the soft-max of the vertex discrepancies) for both the expander-greedy process as well as the \OPBP.
\begin{defn}[Potential Function]
Given vertex discrepancies $\bd \in \integers^{|V|}$, define
\begin{gather}
\Phi(\bd) := \sum_v \cosh(\lambda d_v), \label{eq:pot}
\end{gather}
where $\lambda < 1$ is a suitable parameter to be optimized.
\end{defn}
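The soft-max behaviour of $\Phi$ can be seen numerically; the sketch below, with illustrative values, checks the standard sandwich $\cosh(\lambda \max_v |d_v|) \leq \Phi(\bd) \leq n \cdot \cosh(\lambda \max_v |d_v|)$, which is why bounding $\Phi$ bounds the maximum discrepancy:

```python
import math

def potential(disc, lam):
    """Soft-max potential Phi(d) = sum_v cosh(lam * d_v).

    cosh symmetrizes positive and negative discrepancies, and the largest
    |d_v| dominates the sum once lam * |d_v| is moderately large.
    """
    return sum(math.cosh(lam * d) for d in disc)
```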
Following many prior works, we use the hyperbolic cosine function to symmetrize for positive and negative discrepancy values. When $\bd$ is clear from the context, we will write $\Phi(\bd)$ as $\Phi$. We will also use $\bdt$ to refer to the discrepancy vector at time $t$, and $d^t_u$ to the discrepancy of $u$ at time $t$. We will often ignore the superscript $t$ if it is clear from the context.
We are now ready to define the appropriate parameters of the \OPBP. Indeed, given the expander-greedy process defined on graph $G$, we construct an associated \OPBP where for each vertex $v$, the probability of sampling any vertex in the product distribution is proportional to its degree in $G$, i.e., $w_v = {\rm degree}_G(v)$ for all $v \in V$. We also set the $\beta$ parameter equal to $\alpha$, the conductance of the graph $G$.
\subsection{One-Step Change in Potential} \label{sec:overview}
The main idea of the proof is to use a \emph{majorization argument} to argue that \emph{the expected one-step change} in potential
of the expander process can be upper bounded by that of the \OPBP, if the two processes start at the same discrepancy configuration $\bdt$.
Subsequently, we bound the one-step change for the \OPBP in \cref{sec:beta-process}.
To this end, consider a time-step $t$, where the current discrepancy vector of the expander process is $\bdt$.
Suppose the next edge in the expander process is $(i,j)$, where $d^t_i \geq
d^t_j$. Then the greedy algorithm will always
choose a sign such that $d_i$ decreases by $1$, and $d_j$ increases by
$1$. Indeed, this ensures the overall potential is non-increasing
unless $d_i = d_j$. More importantly, the potential term for other
vertices remains unchanged, and so we can express the expected change
in potential as having contributions from precisely two terms, one due
to $d_i \to d_i - 1$ (called the \emph{decrease term}), and denoted as
$\Delta_{-1}(t)$, and one due to $d_j \to d_j + 1$ (the \emph{increase term}), denoted as $\Delta_{+1}(t)$:
\begin{align}
\E_{(i,j) \sim G}[\Delta \Phi] &= \E_{(i,j) \sim G}\Big[\Phi(\bd^{t+1}) - \Phi(\bdt) \Big] \notag \\
& \hspace{-1cm} = \underbrace{\E_{(i,j) \sim G}\Big[\cosh(\lambda (d_i-1)) -
\cosh(\lambda d_i) \Big]}_{=: \Delta_{-1} (\bdt)} + \underbrace{\E_{(i,j) \sim G}
\Big[\cosh(\lambda (d_j+1)) -
\cosh(\lambda d_j)\Big]}_{=: \Delta_{+1} (\bdt)} . \notag
\end{align}
Now, consider the \OPBP on the vertex set $V$, where the product
distribution is given by weights $w_u = \deg(u)$ for each $u \in
V$, starting with the same discrepancy
vector $\bdt$ as the expander process at time $t$. Then, if
$u$ and $v$ are the two vertices sampled independently according to
the product distribution, then by its definition, the \OPBP signs this pair randomly with probability $(1-\beta)$, and greedily with probability $\beta$.
For the sake of analysis, we define two terms analogous to $\Delta_{-1}
(\bdt)$ and $\Delta_{+1} (\bdt)$ for the \OPBP. To
this end, let $i \in \{u,v\}$ denote the identity of the random vertex
to which the \OPBP assigns $+1$. Define
\begin{gather}
\widetilde{\Delta}_{+1} (\bdt) : =\E_{(u,v) \sim \bw \times \bw}
\Big[\cosh(\lambda (d_i+1)) - \cosh(\lambda (d_i))\Big], \label{eq:2}
\end{gather}
where $\bw \times \bw$ refers to two independent choices from the product distribution corresponding to $w$.
Similarly let $j \in \{u,v\}$ denote the identity of the random vertex
to which the \OPBP assigns $-1$, and define
\begin{gather}
\widetilde{\Delta}_{-1}(\bdt) :=
\E_{(u,v) \sim \bw \times \bw} \Big[\cosh(\lambda (d_j-1)) -
\cosh(\lambda (d_j))\Big]. \label{eq:3}
\end{gather}
In what follows, we bound $\Delta_{-1} (\bdt) \leq \widetilde{\Delta}_{-1} (\bdt)$ through a coupling argument, and similarly bound $\Delta_{+1} (\bdt) \leq \widetilde{\Delta}_{+1} (\bdt)$ using a separate coupling.
A subtlety: the expected one-step change in $\Phi$ in the
expander process precisely equals
$\Delta_{-1} (\bdt) + \Delta_{+1}(\bdt)$. However, if we define an
analogous potential for the \OPBP, then the one-step change in
potential there \emph{does not} equal the sum
$\widetilde{\Delta}_{-1} (\bdt) + \widetilde{\Delta}_{+1}
(\bdt)$. Indeed, since we sample $u$ and $v$ i.i.d.\ in the \OPBP,
it is possible that $u = v$ and therefore the one-step change in
potential is $0$, while the sum
$\widetilde{\Delta}_{-1} (\bdt) + \widetilde{\Delta}_{+1} (\bdt)$ will
be non-zero. Hence the following lemma does not bound the expected
potential change for the expander process by that for the \OPBP (both
starting from the same state), but by this surrogate
$\widetilde{\Delta}_{-1} (\bdt) + \widetilde{\Delta}_{+1} (\bdt)$, and it is this
surrogate sum that we bound in~\Cref{sec:beta-process}.
\subsection{The Coupling Argument} \label{sec:coupling}
We now show a coupling between the expander-greedy process and the \OPBP defined in~\Cref{sec:setup}, to bound the expected one-step change in potential for the expander process.
\begin{lemma}
\label{lem:coupling}
Given an $\alpha$-expander $G = (V,E)$, let $\bdt \equiv (d_v \, : v
\in V)$ denote the current discrepancies of the vertices at any time
step $t$ for the expander-greedy process. Consider a hypothetical \OPBP on vertex set $V$ with $\beta = \alpha$, the
weight of vertex $v \in V$ set to $w_{v} = \deg(v)$, and starting from the same discrepancy state $\bdt$. Then:
\begin{OneLiners}
\item[(a)] $\Delta_{-1} (\bdt) \leq \widetilde{\Delta}_{-1} (\bdt)$,
~~and~~ (b) $\Delta_{+1} (\bdt) \leq \widetilde{\Delta}_{+1} (\bdt)$.
\end{OneLiners}
Hence the expected one-step change in potential $\E[ \Phi(\bd^{t+1})
- \Phi(\bdt)] \leq \widetilde{\Delta}_{-1} (\bdt) +
\widetilde{\Delta}_{+1} (\bdt)$.
\end{lemma}
\begin{proof}
We start by renaming the vertices in $V$ such that $d_n \leq d_{n-1} \leq \ldots \leq d_1$. Suppose the next edge in the expander process corresponds to indices $i,j$ where $i<j$.
We prove the lemma statement by two separate coupling
arguments, which crucially depend on the following
claim. Intuitively, this claim shows that a $-1$ is more
likely to appear among the high discrepancy vertices of $G$ in
the expander process than the \OPBP (thereby having a lower
potential), and similarly a $+1$ is more likely to appear among
the low discrepancy vertices of $G$ in the expander process
than in the \OPBP. Peres et al.~\cite{PTW-Journal15} also prove
a similar claim for stochastic load balancing, but they
only consider uniform distributions.
\begin{claim} \label{cl:good-prefix}
For any $k \in [n]$, if $S_k$ denotes the set of vertices with
indices $k' \in [k]$ (the $k$ highest discrepancy vertices) and $T_k$ denotes $V\setminus S_k$, then
\begin{align*}
\Pr_{(i,j) \sim G}[-1 \in S_k] &\geq \Pr_{(u,v) \sim \bw \times \bw}[-1
\in S_k] \quad \text{and} \quad
\Pr_{(i,j) \sim G}[+1 \in T_k] &\geq \Pr_{(u,v) \sim \bw \times \bw}[+1 \in T_k] \, .
\end{align*}
Above, we abuse notation and use the terminology `$-1 \in S_k$' to denote that the vertex whose discrepancy decreases falls in the set $S_k$ in the corresponding process.
\end{claim}
\begin{proof}
Fix an index $k$, and let $\rho:= \frac{\vol(S_k)}{\vol(V)}$ be the
relative volume of $S_k$, i.e., the fraction of the total degree mass of $G$ contributed by the
$k$ vertices of highest discrepancy.
First we consider the \OPBP on $V$. With probability $(1-\beta)$, we assign the signs uniformly at random. Therefore, conditioned on this choice, a vertex in $S_k$ will get a $-1$ sign with probability
$$ \frac{1}{2} \cdot \Pr[u \in S_k] + \frac{1}{2} \Pr[v \in
S_k]~~ =~~ \frac{\vol(S_k)}{\vol(V)} ~~=~~ \rho,$$ where $u$ and $v$ denote the two vertices chosen by the \OPBP. With probability $\beta$, we use the greedy rule, and so $-1$ will appear on a vertex in $S_k$ iff at least one of the two chosen vertices lies in $S_k$. Putting it together, we get
\begin{align} \Pr_{(u,v) \sim \bw \times \bw}[-1 \in S_k]
&= (1-\beta)\cdot \frac{\vol(S_k)}{\vol(V)} + \beta \cdot
{\Pr_{(u,v) \sim \bw \times \bw} [\{u,v\} \cap S_k \neq \emptyset]} \notag \\
&= (1-\beta) \cdot \rho + \beta \cdot
\left(1 - (1 - \rho)^2 \right) ~~=~~ (1 + \beta - \beta \cdot \rho
) \cdot \rho. \label{eq:4}
\end{align}
Now we consider the expander process. A vertex in $S_k$ gets $-1$ iff the chosen edge has at least one endpoint in $S_k$. Therefore,
\begin{align*} &\Pr_{(i,j) \sim G}[-1 \in S_k] ~~ =~~ \Pr[i \in S_k] ~~ =~~ \frac{|E(S_k,S_k)| + |E(S_k, V\setminus S_k)|}{|E|} \\
& \quad = \frac{\big( 2|E(S_k,S_k)| + |E(S_k, V\setminus S_k)|\big) + |E(S_k, V\setminus S_k)|}{2|E|}
~~=~~ \frac{\vol(S_k) + |E(S_k, V\setminus S_k)|}{\vol(V)}.
\end{align*}
Recalling that $\beta = \alpha$, and that $G$ is an $\alpha$-expander, we consider two cases:
\textbf{Case 1}: If $\vol(S_k) \leq \vol(V\setminus S_k)$, we use
\begin{align*} \Pr_{(i,j) \sim G}[-1 \in S_k] &~~=~~ \frac{\vol(S_k) + |E(S_k, V\setminus S_k)|}{\vol(V)} \\
&~~\geq~~ (1+\alpha)\frac{\vol(S_k)}{\vol(V)} ~~=~~ (1+\beta)\rho \geq \Pr_{(u,v) \sim \bw \times \bw}[-1 \in S_k].
\end{align*}
\textbf{Case 2}: If $\vol(S_k) > \vol(V\setminus S_k)$, we use
\begin{align*}
\Pr_{(i,j) \sim G}[-1 \in S_k] &~~=~~ \frac{\vol(S_k) + |E(S_k, V\setminus S_k)|}{\vol(V)} ~~\geq~~ \frac{\vol(S_k) + \alpha \cdot \vol(V \setminus S_k)}{\vol(V)} \\
&~~\geq~~ \Big(1 + \beta \cdot \frac{\vol(V \setminus S_k)}{\vol(V)} \Big) \cdot \rho ~~=~~ \Pr_{(u,v) \sim \bw \times \bw}[-1 \in S_k],
\end{align*}
where the last equality uses~(\ref{eq:4}).
This completes the proof of $\Pr_{(i,j) \sim G}[-1 \in S_k] \geq \Pr_{(u,v) \sim \bw \times \bw}[-1 \in S_k]$.
One can similarly show $\Pr_{(i,j) \sim G}[+1 \in T_k] \geq \Pr_{(u,v) \sim \bw \times \bw}[+1 \in T_k] $, which completes the proof of the claim.
\end{proof}
\Cref{cl:good-prefix} shows that we can establish a coupling between the two
processes such that if $-1$ belongs to $S_k$ in \OPBP, then the same
happens in the expander process. In other words, there is a joint
sample space $\Omega$ such that for any outcome $\omega \in \Omega,$
if vertices $v_a$ and $v_b$ get sign $-1$ in the expander process and
the \OPBP respectively, then $a \leq b$.
Let $\bd$ and ${\widetilde \bd}$ denote the discrepancy vectors in the
expander process and the \OPBP after the $-1$ sign has been assigned,
respectively. Now, since both the processes start with the same discrepancy
vector $\bdt$, we see that for any fixed outcome $\omega \in \Omega,$
the vector ${\widetilde \bd}$ majorizes $\bd$ in the following sense.
\begin{defn}[Majorization] Let ${\bf a}$ and ${\bf b}$ be two real vectors of the same length $n$. Let $\overrightarrow{{\bf a}}$ and $\overrightarrow{{\bf b}}$ denote the vectors ${\bf a}$ and ${\bf b}$ with coordinates rearranged in descending order respectively. We say that ${\bf a}$ {\em majorizes} ${\bf b}$, written ${\bf a} \succeq {\bf b}$, if for all $i$, $1 \leq i \leq n$, we have $\sum_{j=1}^i \overrightarrow{{\bf a}}_j \geq \sum_{j=1}^i \overrightarrow{{\bf b}}_j. $
\end{defn}
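This definition is straightforward to state in code; the following checker (a hypothetical helper of ours, not part of the paper) tests the prefix-sum condition after sorting both vectors in descending order.

```python
def majorizes(a, b):
    """True iff vector a majorizes vector b: with both sorted in
    descending order, every prefix sum of a dominates the
    corresponding prefix sum of b."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    pa = pb = 0.0
    for x, y in zip(sorted(a, reverse=True), sorted(b, reverse=True)):
        pa += x
        pb += y
        if pa < pb:
            return False
    return True
```

For instance, $(3,1,0)$ majorizes $(2,1,1)$ but not conversely, matching the intuition that the former is "more spread out".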
One of the properties of majorization~\cite{Hardy} is that any convex and symmetric function (which $\Phi$ is) is monotone with respect to it; since ${\widetilde \bd} \succeq \bd$, we get $\Phi(\bd) \leq \Phi({\widetilde \bd})$.
Thus, for any fixed outcome $\omega$, the change in potential in the expander
process is at most that of the surrogate potential in the \OPBP. Since $\Delta_{-1}(\bdt)$ and
$\widetilde{\Delta}_{-1} (\bdt)$ are just the expected change of these
quantities in the two processes (due to assignment of -1 sign), the
first statement of the lemma follows. Using an almost identical proof, we can also show the second statement. (Note that we may need to redefine the coupling
between the two processes to ensure that if vertices $v_a, v_b$ get
sign $+1$ as above, then $b \leq a$.)
\end{proof}
\subsection{Analyzing One-Step $\Delta \Phi$ of the \OPBP} \label{sec:beta-process}
Finally we bound the one-step change in (surrogate) potential of the
\OPBP starting at discrepancy vector $\bdt$; recall the definitions of $\widetilde{\Delta}_{-1}(\bdt)$ and $\widetilde{\Delta}_{+1}(\bdt)$ from~\Cref{sec:overview}.
\begin{lemma} \label{lem:beta-process}
If $\Phi(\bdt) \leq (nT)^{10}$,
and if the weights $w_v$ are such that for all $v$, $\frac{w_v}{\sum_{v'} w_{v'}}\geq \frac{\gamma}{n}$ (i.e.,
the minimum weight is at least a $\gamma$ fraction of the average weight),
then we have that
\[ \widetilde{\Delta}_{-1}(\bdt) +
\widetilde{\Delta}_{+1}(\bdt) \leq O(1), \]
as long as $\beta \geq 6\sqrt{\lambda}$, $\gamma \geq 16 \lambda^{1/4}$, and $\lambda = O(\log^{-4} nT)$.
\end{lemma}
\begin{proof}
Let $u$ be an arbitrary vertex in $V$, and condition on the event that the first vertex chosen by the \OPBP is $u$. Then, we show that
\begin{gather*}
\E_{v \sim \bw} \Big[\cosh(\lambda (d_i-1)) -
\cosh(\lambda (d_i)) + \cosh(\lambda (d_j+1)) -
\cosh(\lambda (d_j)) \, \Big| \, u \textrm{ is sampled
first}\Big],
\end{gather*}
is $O(1)$ regardless of the choice of $u$, where $i$ denotes the
random vertex which is assigned $-1$ by the \OPBP,
and $j$ the random vertex which is assigned $+1$. The proof of the lemma then
follows by removing the conditioning on $u$.
Following~\cite{BS-arXiv19,BJSS-STOC20}, we use the first two terms of the Taylor expansion of $\cosh(\cdot)$ to upper bound difference terms of the form $\cosh(x+1) - \cosh(x)$ and $\cosh(x-1) - \cosh(x)$. To this end, note that, if $|\epsilon| \leq 1$ and $\lambda < 1$, we have that
\begin{align*}
\cosh(\lambda (x + \epsilon)) - \cosh(\lambda x) & \textstyle = \epsilon \lambda \sinh(\lambda x) + \frac{\epsilon^2}{2!} \lambda^2 \cosh(\lambda x) + \frac{\epsilon^3}{3!} \lambda^3 \sinh(\lambda x) + \ldots \\
& \leq \epsilon \lambda \sinh(\lambda x) + \epsilon^2 \lambda^2 \cosh(\lambda x).
\end{align*}
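As a quick numerical sanity check of this inequality (our own illustration, separate from the proof), one can compare the two sides on a grid of values:

```python
import math

def taylor_terms(x, eps, lam):
    """Return (actual, bound) for the inequality
    cosh(lam*(x+eps)) - cosh(lam*x)
        <= eps*lam*sinh(lam*x) + eps^2*lam^2*cosh(lam*x),
    which should hold whenever |eps| <= 1 and 0 < lam < 1."""
    actual = math.cosh(lam * (x + eps)) - math.cosh(lam * x)
    bound = (eps * lam * math.sinh(lam * x)
             + eps * eps * lam * lam * math.cosh(lam * x))
    return actual, bound
```

The check below allows a tiny relative tolerance for floating-point rounding.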
Using this, we proceed to bound the following quantity (by setting $\epsilon = -1$ and $1$ respectively):
\begin{align*}
&\E_{v \sim \bw} \Big[ \underbrace{- \lambda \big( \sinh(\lambda d_i) - \sinh (\lambda d_j) \big)}_{=: -L} + \underbrace{\lambda^2 \big( \cosh(\lambda d_i) + \cosh (\lambda d_j) \big)}_{=:Q} \, \Big| \, u \textrm{ is sampled first}\Big] .
\end{align*}
We refer to $L=\lambda \big( \sinh(\lambda d_i) - \sinh (\lambda d_j) \big)$ and $Q=\lambda^2 \big( \cosh(\lambda (d_i)) + \cosh (\lambda d_j) \big)$ as the \emph{linear} and \emph{quadratic} terms, since they arise from the first- and second-order derivatives in the Taylor expansion.
To further simplify our exposition, we define the following random variables:
\begin{OneLiners}
\item[(i)] $u_{>}$ is the identity of the vertex among $u,v$ with higher discrepancy, and $u_{<}$ is the other vertex. Hence we have that $d_{u_>} \geq d_{u_<}$.
\item[(ii)] $G$ denotes the random variable $\lambda \big( \sinh(\lambda d_{u_>}) - \sinh (\lambda d_{u_<}) \big)$, the analogue of $L$ if we always used the greedy signing (recall that the greedy algorithm would always decrease the larger discrepancy, whereas the \OPBP follows a uniformly random signing with probability $(1-\beta)$ and follows the greedy rule only with probability $\beta$).
\end{OneLiners}
Finally, for any vertex $w \in V$, we define $\danger(w) = \{ v : | d_w - d_v | < \frac{2}{\lambda}\}$ to be the set of vertices with discrepancy close to that of $w$, where the gains from the term corresponding to $\beta G$ are insufficient to compensate for the increase due to $Q$.
We are now ready to proceed with the proof. Firstly, note that, since the \OPBP follows the greedy algorithm with probability $\beta$ (independent of the choice of the sampled vertices $u$ and $v$), we have that
\begin{eqnarray}
\label{eq:usample}
\E_v[L \mid u \textrm{ is sampled first}] ~~=~~ (1-\beta) \cdot 0 + \beta \E_v[G \mid u \textrm{ is sampled first}].
\end{eqnarray}
Intuitively, the remainder of the proof proceeds as follows: suppose $d_{u_>}$ and $d_{u_<}$ are both non-negative (the intuition for the other cases is similar). Then, $Q$ is proportional to $\lambda^2 \cosh (\lambda d_{u_>})$. Now, if $d_{u_>} - d_{u_<}$ is sufficiently large, then $G$ is proportional to $\lambda \sinh (\lambda d_{u_>})$, which in turn is close to $\lambda \cosh (\lambda d_{u_>})$. As a result, we get that as long as $\lambda = O(\beta)$, the term $- \beta G + Q$ can be bounded by $0$ for each choice of $v$ such that $d_{u_>} - d_{u_<}$ is large.
However, what happens when $d_{u_>} - d_{u_<}$ is small, i.e., when $v$ falls in $\danger(u)$? Here, the $Q$ term is proportional to $\lambda^2 \cosh (\lambda d_{u})$, but the $G$ term might be close to $0$, and so we can't argue that $- \beta G + Q \leq O(1)$ in these events. Hence, we resort to an amortized analysis by showing that (i) when $v \notin \danger(u)$, $-\beta G$ not only compensates for $Q$, but in fact for $\frac{1}{\sqrt{\lambda}}Q \geq \frac{1}{\sqrt{\lambda}} \cdot \lambda^2 \cosh (\lambda d_{u})$, and (ii) the probability over a random choice of $v$ that $v \notin \danger(u)$ is at least $\sqrt{\lambda}$, provided $\Phi$ is bounded to begin with. The overall proof then follows from taking an average over all $v$.
Hence, in what follows, we will show that in expectation the magnitude of $\beta G$ can compensate for a suitably large multiple of $Q$ when $v \notin \danger(u)$.
\begin{claim} \label{cl:sinh-to-cosh}
For any fixed choice of vertices $u$ and $v$ such that $v
\notin \danger(u)$, we have $G := \lambda \big( \sinh(\lambda d_{u_>}) - \sinh (\lambda
d_{u_<}) \big) \geq \frac{\lambda}{3} (\cosh(\lambda d_u) + \cosh(\lambda d_v) - 4)$.
\end{claim}
\begin{proof} The proof is a simple convexity argument.
To this end, suppose both $d_u, d_v \geq
0$. Then since $\sinh(x)$ is convex when $x \geq
0$ and its derivative is $\cosh(x)$, we get that
\begin{align*}
\sinh(\lambda d_{u_>}) - \sinh(\lambda d_{u_<}) &~~\geq~~ \lambda
\cosh(\lambda d_{u_<}) \cdot |d_u - d_v| ~~\geq~~ 2 \cosh(\lambda
d_{u_<}), \\
\intertext{using
$v \notin \danger(u)$. But since $\big| |\sinh(x)| -\cosh(x) \big| \leq
1$, we get that}
\sinh(\lambda d_{u_>}) - \sinh(\lambda d_{u_<}) &~~\geq~~ 2\sinh(\lambda d_{u_<}) - 2.
\end{align*}
Therefore, $\sinh(\lambda d_{u_<}) \leq \frac13(\sinh(\lambda
d_{u_>})+2)$. Now substituting, and using the monotonicity of $\sinh$ and its closeness to
$\cosh$, we get $G$ is at least
$$ \frac{2 \lambda}{3} \left( \sinh(\lambda d_{u_>}) - 1
\right) ~\geq~ \frac{\lambda}{3} \left( \sinh(\lambda d_{u_>}) +
\sinh(\lambda d_{u_<}) - 2 \right) ~\geq~ \frac{\lambda}{3} \Big(
\cosh(\lambda d_{u}) + \cosh(\lambda d_{v}) -
4 \Big).$$ The case of $d_u, d_v \leq 0$ follows from setting $d_u'
= |d_u|, d_v' = |d_v|$ and using the above calculations, keeping in
mind that
$\sinh$ is an odd function but $\cosh$ is even. Finally, when $d_{u_<}$ is negative but $d_{u_>}$ is positive,
\begin{align*}
G &~~=~~ \lambda \big( \sinh(\lambda d_{u_>}) - \sinh (\lambda d_{u_<})
\big) ~~=~~ \lambda \big( \sinh(\lambda d_{u_>}) + \sinh (\lambda
|d_{u_<}|) \big) \\
&~~\geq~~ \frac\lambda3 \big( \cosh(\lambda d_{u_>}) +
\cosh (\lambda d_{u_<}) - 2 \big) ~~\geq~~ \frac{\lambda}{3} \Big(
\cosh(\lambda d_{u}) + \cosh(\lambda d_{v}) -
4\Big). \qedhere
\end{align*}
\end{proof}
\begin{claim} \label{cl:no-danger}
Let $\beta \geq 6\sqrt{\lambda}$. For any fixed choice of vertices $u$ and $v$ such that $v
\notin \danger(u)$, we have $ -\beta G + \left( 1 +
\frac{1}{\sqrt{\lambda}} \right) Q \leq O(1) $.
\end{claim}
\begin{proof}
Recall that $G = \lambda \big( \sinh(\lambda d_{u_>}) - \sinh (\lambda
d_{u_<}) \big)$. Now, let $A$ denote $ \cosh(\lambda d_{u}) + \cosh(\lambda
d_{v}).$ Then, by definition of $Q$ and from~\Cref{cl:sinh-to-cosh}, we have that
$$ -\beta G + \left( 1 + \frac{1}{\sqrt{\lambda}} \right) Q ~\leq~
-\frac{\beta\lambda}{3} (A - 4) + \left( 1 + \frac{1}{\sqrt{\lambda}} \right)
\lambda^2 A ~\leq~ \frac{4\lambda \beta}{3} + \left(\lambda^2 +
\lambda^{\frac{3}{2}} - \frac{\lambda \beta}{3} \right) A ~\leq~
\frac{4 \lambda \beta}{3}, $$ which is at most $O(1)$: the last inequality holds since $\beta \geq 6 \sqrt{\lambda} \geq 3
(\lambda+\sqrt{\lambda})$ makes the coefficient of $A$ non-positive, and $\lambda, \beta$ are at most 1.
\end{proof}
We now proceed with our proof using two cases:
\medskip \noindent {\bf Case (i):} $|d_u| \leq \frac{10}{\lambda}$. In
this case, note that the $Q$ term is
\begin{align*}
&\E_v[Q \mid u \textrm{ is sampled first}] \\
& = \E_v[Q \mid v \in \danger(u), \, u \textrm{ is sampled first}] \cdot \Pr[v \in \danger(u) \mid u \textrm{ is sampled first}] \\
& ~~~~ + \E_v[Q \mid v \notin \danger(u) ~,~ u \textrm{ is sampled first}] \cdot \Pr[v \notin \danger(u) \mid u \textrm{ is sampled first}] & \\
& \leq O(1) + \E_v[Q \mid v \notin \danger(u) ~,~ u \textrm{ is sampled first}] \cdot \Pr[v \notin \danger(u) \mid u \textrm{ is sampled first}] .
\end{align*}
Here the inequality uses $v \in \danger(u)$ and $|d_u| \leq
\frac{10}{\lambda}$ to infer that both $|d_u|$ and $|d_v|$ are at most
$\frac{12}{\lambda}$. Hence the $Q$ term in this scenario is simply a constant.
Next we analyze the $L$ term. For the following, we observe that the algorithm chooses a random $\pm 1$ signing with probability $(1-\beta)$, and chooses the greedy signing with probability $\beta$, and moreover, this choice is independent of the random choices of $u$ and $v$. Hence, the expected $L$ term conditioned on the algorithm choosing a random signing is simply $0$, and the expected $L$ term conditioned on the algorithm choosing the greedy signing is simply the term $\E[G]$. Hence, we can conclude that:
\begin{align*}
&\E_v[-L \mid u \textrm{ is sampled first}] \\
&= \E_v[-L \mid v \in \danger(u), u \textrm{ is sampled first}] \cdot \Pr[v \in \danger(u) \mid u \textrm{ is sampled first}] \\
& ~~~~ + \E_v[-L \mid v \notin \danger(u) ~,~ u \textrm{ is sampled first}] \cdot \Pr[v \notin \danger(u) \mid u \textrm{ is sampled first}] & \\
& \leq \E_v[-\beta G \mid v \notin \danger(u) ~,~ u \textrm{ is sampled first}] \cdot \Pr[v \notin \danger(u) \mid u \textrm{ is sampled first}] .
\end{align*}
Adding the inequalities and applying~\Cref{cl:no-danger}, we get
$
\E_v[-L + Q \, | \, u \textrm{ is sampled first}] \leq O(1).$
\medskip \noindent {\bf Case (ii):} $|d_u| > \frac{10}{\lambda}$.
We first prove two easy claims.
\begin{claim}
\label{cl:easy}
Suppose $v \in \danger(u).$ Then $\cosh(\lambda d_v) \leq 8 \cosh(\lambda d_u). $
\end{claim}
\begin{proof}
Assume w.l.o.g.\ that $d_u, d_v \geq 0$. Also, assume that
$d_v \geq d_u,$ otherwise there is nothing to prove. Now
$d_v \leq d_u + \frac{2}{\lambda}.$ So
$\frac{\cosh(\lambda d_v)}{\cosh(\lambda d_u)} \leq \sup_x
\frac{\cosh(x+2)}{\cosh(x)}$. The supremum on the right is approached
as $x \to \infty$, where the ratio tends to $e^2 < 8$.
\end{proof}
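The behavior of this ratio is also easy to check numerically (an illustrative snippet of ours, not part of the argument): $\cosh(x+2)/\cosh(x)$ is increasing in $x$ and stays below $e^2 \approx 7.39 < 8$.

```python
import math

def cosh_shift_ratio(x):
    """Ratio cosh(x + 2) / cosh(x); strictly increasing in x and
    approaching e^2 < 8 as x -> infinity."""
    return math.cosh(x + 2) / math.cosh(x)
```
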
\begin{claim} \label{cl:good-prob}
For any discrepancy vector $\bdt$ such that $\Phi(\bdt) \leq O((nT)^{10})$, and for any $u$ such that $|d_u| > \frac{10}{\lambda}$, we have $\Pr[v \notin \danger(u)] \geq 8 \sqrt{\lambda}$, as long as $\lambda = O(\log^{-4} nT)$.
\end{claim}
\begin{proof}
We consider the case that $d_u > \frac{10}{\lambda}$; the
case where $d_u < - \frac{10}{\lambda}$ is similar.
Assume for a contradiction that
$\Pr[v \in \danger(u)] \geq 1- 8 \sqrt{\lambda}$, and so $\Pr[v \notin \danger(u)] \leq 8 \sqrt{\lambda}$.
We first show that the cardinality of the set $\{ w : w \notin \danger(u) \}$ is small. Indeed, this follows immediately from the assumption in the statement of~\Cref{lem:beta-process} that the minimum weight of any vertex is at least $\gamma/n$ times the total weight. Hence, for every $w$, the probability of sampling $w$ in the \OPBP is at least $\gamma/n$, implying that the total number of vertices not in $\danger(u)$ must be at most $ \frac{8\sqrt{\lambda} \cdot n}{\gamma}$.
This also means that the total number of vertices in $\danger(u)$ is at least $\frac{n}{2}$, since $\gamma \geq 16{\lambda}^{1/4} \geq 16 \sqrt{\lambda}$ for $\lambda \leq 1$.
Since $d_u > \frac{10}{\lambda}$, we get that any vertex $v \in \danger(u)$ satisfies
$d_v \geq d_u - \frac2\lambda \geq \frac{8}{\lambda}$. Moreover, since
$\sum_{v} d_v = 0$, the negative discrepancies must in total compensate for the sum of discrepancies of the vertices in $\danger(u)$. Hence, we have that
$ \sum_{w : d_w < 0} |d_w| ~~\geq~~ \sum_{v \in \danger(u)}d_v ~~\geq~~ |\{ v~:~ v \in \danger(u)\}| \cdot \frac{8}{\lambda} ~~\geq~~ 0.5n \cdot \frac{8}{\lambda} ~~=~~ \frac{4n}{\lambda}$.
From the last inequality, and since $| \{w : d_w < 0 \}| \leq |\{w ~:~ w \not\in \danger(u)\}|
\leq \frac{ 8 \sqrt{\lambda} n}{\gamma}$, we get that there exists a vertex $\widetilde{w}$ such that $d_{\widetilde{w}} < 0$ and $| d_{\widetilde{w}} | \geq \frac{\gamma}{8 \sqrt{\lambda}
n} \cdot \frac{4n}{\lambda} = \frac{\gamma}{2 \lambda^{3/2}} $. But this
implies $\Phi(\bdt) \geq \cosh(\lambda d_{\widetilde{w}}) \geq \cosh
\left( \frac{\gamma}{2 \sqrt{\lambda}} \right) > (nT)^{10}$, using that
$\lambda = O(\log^{-4} nT)$ and that $\gamma \geq \lambda^{1/4}$. This contradicts the assumption that $\Phi(\bdt) \leq (nT)^{10}$.
\end{proof}
Returning to the proof in the case $|d_u| >
\frac{10}{\lambda}$, we get that
\begin{align*}
&\E_v[Q \mid u \textrm{ is sampled first}] \\
&= \E_v[Q \mid v \in \danger(u) ~,~ u \textrm{ is sampled first}] \cdot \Pr[v \in \danger(u) \mid u \textrm{ is sampled first}] \\
& \quad + \E_v[Q \mid v \notin \danger(u) ~,~ u \textrm{ is sampled first}] \cdot \Pr[v \notin \danger(u) \mid u \textrm{ is sampled first}] & \\
& \leq 8\lambda^2 \cosh(\lambda d_u) \\
& \quad + \E[Q \mid v \notin \danger(u) ~,~ u \textrm{ is sampled first}] \cdot \Pr[v \notin \danger(u) \mid u \textrm{ is sampled first}] ,
\end{align*}
where the first term in the inequality follows from Claim~\ref{cl:easy}.
Next we analyze the $L$ term similarly:
\begin{align*}
&\E_v[-L \mid \, u \textrm{ is sampled first}]\\
&= \E_v[-L \mid v \in \danger(u), \, u \textrm{ is sampled first}] \cdot \Pr[v \in \danger(u) \mid u \textrm{ is sampled first}] \\
& \qquad + \E_v[-L \mid v \notin \danger(u) ~,~ u \textrm{ is sampled first}] \cdot \Pr[v \notin \danger(u) \mid u \textrm{ is sampled first}] & \\
& \leq \E_v[-\beta G \mid v \notin \danger(u) ~,~ u \textrm{ is sampled first}] \cdot \Pr[v \notin \danger(u) \mid u \textrm{ is sampled first}],
\end{align*}
where the last inequality follows using the same arguments as in case~(i).
Adding these inequalities and applying~\Cref{cl:no-danger}, we get that
\begin{align*}
\E_v[-L + Q \mid u \textrm{ is sampled first}] &~~\leq~~ O(1) + 8 \lambda^2 \cosh(\lambda d_u) \\
&\hspace{-3cm} - \frac{1}{\sqrt{\lambda}} \cdot \E_v[ Q \mid u \textrm{ is sampled first}] \cdot \Pr[v \notin \danger(u) \mid u \textrm{ is sampled first}].
\end{align*}
To complete the proof of \Cref{lem:beta-process}, we note that $Q \geq \lambda^2 \cosh(\lambda
d_u)$, and use Claim~\ref{cl:good-prob} to infer that $\Pr[v \notin \danger(u)] \geq 8 \sqrt{\lambda}$. This implies
\[ \E_v[-L + Q \mid u \textrm{ is sampled first}] ~\leq~ O(1) + 8 \lambda^2 \cosh(\lambda d_u)
- 8 \lambda^2 \cosh(\lambda d_u) ~\leq~ O(1). \qedhere \]
\end{proof}
We can now use this one-step expected potential change for the \OPBP
to get the following result for the original expander process:
\begin{proof}[Proof of~\Cref{thm:main}]
Combining \Cref{lem:beta-process} and \Cref{lem:coupling}, we get that
in the expander process, conditioned on the random choices made
until time $t$, if $\Phi(\bdt) \leq (nT)^{10}$, then $\E[ \Phi(\bd^{t+1}) - \Phi(\bdt)] \leq C$ for some constant $C$.
The potential starts off at
$n$, so if it ever exceeds $C\,T\,(nT)^5$ in
$T$ steps, there must be a time $t$ such that $\Phi(\bd^t) \leq
C\,t\,(nT)^5$ and the increase is at least
$C(nT)^5$. But the expected increase at this step is at most
$C$, so by Markov's inequality the probability of increasing by
$C(nT)^5$ is at most
$1/(nT)^5$. Now a union bound over all times
$t$ gives that the potential exceeds
$C\,T\,(nT)^5 \leq (nT)^{10}$ with probability at most $T/(nT)^5 = 1/\poly(nT)$.
But then $\cosh(\lambda d^t_v) \leq
(nT)^{10}$, and therefore $|d^t_v| \leq O(\lambda^{-1} \log (nT)) = O(\log^5
nT) $ for all vertices $v$ and times $t$.
\end{proof}
In summary, if the underlying graph is
$\gamma$-weakly-regular for $\gamma \geq \Omega(\log^{-1}
nT)$, and has expansion $\alpha \geq \Omega(\log^{-2} nT)$, the greedy
process maintains a poly-logarithmic discrepancy.
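To see the guarantee in action, the following toy simulation (our own sketch; the complete graph $K_n$ stands in for a good expander, and ties are broken toward the first endpoint) runs the pure greedy process and returns the discrepancy vector:

```python
import random

def greedy_orientation_sim(n, steps, seed=0):
    """Simulate the expander-greedy process on the complete graph K_n:
    at each step a uniformly random edge (u, v) arrives, and greedy
    assigns -1 to the endpoint of larger current discrepancy and +1
    to the other.  Returns the final discrepancy vector."""
    rng = random.Random(seed)
    d = [0] * n
    for _ in range(steps):
        u, v = rng.sample(range(n), 2)   # a uniformly random edge of K_n
        hi, lo = (u, v) if d[u] >= d[v] else (v, u)
        d[hi] -= 1                       # decrease the larger discrepancy
        d[lo] += 1
    return d
```

In such runs the maximum absolute discrepancy stays very small, consistent with the poly-logarithmic bound (the complete graph has constant expansion and is weakly regular).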
\subsection{Putting it Together} \label{sec:tieup}
We briefly describe the expander decomposition procedure and summarize the final algorithm.
\begin{theorem}[Decomposition into Weakly-Regular Expanders]
\label{thm:regular-expanders}
Any graph $G = (V,E)$ can be decomposed into an edge-disjoint union
of smaller graphs $G_1 \uplus G_2 \ldots \uplus G_k$ such that (a) each
vertex appears in at most $O(\log^2 n)$ many smaller graphs, and (b)
each of the smaller subgraphs $G_i$ is an $\frac{\alpha}{4}$-weakly regular
$\alpha$-expander, where $\alpha = O(1/\log n)$.
\end{theorem}
The proof is in~\Cref{sec:expand-decomp}. So, given a graph $G=(V,E),$ we use Theorem~\ref{thm:regular-expanders} to partition the edges into a union of
$\frac{\alpha}{4}$-weakly regular $\alpha$-expanders, namely $H_1, \ldots, H_s,$ where $\alpha= O(1/\log n)$. Further, each vertex in $V$ appears in at most $O(\log^2 n)$ of these expanders. For each graph $H_i$, we run the greedy algorithm independently. More formally, when an edge $e$ arrives, it belongs to exactly one of the subgraphs $H_i$, and we orient it according to the greedy algorithm running on $H_i$.~\Cref{thm:main} shows that the discrepancy of each vertex in $H_i$ remains $O(\log^5 (nT))$ at each time $t \in [0 \ldots T]$ with high probability. Since each vertex in $G$ appears in at most $O(\log^2 n)$ such expanders, it follows that the discrepancy of any vertex in $G$ remains $O(\log^7 (nT))$ with high probability. This proves~\Cref{thm:final}.
\section{Expander Decomposition}
\label{sec:expand-decomp}
Finally, in this section, we show how to decompose any graph into an
edge-disjoint union of weakly-regular expanders such that no vertex
appears in more than $O(\log^2 n)$ such expanders. Hence, running the
algorithm of the previous section on all these expanders independently
means that the discrepancy of any vertex is at most $O(\log^2 n)$ times
the bound from \Cref{thm:main}, which is $O(\poly\log nT)$ as claimed.
The expander decomposition of this section is not new: it follows from
\cite[Theorem~5.6]{BBGNSSS}, for instance. We give it here for
the sake of completeness, and to explicitly show the bound on the number of
expanders containing any particular vertex.
Recall from \S\ref{sec:notation} that a $\gamma$-weakly-regular
$\alpha$-expander $G = (V,E)$ with $m := |E|$ edges and $n := |V|$
vertices is one where (a) the minimum degree is at least $\gamma$
times the average degree $d_{\rm avg} = \frac{2 m}{n}$, and (b) for
every partition of $V$ into $(S, V \setminus S)$, we have that
$| E(S, V \setminus S) | \geq \alpha \min (\vol(S), \vol(V\setminus
S))$. With these notions in place, we now prove~\Cref{thm:regular-expanders}.
\subsection{Proof of~\Cref{thm:regular-expanders}}
We begin our proof with a definition of what we refer to as \emph{uniformly-dense} graphs.
\begin{defn}[Uniformly Dense Graphs]
A graph $H = (V,E)$ is \emph{$\alpha$-uniformly-dense} if (i) the minimum degree of $H$ is at least $1/\alpha$ times its average degree $\frac{2m}{n}$, and (ii) no induced subgraph is much denser than $H$, i.e., for every subset $S \subseteq V$, the average degree $\frac{2 |E(S,S)|}{|S|}$ of the induced subgraph is at most $\alpha$ times the average degree $\frac{2m}{n}$ of $H$.
\end{defn}
We first provide a procedure which partitions a graph $G$ into edge-disjoint smaller graphs such that each of the smaller graphs is uniformly-dense, and moreover each vertex participates in $O(\log n)$ such smaller graphs. We then apply a standard expander decomposition to each of the smaller graphs to get our overall decomposition.
\begin{lemma}[Reduction to Uniformly-Dense Instances] \label{lem:uniformly-dense}
Given any graph $G = (V,E)$, we can decompose it into an edge-disjoint union of smaller graphs $G_1 \uplus G_2 \ldots \uplus G_{\ell}$ such that (a) each vertex appears in at most $O(\log n)$ many smaller graphs, and (b) each of the smaller subgraphs is $2$-uniformly-dense.
\end{lemma}
\begin{proof}
The following algorithm describes our peeling-off procedure which gives us the desired decomposition.
\begin{algorithm}
\caption{Input: Graph $G = (V,E)$}
\label{alg:peeling-off}
\begin{algorithmic}[1]
\State initialize the output collection $\cop := \emptyset$. \label{alg:peeling-step1}
\For {$\bar{d} \in \{ \frac{n}{2}, \frac{n}{4}, \ldots, 32 \}$ in decreasing order} \label{alg:peeling-step1a}
\State define the residual graph $R := (V, E_R)$, where $E_R = E \setminus \cup_{G_i = (V_i, E_i) \in \cop} E_i$ is the set of residual edges. \label{alg:peeling-step2}
\While {there exists vertex $v \in R$ such that $0 < d_R(v) < \bar{d}$} \label{alg:peeling-step3}
\State delete all edges incident to $v$ from $R$ making $v$ an isolated component. \label{alg:peeling-step4}
\EndWhile
\State add each non-trivial connected component in $R$ to $\cop$. \label{alg:peeling-step5}
\EndFor
\end{algorithmic}
\end{algorithm}
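For concreteness, a direct Python transcription of this peeling procedure might look as follows (our own sketch; the edge-list representation and the `min_threshold` knob are assumptions of ours, since the paper's thresholds stop at $32$):

```python
from collections import defaultdict

def peel(n, edges, min_threshold=1):
    """Illustrative transcription of the peeling-off procedure:
    for decreasing thresholds d_bar, repeatedly delete all edges at
    vertices of residual degree below d_bar, then output surviving
    non-trivial connected components as edge sets."""
    taken = set()        # edges already assigned to an output subgraph
    out = []             # one edge set per output component
    d_bar = n // 2
    while d_bar >= min_threshold:
        # build the residual graph on the not-yet-taken edges
        adj = defaultdict(set)
        for (u, v) in edges:
            e = (min(u, v), max(u, v))
            if e not in taken:
                adj[u].add(e)
                adj[v].add(e)
        # peel: delete edges at vertices of degree in (0, d_bar)
        changed = True
        while changed:
            changed = False
            for v in list(adj):
                if 0 < len(adj[v]) < d_bar:
                    for e in list(adj[v]):
                        adj[e[0]].discard(e)
                        adj[e[1]].discard(e)
                    changed = True
        # collect the surviving non-trivial connected components
        seen = set()
        for s in [v for v in adj if adj[v]]:
            if s in seen:
                continue
            comp, stack = set(), [s]
            seen.add(s)
            while stack:
                v = stack.pop()
                for e in adj[v]:
                    comp.add(e)
                    for w in e:
                        if w not in seen:
                            seen.add(w)
                            stack.append(w)
            out.append(comp)
            taken |= comp
        d_bar //= 2
    return out
```

On $K_4$ with a pendant vertex attached, the first pass peels the pendant edge and outputs the clique; the pendant edge is output at a lower threshold, so the pieces are edge-disjoint.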
It is easy to see that in any iteration (step~\ref{alg:peeling-step1a}) with degree threshold $\bar{d}$, if a sub-graph $G_i = (V_i, E_i)$ is added to $\cop$ in step~\ref{alg:peeling-step5}, it has minimum degree at least $\bar{d}$. The crux of the proof is in showing that the average degree of $G_i$ (and in fact of any induced sub-graph of $G_i$) is at most $2 \bar{d}$. Intuitively, this is because the peeling algorithm would have already removed all subgraphs of density more than $2 \bar{d}$ in previous iterations. We formalize this as follows:
\begin{claim} \label{cl:peeling-invariant}
Consider the iteration (step~\ref{alg:peeling-step1a}) when the degree threshold is $\bar{d}$. Then, the residual graph $R$ constructed in step~\ref{alg:peeling-step2} does not have any induced subgraph $S$ of density greater than $2 \bar{d}$.
\end{claim}
\begin{proof}
Indeed, for contradiction, suppose there was a subset of vertices in $R$ with average induced degree greater than $2 \bar{d}$. Consider the minimal such subset $S$. Due to the minimality assumption, we in fact get a stronger property that \emph{every vertex in $S$} has induced degree (within $S$) of at least $2 \bar{d}$ (otherwise, we can remove the vertex with minimum induced degree and get a smaller subset $S' \subseteq S$ which still has average induced degree more than $2 \bar{d}$, thereby contradicting the minimality assumption of $S$).
For ease of notation, let us denote the set of edges induced by $S$ in the graph $R$ by $E_R(S)$. We now claim that \emph{none of the edges in $E_R(S)$ can belong to the residual graph $R$} for this iteration, thereby giving us the desired contradiction. To this end, consider the previous iteration of step~\ref{alg:peeling-step1a}, with degree threshold $2 \bar{d}$. Clearly, all of the edges in $E_R(S)$ belong to the residual subgraph for this iteration as well. Consider the first point in the while loop (step~\ref{alg:peeling-step3}) where any edge from $E_R(S)$ is deleted. At this point, all the vertices in $S$ have degree at least $2 \bar{d}$, since even their induced degree within $E_R(S)$ is at least $2 \bar{d}$. This contradicts any of these edges being deleted in the previous iteration, and hence they are all present in the current iteration with degree threshold $\bar{d}$.
\end{proof}
It is now easy to complete the proof of \Cref{lem:uniformly-dense}. Indeed, we first show that every smaller graph added to $\cop$ in our peeling procedure is $2$-uniformly-dense. To this end, consider any non-trivial connected component added to $\cop$ during some iteration with degree threshold $\bar{d}$. From~\Cref{cl:peeling-invariant}, we know that this component has average degree at most $2 \bar{d}$, and moreover, every vertex in the component has degree at least $\bar{d}$ (otherwise it would be deleted in our while loop). Moreover, every sub-graph induced within this connected component must also have density at most $2 \bar{d}$ again from~\Cref{cl:peeling-invariant}. This then shows that the component added is $2$-uniformly dense.
Finally,
each vertex participates in at most one non-trivial connected
component in each iteration of step~\ref{alg:peeling-step1a}, and hence
each vertex is present in $O(\log n)$ smaller sub-graphs. Hence the
proof of \Cref{lem:uniformly-dense}.
\end{proof}
Next, we apply a standard divide-and-conquer approach to partition a given $2$-uniformly-dense graph $H = (V,E)$ with $m$ edges and $n$ vertices into a vertex-disjoint union of $\alpha$-expanders $H_1 = (V_1, E_1), H_2 = (V_2, E_2), \ldots, H_k = (V_k, E_k)$, such that the total number of edges of $E$ not contained in these expanders is at most $m/2$, and moreover, the degree of any vertex within the expander it belongs to is at least $\alpha$ times its degree in $H$.
\begin{lemma}[Decomposition for Uniformly-Dense Graphs] \label{lem:expander-decomp}
Given any $2$-uniformly-dense graph $H = (V,E)$ with $n$ vertices and $m$ edges, we can decompose the vertex-set $V$ into $V_1 \uplus V_2 \ldots \uplus V_\ell$ such that each induced subgraph $H_i = (V_i, E(V_i))$ is an $\frac{\alpha}{4}$-weakly-regular $\alpha$-expander, and moreover, the total number of edges of $H$ which go between different parts is at most $(2 \alpha \log n ) \, m$. Here $\alpha$ is a parameter which is $O(1/\log n)$.
\end{lemma}
\begin{proof}
The following natural recursive algorithm
(Algorithm~\ref{algo:2}) describes our partitioning
procedure.\footnote{Step~\ref{alg:decomp-step6} in the
algorithm does not run in polynomial time. This step can be
replaced by a suitable logarithmic approximation algorithm,
which would lose logarithmic terms in the eventual
discrepancy bound, but would not change the essential nature
of the result. The details are deferred to the full version.} The only idea which is non-standard is that of using self-loops around vertices during recursion, to capture the property of approximately preserving the degree of every vertex in the final partitioning w.r.t.\ its original degree. This has been applied in other contexts by Saranurak and Wang~\cite{SW-SODA19}.
\begin{algorithm}[h]
\caption{Input: Graph $H = (V,E)$}
\label{alg:sparse-cut}
\begin{algorithmic}[1]
\State initialize the output partition $\pop := \emptyset$, and the set of recursive partitions $\rop = \{H := (V,E) \}$. \label{alg:decomp-step1}
\While {$\rop \neq \emptyset$} \label{alg:decomp-step2}
\State choose an arbitrary $H' := (V',E') \in \rop$ to process. \label{alg:decomp-step3}
\If {the expansion of $H'$ is at least $\alpha$} \label{alg:decomp-step4}
\State add $H'$ to the final partitioning $\pop$ \label{alg:decomp-step5}
\Else
\State let $(S, V' \setminus S)$ denote a cut of conductance at most $\alpha$. \label{alg:decomp-step6}
\State for each $v \in S$, add $|\delta(v, V' \setminus S)|$ self-loops at $v$. \label{alg:decomp-step7}
\State for each $v \in V' \setminus S$, add $|\delta(v, S)|$ self-loops at $v$. \label{alg:decomp-step8}
\State add the sub-graphs (including the self-loops) induced in $S$ and $V' \setminus S$ to the recursion set $\rop$ and remove $H'$ from $\rop$. \label{alg:decomp-step9}
\EndIf
\EndWhile
\end{algorithmic}
\label{algo:2}
\end{algorithm}
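To make the recursion concrete, the following is a brute-force Python sketch of Algorithm~\ref{algo:2}. The exhaustive subset search in \texttt{sparse\_cut} stands in for the non-polynomial Step~\ref{alg:decomp-step6}, and self-loops are tracked in a separate dictionary so that the degree-preservation property of Claim~\ref{cl:degree-preseved} can be checked directly; the graph encoding (edge list plus a loop-count map) is our own illustrative choice.

```python
from itertools import combinations

def sparse_cut(V, E, loops, alpha):
    """Brute-force stand-in for Step 6: return a set S whose cut has
    conductance < alpha, or None if the graph is an alpha-expander.
    Degrees count incident (non-loop) edges plus self-loops."""
    deg = {v: loops.get(v, 0) for v in V}
    for u, w in E:
        deg[u] += 1
        deg[w] += 1
    total_vol = sum(deg.values())
    vs = sorted(V)
    for r in range(1, len(vs)):
        for S in map(set, combinations(vs, r)):
            cross = sum(1 for u, w in E if (u in S) != (w in S))
            vol_S = sum(deg[v] for v in S)
            denom = min(vol_S, total_vol - vol_S)
            if denom > 0 and cross < alpha * denom:
                return S
    return None

def decompose(V, E, alpha):
    """Algorithm 2: recursively split along sparse cuts, adding one
    self-loop per crossing edge on each side (Steps 7-9)."""
    work = [(set(V), list(E), {})]   # the recursion set R
    final = []                       # the output partition P
    while work:
        Vp, Ep, loops = work.pop()
        S = sparse_cut(Vp, Ep, loops, alpha)
        if S is None:                # subgraph is an alpha-expander
            final.append((Vp, Ep, loops))
            continue
        for side in (S, Vp - S):
            side_E = [e for e in Ep if e[0] in side and e[1] in side]
            side_loops = {v: loops.get(v, 0) for v in side}
            for u, w in Ep:
                if (u in S) != (w in S):   # crossing edge -> self-loop
                    x = u if u in side else w
                    side_loops[x] += 1
            work.append((side, side_E, side_loops))
    return final
```

On two triangles joined by a single edge, for instance, the procedure cuts the joining edge, places one self-loop on each side, and then certifies both triangles as expanders, with every vertex keeping its original degree.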
\begin{claim} \label{cl:degree-preseved}
Consider any vertex $v$. At all times of the algorithm, $v$ appears in at most one sub-graph in the collection $\rop$; moreover, suppose it appears in the sub-graph $H' \in \rop$. Then its degree in $H'$ (the number of edges it is incident to plus the number of self-loops at it) is exactly its original degree in the input graph $H$.
\end{claim}
\begin{proof}
The proof follows inductively over the number of iterations of the while loop in step~\ref{alg:decomp-step2}. Clearly, at the beginning, $\rop$ contains only $H$, and the claim is satisfied trivially. Suppose it holds until the beginning of some iteration $i \geq 1$ of the algorithm. Then during this iteration, two possible scenarios could occur: (a) the algorithm selects a sub-graph $H' \in \rop$, removes it from $\rop$, and adds it to $\pop$; or (b) the algorithm finds a sparse cut of $H'$ and adds the two induced subgraphs to $\rop$ after removing $H'$ from $\rop$. The inductive claim continues to hold in the first case since we do not add any new graphs to $\rop$. In case (b), note that, for every vertex $v \in H'$, we add as many self-loops as the number of edges incident to $v$ that cross the partition in the new sub-graph it belongs to. Hence, the inductive claim holds in this scenario as well.
\end{proof}
\begin{claim} \label{cl:add-expanders}
Every sub-graph $H'$ which is added to $\pop$ is an $\frac{\alpha}{4}$-weakly-regular $\alpha$-expander.
\end{claim}
\begin{proof}
Consider any iteration of the algorithm where it adds a sub-graph $H'$ to $\pop$ in step~\ref{alg:decomp-step5}. That $H'$ is an $\alpha$-expander is immediate from the condition in step~\ref{alg:decomp-step4}. Moreover, since the input graph $H$ is $2$-uniformly dense, we know that (a) for every vertex $v \in H$, its degree in $H$ is at least half of the average degree $\bar{d}(H)$ of $H$, and (b) the average degree $\bar{d}(H')$ of $H'$ (which is a sub-graph of $H$) is at most $2 \bar{d}(H)$. Finally, from the fact that $H'$ is an $\alpha$-expander, we can apply the expansion property to each vertex to obtain that $d_{H'}(v) \geq \alpha \cdot \vol_{H'}(v) = \alpha \cdot d_H(v)$. Here, the last equality is due to~\Cref{cl:degree-preseved}. Putting these observations together, we get that for every $v \in H'$, $d_{H'}(v) \geq \alpha \cdot d_H(v) \geq \frac{\alpha}{2} \bar{d}(H) \geq \frac{\alpha}{4} \bar{d}(H')$. This completes the proof.
\end{proof}
\begin{claim} \label{cl:few-edges}
The total number of edges going across different subgraphs in the final partitioning is at most $(2 \alpha \log n ) \, m$.
\end{claim}
\begin{proof}
The proof proceeds via a standard charging argument. We associate a charge to each vertex, which is $0$ initially for all $v \in V$. Then, whenever we separate a sub-graph $H'$ into two smaller sub-graphs $H_1$ and $H_2$ in step~\ref{alg:decomp-step9}, we charge all the crossing edges to the smaller sub-graph $H_1$ as follows: for each $v \in H_1$, we increase its charge by $\alpha \cdot \vol_{H'}(v) = \alpha \cdot d_{H'}(v) = \alpha \cdot d_H(v)$, where the last equality follows from~\Cref{cl:degree-preseved}. Then it is easy to see that the total number of edges crossing between $H_1$ and $H_2$ is at most the total increase in charge (summed over all vertices in $H_1$) in this iteration, due to the fact that the considered cut is $\alpha$-sparse in $H'$. Hence, over all iterations, the total number of edges going across different sub-graphs is at most the total charge summed over all vertices in $V$.
Finally, note that whenever a vertex $v$ is charged a non-zero amount, the sub-graph it belongs to has reduced in size by a factor of at least two, by virtue of our analysis always charging the smaller sub-graph. Hence, the total charge any vertex $v \in V$ accrues is at most $( \alpha \log n ) \, d_H(v)$. Summing over all $v \in V$, and using $\sum_{v \in V} d_H(v) = 2m$, then completes the proof.
\end{proof}
This completes the proof of \Cref{lem:expander-decomp}.
\end{proof}
We now complete the
proof of \Cref{thm:regular-expanders}. We first apply~\Cref{lem:uniformly-dense} to partition the input graph $G$ into $O(\log n)$ edge-disjoint subgraphs, say, $H_1, \ldots, H_s$, where each vertex of $G$ appears in at most $O(\log n)$ such subgraphs. For each of these sub-graphs $H_i$, we apply~\Cref{lem:expander-decomp} to obtain $\frac{\alpha}{4}$-weakly-regular $\alpha$-expanders. Across all these partitions, the total number of edges excluded (due to going between parts in~\Cref{lem:expander-decomp}) is at most $m/2$. We recursively apply the above process (i.e.,~\Cref{lem:uniformly-dense} followed by~\Cref{lem:expander-decomp}) to the residual subgraph induced by these excluded edges. Thus, we have $O(\log n)$ such recursive steps, and taking the union of the $O(\log n)$ subgraphs constructed in each such step proves~\Cref{thm:regular-expanders}.
\medskip
\noindent
\subsection*{Acknowledgments}
We thank Thatchaphol Saranurak for explaining and pointing us to \cite[Theorem~5.6]{BBGNSSS}. The last author would like to thank Navin Goyal for introducing him to~\cite{AANRSW-Journal98}.
\section{Introduction}
In topologically ordered materials, the change of topological invariant caused by tuning a certain system parameter signifies a topological phase transition. In an attempt to draw analogy with the usual second-order quantum phase transition, a recently emerged scenario is to characterize the quantum criticality near the topological phase transition through investigating the curvature function, defined as the function whose momentum space integration gives the topological invariant\cite{Chen2017,Chen19_book_chapter,Chen20191,Molignini19}. From the generic critical behavior of the curvature function, the critical exponents and scaling laws can be extracted, and the Fourier transform of the curvature function is proposed to be the characteristic correlation function for topological materials. Because the curvature function is a purely geometric object that is not limited to a specific dimension or symmetry class\cite{Schnyder08,Ryu10,Chiu16}, this scenario has successfully described the criticality in a wide range of topological materials including almost all prototype noninteracting models\cite{Chen20191}, weakly\cite{Chen18_weakly_interacting} and strongly interacting models\cite{Kourtis17}, as well as periodically driven\cite{Molignini18,Molignini20} and multicritical systems\cite{Rufo19,Abdulla20}. In addition, a curvature renormalization group (CRG) approach has been proposed based on the divergence of the curvature function\cite{Chen1,Chen2}, which is shown to be particularly powerful in solving otherwise tedious interacting or multi-parameter systems\cite{Chen18_weakly_interacting,Kourtis17,Niewenburg,Molignini19}.
On the other hand, another important quantity that has been proposed to characterize the criticality near quantum phase transitions in general is the fidelity susceptibility\cite{You07,Zanardi07}. Formulated within the framework of quantum metric tensor, the fidelity susceptibility measures how a tuning parameter makes a quantum state deviate from itself, and usually diverges as the system approaches the critical point. In addition, the scaling behavior of the fidelity susceptibility near the critical point has been investigated in a variety of correlated models\cite{Gu08,Yang08,Albuquerque10,Gu10}. Given that this aspect of fidelity susceptibility can be broadly applied to any type of quantum phase transitions, it is intriguing to ask how the fidelity susceptibility manifests in topological phase transitions, whether it displays a particular scaling behavior, and how it is related to the aforementioned scenario based on the curvature function.
In this paper, we formulate the fidelity susceptibility for topological phase transitions within the framework of Dirac models, which are low energy effective theories for a wide range of topological materials. We observe that the fidelity susceptibility has a meaningful interpretation if one treats the momentum space as a manifold to construct the quantum metric. For systems where the curvature function is the Berry connection or Berry curvature, the determinant of the quantum metric tensor coincides with the square of the curvature function. As a result, the fidelity susceptibility shares the same critical behavior as the curvature function, and so inherits its critical exponent and scaling laws, and moreover decays with a characteristic momentum scale resulting from the correlation length.
We further address the issue of practically simulating the criticality of the curvature function and fidelity susceptibility. For this purpose, we turn to quantum walks, which are proposed to be universal primitives \cite{Lovett} that can simulate a variety of quantum systems and phenomena \cite{Mohseni,Vakulchyk}, including topological materials \cite{Kitagawa}. The flexibility and controllability of quantum walks help to simulate these topological phases\cite{Panahiyan2019}, which include all symmetry classes and edge states in one- (1D) and two-dimensional (2D) systems \cite{Kitagawa,Panahiyan2019,Asboth,Obuse,Chen,Panahiyan2020-1}, as well as some others in three dimensions (3D) \cite{Panahiyan2020-2}, and to directly probe the topological invariants \cite{Ramasesh}. The topological phase transitions \cite{Rakovszky} and the possibility of invoking bulk-boundary correspondence have also been addressed for quantum walks. From an experimental point of view, the existence of a robust edge state for the simulated topological phases was reported \cite{KitagawaExp}, and it was shown that experimental realizations of the quantum walk can be employed to investigate topological phenomena in both 1D and 2D \cite{Cardano,Cardano2017,Barkhofen,Flurin,Zhan,Xiao,Wang2018,Wang,Nitsche,Errico,Xu}.
This list of encouraging results is enriched by our analysis that, by drawing analogy with periodically driven systems\cite{Molignini18,Molignini20}, clarifies the stroboscopic curvature functions for two specific cases of quantum walks in 1D and 2D. The extracted critical exponents indicate that these quantum walks faithfully reproduce the desired critical behavior, and as simulators belong to the same universality classes as their topological insulator counterparts. A correlation function that measures the overlap of stroboscopic Wannier states is proposed, and the convenience of CRG in solving multi-parameter quantum walks is elaborated.
The structure of the paper is organized in the following manner. In Sec.~\ref{sec:quantum_criticality_TPT}, we give an overview of the curvature function scenario, including the generic critical behavior, critical exponents, and the CRG approach. We then calculate the fidelity susceptibility in 1D and 2D Dirac models, and show explicitly its coincidence with the Berry connection and Berry curvature. In Sec.~\ref{sec:quantum_walks}, we turn to quantum walks in two specific cases that simulate the Berry curvature and Berry connection, and extract the critical exponents and scaling laws that are consistent with the Dirac models. Section \ref{sec:conclusion} gives a summary and outlook of these results.
\section{Quantum criticality near topological phase transitions \label{sec:quantum_criticality_TPT}}
\subsection{Generic critical behavior \label{sec:generic_critical_behavior}}
The eigenstates of a $D$-dimensional noninteracting topological material can in general be parameterized by two distinct sets of parameters: $\boldsymbol k$ and $\boldsymbol M$, where $\boldsymbol k$ denotes the momenta in $D$ dimensions and $\boldsymbol M$ denotes the tunable parameters in the Hamiltonian. Different topological phases are characterized by quantized integers known as topological invariants, which are generally obtained through integration of a curvature function over the first Brillouin zone (BZ)\cite{Chen2017,Chen20191,Molignini19}
\begin{eqnarray}
C(\boldsymbol M) &=& \int_{BZ} F(\boldsymbol k,\boldsymbol M) \frac{d^D \boldsymbol k}{(2 \pi)^D}, \label{TPI}
\end{eqnarray}
in which $F(\boldsymbol k,\boldsymbol M)$ is referred to as the curvature function. Different topological phases are separated by boundaries that define topological phase transitions. As $\boldsymbol M$ crosses the critical point $\boldsymbol M_{c}$, the topological invariant jumps from one integer to another, accompanied by a gap-closing in the energy spectrum.
The precise definition of the curvature function depends on the dimensionality and symmetries of the system under consideration\cite{Chen19_book_chapter,Chen20191}. It is generally an even function $F(\boldsymbol k_{c}+\delta \boldsymbol k)=F(\boldsymbol k_{c}-\delta \boldsymbol k)$ around a certain momentum $\boldsymbol k_{c}$ in the BZ, and hence well described by an Ornstein-Zernike form
\begin{align}
F(k_{c}+\delta k,\boldsymbol M) &= \frac{F(k_{c},\boldsymbol M)}{ 1 \pm \xi^2 \delta k^2}, \label{curv-11D}
\\
F(\boldsymbol k_{c}+\delta \boldsymbol k,\boldsymbol M) &= \frac{F(\boldsymbol k_{c},\boldsymbol M)}{ (1 \pm \xi^2_{x} \delta k^2_{x}) (1 \pm \xi^2_{y} \delta k^2_{y})}, \label{curv-22D}
\end{align}
in 1D and 2D, respectively, in which $\xi$, $\xi_{x}$ and $\xi_{y}$ are the widths of the peak and serve as characteristic length scales. The key ingredient of the curvature function as a tool to investigate topological phase transitions is its varying nature as $\boldsymbol M$ changes. In other words, the topological invariant remains fixed within a specific region of $\boldsymbol M$, whereas the profile of the curvature function varies. This variation enables us to characterize the critical behavior of the system with the curvature function, extract the correlation function, critical exponents, and length scale, and validate a scaling law\cite{Chen2017,Chen19_book_chapter,Chen20191}.
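As a sanity check of these definitions, the following sketch extracts the divergences numerically for the hypothetical curvature function $F(k,M)=M/(M^{2}+k^{2})$, which already has the Ornstein-Zernike form with $F(k_{c},M)=1/M$ and $\xi=1/|M|$; the half-width extraction of $\xi$ and the log-log power-law fits are our own illustrative choices.

```python
import numpy as np

# Hypothetical curvature function in the Ornstein-Zernike form:
# F(k, M) = M/(M^2 + k^2) = (1/M) / (1 + (k/M)^2), peak at k_c = 0.
def F(k, M):
    return M / (M**2 + k**2)

Ms = np.array([0.4, 0.2, 0.1, 0.05, 0.025])
heights = F(0.0, Ms)                     # peak height F(k_c, M)

def half_width(M):
    # bisect for the half-maximum point; xi = 1 / half-width
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if F(mid, M) > 0.5 * F(0.0, M):
            lo = mid
        else:
            hi = mid
    return lo

xis = np.array([1.0 / half_width(M) for M in Ms])

# power-law fits: height ~ |M|^-gamma and xi ~ |M|^-nu
gamma = -np.polyfit(np.log(Ms), np.log(heights), 1)[0]
nu = -np.polyfit(np.log(Ms), np.log(xis), 1)[0]
D = 1
```

The fitted exponents come out as $\gamma=\nu=1$, consistent with the scaling law $\gamma=D\nu$ in $D=1$.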
The critical behavior of the curvature function in the majority of topological materials is described by the narrowing and flipping of the Lorentzian peak of the curvature function as the system approaches the critical point from the two sides $\boldsymbol M^{+}_{c}$ and $\boldsymbol M^{-}_{c}$
\begin{eqnarray}
&&\lim\limits_{\boldsymbol M \rightarrow \boldsymbol M^{+}_{c}} F(\bf k_{c},\boldsymbol M) = - \lim\limits_{\alpha \rightarrow \boldsymbol M^{-}_{c}} F(\bf k_{c},\boldsymbol M) = \pm \infty,
\\
&&\lim\limits_{\boldsymbol M \rightarrow \boldsymbol M_{c}} \xi = \infty,
\label{FkM_xi_critical_behavior}
\end{eqnarray}
which are also found to be true for quantum walks, as we demonstrate in later sections. These divergences suggest that the critical behavior of $F(\boldsymbol k_{c},\boldsymbol M)$ and $\xi$ can be described by
\begin{eqnarray}
F(\boldsymbol k_{c},\boldsymbol M)\propto |\boldsymbol M-\boldsymbol M_{c}|^{-\gamma} \text{, } \xi \propto |\boldsymbol M-\boldsymbol M_{c}|^{-\nu}
\end{eqnarray}
in which $\gamma$ and $\nu$ are critical exponents that satisfy the scaling law $\gamma= D\nu$, which originates from the conservation of the topological invariant \cite{Chen2017,Chen20191,Chen19_book_chapter}. The physical meaning of these exponents becomes transparent through the notion of correlation functions, which is introduced by considering the Wannier states constructed from the Bloch states of the Hamiltonian
\begin{eqnarray}
|{\boldsymbol R}\rangle=\frac{1}{N}\sum_{\boldsymbol k}e^{i{\boldsymbol k({\hat r}-R)}}|\psi_{\boldsymbol k-}\rangle,
\end{eqnarray}
in which $|\psi_{\boldsymbol k-}\rangle$ is the lower eigenstate of the Hamiltonian. The correlation function is proposed to be the Fourier transform of the curvature function, which generally measures the overlap of Wannier states at the origin $|\boldsymbol 0\rangle$ and at $|\boldsymbol R\rangle$ sandwiched by a certain position operator. In 1D case where the curvature function is the Berry connection, the Wannier state correlation function reads
\begin{eqnarray}
\tilde{F}_{1D}(R)&=&\int_{0}^{2\pi}\frac{dk_x}{2\pi}F(k_x,\boldsymbol M) e^{ik_xR} \notag
\\
&=&\int_{0}^{2\pi}\frac{dk_x}{2\pi}\langle\psi_{-}|i\partial_{k_x}|\psi_{-}\rangle e^{ik_xR}
\notag
\\
&=&\langle 0|{\hat r}|R\rangle=\int dr\,r\,W^{\ast}(r)W(r-R),
\label{1D_Wannier_correlation}
\end{eqnarray}
where $\langle r|R\rangle=W(r-R)$ is the Wannier function centering at the home cell $R$, and ${\hat r}$ is the position operator. By replacing the curvature function with $\eqref{curv-11D}$, one can show that the Wannier state correlation function decays with the length scale $\xi$, and hence $\xi$ can be interpreted as the correlation length in this problem, assigned with the critical exponent $\nu$ as in the convention of statistical mechanics. The same can be done for 2D case where the curvature function is the Berry curvature
\begin{eqnarray*}
&&\tilde{F}_{2D}({\boldsymbol R})=\int\frac{d^{2}{\boldsymbol k}}{(2\pi)^{2}}F({\boldsymbol k},\boldsymbol M)e^{i{\boldsymbol k\cdot\boldsymbol R}}=
\nonumber \\
&&\int\frac{d^{2}{\boldsymbol k}}{(2\pi)^{2}}\left\{\partial_{k_{x}}\langle\psi_{\boldsymbol k-}|i\partial_{k_{y}}|\psi_{\boldsymbol k-}\rangle-
\partial_{k_{y}}\langle\psi_{\boldsymbol k-}|i\partial_{k_{x}}|\psi_{\boldsymbol k-}\rangle\right\}e^{i{\boldsymbol k\cdot R}}
\nonumber \\
&&=-i\langle{\boldsymbol R}|(R^{x}{\hat y}-R^{y}{\hat x})|{\boldsymbol 0}\rangle
\nonumber \\
&&=-i\int d^{2}{\boldsymbol r}(R^{x}y-R^{y}x)W^{\ast}({\boldsymbol r}-{\boldsymbol R})W({\boldsymbol r}),
\label{Wannier_correlation_2D}
\end{eqnarray*}
in which $\langle{\boldsymbol r}|{\boldsymbol R}\rangle=W({\boldsymbol r}-{\boldsymbol R})$ is the Wannier function. In the following sections, we will elaborate explicitly that topological quantum walks also fit into this scheme of curvature-based quantum criticality.
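A quick numerical check of this interpretation: Fourier transforming the Ornstein-Zernike form of Eq.~(\ref{curv-11D}) over the BZ indeed yields a correlation function that decays exponentially with the length scale $\xi$. The sketch below, with an arbitrarily chosen $\xi=3$, fits the decay rate of the transform.

```python
import numpy as np

xi = 3.0                                  # correlation length of the peak
k = np.linspace(-np.pi, np.pi, 20001)
dk = k[1] - k[0]
Fk = 1.0 / (1.0 + xi**2 * k**2)           # Ornstein-Zernike curvature function

def corr(R):
    # Wannier-state correlation: Fourier transform of F over the BZ
    # (F is even in k, so only the cosine part contributes)
    return np.sum(Fk * np.cos(k * R)) * dk / (2.0 * np.pi)

R = np.arange(1, 9)
C = np.array([corr(r) for r in R])
# exponential decay C(R) ~ exp(-R/xi): slope of log C versus R is -1/xi
slope = np.polyfit(R, np.log(C), 1)[0]
```

The fitted slope reproduces $-1/\xi$ up to small corrections from truncating the Lorentzian tails at the zone boundary, confirming that $\xi$ plays the role of a correlation length.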
\subsection{Curvature renormalization group approach}
In systems whose topology is controlled by multiple tuning parameters ${\boldsymbol M}=(M_{1},M_{2}...)$, a curvature renormalization group (CRG) has been proposed to efficiently capture the topological phase transitions in the multi-dimensional parameter space\cite{Chen1,Chen2}. The approach is based on the iterative mapping ${\boldsymbol M}\rightarrow{\boldsymbol M}'$ that satisfies
\begin{equation}
F({\boldsymbol k}_{0}+\delta {\boldsymbol k},{\boldsymbol M})=F({\boldsymbol k}_{0},{\boldsymbol M}'),
\end{equation}
where $\delta {\boldsymbol k}=\delta k{\hat{\boldsymbol k}}_{s}$ is a small deviation away from the high symmetry point (HSP) ${\boldsymbol k}_{0}$ along the scaling direction ${\hat{\boldsymbol k}}_{s}$. Expanding the scaling equation up to leading order yields the generic RG equation
\begin{equation}
\frac{dM_{i}}{d\ell}=\frac{M_{i}^{\prime}-M_{i}}{\delta k^{2}}=\frac{1}{2}\frac{({\hat{\boldsymbol k}}_{s}\cdot{\boldsymbol\nabla}_{\boldsymbol k})^{2}F({\boldsymbol k},{\boldsymbol M})|_{{\boldsymbol k}={\boldsymbol k}_{0}}}{\partial_{M_{i}}F({\boldsymbol k}_{0},{\boldsymbol M})}.
\label{RG_eq_derivative}
\end{equation}
Numerically, the right-hand side of the above equation can be evaluated conveniently by
\begin{equation}
\frac{dM_{i}}{d\ell}=\frac{F({\boldsymbol k}_{0}+\Delta k{\boldsymbol{\hat k}}_{s},{\boldsymbol M})-F({\boldsymbol k}_{0},{\boldsymbol M})}{F({\boldsymbol k}_{0},{\boldsymbol M}+\Delta M_{i}{\hat{\boldsymbol M}}_{i})-F({\boldsymbol k}_{0},{\boldsymbol M})},
\label{RG_eq_numerical}
\end{equation}
where $\Delta k$ is a small deviation away from the HSP in momentum space, and $\Delta M_{i}$ is a small interval in the parameter space along the ${\hat{\boldsymbol M}}_{i}$ direction. This numerical interpretation serves as a great advantage over the integration of the topological invariant in Eq.~(\ref{TPI}): for a given $\boldsymbol M$, one only needs to calculate the curvature function at the three points $F({\boldsymbol k}_{0}+\Delta k{\boldsymbol{\hat k}}_{s},{\boldsymbol M})$, $F({\boldsymbol k}_{0},{\boldsymbol M})$ and $F({\boldsymbol k}_{0},{\boldsymbol M}+\Delta M_{i}{\hat{\boldsymbol M}}_{i})$ to obtain the RG flow along the ${\hat{\boldsymbol M}}_{i}$ direction, making it a powerful tool to capture the topological phase transitions in the vast ${\boldsymbol M}$ parameter space. The efficiency of this method has been demonstrated in a great variety of systems, and in the present work we aim to demonstrate its feasibility for quantum walks.
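As an illustration of this numerical convenience, the sketch below evaluates Eq.~(\ref{RG_eq_numerical}) for a hypothetical single-parameter curvature function $F(k,M)=M/(M^{2}+k^{2})$ peaked at the HSP $k_{0}=0$ (the choices of $F$, $\Delta k$ and $\Delta M$ are ours); the computed flow points away from the critical point $M_{c}=0$ on both sides, and its magnitude grows as $M\rightarrow M_{c}$.

```python
import numpy as np

def F(k, M):
    # hypothetical curvature function peaked at the HSP k0 = 0
    return M / (M**2 + k**2)

def rg_flow(M, dk=1e-3, dM=1e-4):
    """dM/dl via the numerical CRG rule: ratio of the change of F
    under a momentum shift dk to its change under a parameter shift dM."""
    num = F(dk, M) - F(0.0, M)
    den = F(0.0, M + dM) - F(0.0, M)
    return num / den

# sample the flow on both sides of the critical point M_c = 0
flows = {M: rg_flow(M) for M in (0.5, 0.1, -0.1, -0.5)}
```

Only three evaluations of $F$ per parameter point are needed, exactly as advertised for the CRG scheme.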
\subsection{Fidelity susceptibility near topological phase transitions}
In this section, we elaborate that within the context of Dirac models, the fidelity susceptibility near a topological phase transition has the same critical behavior as the curvature function. For completeness, we first give an overview of the fidelity susceptibility formulated under the notion of quantum geometric tensor\cite{Zanardi07,You07,Gu10}. Our aim is to calculate the fidelity of the eigenstates of a given Hamiltonian under one or multiple tuning parameters
\begin{eqnarray}
&&H(\mu)=H_{0}+\mu\,H_{I},\;\;\;H(\mu)|\psi_{n}(\mu)\rangle=E_{n}|\psi_{n}(\mu)\rangle,
\nonumber \\
&&\sum_{n}|\psi_{n}(\mu)\rangle\langle \psi_{n}(\mu)|=I,
\end{eqnarray}
where $\mu=\left\{\mu_{a}\right\}$ with $a=1,2...\Lambda$ is a set of tuning parameters that form a $\Lambda$-dimensional manifold. For two eigenstates that are very close in the parameter space, the fidelity is the module of the product of the two eigenstates
\begin{equation}
|\langle\psi(\mu)|\psi(\mu+\delta\mu)\rangle|= 1-\sum_{ab}\frac{1}{2}g_{ab}\delta\mu_{a}\delta\mu_{b},
\end{equation}
which defines the quantum metric tensor
\begin{eqnarray}
g_{ab}=\frac{1}{2}\langle\partial_{a}\psi|\partial_{b}\psi\rangle
+\frac{1}{2}\langle\partial_{b}\psi|\partial_{a}\psi\rangle
-\langle\partial_{a}\psi|\psi\rangle\langle\psi|\partial_{b}\psi\rangle,
\nonumber \\
\label{gab_multiple_lambda}
\end{eqnarray}
as a measure of the distance between the quantum states in the $\left\{\mu_{a}\right\}$ manifold. The quantum metric tensor is the real part $g_{ab}={\rm Re}T_{ab}$ of the more general quantum geometric tensor
\begin{eqnarray}
T_{ab}=\langle\partial_{a}\psi|\partial_{b}\psi\rangle-\langle\partial_{a}\psi|\psi\rangle\langle\psi|\partial_{b}\psi\rangle,
\label{Tab_definition}
\end{eqnarray}
whose imaginary part is the Berry curvature times $-1/2$
\begin{eqnarray}
{\rm Im}T_{ab}=-\frac{1}{2}\left[\partial_{a}\langle\psi_{-}|i\partial_{b}|\psi_{-}\rangle
-\partial_{b}\langle\psi_{-}|i\partial_{a}|\psi_{-}\rangle\right].
\end{eqnarray}
It can be easily shown that the quantum geometric tensor is invariant under local gauge transformation $|\psi_{-}\rangle\rightarrow e^{i\varphi}|\psi_{-}\rangle$, and so are $g_{ab}$ and the Berry curvature. Hence these quantities are measurables, as have been demonstrated in various systems\cite{Abanin13,Jotzu14,Duca15,Tan19}.
\subsubsection{1D topological insulators \label{sec:1D_Dirac_model}}
We now consider the quantum metric tensor of the eigenstates of a $2\times 2$ Dirac Hamiltonian that has only two components
\begin{eqnarray}
H=d_{1}\sigma_{1}+d_{2}\sigma_{2},
\label{1D_Dirac_d1d2}
\end{eqnarray}
which is relevant to several classes of 1D topological insulators\cite{Schnyder08,Ryu10,Chiu16,Chen20191}. The eigenstates and eigenenergies are, in a specific gauge choice,
\begin{eqnarray}
|\psi_{\pm}\rangle=\frac{1}{\sqrt{2}d}\left(\begin{array}{c}
\pm d \\
d_{1}+id_{2}
\end{array}
\right),\;\;\;E_{\pm}=\pm d,
\label{1D_eigenstates}
\end{eqnarray}
where $d=\sqrt{d_{1}^{2}+d_{2}^{2}}$. Suppose each component of the ${\bf d}$-vector is a function of a certain tuning parameter $k$ (what precisely is $k$ is unimportant at this stage), then the quantum metric tensor in Eq.~(\ref{gab_multiple_lambda}) reads
\begin{eqnarray}
g_{kk}&=&\langle\partial_{k}\psi_{-}|\partial_{k}\psi_{-}\rangle-\langle\partial_{k}\psi_{-}|\psi_{-}\rangle\langle\psi_{-}|\partial_{k}\psi_{-}\rangle
\nonumber \\
&=&\left[\frac{d_{1}\partial_{k}d_{2}-d_{2}\partial_{k}d_{1}}{2d^{2}}\right]^{2}
=\left[\langle\psi_{-}|i\partial_{k}|\psi_{-}\rangle\right]^{2}
\nonumber \\
&=&\frac{1}{4}({\hat{\bf d}}\times\partial_{k}{\hat{\bf d}})_{z}^{2}=\frac{1}{4}\partial_{k}{\hat{\bf d}}\cdot\partial_{k}{\hat{\bf d}},
\label{1D_gkk_Berry2}
\end{eqnarray}
which is equal to the square of the Berry connection $\langle\psi_{-}|i\partial_{k}|\psi_{-}\rangle$ in this gauge (it should be reminded that $g_{kk}$ is gauge invariant but the Berry connection is not), and ${\hat{\bf e}}_{k}=\partial_{k}{\hat{\bf d}}/2$ plays the role of vielbein. Moreover, in this case that there is only one tuning parameter $k$, the quantum metric tensor is also equal to the fidelity susceptibility defined from\cite{You07,Zanardi07}
\begin{eqnarray}
\langle\psi_{-}(k)|\psi_{-}(k+\delta k)\rangle=1-\frac{\delta k^{2}}{2}\chi_{F}=1-\frac{\delta k^{2}}{2}g_{kk}.
\label{1D_fidelity_sus_definition}
\end{eqnarray}
We proceed to consider the physically meaningful Dirac model relevant to the low energy theory near the HSP $k=0$ of topological insulators
\begin{eqnarray}
d_{1}=M,\;\;\;d_{2}=k,
\end{eqnarray}
where $M$ is the mass and $k$ is the momentum. Our observation is that the quantum metric has a meaningful interpretation if we treat the momentum space as a manifold and construct the metric between $|\psi_{-}(k)\rangle$ and $|\psi_{-}(k+\delta k)\rangle$. Using Eqs.~(\ref{1D_gkk_Berry2}) and (\ref{1D_fidelity_sus_definition}), this gives
\begin{eqnarray}
&&\langle\psi_{-}|i\partial_{k}|\psi_{-}\rangle=-\frac{M}{2(M^{2}+k^{2})},
\nonumber \\
&&\chi_{F}=\frac{M^{2}}{4(M^{2}+k^{2})^{2}}.
\end{eqnarray}
At the HSP $k=0$, these quantities diverge with the mass term $M$ like
\begin{eqnarray}
&&\langle\psi_{-}|i\partial_{k}|\psi_{-}\rangle|_{k=0}\propto |M|^{-1}=|M|^{-\gamma},
\nonumber \\
&&\chi_{F}|_{k=0}\propto |M|^{-2}=|M|^{-2\gamma}.\;\;\;
\label{chiF_div_1D}
\end{eqnarray}
Thus the divergence of the Berry connection and that of the fidelity susceptibility near the topological phase transition $M\rightarrow 0$ are described by essentially the same critical exponent $\gamma$. This justifies assigning the exponent $\gamma$, conventionally reserved for the susceptibility, to the Berry connection at $k=0$. Notice that in topological phase transitions we treat $M$ as the tuning parameter, whereas to extract the quantum metric we treat $k$ as the tuning parameter.
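The coincidence between the finite-difference fidelity of Eq.~(\ref{1D_fidelity_sus_definition}) and the analytic $\chi_{F}$ can be verified directly. The sketch below uses the gauge of Eq.~(\ref{1D_eigenstates}) with $d_{1}=M$ and $d_{2}=k$; the step size $\delta k$ is an illustrative choice.

```python
import numpy as np

def psi_minus(k, M):
    # lower eigenstate of H = M*sigma_1 + k*sigma_2 in the chosen gauge
    d = np.hypot(M, k)
    return np.array([-d, M + 1j * k]) / (np.sqrt(2.0) * d)

def chi_F_numeric(k, M, dk=1e-4):
    # fidelity |<psi(k)|psi(k+dk)>| = 1 - (dk^2/2) * chi_F
    overlap = abs(np.vdot(psi_minus(k, M), psi_minus(k + dk, M)))
    return 2.0 * (1.0 - overlap) / dk**2

def chi_F_exact(k, M):
    # analytic fidelity susceptibility of the 1D Dirac model
    return M**2 / (4.0 * (M**2 + k**2) ** 2)
```

Since the modulus of the overlap is gauge invariant, the numerical value is independent of the gauge choice made in \texttt{psi\_minus}.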
Equation (\ref{1D_gkk_Berry2}) has another significant implication on the differential geometry of the manifold. Because the determinant of the quantum metric is the quantum metric itself $\det g_{kk}=g_{kk}\equiv g$, it implies that the integration $I=\int\phi(k)\sqrt{g}dk$ of any function $\phi(k)$ over the manifold is associated with the line element $\sqrt{g}dk=|\langle\psi_{-}|i\partial_{k}|\psi_{-}\rangle|dk$ given by the absolute value of the Berry connection. Therefore the total length of the 1D manifold is
\begin{eqnarray}
L=\int\sqrt{g}dk=\int|\langle\psi_{-}|i\partial_{k}|\psi_{-}\rangle|dk.
\label{length_1D_manifold}
\end{eqnarray}
In the topologically nontrivial phase of a variety of 1D topological insulators, such as the Su-Schrieffer-Heeger model\cite{Chen2017,Chen19_book_chapter}, the Berry connection is often positive everywhere on the manifold, $\langle\psi_{-}|i\partial_{k}|\psi_{-}\rangle=|\langle\psi_{-}|i\partial_{k}|\psi_{-}\rangle|$ (this also occurs in our 1D quantum walk at some parameters, as discussed in Sec.~\ref{sec:quantum_walks}). In this case, the total length of the manifold is equal to the topological invariant, $L={\cal C}$, and hence remains an integer. In contrast, in the topologically trivial phase, where the Berry connection is positive in some regions and negative in others, this equivalence is not guaranteed and the length $L$ should be calculated by Eq.~(\ref{length_1D_manifold}).
\subsubsection{2D time-reversal breaking topological insulators}
We now turn to the 2D Dirac Hamiltonian that has all three components
\begin{eqnarray}
H=d_{1}\sigma_{1}+d_{2}\sigma_{2}+d_{3}\sigma_{3},
\end{eqnarray}
which has eigenstates and eigenenergies
\begin{eqnarray}
|\psi_{\pm}\rangle=\frac{1}{\sqrt{2d(d\pm d_{3})}}\left(\begin{array}{c}
d_{3}\pm d \\
d_{1}+id_{2}
\end{array}\right),\;\;\;E_{\pm}=\pm d,
\end{eqnarray}
where $d=\sqrt{d_{1}^{2}+d_{2}^{2}+d_{3}^{2}}$. Since the two eigenstates form a complete set $I=|\psi_{+}\rangle\langle\psi_{+}|+|\psi_{-}\rangle\langle\psi_{-}|$, we can compute the quantum geometric tensor $T_{ab}$ in Eq.~(\ref{Tab_definition}) by assuming each component of the ${\bf d}$-vector is a function of $\left\{\mu_{a}\right\}$,
\begin{eqnarray}
&&T_{ab}=\langle\partial_{a}\psi_{-}|\psi_{+}\rangle\langle\psi_{+}|\partial_{b}\psi_{-}\rangle
\nonumber \\
&&=\frac{1}{4d^{2}(d_{1}^{2}+d_{2}^{2})}\left(-d_{3}\partial_{a}d+d\partial_{a}d_{3}-id_{1}\partial_{a}d_{2}+id_{2}\partial_{a}d_{1}\right)
\nonumber \\
&&\times\left(-d_{3}\partial_{b}d+d\partial_{b}d_{3}+id_{1}\partial_{b}d_{2}-id_{2}\partial_{b}d_{1}\right),
\end{eqnarray}
whose real and imaginary parts are
\begin{eqnarray}
&&{\rm Re}T_{ab}=g_{ab}=\frac{1}{4}\partial_{a}{\hat{\bf d}}\cdot\partial_{b}{\hat{\bf d}}
\nonumber \\
&&=\frac{1}{4d^{2}}\left\{\partial_{a}d_{1}\partial_{b}d_{1}+\partial_{a}d_{2}\partial_{b}d_{2}
+\partial_{a}d_{3}\partial_{b}d_{3}-\partial_{a}d\partial_{b}d\right\},
\nonumber \\
&&{\rm Im}T_{ab}=-\frac{1}{2}\Omega_{ab}=-\frac{1}{4}{\hat{\bf d}}\cdot\left(\partial_{a}{\hat{\bf d}}\times\partial_{b}{\hat{\bf d}}\right)
\nonumber \\
&&=-\frac{1}{4d^{3}}\epsilon^{ijk}d_{i}\partial_{a}d_{j}\partial_{b}d_{k}.
\label{2D_Regab_Imgab}
\end{eqnarray}
One sees that ${\hat{\bf e}}_{a}=\partial_{a}{\hat{\bf d}}/2$ plays the role of the vielbein according to the definition $g_{ab}={\hat{\bf e}}_{a}\cdot{\hat{\bf e}}_{b}$, and $\Omega_{ab}$ is the Berry curvature whose integration over the 2D manifold gives the skyrmion number of the ${\hat{\bf d}}$ vector. Moreover, the determinant of the quantum metric tensor coincides with the square of the Berry curvature
\begin{eqnarray}
g=\det g_{ab}=\frac{1}{4}\Omega_{xy}^{2}\equiv \chi_{F},
\label{chiF_div_2D}
\end{eqnarray}
a result very similar to Eq.~(\ref{1D_gkk_Berry2}), which suggests the determinant $g\equiv \chi_{F}$ as the representative fidelity susceptibility in this 2D problem.
Similar to that discussed before and after Eq.~(\ref{length_1D_manifold}), the determinant $g$ gives the area element $\sqrt{g}d^{2}{\bf k}$ of the integration of any function over the 2D manifold. Thus the total area of the manifold reads
\begin{eqnarray}
A=\int\sqrt{g}\,d^{2}{\bf k}=\frac{1}{2}\int|\Omega_{xy}|d^{2}{\bf k}.
\end{eqnarray}
Thus, in cases where the Berry curvature is positive everywhere on the manifold, $\Omega_{xy}=|\Omega_{xy}|$, which occurs in the topologically nontrivial phases of some systems (including our 2D quantum walk in Sec.~\ref{sec:quantum_walks}), the total area of the manifold coincides with the topological invariant, $A={\cal C}/2$, and remains a quantized constant.
We observe that for Dirac models relevant to 2D time-reversal breaking topological insulators\cite{Schnyder08,Ryu10,Chiu16,Chen20191}
\begin{eqnarray}
d_{1}=k_{x},\;\;\;d_{2}=k_{y},\;\;\;d_{3}=M,
\end{eqnarray}
the quantum metric tensor has a meaningful interpretation if we treat the 2D Brillouin zone in momentum space ${\bf k}=(k_{x},k_{y})$ as a manifold. Using Eq.~(\ref{2D_Regab_Imgab}), the quantum metric tensor and the Berry curvature in this Dirac model are given by
\begin{eqnarray}
&&g_{ab}=\left(\begin{array}{cc}
g_{xx} & g_{xy} \\
g_{yx} & g_{yy}
\end{array}
\right)
\nonumber \\
&&=\frac{1}{4\left(M^{2}+k^{2}\right)^{2}}\left(\begin{array}{cc}
k_{y}^{2}+M^{2} & -k_{x}k_{y} \\
-k_{x}k_{y} & k_{x}^{2}+M^{2}
\end{array}
\right),
\nonumber \\
&&\Omega_{xy}=-\Omega_{yx}=\frac{M}{2\left(M^{2}+k^{2}\right)^{3/2}}.
\end{eqnarray}
Moreover, the determinant of the quantum metric tensor coincides with the square of the Berry curvature
\begin{eqnarray}
\det g_{ab}=\frac{1}{4}\Omega_{xy}^{2}=\frac{M^{2}}{16(M^{2}+k^{2})^{3}}\equiv \chi_{F},
\label{chiF_div_2D2}
\end{eqnarray}
a result very similar to Eq.~(\ref{1D_gkk_Berry2}). To further draw relevance to topological phase transitions driven by the mass term $M$, we see that at the HSP ${\bf k}=(0,0)$, the critical exponents of these quantities are
\begin{eqnarray}
&&\Omega_{xy}|_{k=0}\propto|M|^{-2}=|M|^{-\gamma},
\nonumber \\
&&\det g_{ab}|_{k=0}\propto|M|^{-4}=|M|^{-2\gamma}.
\end{eqnarray}
Thus the critical exponent of the determinant of the quantum metric tensor is simply twice that of the Berry curvature, as dictated by $\det g_{ab}=\Omega_{xy}^{2}/4$. This suggests the determinant $\det g_{ab}\equiv \chi_{F}$ as the representative fidelity susceptibility, and justifies the usage of the exponent $\gamma$ for the divergence of the Berry curvature.
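Since all the quantities above follow from the ${\hat{\bf d}}$-vector alone, the identity $\det g_{ab}=\Omega_{xy}^{2}/4$ and the closed form in Eq.~(\ref{chiF_div_2D2}) can be verified numerically by finite differences. The following sketch is a numerical illustration only (the step $h$ and the sample point are arbitrary choices), building the vielbeins for the Dirac model $d=(k_{x},k_{y},M)$:

```python
import numpy as np

def d_hat(kx, ky, M):
    # Unit d-vector of the 2D Dirac model d = (kx, ky, M)
    d = np.array([kx, ky, M])
    return d / np.linalg.norm(d)

def metric_and_curvature(kx, ky, M, h=1e-5):
    # Vielbeins e_a = (1/2) * partial_a d-hat, built by central differences
    ex = (d_hat(kx + h, ky, M) - d_hat(kx - h, ky, M)) / (4 * h)
    ey = (d_hat(kx, ky + h, M) - d_hat(kx, ky - h, M)) / (4 * h)
    g = np.array([[ex @ ex, ex @ ey],
                  [ey @ ex, ey @ ey]])          # quantum metric g_ab = e_a . e_b
    # Berry curvature Omega_xy = (1/2) d-hat . (dx d-hat x dy d-hat)
    omega = 0.5 * d_hat(kx, ky, M) @ np.cross(2 * ex, 2 * ey)
    return g, omega

kx, ky, M = 0.3, -0.7, 0.5
g, omega = metric_and_curvature(kx, ky, M)
det_g = np.linalg.det(g)
det_closed = M**2 / (16 * (M**2 + kx**2 + ky**2)**3)
print(det_g, omega**2 / 4, det_closed)  # all three coincide
```

Rescaling $M$ at ${\bf k}=(0,0)$ in the same script probes the $|M|^{-4}$ divergence of $\det g_{ab}$ discussed above.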
In the language of the curvature function in Sec.~\ref{sec:generic_critical_behavior}, our analysis implies that the fidelity susceptibility is equal to the square of the curvature function (up to a prefactor) for the 1D Dirac model in Sec.~\ref{sec:1D_Dirac_model} and the 2D Dirac model in this section
\begin{eqnarray}
\chi_{F}\propto F({\boldsymbol k},{\boldsymbol M})^{2}.
\end{eqnarray}
Thus $\chi_{F}$ also takes the Lorentzian shape in Eqs.~(\ref{curv-11D}) and (\ref{curv-22D}), as confirmed by expanding Eqs.~(\ref{chiF_div_1D}) and (\ref{chiF_div_2D2}). As a result, the inverse of the correlation length $\xi^{-1}$ is a momentum scale over which the fidelity susceptibility decays from the HSP.
\section{Quantum walks \label{sec:quantum_walks}}
We now demonstrate that quantum walks serve as practical simulators for the critical exponents, scaling laws, Wannier state correlation functions, CRG, and fidelity susceptibility discussed in Sec.~\ref{sec:quantum_criticality_TPT}. A quantum walk is the result of a protocol applied successively to an initial state of a walker. The protocol consists of two types of operators: coin operators that manipulate the internal states of the walker, and shift operators that change the external degree of freedom of the walker based on its internal state. Due to the successive nature of the quantum walk, the protocol can be understood as a stroboscopic periodic Floquet evolution. This means that we can map the protocol of the quantum walk to a (dimensionless) effective Floquet Hamiltonian \cite{Kitagawa}
\begin{eqnarray}
\widehat{H}& = &i \ln\widehat{U}= E \boldsymbol n \cdot \boldsymbol \sigma, \label{Hamiltonian}
\end{eqnarray}
in which $E$ is the (quasi)energy dispersion, $\boldsymbol \sigma$ are Pauli matrices and $\boldsymbol n$ defines the quantization axis
for the spinor eigenstates at each momentum. It is straightforward to obtain the energy through the eigenvalues of $\widehat{U}$ by $E= i \ln \eta$, in which $\eta$ is an eigenvalue of $\widehat{U}$. In this paper, we focus on two types of quantum walks: a) the 1D quantum walk with particle-hole (PHS), time-reversal (TRS) and chiral (CHS) symmetries, all of which square to $+1$; this type of quantum walk simulates the BDI family of topological phases in one dimension. b) The 2D quantum walk with only PHS, which simulates the D family of topological phases. For both protocols, the topological invariants are integer-valued ($\mathbb{Z}$).
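As a minimal illustration of this relation (a toy sketch with $E$ and $\boldsymbol n$ chosen by hand rather than derived from a specific protocol), one can build $\widehat{U}=e^{-iE\,\boldsymbol n\cdot\boldsymbol\sigma}$ and recover the quasienergies from its eigenvalues via $E=i\ln\eta$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

# Hand-picked quasienergy and unit quantization axis (illustrative values only)
E = 0.8
n = np.array([0.6, 0.0, 0.8])

# U = exp(-i E n.sigma) = cos(E) * 1 - i sin(E) * n.sigma
U = np.cos(E) * np.eye(2) - 1j * np.sin(E) * (n[0]*sx + n[1]*sy + n[2]*sz)

# Quasienergies from the eigenvalues eta of U via E = i ln(eta)
eta = np.linalg.eigvals(U)
E_recovered = np.sort((1j * np.log(eta)).real)
print(E_recovered)  # approximately [-0.8, 0.8]
```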
The quantum walks have one or two external degrees of freedom (position space) and two internal degrees of freedom. The coin Hilbert space ($\mathcal{H}_{C}$) is therefore spanned by $\{ \ketm{0},\: \ketm{1} \}$. For the 1D and 2D quantum walks, the position Hilbert spaces are spanned by $\{ \ketm{x}_{P}: x\in \mathbb{Z}\}$ and $\{ \ketm{x,y}_{P}: x,y \in \mathbb{Z}\}$, respectively. The total Hilbert space of the walk is given by the tensor product of the coin and position subspaces. In addition, there are generally three ways in which the energy bands can close their gap \cite{Panahiyan2020-1}. If the energy bands close their gap linearly, we have a Dirac-cone type of boundary state. For the nonlinear case, we have Fermi-arc type boundary states. Finally, if the energy bands close their gap for arbitrary momentum, the boundary states are known as flat bands. To summarize, the Dirac cones show linear dispersion, the Fermi arcs have nonlinear dispersive behavior, and the flat bands are dispersionless.
\begin{figure*}[htb]
\begin{center}
\includegraphics[clip=true,width=0.5\columnwidth]{ED1}
\includegraphics[clip=true,width=1.2\columnwidth]{CurveD1V2}\\
\includegraphics[clip=true,width=0.5\columnwidth]{ED11}
\includegraphics[clip=true,width=1.2\columnwidth]{CurveD11V2}
\caption{In the left panels, the energy as a function of rotation angle and momentum $k$ is plotted for the two cases $\beta=0$ and $\beta=\alpha-\pi$, respectively. In the upper panel, the energy bands close their gap linearly, while in the lower panel the gap closes nonlinearly. In the middle and right panels, the corresponding curvature function is plotted as $\alpha \rightarrow \alpha_{c}$. Evidently, the curvature function peaks and then flips as it passes the critical point. Since only one peak is present in the diagrams of the curvature function, the number of band crossings is one.} \label{Fig1}
\end{center}
\end{figure*}
\subsection{One-dimensional class BDI quantum walks}
For the 1D class BDI quantum walks, we consider the following protocol \cite{Kitagawa,Panahiyan2020-1}
\begin{eqnarray}
\widehat{U} & = & \widehat{S}_{\uparrow}(x) \widehat{C}_{y}(\alpha) \widehat{S}_{\downarrow}(x) \widehat{C}_{y}(\beta) \label{protocol1D},
\end{eqnarray}
in which one step of the quantum walk comprises a rotation of the internal states with $\widehat{C}_{y}(\beta)$, a displacement of the position with $\widehat{S}_{\downarrow}(x)$, a second rotation of the internal states with $\widehat{C}_{y}(\alpha)$, and a final displacement with $\widehat{S}_{\uparrow}(x)$. The coin operators are rotation matrices around the $y$ axis, $\widehat{C}_{y}(\beta)= e^{-\frac{i \beta}{2}\sigma_{y}}$ and $\widehat{C}_{y}(\alpha)= e^{-\frac{i \alpha}{2}\sigma_{y}}$, with $\alpha$ and $\beta$ being rotation angles. The shift operators take the diagonal forms $\widehat{S}_{\uparrow}(x)=e^{\frac{i k}{2}(\sigma_{z}-1)}$ and $\widehat{S}_{\downarrow}(x)=e^{\frac{i k}{2}(\sigma_{z}+1)}$, in which we have used the discrete Fourier transformation ($\ketm{k}=\sum_{x}e^{-\frac{i k x}{2}}\ketm{x}$). It is straightforward to find the energy bands and $\boldsymbol n$ as
\begin{eqnarray}
E & = & \pm\cos^{-1}(\kappa_{\alpha}\kappa_{\beta}\cos(k)-\lambda_{\alpha}\lambda_{\beta}), \label{energy1D}
\end{eqnarray}
\begin{equation}
\boldsymbol n = {\boldsymbol \zeta}/|\zeta|, \label{n1D}
\end{equation}
in which $\cos(\frac{j}{2})= \kappa_{j}$ and $\sin(\frac{j}{2})= \lambda_{j}$ where $j$ can be $\alpha$ or $\beta$, and ${\boldsymbol\zeta}=(\kappa_{\alpha} \lambda_{\beta} \sin (k),\,\lambda_{\alpha} \kappa_{\beta} + \kappa_{\alpha} \lambda_{\beta} \cos (k),\,-\kappa_{\alpha} \kappa_{\beta} \sin (k))$.
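The dispersion in Eq.~(\ref{energy1D}) can be cross-checked by constructing $\widehat{U}(k)$ of Eq.~(\ref{protocol1D}) numerically; since $\widehat{U}\in{\rm SU}(2)$, one has ${\rm tr}\,\widehat{U}/2=\cos E$. The following sketch is a numerical check only, with arbitrary sample values of $k$, $\alpha$ and $\beta$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def coin(theta):
    # C_y(theta) = exp(-i theta/2 sigma_y)
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sy

def shift_up(k):
    # exp(i k/2 (sigma_z - 1))
    return np.diag([1.0 + 0j, np.exp(-1j * k)])

def shift_down(k):
    # exp(i k/2 (sigma_z + 1))
    return np.diag([np.exp(1j * k), 1.0 + 0j])

def walk_unitary(k, alpha, beta):
    # One step of the protocol U = S_up C(alpha) S_down C(beta)
    return shift_up(k) @ coin(alpha) @ shift_down(k) @ coin(beta)

k, alpha, beta = 0.9, 1.1, 0.4
U = walk_unitary(k, alpha, beta)
cosE_numeric = np.trace(U).real / 2
cosE_formula = (np.cos(alpha/2) * np.cos(beta/2) * np.cos(k)
                - np.sin(alpha/2) * np.sin(beta/2))
print(cosE_numeric, cosE_formula)  # the two values agree
```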
The curvature function for the 1D quantum walk can be obtained by \cite{Cardano2017}
\begin{eqnarray}
&& F(k,\alpha,\beta) = \bigg (\boldsymbol n \times \partial_{k} \boldsymbol n \bigg) \cdot \boldsymbol A = \notag
\\
&& \frac{-\cos (k) \lambda_{2\alpha} \kappa_{\beta}-2 \kappa_{\alpha}^2 \lambda_{\beta}}{ 2 \sin ^2(k) \kappa_{\alpha}^2+2\left(\cos (k) \kappa_{\alpha}\lambda_{\beta}+\lambda_{\alpha} \kappa_{\beta}\right)^2}. \label{curv1D}
\end{eqnarray}
in which $\boldsymbol A=(\kappa_{\beta},0, \lambda_{\beta})$ is perpendicular to $\boldsymbol n$. We consider $\alpha$ as the tuning parameter that closes the band gap, and denote its critical point by $\alpha_{c}$. Using the curvature function in Eq.~\eqref{curv1D}, one can show the following relations
\begin{equation}
\lim\limits_{\alpha \rightarrow \alpha_{c}^{-}} F(k=0,\alpha) = \infty = -\lim\limits_{\alpha \rightarrow \alpha_{c}^{+}} F(k=0,\alpha) ,
\end{equation}
\begin{equation}
\lim\limits_{\alpha \rightarrow \alpha_{c}^{-}} F(k=\pi,\alpha) = - \infty = -\lim\limits_{\alpha \rightarrow \alpha_{c}^{+}} F(k=\pi,\alpha),
\end{equation}
which confirms the divergence and sign change of the curvature function at the critical point described by Eq.~(\ref{FkM_xi_critical_behavior}). To extract the critical exponents and formulate the curvature function, we first gauge away the $z$ component of ${\boldsymbol\zeta}$ by rotating it around the $y$ axis
\begin{eqnarray}
&&R\,{\boldsymbol\zeta}=\left(\begin{array}{c}
\kappa_{\alpha}\sin (k) \\
\zeta_{y} \\
0
\end{array}\right)=\left(\begin{array}{c}
\zeta_{ x}^\prime \\
\zeta_{ y}^\prime \\
0
\end{array}\right)\equiv{\boldsymbol\zeta}^{\prime},
\nonumber \\
&&R\,{\bf A}=\left(\begin{array}{c}
0 \\
0 \\
1
\end{array}\right)\equiv{\bf A}^{\prime}.
\end{eqnarray}
The new ${\boldsymbol\zeta}^{\prime}$ leads to a rotated Hamiltonian and the corresponding eigenstates
\begin{eqnarray}
|\psi_{k\pm}'\rangle=\frac{1}{\sqrt{2}|{\boldsymbol\zeta}^{\prime}|}\left(\begin{array}{c}
\pm|{\boldsymbol\zeta}^{\prime}| \\
\zeta_{ x}^\prime\pm i\zeta_{ y}^\prime
\end{array}\right),
\end{eqnarray}
in terms of which the curvature function coincides with the stroboscopic Berry connection
\begin{eqnarray}
&& F'(k,\alpha,\beta)= \frac{\zeta_{x}^{\prime}\partial_{k}\zeta_{y}^{\prime}-\zeta_{y}^{\prime}\partial_{k}\zeta_{x}^{\prime}}{(\zeta_{x}^{\prime 2}+\zeta_{y}^{\prime 2})}
=2\langle\psi_{k-}^{\prime}|i\partial_{k}|\psi_{k-}^{\prime}\rangle
\nonumber \\ &&=
\frac{-\kappa_{\alpha}^{2}\lambda_{\beta}-\lambda_{\alpha}\kappa_{\alpha}\kappa_{\beta}\cos (k)}{\kappa_{\alpha}^{2}\sin^{2}(k)+\lambda_{\alpha}^{2}\kappa_{\beta}^{2}
+2\kappa_{\alpha}\kappa_{\beta}\lambda_{\alpha}\lambda_{\beta}\cos (k)+\kappa_{\alpha}^{2}\lambda_{\beta}^{2}\cos^{2}(k)}.
\label{strob_Berry_connection}
\nonumber \\
\end{eqnarray}
Notice that because of the controllability of the momentum $k$ in the protocol of Eq.~(\ref{protocol1D}), the quantum walk can map out the entire momentum profile of the Berry connection, and hence the quantum metric in Eq.~(\ref{1D_gkk_Berry2}), for the entire momentum space manifold.
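As a consistency check of Eq.~(\ref{strob_Berry_connection}), the closed form of $F'$ can be compared with a direct finite-difference evaluation of $2\langle\psi_{k-}^{\prime}|i\partial_{k}|\psi_{k-}^{\prime}\rangle$, and with the quantum metric relation $g_{kk}=F^{\prime 2}/4$ of Eq.~(\ref{1D_gkk_Berry2}). A sketch (numerical illustration only; the sample parameters are arbitrary):

```python
import numpy as np

def zeta_rot(k, alpha, beta):
    # Rotated zeta' = (kappa_a sin k, lambda_a kappa_b + kappa_a lambda_b cos k)
    ka, la = np.cos(alpha/2), np.sin(alpha/2)
    kb, lb = np.cos(beta/2), np.sin(beta/2)
    return ka * np.sin(k), la * kb + ka * lb * np.cos(k)

def psi_minus(k, alpha, beta):
    # Lower rotated eigenstate |psi'_{k-}>
    zx, zy = zeta_rot(k, alpha, beta)
    r = np.hypot(zx, zy)
    return np.array([-r, zx - 1j * zy]) / (np.sqrt(2) * r)

def F_closed(k, alpha, beta):
    # Closed form of the stroboscopic Berry connection
    ka, la = np.cos(alpha/2), np.sin(alpha/2)
    kb, lb = np.cos(beta/2), np.sin(beta/2)
    num = -ka**2 * lb - la * ka * kb * np.cos(k)
    den = (ka**2 * np.sin(k)**2 + la**2 * kb**2
           + 2 * ka * kb * la * lb * np.cos(k) + ka**2 * lb**2 * np.cos(k)**2)
    return num / den

def berry_connection(k, alpha, beta, h=1e-6):
    # 2 <psi'|i d_k|psi'> by central differences
    dpsi = (psi_minus(k + h, alpha, beta) - psi_minus(k - h, alpha, beta)) / (2 * h)
    return 2 * np.real(1j * np.vdot(psi_minus(k, alpha, beta), dpsi))

def quantum_metric(k, alpha, beta, h=1e-6):
    # g_kk = <d_k psi|d_k psi> - |<psi|d_k psi>|^2
    psi = psi_minus(k, alpha, beta)
    dpsi = (psi_minus(k + h, alpha, beta) - psi_minus(k - h, alpha, beta)) / (2 * h)
    return np.real(np.vdot(dpsi, dpsi)) - abs(np.vdot(psi, dpsi))**2

k, alpha, beta = 0.7, 1.2, 0.5
print(F_closed(k, alpha, beta), berry_connection(k, alpha, beta))
print(quantum_metric(k, alpha, beta), F_closed(k, alpha, beta)**2 / 4)
```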
\begin{figure*}[htb]
\begin{center}
\includegraphics[clip=true,width=1\columnwidth]{CorrFD1V2}
\includegraphics[clip=true,width=1\columnwidth]{CorrFD11V2}
\caption{Correlation function $\tilde{F}_{1D}(R,\alpha)$ as a function of $R$ for the two cases $\beta=0$ (left two panels) and $\beta=\alpha-\pi$ (right two panels). In the left two panels, we observe that the correlation function decays via a damped oscillation. This oscillation is rooted in the fact that the curvature function has peaks at both $k=0$ and $k=\pi$. In contrast, in the right two panels the correlation function decays monotonically, without any oscillation. The correlation function also exhibits the characteristic sign flip around the critical point that was observed for the curvature function.} \label{Fig11}
\end{center}
\end{figure*}
The gap-closing points $k_{c}$ are located at $0$ and $\pi$. Upon a series expansion around these points, we find that the Lorentzian shape in Eq.~(\ref{curv-11D}) satisfies
\begin{eqnarray}
&&F'(k=\left\{0,\pi\right\},\alpha) =- \frac{\kappa_{\alpha}}{\lambda_{\alpha\pm\beta}}\propto\frac{1}{\lambda_{\alpha\pm\beta}},
\nonumber \\
&&\xi^{2}(k=\left\{0,\pi\right\},\alpha) =\frac{1}{2}\frac{\kappa_{\beta}^{2}+\kappa_{\alpha}^{2}\kappa_{\beta}^{2}
-\kappa_{\alpha}\kappa_{\beta}\lambda_{\alpha}\lambda_{\beta}}{\lambda_{\alpha\pm\beta}^{2}}
\propto\frac{1}{\lambda_{\alpha\pm\beta}^{2}}.
\nonumber \\
\label{xiDkc1}
\end{eqnarray}
In terms of the critical rotation angle $\alpha_{c}$, we find
\begin{eqnarray}
F(k=k_{c},\alpha) \propto
\xi(k=k_{c},\alpha) \propto |\alpha-\alpha_{c}|^{-1}.
\end{eqnarray}
which indicates that the critical exponents are $\gamma=\nu=1$. This satisfies\cite{Chen2017,Chen19_book_chapter,Chen20191} the 1D scaling law $\gamma=\nu$ and the prediction for 1D class BDI that $\nu\in 2{\mathbb Z}+1$. Finally, using the obtained curvature function, we find that its Fourier transform takes the form
\begin{eqnarray}
\tilde{F}_{1D}(R,\alpha)\approx\frac{1}{2}\int_{0}^{2\pi}\frac{dk}{2\pi}\frac{F'(k_{c},\alpha,\beta)}{1+\xi^{2}k^{2}} e^{i k R} \propto e^{-R/\xi},\;\;\;\;\;
\label{Fourier_curvature_1D}
\end{eqnarray}
which decays as a function of $R$ with the length scale $\xi$. Combined with the fact that the curvature function is the stroboscopic Berry connection of the rotated eigenstates, as proved in Eq.~(\ref{strob_Berry_connection}), the Fourier transform in Eq.~(\ref{Fourier_curvature_1D}) then represents the correlation function between the rotated stroboscopic Wannier states, as stated in Eq.~(\ref{1D_Wannier_correlation}), with $\xi$ playing the role of the correlation length. Interestingly, if the gap simultaneously closes at $k=0$ and $\pi$, then the correlation function decays through a damped oscillation (see Fig. \ref{Fig11}). On the other hand, if the gap closes at only one momentum, then the correlation function decays monotonically. Finally, the quantum metric associated with our quantum walk \eqref{1D_gkk_Berry2} and the fidelity susceptibility \eqref{1D_fidelity_sus_definition} in one dimension read
\begin{eqnarray}
g_{kk}=\chi_{F}= \frac{F^{\prime 2}(k,\alpha,\beta)}{4},
\end{eqnarray}
which diverges when $\alpha=\alpha_{c}$ and $k=k_{c}$.
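The exponent $\gamma=1$ extracted above can also be read off numerically from $F'(k=0,\alpha)=-\kappa_{\alpha}/\lambda_{\alpha+\beta}$ in Eq.~(\ref{xiDkc1}), by fitting the log-log slope of $|F'|$ against $|\alpha-\alpha_{c}|$; here $\alpha_{c}=-\beta$ is the zero of $\lambda_{\alpha+\beta}$ (a numerical illustration only, with an arbitrary choice of $\beta$):

```python
import numpy as np

def F_at_k0(alpha, beta):
    # F'(k=0, alpha) = -kappa_alpha / lambda_{alpha+beta}
    return -np.cos(alpha / 2) / np.sin((alpha + beta) / 2)

beta = 0.5
alpha_c = -beta              # zero of lambda_{alpha+beta} = sin((alpha+beta)/2)
d1, d2 = 1e-3, 1e-4
F1 = F_at_k0(alpha_c + d1, beta)
F2 = F_at_k0(alpha_c + d2, beta)
slope = (np.log(abs(F1)) - np.log(abs(F2))) / (np.log(d1) - np.log(d2))
print(slope)  # close to -1, i.e. F' ~ |alpha - alpha_c|^(-gamma) with gamma = 1
```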
The result of the CRG approach applied to the 1D quantum walk is shown in Fig.~\ref{Fig111}, where we treat ${\boldsymbol M}=\left\{\alpha,\beta\right\}$ as transition-driving parameters. The RG flow is obtained from Eq.~(\ref{RG_eq_numerical}) using the two HSPs $k_{0}=0$ and $\pi$. The phase boundaries, identified as the lines away from which the RG flow is directed and along which the flow rate $|d{\bf M}/d\ell|$ diverges, correctly capture the topological phase transitions in this problem, as also confirmed by observing the gap closing along these lines.
\begin{figure*}[htb]
\begin{center}
\includegraphics[clip=true,width=0.9\columnwidth]{RG1DV2}
\includegraphics[clip=true,width=0.51\columnwidth]{Phase1D}
\caption{In the left two panels, the CRG approach applied to the 1D quantum walk shows the RG flow in the ${\boldsymbol M}=(\alpha,\beta)$ parameter space when the scaling procedure is applied to each of the two HSPs $k_{c}=0$ and $\pi$, with the scaling direction fixed at ${ {\hat k}}_{s}={ {\hat k}}$. The color code indicates the logarithm of the flow rate $\log|d{\boldsymbol M}/d\ell|$, and the green lines are where the flow rate diverges, indicating a topological phase transition caused by the flipping of the curvature function at the corresponding $k_{c}$. The right panel shows the phase diagram and critical points as a function of the rotation angles. The existence of multicriticality in the phase diagram is evident.} \label{Fig111}
\end{center}
\end{figure*}
\subsection{Two-dimensional class D quantum walks}
For the 2D class D quantum walk, the protocol of the quantum walk is \cite{Kitagawa,Panahiyan2020-2}
\begin{equation}
\widehat{U} = \widehat{S}_{\uparrow \downarrow}(y) \widehat{C}_{y}(\beta) \widehat{S}_{\uparrow \downarrow}(x) \widehat{C}_{y}(\alpha) \widehat{S}_{\uparrow \downarrow}(x,y) \widehat{C}_{y} (\beta). \label{protocol2D}
\end{equation}
Using the discrete Fourier transformation, we find that the shift operators are $\widehat{S}_{\uparrow \downarrow} (x,y)= e^{i(k_{x}+k_{y}) \sigma_{z}}$, $\widehat{S}_{\uparrow \downarrow} (x)= e^{ik_{x} \sigma_{z}}$ and $\widehat{S}_{\uparrow \downarrow} (y)= e^{ik_{y} \sigma_{z}}$. We obtain the energy bands
\begin{eqnarray}
E & = & \pm\cos^{-1}(\rho), \label{energy2D}
\end{eqnarray}
in which
\begin{eqnarray}
\rho = &&\kappa_{\alpha}\kappa_{2\beta}\cos(k_x) \cos (k_x+ 2k_y)
\nonumber \\
&&-\kappa_{\alpha}\sin(k_x) \sin (k_x+ 2k_y)
-\lambda_{\alpha} \lambda_{2\beta} \cos ^2(k_x),
\end{eqnarray}
and $\boldsymbol \zeta$ is obtained as
\begin{eqnarray}
\zeta_{x} &=& - 2 \lambda _{\beta } \sin \left(k_x\right) \left(\lambda _{\alpha } \lambda _{\beta } \cos \left(k_x\right)-\kappa _{\alpha } \kappa _{\beta } \cos \left(k_x+2 k_y\right)\right), \notag
\\
\zeta_{y} &=& \lambda _{\alpha } \kappa _{\beta }^2-\lambda _{\alpha } \lambda _{\beta }^2 \cos \left(2 k_x\right)
\nonumber \\
&&+2 \kappa _{\alpha } \kappa _{\beta } \lambda _{\beta } \cos \left(k_x\right) \cos \left(k_x+2 k_y\right), \notag
\\
\zeta_{z} &=& \lambda _{\alpha } \kappa _{\beta } \lambda _{\beta } \sin \left(2 k_x\right)
\nonumber \\
&-&\kappa _{\alpha } \left(\kappa _{\beta }^2 \sin \left(2 \left(k_x+k_y\right)\right)+\lambda _{\beta }^2 \sin \left(2 k_y\right)\right).
\end{eqnarray}
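The particle-hole-symmetric structure of the spectrum in Eq.~(\ref{energy2D}) can be checked by building $\widehat{U}$ of Eq.~(\ref{protocol2D}) numerically. The sketch below adopts one plausible reading of the momentum-space shift operators quoted above and verifies only the generic ${\rm SU}(2)$ relations, namely that ${\rm tr}\,\widehat{U}$ is real and the quasienergies come in $\pm E$ pairs with $\cos E={\rm tr}\,\widehat{U}/2$; matching the closed form of $\rho$ term by term would additionally require fixing the Fourier conventions, which we do not attempt here:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def coin(theta):
    # C_y(theta) = exp(-i theta/2 sigma_y)
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sy

def shift(phase):
    # exp(i * phase * sigma_z)
    return np.diag([np.exp(1j * phase), np.exp(-1j * phase)])

def walk_unitary(kx, ky, alpha, beta):
    # U = S(y) C(beta) S(x) C(alpha) S(x,y) C(beta)
    return (shift(ky) @ coin(beta) @ shift(kx) @ coin(alpha)
            @ shift(kx + ky) @ coin(beta))

kx, ky, alpha, beta = 0.4, -0.9, 1.3, 0.7
U = walk_unitary(kx, ky, alpha, beta)
rho = np.trace(U).real / 2           # tr(U)/2 = cos(E) for U in SU(2)
E = np.arccos(rho)
phases = np.sort((1j * np.log(np.linalg.eigvals(U))).real)
print(phases, [-E, E])  # quasienergies come in particle-hole pairs +/- E
```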
To investigate the critical behavior of the 2D quantum walk, we start from the curvature function
\begin{equation}
F(k_{x},k_{y},\alpha,\beta)= \bigg (\frac{\partial \boldsymbol n}{\partial k_{x}} \times \frac{\partial \boldsymbol n}{\partial k_{y}} \bigg) \cdot \boldsymbol n=\frac{\phi}{(\zeta_{x}^2+\zeta_{y}^2+\zeta_{z}^2)^{\frac{3}{2}}}, \label{Chern}
\end{equation}
whose integral counts the skyrmion number of the ${\boldsymbol n}$ vector in the BZ, in which
\begin{eqnarray}
\phi=&& 2 \kappa _{\alpha } \lambda _{\beta } \left(\kappa _{\beta }^2+\lambda _{\beta }^2\right)
\bigg[
4 \kappa _{\alpha }^2 \kappa _{\beta }^2 \lambda _{\beta } \cos \left(k_x\right) \cos \left(k_x+2 k_y\right)+ \notag
\\ &&
\kappa_{\alpha } \lambda _{\alpha } \kappa _{\beta } \bigg(2 \kappa _{\beta }^2 \cos \left(2 k_y\right) \cos \left(2k_x+2k_y)\right)- \notag
\\ &&
\lambda _{\beta }^2 \left(2 \cos \left(2 k_x\right)+\cos \left(4
k_y\right)+3\right)\bigg) \notag
\\ &&
+2 \lambda _{\alpha }^2 \lambda _{\beta } \cos \left(2 k_y\right) \left(\lambda _{\beta }^2-\kappa _{\beta }^2 \cos \left(2 k_x\right)\right)
\bigg]. \notag
\end{eqnarray}
The controllability of the momentum ${\boldsymbol k}$ in the protocol of Eq.~(\ref{protocol2D}) enables us to obtain the entire momentum profile of the Berry curvature, and hence the fidelity susceptibility in Eq.~(\ref{chiF_div_2D}), for the entire momentum space manifold. Now, we consider the rotation angle $\alpha$ as the tuning parameter. In contrast to the 1D case, the energy bands can close their gap at values of $k_{x},k_{y}$ not limited to $0$ and $\pi$. Nevertheless, irrespective of the precise location of ${\boldsymbol k}_{c}$, the divergence and flipping of the curvature function always hold
\begin{eqnarray}
&&\lim\limits_{\alpha \rightarrow \alpha_{c}^{-}} F(k_{x}=k_{c},k_{y}=k_{c},\alpha)
\nonumber \\
&&= -\lim\limits_{\alpha \rightarrow \alpha_{c}^{+}} F(k_{x}=k_{c},k_{y}=k_{c},\alpha)= \pm \infty,
\end{eqnarray}
indicating that the curvature function can be invoked to study the critical behavior of the system. The plotted diagrams for the curvature function show the emergence of a single peak as $\alpha \rightarrow \alpha_{c}$ (see Fig. \ref{Fig2}). This shows that the number of band crossings is one ($n=1$) and that the peak-divergence scenario is applicable to this protocol as well. In what follows, for the sake of brevity and simplicity, we consider $k_{y}=-k_{x}$. In this case, one finds that the energy gap closes at $k_{x}=k_{c}=\pi/2$. It is straightforward to find the curvature function and the length scale at the critical points as
\begin{eqnarray}
&&F(k_{x}=-k_{y}=\frac{\pi}{2},\alpha) =\frac{2 {\rm Sig}(\kappa_{\alpha}) \left(\lambda_{2(\alpha-\beta)}-\lambda_{2\alpha}-\lambda_{2\beta} \right) }{1- \kappa_{2\alpha}},
\nonumber \\
&&\xi_{x}^2(k_{x}=-k_{y}=\frac{\pi}{2},\alpha) \approx \frac{\Xi}{(1- \kappa_{2\alpha})^{\frac{1}{2}}}, \label{xiDkc11}
\end{eqnarray}
in which
\begin{eqnarray}
\Xi = &&
2\sqrt{2} \kappa_{\alpha} (2\lambda_{\beta}^2(5+2 \kappa_{2\alpha}+\kappa_{2\beta})
\nonumber \\
&&-\lambda_{2\beta}\cot \left(\frac{\alpha
}{2}\right) (3\kappa_{2\beta}+\kappa_{2\alpha}-4)). \notag
\end{eqnarray}
By setting $\beta=\pi/2$, the critical point occurs at $\alpha_{c}=0$, in which case we extract the critical exponents $\gamma=2$ and $\nu=1$; the scaling law $\gamma= D\nu$ with $D=2$ is satisfied, in agreement with the results in Refs. \cite{Chen20191,Molignini19}.
\begin{figure*}[htb]
\begin{center}
\includegraphics[clip=true,width=0.5\columnwidth]{ED2}
\includegraphics[clip=true,width=1.2\columnwidth]{CurveD2V2}\\
\includegraphics[clip=true,width=0.5\columnwidth]{ED22}
\includegraphics[clip=true,width=1.2\columnwidth]{CurveD22V2}
\caption{In the left panels, the energy as a function of rotation angle and momenta $k_{x}$ and $k_{y}$ is plotted for the two cases of $\beta=\pi/2$ with $\alpha=0$, and $\beta=\pi/2$ with $k_{y}=-k_{x}$. Evidently, the curvature function has one peak, which indicates that the number of band crossings is one. The peak starts to grow as $\alpha \rightarrow \alpha_{c}$ and diverges at $\alpha = \alpha_{c}$, with the peak flipping as we pass the critical point.} \label{Fig2}
\end{center}
\end{figure*}
To see the correlation function, we start from the stroboscopic eigenstates of the Hamiltonian
\begin{eqnarray}
|\psi_{\bf k\pm}\rangle=\frac{1}{\sqrt{2n(n\pm n_{z})}}\left(\begin{array}{c}
n_{z}\pm n \\
n_{x}+i n_{y}
\end{array}\right).
\end{eqnarray}
From these states we see that the stroboscopic Berry curvature of the filled-band eigenstates coincides with the curvature function in Eq.~(\ref{Chern})
\begin{eqnarray*}
\partial_{k_{x}}\langle\psi_{\boldsymbol k-}|\partial_{k_{y}}|\psi_{\boldsymbol k-}\rangle-
\partial_{k_{y}}\langle\psi_{\boldsymbol k-}|\partial_{k_{x}}|\psi_{\boldsymbol k-}\rangle=\frac{1}{2}F({\boldsymbol k},\alpha,\beta).
\end{eqnarray*}
Thus the Fourier transform of the curvature function gives a correlation function that measures the overlap of the stroboscopic Wannier states according to Eq.~(\ref{Wannier_correlation_2D}). Moreover, using the Lorentzian shape in Eq.~(\ref{curv-22D}), the Fourier transform
\begin{eqnarray}
\tilde{F}_{2D}(\boldsymbol R,\alpha) &\approx&\frac{1}{2}\int\frac{d^{2}{\boldsymbol k}}{(2\pi)^{2}}\frac{F(\boldsymbol k_{c},\alpha)}{1+\xi^{2}\delta \boldsymbol k^{2}},
\end{eqnarray}
gives a correlation function that decays with ${\boldsymbol R}$ with the correlation length $\xi$. In the case of $k_{y}=-k_{x}$ ($R_{y}=-R_{x}$), we observe that the correlation function decays through a damped oscillation, which is due to the presence of at least two peaks in the curvature function (see Fig. \ref{Fig22}). Now we are in a position to find the fidelity susceptibility, which can be done by first building up the quantum metric associated with our quantum walk \eqref{2D_Regab_Imgab} and then calculating its determinant, which leads to
\begin{eqnarray}
\chi_{F}=\text{det } g_{k_{x}k_{y}}= \frac{F^{2}(k_{x},k_{y},\alpha,\beta)}{4},
\end{eqnarray}
which diverges at the gapless points of the energy bands and can be used to characterize the critical points.
\begin{figure*}[htb]
\begin{center}
\includegraphics[clip=true,width=1.2\columnwidth]{CorrFD22V2}
\caption{Correlation function $\tilde{F}_{2D}(\boldsymbol R,\alpha)$ as a function of $\boldsymbol R$ for $\beta=\pi/2$ with $k_{y}=-k_{x}$ ($R_{y}=-R_{x}$). The correlation function decays through a damped oscillation since the curvature function acquires two peaks. Evidently, the correlation function, similar to the curvature function, flips sign as the system passes the critical point. } \label{Fig22}
\end{center}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\includegraphics[clip=true,width=1.5\columnwidth]{RG2D}
\includegraphics[clip=true,width=0.34\columnwidth]{phase_diagram_2D}
\caption{CRG approach applied to the 2D quantum walk, where the left four panels show the RG flow in the ${\boldsymbol M}=(\alpha,\beta)$ parameter space when the scaling procedure is applied to each of the four HSPs ${\boldsymbol k}_{0}=(0,0)$, $(\pi/2,\pi/2)$, $(\pi/2,0)$, and $(0,\pi/2)$, with the scaling direction fixed at ${\boldsymbol {\hat k}}_{s}={\bf {\hat k}}_{x}$. The color code indicates the logarithm of the flow rate $\log|d{\boldsymbol M}/d\ell|$, and the green lines are where the flow rate diverges, indicating a topological phase transition caused by the flipping of the curvature function at the corresponding ${\boldsymbol k}_{0}$. The right panel shows the phase boundaries obtained by combining these results. } \label{Fig222}
\end{center}
\end{figure*}
We now discuss the application of the CRG approach to the 2D quantum walk, treating ${\boldsymbol M}=\left\{\alpha,\beta\right\}$ as a 2D parameter space. The resulting RG flow is shown in Fig. \ref{Fig222}, where we use the four HSPs ${\boldsymbol k}_{0}=(0,0)$, $(\pi/2,\pi/2)$, $(\pi/2,0)$ and $(0,\pi/2)$ and fix the scaling direction to be ${\bf {\hat k}}_{s}={\bf {\hat k}}_{x}$. The lines in the parameter space along which the flow rate $|d{\bf M}/d\ell|$ diverges, and away from which the RG flow is directed, mark the topological phase transitions caused by the flipping of the curvature function at the corresponding ${\bf k}_{0}$. Collecting the transition lines for all four ${\bf k}_{0}$ correctly captures the phase diagram of the 2D quantum walk.
\section{Conclusion \label{sec:conclusion}}
In summary, we clarified the notion of the quantum metric tensor near topological phase transitions within the context of 1D and 2D Dirac models. The quantum metric tensor defined on the momentum space manifold turns out to represent a kind of geometric texture of the ${\hat{\bf d}}$-vector that parametrizes the Dirac Hamiltonian, in a way similar to the winding texture of the Berry connection in 1D and the skyrmion texture of the Berry curvature in 2D. The determinant of the quantum metric tensor coincides with the square of the Berry connection in 1D and of the Berry curvature in 2D, from which we define the representative fidelity susceptibility. As a result, the fidelity susceptibility shares the same Lorentzian shape and critical exponent as these curvature functions, and moreover the correlation length yields a momentum scale over which the fidelity susceptibility decays.
We then turned to the simulation of these quantities by means of quantum walks for the 1D class BDI and 2D class D Dirac models. It is shown that not only can the quantum walks map out the entire momentum profile of the curvature function, and hence the quantum metric tensor on the entire manifold, but they also capture the critical exponents and scaling laws. Owing to the geometry of the gap closing in the 1D quantum walk, the stroboscopic Wannier state correlation function either displays a damped oscillation, which happens for Dirac-cone gap closings, or decays monotonically in the Fermi-arc case. For the 2D quantum walk, since the curvature function admits the presence of two peaks corresponding to two critical points, the correlation function decays through a damped oscillation. These results confirm quantum walks as universal simulators of topological phase transitions, and introduce the notion of universality class into these simulators, which can eventually be compared with real topological materials.
While the present work focuses on only two protocols of quantum walks, the same analysis of criticality can be carried out for protocols that simulate other symmetry classes and dimensions. In addition, the decay of the correlation function for the robust edge states observed in inhomogeneous quantum walks is another subject of interest \cite{Kitagawa}. From the perspective of Floquet engineering, it remains to be explored how to properly design the protocol such that the quantum walk can simulate more exotic topological phases, such as nodal loops\cite{Molignini20}. On the fidelity susceptibility side, it remains to be analyzed how other kinds of curvature functions, such as those associated with the ${\mathbb Z}_{2}$ invariant and the 3D winding number\cite{Chen2,Chen19_book_chapter,Chen20191}, are related to the quantum metric tensor. We leave these intriguing issues for future investigations.
\section{Acknowledgement}
The authors acknowledge fruitful discussions with P. Molignini, R. Chitra and S. H. Hendi. W. Chen is financially supported by the productivity in research fellowship from CNPq.
\section{Introduction}\label{se1}
The role of the space of holomorphic projective structures in the understanding of the uniformization theorem of
Riemann surfaces was emphasized by many authors (see, for instance, \cite{Gu,St} and references therein).
Recall that a holomorphic projective structure on a Riemann surface $X$ is given by (the equivalence class of) an
atlas $\{(U_i,\, \phi_i)\}_{i\in I}$ with local charts $X\,\supset \, U_i\, \stackrel{\phi_i}{\longrightarrow}\, {\mathbb C}{\mathbb P}^1$, $i\, \in\, I$,
such that all the transition maps $\phi_j\circ \phi^{-1}_i$
are restrictions of elements in the M\"obius group $\text{PGL}(2,{\mathbb C})$ of complex projective
transformations (see Section \ref{se2.1} for more details).
Historically, the idea of using holomorphic projective structures to prove uniformization theorem for Riemann
surfaces came from the study of second-order linear differential equations (see \cite[Chapter VIII]{St}). The modern
point of view summarizes this equivalence between holomorphic projective connections and second-order linear
differential equations as an identification of the space of holomorphic projective connections with the space of
${\rm PGL}(2,{\mathbb C})$--opers (see \cite[Section I.5]{De} and \cite{BeDr}).
A more general notion is that of a branched holomorphic projective structure which was introduced and studied by
Mandelbaum in \cite{Ma1, Ma2}; more recently, a general notion of (not necessarily flat) branched Cartan geometry on complex manifolds was
introduced and studied in \cite{BD}. A branched holomorphic projective structure is defined by (the equivalence
class of) a holomorphic atlas with the local charts being finite branched coverings of open subsets in ${\mathbb
C}{\mathbb P}^1$, while the transition maps are restrictions of elements in the M\"obius group $\text{PGL}(2,{\mathbb
C})$ (see Section \ref{se5.1} for details). The branching divisor $S$ of a branched projective structure on $X$ is
the union of points in $X$ where the local projective charts admit a ramification point. The Riemann surface $X
\setminus S$ inherits a holomorphic projective structure in the classical sense. Branched holomorphic projective
structures play an important role in the study of hyperbolic metrics with conical singularities on Riemann surfaces
or in the study of codimension one transversally projective holomorphic foliations (see, for example, \cite{CDF}).
From a global geometric point of view, a holomorphic projective structure over a Riemann surface $X$ is known to
give a flat ${\mathbb C}{\mathbb P}^1$-bundle over $X$ together with a holomorphic section which is transverse to
the horizontal distribution defining the flat connection on the bundle. For a branched holomorphic projective
structure with branching divisor $S$, we have a similar description, but the section of
the projective bundle fails to be transverse
to the horizontal distribution (defining the flat structure) precisely at points in $S$ (see, for example, \cite{BDG,
CDF, GKM, LM} or Section \ref{se5.1} here).
In this article we study the branched holomorphic projective structures on a Riemann surface $X$ with fixed
branching divisor $S$. Throughout, we assume that $S$ is reduced, meaning that $S\, :=\, \sum_{i=1}^d x_i$,
with $x_i \,\in\, X$, distinct points.
The presentation is organized in the following way. Sections \ref{se2}, \ref{se3} and \ref{sec4} deal with the
geometry of classical holomorphic projective connections and Sections \ref{se5}, \ref{sec6} and \ref{sec7} are about
branched holomorphic projective connections. More precisely, Section \ref{se2} presents the geometrical setting of a
holomorphic projective structure on a Riemann surface $X$ and proves that
a projective structure induces a holomorphic connection on the
rank three holomorphic 2-jet bundle $J^2(TX)$. Moreover, this connection has a natural geometric behavior: it is an
oper connection (see Definition \ref{def1}) with respect to the canonical filtration of $J^2(TX)$ induced by the
kernels of the natural forgetful projections of jet bundles $J^2(TX)\,\longrightarrow\,
J^1(TX)$ and $J^2(TX)\,\longrightarrow\, TX$. In Section
\ref{se3} we identify the space of holomorphic projective structures on the Riemann surface $X$ with the space of
oper connections on $J^2(TX)$ satisfying some extra natural geometrical properties (see Corollary \ref{cor1}).
Section \ref{sec4} translates the previous equivalence as an identification of the space of holomorphic projective
connections with the space of ${\rm SO}(3,{\mathbb C})$--opers.
Section \ref{se5} starts the study of branched holomorphic projective structures. It is shown that a branched
projective structure on a Riemann surface $X$ with branching divisor $S$ gives rise to a logarithmic connection on
the rank three 2-jet bundle $J^2((TX)\otimes {\mathcal O}_X(S))$ singular over $S$, with residues at $S$ satisfying
certain natural geometric conditions with respect to the canonical filtration of $J^2((TX)\otimes {\mathcal O}_X(S))$
(see Proposition \ref{prop3}).
The main results proved here are obtained in Section \ref{sec6} and in Section \ref{sec7}.
In Section \ref{se6.1} we
introduce the notion of a branched ${\rm SO}(3,{\mathbb C})$--oper singular at the divisor $S$. We
show that the space ${\mathcal P}_S$ of branched holomorphic projective structures with fixed branching divisor $S$
is naturally identified with the space of branched ${\rm SO}(3,{\mathbb C})$--opers singular at $S$ (see Theorem
\ref{thm1}).
We deduce that this space $\mathcal P_S$ also coincides with a subset of the set of all logarithmic connections with
singular locus $S$, satisfying certain natural geometric conditions, on the rank three holomorphic $2$-jet bundle
$J^2((TX)\otimes {\mathcal O}_X(S))$ (see Proposition \ref{prop4} and Theorem \ref{thm2}).
The above mentioned Theorem \ref{thm2} generalizes the main result in \cite{BDG} (Theorem 5.1) where, under the additional
assumption that the degree $d$ of $S$ is even and satisfies $d \,\neq\, 2g-2$ (with $g$ the genus of $X$), it was
proved that ${\mathcal P}_S$ coincides with a subset of the set of all logarithmic connections with singular locus
$S$, satisfying certain geometric conditions, on the rank two holomorphic jet bundle $J^1(Q)$, where $Q$ is a fixed
holomorphic line bundle on $X$ such that $Q^{\otimes 2}\,=\, TX\otimes {\mathcal O}_X(S)$. It may
be mentioned that for a branching divisor $S$ of odd degree $d$, the line bundle $Q$
considered in \cite{BDG} does not exist.
Let us clarify an important improvement in the methods developed in this work with respect to those in \cite{BDG}.
The $\text{PSL}(2,{\mathbb C})$-monodromy of an (unbranched) holomorphic projective structure on a compact Riemann
surface $X$ always admits a lift to $\text{SL}(2,{\mathbb C})$ (see \cite[Lemma 1.3.1]{GKM}). Geometrically
this means that the associated flat ${\mathbb C}{\mathbb P}^1$-bundle over $X$ is the projectivization of a rank
two vector bundle over $X$; hence one can work in the set-up of rank two vector bundles. This is not true anymore for
branched holomorphic projective structures if the degree of the branching divisor is odd; for this reason the results
and methods in \cite{BDG} cannot be extended to the case where the branching divisor of the branched holomorphic
projective structure is of odd degree.
Here we consider ${\rm SO}(3,{\mathbb C})$--opers instead of
the equivalent $\text{PSL}(2,{\mathbb C})$-opers and we develop the
notion of branched ${\rm SO}(3,{\mathbb C})$--opers. This enables us to investigate branched holomorphic
projective structures in the framework of rank three holomorphic (2-jet) vector bundles instead of the
projective bundles.
\section{Projective structure and second jet of tangent bundle}\label{se2}
\subsection{Projective structure}\label{se2.1}
The multiplicative group of nonzero complex numbers will be denoted by ${\mathbb C}^*$.
Let $\mathbb V$ be a complex vector space of dimension two. Let ${\mathbb P}(\mathbb V)$ denote
the projective line that parametrizes all one-dimensional subspaces of $\mathbb V$. Consider
the projective linear group $\text{PGL}(\mathbb V) \, :=\, \text{GL}(\mathbb V)/({\mathbb C}^*\cdot{\rm
Id}_{\mathbb V})\,=\, \text{SL}(\mathbb V)/(\pm {\rm
Id}_{\mathbb V})$. The action of $\text{GL}(\mathbb V)$ on $\mathbb V$ produces an action of
$\text{PGL}(\mathbb V)$ on ${\mathbb P}(\mathbb V)$. This way $\text{PGL}(\mathbb V)$ gets
identified with the group of all holomorphic automorphisms of ${\mathbb P}(\mathbb V)$.
Let $\mathbb X$ be a connected Riemann surface. A holomorphic coordinate function on $\mathbb X$
is a pair $(U,\, \phi)$, where $U\, \subset\, \mathbb X$ is an open subset and
$$
\phi\, :\, U\, \longrightarrow\, {\mathbb P}(\mathbb V)
$$
is a holomorphic embedding. A holomorphic coordinate atlas on $\mathbb X$ is a family of
holomorphic coordinate functions $\{(U_i,\, \phi_i)\}_{i\in I}$ such that
$\bigcup_{i\in I} U_i \,=\, \mathbb X$. So
$\phi_j\circ \phi^{-1}_i \, :\, \phi_i(U_i\cap U_j) \, \longrightarrow\, \phi_j(U_i\cap U_j)$
is a biholomorphic map for every $i,\, j\,\in\, I$ with $U_i\cap U_j\, \not=\, \emptyset$.
A projective structure on $\mathbb X$ is given by a holomorphic coordinate atlas
$\{(U_i,\, \phi_i)\}_{i\in I}$ such that for all ordered pairs $i,\, j\, \in\, I$,
with $U_i\cap U_j\, \not=\, \emptyset$, and every connected component $U\, \subset\, U_i\cap U_j$,
there is an element $G^U_{j,i}\, \in\, \text{PGL}(\mathbb V)$
satisfying the condition that the biholomorphic map
$$
\phi_j\circ \phi^{-1}_i \, :\, \phi_i(U) \, \longrightarrow\, \phi_j(U)
$$
is the restriction, to $\phi_i(U)$, of the automorphism of ${\mathbb P}(\mathbb V)$
given by $G^U_{j,i}$. Note that $G^U_{j,i}$ is uniquely determined by $\phi_j\circ \phi^{-1}_i$.
Two holomorphic coordinate atlases $\{(U_i,\, \phi_i)\}_{i\in I}$ and $\{(U'_i,\, \phi'_i)\}_{i\in
J}$ satisfying the above condition are called \textit{equivalent} if their union $\{(U_i,\, \phi_i)\}_{i\in I}
\bigcup \{(U'_i,\, \phi'_i)\}_{i \in J}$ also satisfies the above condition. A \textit{projective structure} on
$\mathbb X$ is an equivalence class of atlases satisfying the above condition; see \cite{Gu},
\cite{He}, \cite{GKM} for projective structures.
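A basic source of examples is given by uniformization; the following observation is classical and is recorded here only for orientation. The universal cover of any connected Riemann surface is ${\mathbb C}{\mathbb P}^1$, $\mathbb C$ or the upper half-plane $\mathbb H$, each an open subset of ${\mathbb P}(\mathbb V)$, and the deck transformations act by elements of $\text{PGL}(\mathbb V)$:

```latex
$$
{\mathbb X} \,\cong\, \Omega/\Gamma\, , \qquad \Omega\,\in\,
\{\, {\mathbb C}{\mathbb P}^1,\ {\mathbb C},\ {\mathbb H}\,\}\, ,
\qquad \Gamma\, \subset\, \text{PGL}(\mathbb V)\, .
$$
% The local inverses of the covering map Omega --> X form a projective atlas:
% any two of them differ by the action of an element of Gamma, which lies in
% PGL(V). Hence every connected Riemann surface admits a projective structure.
```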
Let $\gamma\, :\,{\mathbf P}\, \longrightarrow\, \mathbb X$ be a holomorphic ${\mathbb C}{\mathbb P}^1$--bundle
over $\mathbb X$. In other words, $\gamma$ is a surjective holomorphic submersion such that each fiber of it
is holomorphically isomorphic to the complex projective line ${\mathbb C}{\mathbb P}^1$. Let
\begin{equation}\label{tga}
T_\gamma\, :=\, \text{kernel}(d\gamma)\, \subset\,T{\mathbf P}
\end{equation}
be the relative holomorphic tangent bundle, where $T{\mathbf P}$ is the holomorphic
tangent bundle of ${\mathbf P}$. A \textit{holomorphic connection} on ${\mathbf P}$ is a holomorphic line
subbundle ${\mathcal H}\, \subset\, T{\mathbf P}$ such that the natural homomorphism
\begin{equation}\label{dc}
T_\gamma\oplus {\mathcal H}\, \longrightarrow\, T{\mathbf P}
\end{equation}
is an isomorphism; see \cite{At}.
Let ${\mathcal H}\, \subset\, T{\mathbf P}$ be a holomorphic connection on ${\mathbf P}$.
Let $s\, :\, {\mathbb X}\, \longrightarrow\,{\mathbf P}$ be a holomorphic section of
$\gamma$, meaning $\gamma\circ s\,=\, \text{Id}_{\mathbb X}$. Consider the differential of $s$
$$
ds\, :\, T {\mathbb X}\, \longrightarrow\, s^*T{\mathbf P}\,=\, (s^*T_\gamma)\oplus (s^*{\mathcal H})\, ,
$$
where the decomposition is the pullback of the decomposition in \eqref{dc}. Let
\begin{equation}\label{dc2}
\widehat{ds}\, :\, T {\mathbb X}\, \longrightarrow\, s^*T_\gamma
\end{equation}
be the homomorphism obtained by composing $ds$ with the natural projection
$(s^*T_\gamma)\oplus (s^*{\mathcal H})\, \longrightarrow\, s^*T_\gamma$.
Giving a projective structure on $\mathbb X$ is equivalent to giving a triple $(\gamma,\, {\mathcal H},\, s)$,
where
\begin{itemize}
\item $\gamma\, :\, {\mathbf P}\, \longrightarrow\, \mathbb X$ is a holomorphic ${\mathbb C}{\mathbb P}^1$--bundle,
\item ${\mathcal H}\, \subset\, T{\mathbf P}$ is a holomorphic connection, and
\item $s\, :\, {\mathbb X}\, \longrightarrow\,{\mathbf P}$ is a holomorphic section of
$\gamma$,
\end{itemize}
such that the homomorphism $\widehat{ds}$ in \eqref{dc2} is an isomorphism. More details on this
can be found in \cite{Gu}.
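As a sketch of this correspondence in the simplest case, the standard projective structure of ${\mathbb P}(\mathbb V)$ itself corresponds to the trivial bundle with its trivial flat connection and the diagonal section; here $p_1,\, p_2$ denote the projections of ${\mathbb P}(\mathbb V)\times {\mathbb P}(\mathbb V)$ to its two factors.

```latex
$$
{\mathbf P}\,=\, {\mathbb P}(\mathbb V)\times {\mathbb P}(\mathbb V)\, ,\qquad
\gamma\,=\, p_1\, ,\qquad
{\mathcal H}\,=\, \text{kernel}(dp_2)\, ,\qquad
s(x)\,=\, (x,\, x)\, .
$$
% Here T_gamma = kernel(d p_1) is the tangent bundle along the fibers of gamma,
% and H is transverse to it. For the diagonal section, ds(v) = (v, v), whose
% T_gamma-component is v; hence the homomorphism in (dc2) is an isomorphism.
```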
\subsection{Jet bundles}\label{se2.2}
We briefly recall the definition of a jet bundle of a holomorphic vector bundle on $\mathbb X$.
Let
$$
p_j \,\colon\, {\mathbb X}\times{\mathbb X} \,\longrightarrow\,{\mathbb X}\, , \ \ j \,=\, 1,\, 2\,,
$$
be the natural projection to the $j$--th factor. Let
$$
\Delta \,:=\, \{(x,\, x) \,\in\, {\mathbb X}\times{\mathbb X} \,\mid\, x \,\in\,
{\mathbb X}\}\, \subset\, {\mathbb X}\times{\mathbb X}
$$
be the reduced diagonal divisor. For a holomorphic vector bundle $W$ on ${\mathbb X}$, and
any integer $k\, \geq\, 0$, define the $k$--th order jet bundle
$$
J^k(W) \,:=\, p_{1*} \left((p^*_2W)/(p^*_2W\otimes
{\mathcal O}_{{\mathbb X}\times{\mathbb X}}(-(k+1)\Delta))\right)
\, \longrightarrow \, {\mathbb X}\, .
$$
The natural inclusion of ${\mathcal O}_{{\mathbb X}\times{\mathbb X}}(-(k+1)\Delta)$
in ${\mathcal O}_{{\mathbb X}\times{\mathbb X}}(-k\Delta)$ produces a surjective homomorphism
$J^{k}(W) \,\longrightarrow\, J^{k-1}(W)$. This way we obtain a short exact sequence of
holomorphic vector bundles on ${\mathbb X}$
\begin{equation}\label{e1}
0\, \longrightarrow\, K^{\otimes k}_{\mathbb X}\otimes W \,\longrightarrow\,
J^{k}(W) \,\longrightarrow\, J^{k-1}(W) \,\longrightarrow\, 0\, ,
\end{equation}
where $K_{\mathbb X}$ is the holomorphic cotangent bundle of ${\mathbb X}$.
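In concrete terms, after choosing a holomorphic coordinate $z$ around a point $x\,\in\,{\mathbb X}$ and a holomorphic trivialization of $W$ near $x$ (both choices are auxiliary, and the identifications below depend on them), the fiber $J^k(W)_x$ records the Taylor coefficients of local sections up to order $k$:

```latex
$$
j^k_x(s)\,=\, \big(s(x),\, s'(x),\, \cdots,\, s^{(k)}(x)\big)\, \in\, J^k(W)_x\, ,
$$
% for s a holomorphic section of W defined near x. The projection
% J^k(W) --> J^{k-1}(W) forgets the top coefficient s^{(k)}(x), and the class of
% a jet in its kernel is s^{(k)}(x) (dz)^{\otimes k}, up to a factorial factor
% depending on conventions; this kernel is the subbundle
% K_X^{\otimes k} \otimes W in the exact sequence above.
```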
For holomorphic vector bundles $W$ and $W'$ on ${\mathbb X}$,
any ${\mathcal O}_{\mathbb X}$--linear homomorphism $W\, \longrightarrow\, W'$ induces a homomorphism
\begin{equation}\label{e2b}
J^i(W)\, \longrightarrow\, J^i(W')
\end{equation}
for every $i\, \geq\, 0$.
For holomorphic vector bundles $W$ and $W'$ on ${\mathbb X}$, and any integer $m\, \geq\, 0$, define the
sheaf of differential operators of order $m$ from $W$ to $W'$
\begin{equation}\label{di1}
\text{Diff}^m_{\mathbb X}(W,\, W') \,:=\, \text{Hom}(J^m(W),\, W') \,\longrightarrow\,{\mathbb X}\, .
\end{equation}
Using the exact sequence in \eqref{e1} we have the short exact sequence
\begin{equation}\label{e2}
0\, \longrightarrow\, \text{Diff}^{m-1}_{\mathbb X}(W,\, W') \, \longrightarrow\,
\text{Diff}^m_{\mathbb X}(W,\, W') \, \stackrel{\sigma}{\longrightarrow}\, (T{\mathbb X})^{\otimes m}
\otimes \text{Hom}(W,\, W') \, \longrightarrow\, 0\, ,
\end{equation}
where $T{\mathbb X}$ is the holomorphic tangent bundle of ${\mathbb X}$. The homomorphism $\sigma$
in \eqref{e2} is called the symbol map.
\begin{remark}\label{rem-j}
Consider the short exact sequences
$$
0\, \longrightarrow\, K_{\mathbb X}\otimes T{\mathbb X}\,=\, {\mathcal O}_{\mathbb X} \,
\longrightarrow\, J^{1}(T{\mathbb X}) \,\longrightarrow\, T{\mathbb X} \,\longrightarrow\, 0
$$
and
$$
0\, \longrightarrow\, K^{\otimes 2}_{\mathbb X}\otimes T{\mathbb X}\,=\, K_{\mathbb X}
\,\longrightarrow\, J^{2}(T{\mathbb X}) \,\longrightarrow\,J^{1}(T{\mathbb X}) \,\longrightarrow\, 0
$$
as in \eqref{e1}. These two together imply that $\bigwedge^3 J^{2}(T{\mathbb X})\,=\,
K_{\mathbb X}\otimes\bigwedge^2 J^{1}(T{\mathbb X})\,=\, K_{\mathbb X}\otimes T{\mathbb X}
\,=\, {\mathcal O}_{\mathbb X}$. It is straight-forward to check that for any biholomorphism $\beta \, :\, {\mathbb X}\,
\longrightarrow\, {\mathbb Y}$, the homomorphism $J^2(T{\mathbb X})\, \longrightarrow\,
\beta^* J^2(T{\mathbb Y})$ corresponding to $\beta$
takes the section of $\bigwedge^3 J^{2}(T{\mathbb X})\,=\, {\mathcal
O}_{\mathbb X}$ given by
the constant function $1$ on ${\mathbb X}$ to the section of $\bigwedge^3 J^{2}(T{\mathbb Y})
\,=\, {\mathcal O}_{\mathbb Y}$ given by the constant function $1$ on ${\mathbb Y}$.
\end{remark}
\subsection{A third order differential operator}\label{se2.3}
We continue with the set-up of Section \ref{se2.1}. Let
$$
{\mathcal T}\, :=\, {\mathbb P}(\mathbb V)\times H^0({\mathbb P}(\mathbb V),\, T{\mathbb P}(\mathbb V))
\, \longrightarrow\,{\mathbb P}(\mathbb V)
$$
be the trivial holomorphic vector bundle of rank three over ${\mathbb P}(\mathbb V)$ with fiber
$H^0({\mathbb P}(\mathbb V),\, T{\mathbb P}(\mathbb V))$. For any integer $j\, \geq\, 1$, let
\begin{equation}\label{e0}
\psi_j\, :\, {\mathcal T}\, \longrightarrow\, J^j(T{\mathbb P}(\mathbb V))
\end{equation}
be the holomorphic ${\mathcal O}_{{\mathbb P}(\mathbb V)}$--linear
map that sends any $(x,\, s)\, \in\, {\mathbb P}(\mathbb V)\times H^0({\mathbb P}(\mathbb V),
\, T{\mathbb P}(\mathbb V))$ to the restriction of the section $s$ to the $j$--th order infinitesimal
neighborhood of the point $x\, \in\, {\mathbb P}(\mathbb V)$.
\begin{lemma}\label{lem1}
The homomorphism $\psi_2$ in \eqref{e0} is an isomorphism.
\end{lemma}
\begin{proof}
If $(x,\, s)\, \in\, \text{kernel}(\psi_2(x))$, then
$$
s \, \in\, H^0({\mathbb P}(\mathbb V),\, {\mathcal O}_{{\mathbb P}(\mathbb V)}(-3x)\otimes
T{\mathbb P}(\mathbb V))\, .
$$
But $H^0({\mathbb P}(\mathbb V),\, {\mathcal O}_{{\mathbb P}(\mathbb V)}(-3x)\otimes
T{\mathbb P}(\mathbb V))\,=\, 0$, because $\text{degree}({\mathcal O}_{{\mathbb P}(\mathbb V)}(-3x)\otimes
T{\mathbb P}(\mathbb V))\, <\, 0$. So the homomorphism $\psi_2$ is fiberwise injective. This implies that
$\psi_2$ is an isomorphism, because we have $\text{rank}({\mathcal T})\,=\, \text{rank}(J^2(T{\mathbb P}(\mathbb V)))$.
\end{proof}
\begin{lemma}\label{lem2}
There is a canonical holomorphic differential operator $\delta_0$ of order three from
$T{\mathbb P}(\mathbb V)$ to $K^{\otimes 2}_{{\mathbb P}(\mathbb V)}$. The symbol of $\delta_0$
is the section of
$$
(T{\mathbb P}(\mathbb V))^{\otimes 3}\otimes{\rm Hom}\Big(T{\mathbb P}(\mathbb V),\,
K^{\otimes 2}_{{\mathbb P}(\mathbb V)}\Big)\,=\, {\mathcal O}_{{\mathbb P}(\mathbb V)}
$$
given by the constant function $1$ on ${\mathbb P}(\mathbb V)$.
\end{lemma}
\begin{proof}
Consider the short exact sequence
\begin{equation}\label{e3}
0\, \longrightarrow\, K^{\otimes 3}_{{\mathbb P}(\mathbb V)}\otimes T{\mathbb P}(\mathbb V)
\,=\, K^{\otimes 2}_{{\mathbb P}(\mathbb V)} \,\stackrel{\iota_0}{\longrightarrow}\,
J^3(T{\mathbb P}(\mathbb V)) \,\longrightarrow\, J^2(T{\mathbb P}(\mathbb V)) \,\longrightarrow\, 0
\end{equation}
in \eqref{e1}. Using Lemma \ref{lem1}, define the homomorphism
$$
\psi_3\circ (\psi_2)^{-1}\, :\, J^2(T{\mathbb P}(\mathbb V))\,\longrightarrow\,J^3(T{\mathbb P}(\mathbb V))\, ,
$$
where the homomorphisms
$\psi_j$ are constructed in \eqref{e0}. This homomorphism $\psi_3\circ (\psi_2)^{-1}$
is a holomorphic splitting of the exact sequence in \eqref{e3}.
In other words, there is a unique surjective homomorphism
$$
\delta_0\, :\, J^3(T{\mathbb P}(\mathbb V))\,\longrightarrow\, K^{\otimes 2}_{{\mathbb P}(\mathbb V)}
$$
such that $\text{kernel}(\delta_0)\,=\, \text{image}(\psi_3\circ (\psi_2)^{-1})$ and
\begin{equation}\label{s}
\delta_0\circ\iota_0\,=\, \text{Id}_{K^{\otimes 2}_{{\mathbb P}(\mathbb V)}}\, ,
\end{equation}
where $\iota_0$ is the homomorphism in \eqref{e3}.
{}From the definition in \eqref{di1} it follows
that $$\delta_0\,\in\, H^0({\mathbb P}(\mathbb V),\, \text{Diff}^3_{{\mathbb P}(\mathbb V)}
(T{\mathbb P}(\mathbb V),\, K^{\otimes 2}_{{\mathbb P}(\mathbb V)}))\, .$$ Also, from \eqref{s} it
follows immediately that $\sigma(\delta_0)\,=\, 1$, where $\sigma$ is the symbol homomorphism
in \eqref{e2}.
\end{proof}
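In an affine coordinate $z$ on ${\mathbb P}(\mathbb V)$, where the global holomorphic vector fields are exactly those of the form $(a + bz + cz^2)\frac{\partial}{\partial z}$, the operator $\delta_0$ takes the classical form below; the overall nonzero constant depends on the convention used in the identification \eqref{e1}, so the formula is a sketch up to that scalar.

```latex
$$
\delta_0\Big(f\, \frac{\partial}{\partial z}\Big)\,=\,
\frac{d^3 f}{d z^3}\, (dz)^{\otimes 2}\, .
$$
% The local solutions are exactly f(z) = a + bz + cz^2, that is, the
% restrictions of the global holomorphic vector fields on P(V); this matches
% the characterization kernel(delta_0) = image(psi_3 o (psi_2)^{-1}). The
% symbol of d^3/dz^3 is the constant function 1, in accordance with the lemma.
```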
The trivialization of $J^2(T{\mathbb P}(\mathbb V))$ given by Lemma \ref{lem1} produces a
holomorphic connection on $J^2(T{\mathbb P}(\mathbb V))$; let
\begin{equation}\label{d}
\mathbb{D}_0\, :\, J^2(T{\mathbb P}(\mathbb V))\, \longrightarrow\, J^2(T{\mathbb P}(\mathbb V))\otimes
K_{{\mathbb P}(\mathbb V)}
\end{equation}
be this holomorphic connection on $J^2(T{\mathbb P}(\mathbb V))$. Note that any holomorphic
connection on a Riemann surface is automatically flat (see \cite{At} for holomorphic connections).
\begin{remark}\label{rem-1}
Let $U\, \subset\, {\mathbb P}(\mathbb V)$ be an open subset and
$$
s\, \in\, H^0(U,\, J^2(T{\mathbb P}(\mathbb V))\vert_U)\,=\, H^0(U,\, J^2(TU))
$$ a flat section for the connection
$\mathbb{D}_0$ in \eqref{d}. Since $\psi_2$ is an isomorphism (see Lemma \ref{lem1}), it follows that
the section $s'\, \in\, H^0(U,\, TU)$ given by $s$ using the natural projection
$J^2(T{\mathbb P}(\mathbb V))\, \longrightarrow\, T{\mathbb P}(\mathbb V)$ (see \eqref{e1}) has
the property that the section of $J^2(T{\mathbb P}(\mathbb V))\vert_U$ corresponding to $s'$ coincides
with $s$. If $U$ is connected, then $s'$ extends to a holomorphic
section of $T{\mathbb P}(\mathbb V)$ over ${\mathbb P}(\mathbb V)$.
\end{remark}
Let $\mathfrak{sl}(\mathbb V)$ be the Lie algebra of $\text{PGL}(\mathbb V)$; it consists of
endomorphisms of $\mathbb V$ of trace zero. Using the action of $\text{PGL}(\mathbb V)$ on
${\mathbb P}(\mathbb V)$ we get a homomorphism
\begin{equation}\label{a0}
\alpha_0\, :\, \mathfrak{sl}(\mathbb V)\, \longrightarrow\, H^0({\mathbb P}(\mathbb V),\, T{\mathbb P}(\mathbb V))\, .
\end{equation}
This $\alpha_0$ is an isomorphism, because it is injective and $\dim \mathfrak{sl}(\mathbb V)\,
=\, \dim H^0({\mathbb P}(\mathbb V),\, T{\mathbb P}(\mathbb V))$. Note that
$H^0({\mathbb P}(\mathbb V),\, T{\mathbb P}(\mathbb V))$ has the structure of a Lie algebra given by the
Lie bracket operation of vector fields.
The homomorphism $\alpha_0$ in \eqref{a0} is in fact an isomorphism of Lie algebras.
Therefore, from Lemma \ref{lem1} it follows that the fibers of
$J^2(T{\mathbb P}(\mathbb V))$ are identified with the Lie algebra $\mathfrak{sl}(\mathbb V)$.
In particular, the fibers of $J^2(T{\mathbb P}(\mathbb V))$ are Lie algebras.
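In an affine coordinate $z$ on ${\mathbb P}(\mathbb V)$, where $\text{PGL}(\mathbb V)$ acts by M\"obius transformations, the isomorphism $\alpha_0$ admits the following explicit description, obtained by differentiating the action of the one-parameter group $\exp(tX)$ at $t\,=\,0$ (a standard computation, recorded here as a sketch):

```latex
$$
\alpha_0\,\colon\, \begin{pmatrix} \alpha & \beta \\ \gamma & -\alpha \end{pmatrix}
\,\longmapsto\, (\beta + 2\alpha z - \gamma z^2)\, \frac{\partial}{\partial z}\, .
$$
% Indeed, exp(tX) sends z to ((1 + t*alpha)z + t*beta)/(t*gamma*z + 1 - t*alpha)
% up to O(t^2), and differentiating at t = 0 gives the quadratic vector field
% above. In particular dim H^0(P(V), TP(V)) = 3 = dim sl(V).
```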
For any $x\, \in\, {\mathbb P}(\mathbb V)$, and $v,\, w\, \in\, J^2(T{\mathbb P}(\mathbb V))_x$, let
\begin{equation}\label{d1}
[v,\, w]'\, \in\, J^2(T{\mathbb P}(\mathbb V))_x
\end{equation}
be the Lie bracket operation on the fiber $J^2(T{\mathbb P}(\mathbb V))_x$.
\begin{remark}\label{rem0}
Let $s$ and $t$ be two holomorphic vector fields on an open subset $U\, \subset\, {\mathbb
P}(\mathbb V)$. The holomorphic section of $J^2(T{\mathbb P}(\mathbb V))\vert_U$ defined by $s$
and $t$ will be denoted by $\widehat{s}$ and $\widehat{t}$ respectively. It should be clarified
that the holomorphic section $\widehat{[s,t]}$ of $J^2(T{\mathbb P}(\mathbb V))\vert_U$ given by
the Lie bracket $[s,\, t]$ of vector fields does not in general coincide with the section
$[\widehat{s},\, \widehat{t}]'$ defined by \eqref{d1}. The reason for it is
that the operation in \eqref{d1} is constructed
using the finite dimensional space consisting
of global holomorphic sections of $T{\mathbb P}(\mathbb V)$,
while the operation of Lie bracket of vector fields is constructed locally. However if $s$ and $t$ are
such that $\widehat{s}$ and $\widehat{t}$ are flat sections for the holomorphic connection $\mathbb{D}_0$
on $J^2(T{\mathbb P}(\mathbb V))$ in \eqref{d}, then the holomorphic section $\widehat{[s,\,t]}$ of
$J^2(T{\mathbb P}(\mathbb V))\vert_U$ given by the Lie bracket of vector fields $[s,\, t]$ does
coincide with the section $[\widehat{s},\, \widehat{t}]'$ defined by \eqref{d1}. Indeed, this
follows from the fact that these $s$ and $t$ are restrictions of global vector fields
on ${\mathbb P}(\mathbb V)$, if $U$ is connected; see Remark \ref{rem-1}.
\end{remark}
\begin{remark}\label{rem1}
The holomorphic connection $\mathbb{D}_0$ in \eqref{d} on $J^2(T{\mathbb P}(\mathbb V))$ preserves the Lie
algebra structure on the fibers of $J^2(T{\mathbb P}(\mathbb V))$ given in \eqref{d1}. This means that
$$
i_u \mathbb{D}_0([s,\, t]')\,=\, [i_u\mathbb{D}_0(s),\, t]'+
[s,\, i_u\mathbb{D}_0(t)]'
$$
for locally defined holomorphic sections $s$, $t$ and $u$ of $J^2(T{\mathbb P}(\mathbb V))$,
where $i_u$ is the contraction of $1$--forms by $u$. In particular,
the local system on ${\mathbb P}(\mathbb V)$ given by the flat sections for the connection
$\mathbb{D}_0$ is closed under the Lie bracket operation in \eqref{d1}.
Note that using Remark \ref{rem-1} we may construct a Lie algebra structure on the fibers
of $J^2(T{\mathbb P}(\mathbb V))$. Indeed, for $v,\, w\, \in\, J^2(T{\mathbb P}(\mathbb V))_x$, let
$\widetilde{v},\, \widetilde{w}$ be the flat sections of $J^2(T{\mathbb P}(\mathbb V))$, for the
connection $\mathbb{D}_0$, defined around $x$ such that $\widetilde{v}(x)\,=\, v$ and
$\widetilde{w}(x)\,=\, w$. Let $\widetilde{v}'$ (respectively, $\widetilde{w}'$) be the holomorphic
sections of $T{\mathbb P}(\mathbb V)$ defined around $x$
given by $\widetilde{v}$ (respectively, $\widetilde{w}$) using the natural projection
$J^2(T{\mathbb P}(\mathbb V))\, \longrightarrow\, T{\mathbb P}(\mathbb V)$ (see \eqref{e1}
and Remark \ref{rem-1}). Now define $[v,\, w]$ to be the element of
$J^2(T{\mathbb P}(\mathbb V))_x$ given by the locally defined
section $[\widetilde{v}',\,\widetilde{w}']$ of $T{\mathbb P}(\mathbb V)$. From Remark \ref{rem0}
it follows that this Lie algebra structure on the fibers of $J^2(T{\mathbb P}(\mathbb V))$ coincides with
the one in \eqref{d1}.
\end{remark}
\begin{remark}\label{rem2}
Let ${\mathbb L}_0$ be the complex local system on ${\mathbb P}(\mathbb V)$ given by the sheaf of
solutions of the differential operator $\delta_0$ in Lemma \ref{lem2}. From the construction of
$\delta_0$ it is straight-forward to deduce that ${\mathbb L}_0$ is identified with the local
system given by the sheaf of flat sections of $J^2(T{\mathbb P}(\mathbb V))$ for the connection
$\mathbb{D}_0$ in \eqref{d}. Therefore, from Remark \ref{rem1} we conclude that the stalks of the
complex local system ${\mathbb L}_0$ are closed under the Lie bracket operation of vector fields.
Moreover, the stalks of the local system ${\mathbb L}_0$ are identified with the Lie algebra
$\mathfrak{sl}(\mathbb V)$.
\end{remark}
\begin{proposition}\label{prop1}\mbox{}
\begin{enumerate}
\item Let $\mathbb X$ be a connected Riemann surface equipped with a projective structure $\mathcal
P$. Then $\mathcal P$ produces a holomorphic connection, which will be called ${\mathbb D}
({\mathcal P})$, on $J^2(T{\mathbb X})$. For any open subset $U\, \subset\, \mathbb X$, and any
section $s\, \in\, H^0(U,\, J^2(T{\mathbb X})\vert_U)\,=\, H^0(U,\, J^2(TU))$ flat for the
connection ${\mathbb D}({\mathcal P})$, there is a unique holomorphic section of $TU$ that produces
$s$. The space of sections of $TU$ given by the flat sections of $J^2(TU)$ is closed under the usual Lie bracket
operation of vector fields. The stalks for the local system on $\mathbb X$ given by
the sheaf of flat sections for ${\mathbb D}({\mathcal P})$ are closed under the usual Lie bracket
operation of vector fields, and moreover the stalks are isomorphic to the Lie algebra $\mathfrak{sl}(\mathbb V)$.
\item The projective structure $\mathcal P$ also produces a canonical holomorphic differential operator
$\delta({\mathcal P})\, \in\, H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))$ whose symbol is the constant function $1$ on ${\mathbb X}$.
\item The local system on $\mathbb X$ given by the sheaf of flat sections for ${\mathbb
D}({\mathcal P})$ is identified with the local system given by the sheaf of solutions of
$\delta({\mathcal P})$. This sheaf of solutions of $\delta({\mathcal P})$ is closed under the Lie
bracket operation of vector fields.
\item The connection on $\bigwedge^3 J^2(T{\mathbb X})\,=\, {\mathcal O}_{\mathbb X}$ (see Remark \ref{rem-j})
induced by ${\mathbb D}({\mathcal P})$ coincides with the trivial connection on ${\mathcal O}_{\mathbb X}$
given by the de Rham differential $d$.
\end{enumerate}
\end{proposition}
\begin{proof}
The action of $\text{PGL}(\mathbb V)$ on ${\mathbb P}(\mathbb V)$ produces actions of
$\text{PGL}(\mathbb V)$ on $J^j(T{\mathbb P}(\mathbb V))$, $j\, \geq\, 0$, and $H^0({\mathbb P}(\mathbb V),\,
T{\mathbb P}(\mathbb V))$. The homomorphism $\psi_j$ in \eqref{e0} is clearly equivariant for the
actions of $\text{PGL}(\mathbb V)$, with $\text{PGL}(\mathbb V)$ acting diagonally on ${\mathbb
P}(\mathbb V)\times H^0({\mathbb P}(\mathbb V),\, T{\mathbb P}(\mathbb V))$. Therefore, from
Lemma \ref{lem1} we get a holomorphic connection on $J^2(T{\mathbb X})$. To explain this in
more detail, take a
holomorphic coordinate atlas $\{(U_i,\, \phi_i)\}_{i\in I}$ in the equivalence class defining
$\mathcal P$. Using $\phi_i$, the holomorphic connection ${\mathbb D}_0\vert_{\phi_i(U_i)}$ in
\eqref{d} on $J^2(T{\mathbb P}(\mathbb V))\vert_{\phi_i(U_i)}$ produces a holomorphic connection
on $J^2(T{\mathbb X})\vert_{U_i}$. Using the above $\text{PGL}(\mathbb V)$--equivariance
property, these locally defined connections on $J^2(T{\mathbb X})\vert_{U_i}$, $i\, \in\, I$,
patch together compatibly on the intersections of the open
subsets to produce a holomorphic connection ${\mathbb D}({\mathcal P})$ on
$J^2(T{\mathbb X})$.
Take any flat section $s\, \in\, H^0(U,\, J^2(T{\mathbb X})\vert_U)\,=\, H^0(U,\, J^2(TU))$ as in the
first statement of the proposition. From Remark \ref{rem-1} we conclude that the section
of $TU$ given by $s$, using the natural projection $J^2(TU)\, \longrightarrow\, TU$,
actually produces $s$. From Remark \ref{rem0} it follows that
the space of sections of $TU$ given by the flat sections of $J^2(TU)$ is closed under the usual Lie bracket
operation of vector fields. Consequently,
the stalks for the local system on $\mathbb X$ given by the sheaf of flat sections for ${\mathbb
D}({\mathcal P})$ are closed under the Lie bracket operation, and they are isomorphic to the Lie
algebra $\mathfrak{sl}(\mathbb V)$, because the connection ${\mathbb D}_0$ has these properties; see
Remark \ref{rem1}.
Similarly, the second statement of the proposition follows from the fact that
the differential operator $\delta_0$ in Lemma \ref{lem2} is $\text{PGL}(\mathbb V)$--equivariant.
Indeed, given a holomorphic coordinate atlas $\{(U_i,\, \phi_i)\}_{i\in I}$ as above, we have
a differential operator on each $U_i$ given by $\delta_0$ using the coordinate function $\phi_i$.
These differential operators patch together compatibly to produce a differential operator
$$\delta({\mathcal P})\, \in\, H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))\, .$$
The third statement follows from Remark \ref{rem2} and the first statement of the proposition. For
any $s\, \in\, H^0(U,\, TU)$, where $U\, \subset\, {\mathbb X}$ is an open subset,
with $\delta({\mathcal P})(s)\,=\, 0$, the section of $J^2(T{\mathbb X})\vert_U$ given by $s$ is
flat with respect to the connection ${\mathbb D}({\mathcal P})$. Conversely, given a flat section
$s_1$ of $J^2(T{\mathbb X})\vert_U$, the section $s_2 \, \in\, H^0(U,\, TU)$, given by $s_1$ using the
natural projection $J^2(T{\mathbb X})\, \longrightarrow\, T{\mathbb X}$ (see \eqref{e1}), satisfies
the equation $\delta({\mathcal P})(s_2)\,=\, 0$.
{}From Remark \ref{rem-j} we know that if $\phi\, :\, {\mathbb U}_1\, \longrightarrow\, {\mathbb U}_2$
is a biholomorphism between two open subsets of ${\mathbb P}(\mathbb V)$, then the isomorphism
$\bigwedge^3 J^{2}(T{\mathbb U}_1)\, \longrightarrow\, \bigwedge^3 J^{2}(T{\mathbb U}_2)$ induced by
$\phi$ takes the section of $\bigwedge^3 J^{2}(T{\mathbb U}_1)$ given by the constant function $1$
on ${\mathbb U}_1$ to the section of $\bigwedge^3 J^{2}(T{\mathbb U}_2)$ given by the constant function $1$
on ${\mathbb U}_2$. In particular, this
holds for $\phi\, \in\, \text{PGL}(\mathbb V)\,=\, \text{Aut}({\mathbb P}(\mathbb V))$.
The connection on $\bigwedge^3 J^{2}(T{\mathbb P}(\mathbb V))$ induced by
the connection $\mathbb{D}_0$ on $J^2(T{\mathbb P}(\mathbb V))$ coincides with the trivial connection on
$\bigwedge^3 J^{2}(T{\mathbb P}(\mathbb V))$, because ${\mathbb P}(\mathbb V)$ is simply connected.
The fourth statement of the proposition follows from these.
\end{proof}
\subsection{Killing form and holomorphic connection}\label{se2.4}
Recall that the homomorphism
$\alpha_0$ in \eqref{a0} is a Lie algebra isomorphism between $H^0({\mathbb P}(\mathbb V),\,
T{\mathbb P}(\mathbb V))$ and $\mathfrak{sl}(\mathbb V)$. Consider the Killing form $\widehat B$ on the
Lie algebra $H^0({\mathbb P}(\mathbb V),\,
T{\mathbb P}(\mathbb V))$. Using the isomorphism $\psi_2$ in Lemma \ref{lem1}, this symmetric bilinear form
$\widehat B$ produces a fiberwise nondegenerate symmetric bilinear form
\begin{equation}\label{eB}
B_0\, \in\, H^0({\mathbb P}(\mathbb V),\, \text{Sym}^2(J^2(T{\mathbb P}(\mathbb V)))^*)
\,=\, \text{Sym}^2(H^0({\mathbb P}(\mathbb V),\, T{\mathbb P}(\mathbb V)))^*\, .
\end{equation}
Recall that each fiber of $J^2(T{\mathbb P}(\mathbb V))$ is the Lie algebra $\mathfrak{sl}(\mathbb V)$;
the form $B_0$ in \eqref{eB} is the fiberwise Killing form.
The symmetric form $B_0$ is preserved by the holomorphic connection $\mathbb{D}_0$ on
$J^2(T{\mathbb P}(\mathbb V))$ constructed in \eqref{d}. Indeed, this follows immediately from the fact that
both $\mathbb{D}_0$ and $B_0$ are constant with respect to the trivialization of $J^2(T{\mathbb P}(\mathbb V))$
given by $\psi_2$ in Lemma \ref{lem1}.
The vector bundle $J^2(T{\mathbb P}(\mathbb V))$ has a filtration of holomorphic subbundles
\begin{equation}\label{f1}
F^{\mathbb P}_1\,:=\, K_{{\mathbb P}(\mathbb V)}\, \subset\, F^{\mathbb P}_2\, \subset\,
J^2(T{\mathbb P}(\mathbb V))\, ,
\end{equation}
where $F^{\mathbb P}_2$ is the kernel of the composition
$$
J^2(T{\mathbb P}(\mathbb V))\, \longrightarrow\,J^1(T{\mathbb P}(\mathbb V)) \, \longrightarrow\,
T{\mathbb P}(\mathbb V)
$$
of the two projections in the two short exact sequences in Remark \ref{rem-j}; the subbundle
$K_{{\mathbb P}(\mathbb V)}\, \subset\, J^2(T{\mathbb P}(\mathbb V))$ in \eqref{f1} is the one in the
second of the two short exact sequences in Remark \ref{rem-j}. In particular, we have
$\text{rank}(F^{\mathbb P}_j)\,=\,j$. For any point $x\, \in\, {\mathbb P}(\mathbb V)$,
the fiber $(F^{\mathbb P}_1)_x$ is a nilpotent subalgebra of the Lie algebra $J^2(T{\mathbb P}(\mathbb V))_x
\,=\,\mathfrak{sl}(\mathbb V)$. Moreover, the fiber $(F^{\mathbb P}_2)_x$ is the unique Borel subalgebra of
$J^2(T{\mathbb P}(\mathbb V))_x$ containing $(F^{\mathbb P}_1)_x$. Consequently, we have
\begin{equation}\label{f2}
B_0(F^{\mathbb P}_1\otimes F^{\mathbb P}_1)\,=\, 0 \ \ \text{ and }\ \ (F^{\mathbb P}_1)^\perp\,=\,
F^{\mathbb P}_2\, ,
\end{equation}
where $(F^{\mathbb P}_1)^\perp$ denotes the orthogonal bundle for $F^{\mathbb P}_1$ with respect to the
form $B_0$ in \eqref{eB}.
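The statements in \eqref{f2} can be verified directly at a point. Take $x$ to be the origin of an affine coordinate $z$; under the identification of $J^2(T{\mathbb P}(\mathbb V))_x$ with $\mathfrak{sl}(\mathbb V)$, the standard basis $e,\, h,\, f$ of $\mathfrak{sl}_2$ corresponds to the vector fields $\frac{\partial}{\partial z}$, $2z\frac{\partial}{\partial z}$ and $-z^2\frac{\partial}{\partial z}$ respectively, so that (sketching the computation, with the Killing form of $\mathfrak{sl}_2$ normalized as $B(X,\,Y)\,=\, 4\,{\rm tr}(XY)$):

```latex
$$
(F^{\mathbb P}_1)_x\,=\, {\mathbb C}\cdot f\, ,\qquad
(F^{\mathbb P}_2)_x\,=\, {\mathbb C}\cdot f\,\oplus\, {\mathbb C}\cdot h\, ,
$$
% since these are the 2-jets of vector fields vanishing at x to order at least
% 2, respectively at least 1. Then B(f, f) = 4 tr(f^2) = 0 and
% B(f, h) = 4 tr(fh) = 0, while B(f, e) = 4 tr(fe) = 4, which is nonzero; hence
% F_1 is isotropic and (F_1)^perp = F_2, as asserted in (f2).
```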
Given a holomorphic vector bundle $W$ on a Riemann surface $\mathbb X$, a holomorphic connection $\mathcal D$ on
$W$, and a holomorphic subbundle $W'\,\subset\, W$, the composition of homomorphisms
$$
W\, \stackrel{\mathcal D}{\longrightarrow}\, W\otimes K_{\mathbb X} \,
\stackrel{q_{W'}\otimes{\rm Id}}{\longrightarrow}\,
(W/W')\otimes K_{\mathbb X}\, ,
$$
where $q_{W'}\, :\, W\,\longrightarrow\,W/W'$ is the natural quotient map, defines a holomorphic
section of $\text{Hom}(W',\, (W/W'))\otimes K_{\mathbb X}$. This element of $H^0({\mathbb X},\,
\text{Hom}(W',\, (W/W'))\otimes K_{\mathbb X})$ is called the \textit{second fundamental form} of
$W'$ for the connection $\mathcal D$. If ${\mathcal D}(W')\, \subset\, W''\otimes K_{\mathbb X}$,
where $W''$ is a holomorphic subbundle of $W$ containing $W'$, then the second
fundamental form of $W'$ for $\mathcal D$ is clearly given by a holomorphic section
\begin{equation}\label{j1}
\zeta_1\,\in\, H^0({\mathbb X},\, \text{Hom}(W',\, W''/W')\otimes K_{\mathbb X})
\end{equation}
using the natural inclusion of $\text{Hom}(W',\, W''/W')\otimes
K_{\mathbb X}$ in $\text{Hom}(W',\, W/W')\otimes K_{\mathbb X}$. Also, in this case, the second
fundamental form of $W''$ for $\mathcal D$ is given by a holomorphic section
\begin{equation}\label{j2}
\zeta_2\,\in\, H^0({\mathbb X},\,\text{Hom}(W''/W',\, W/W'')\otimes K_{\mathbb X})
\end{equation}
through the natural inclusion map
$$H^0({\mathbb X},\, \text{Hom}(W''/W',\, W/W'')\otimes K_{\mathbb X})\, \hookrightarrow\,
H^0({\mathbb X},\, \text{Hom}(W'',\, W/W'')\otimes K_{\mathbb X})\, .$$
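The following local description of the second fundamental form may be helpful; the trivialization and block notation here are purely illustrative. Note first that the second fundamental form is ${\mathcal O}_{\mathbb X}$--linear even though $\mathcal D$ is not: for a locally defined holomorphic function $f$ and a local holomorphic section $s$ of $W'$,
$$
(q_{W'}\otimes{\rm Id})({\mathcal D}(fs))\,=\, (q_{W'}\otimes{\rm Id})(f\,{\mathcal D}(s)+s\otimes df)
\,=\, f\,(q_{W'}\otimes{\rm Id})({\mathcal D}(s))\, ,
$$
because $s\otimes df$ is a section of $W'\otimes K_{\mathbb X}$. In a local holomorphic trivialization of $W$ in which $W'$ is spanned by the first $k$ basis sections, writing ${\mathcal D}\,=\, d+A$ with $A$ a holomorphic endomorphism-valued one-form, the second fundamental form of $W'$ is represented by the lower-left $({\rm rank}(W)-k)\times k$ block of $A$.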
For the filtration in \eqref{f1} of $J^2(T{\mathbb P}(\mathbb V))$ equipped with the holomorphic connection
$\mathbb{D}_0$ in \eqref{d}, we have
$$
\mathbb{D}_0(F^{\mathbb P}_1)\,=\, F^{\mathbb P}_2\otimes K_{{\mathbb P}(\mathbb V)}
\ \ \text{ and }\ \ \mathbb{D}_0(F^{\mathbb P}_2)\,=\,J^2(T{\mathbb P}(\mathbb V))\otimes
K_{{\mathbb P}(\mathbb V)}\, .
$$
These follow from a straightforward computation.
Let
$$
S(F^{\mathbb P}_1, \mathbb{D}_0)\, \in \, H^0({\mathbb P}(\mathbb V),\, \text{Hom}(F^{\mathbb P}_1,\,
F^{\mathbb P}_2/F^{\mathbb P}_1)\otimes K_{{\mathbb P}(\mathbb V)})\,=\, H^0({\mathbb P}(\mathbb V),\,
{\mathcal O}_{{\mathbb P}(\mathbb V)})
$$
be the second fundamental form of $F^{\mathbb P}_1$ for the connection $\mathbb{D}_0$ (see \eqref{j1}).
Similarly, let
$$
S(F^{\mathbb P}_2, \mathbb{D}_0)\, \in \, H^0({\mathbb P}(\mathbb V),\, \text{Hom}(F^{\mathbb P}_2/
F^{\mathbb P}_1,\, J^2(T{\mathbb P}(\mathbb V))/F^{\mathbb P}_2)\otimes K_{{\mathbb P}(\mathbb V)})\,=\,
H^0({\mathbb P}(\mathbb V),\, {\mathcal O}_{{\mathbb P}(\mathbb V)})
$$
be the section that gives the second fundamental form of $F^{\mathbb P}_2$ for the connection
$\mathbb{D}_0$ (see \eqref{j2}). It is straightforward to check that both $S(F^{\mathbb P}_1, \mathbb{D}_0)$
and $S(F^{\mathbb P}_2, \mathbb{D}_0)$ coincide with the element of $H^0({\mathbb P}(\mathbb V),\,
{\mathcal O}_{{\mathbb P}(\mathbb V)})$ given by the constant function $1$ on ${\mathbb P}(\mathbb V)$.
\section{Differential operators, connections and projective structures}\label{se3}
\subsection{Differential operators and connections}\label{se3.1}
For a holomorphic vector bundle $W$ on a Riemann surface $\mathbb X$, there is a tautological fiberwise injective
holomorphic homomorphism
\begin{equation}\label{c}
J^{i+j}(W)\, \longrightarrow\, J^i(J^j(W))
\end{equation}
for every $i,\, j\, \geq\, 0$. On $\mathbb X$, we have the
commutative diagram of holomorphic homomorphisms
\begin{equation}\label{e4}
\begin{matrix}
&& 0 && 0\\
&& \Big\downarrow && \Big\downarrow\\
0 & \longrightarrow & K^{\otimes 2}_{\mathbb X} & \stackrel{\iota}{\longrightarrow} &
J^{3}(T{\mathbb X}) & \stackrel{q}{\longrightarrow} & J^{2}(T{\mathbb X}) & \longrightarrow && 0\\
&& ~\, ~\Big\downarrow l && ~\,~ \Big\downarrow \lambda && \Vert\\
0 & \longrightarrow & J^{2}(T{\mathbb X}) \otimes K_{\mathbb X} & \stackrel{\iota'}{\longrightarrow} &
J^1(J^{2}(T{\mathbb X})) & \stackrel{q'}{\longrightarrow} & J^{2}(T{\mathbb X}) & \longrightarrow && 0\\
&& \Big\downarrow &&~\,~\Big\downarrow\mu \\
&& J^{1}(T{\mathbb X})\otimes K_{\mathbb X} & \stackrel{=}{\longrightarrow} &
J^{1}(T{\mathbb X})\otimes K_{\mathbb X}\\
&& \Big\downarrow && \Big\downarrow\\
&& 0 && 0
\end{matrix}
\end{equation}
where the horizontal short exact sequences are as in \eqref{e1}, the vertical short exact sequence
in the left is the short exact sequence in \eqref{e1} tensored with $K_{\mathbb X}$, and $\lambda$ is the
homomorphism in \eqref{c}; the homomorphism $\mu$ in \eqref{e4} is described below.
The projection $J^{2}(T{\mathbb X})\, \longrightarrow\,
J^{1}(T{\mathbb X})$ in \eqref{e1} induces a homomorphism
\begin{equation}\label{c2}
f_1\, :\, J^1(J^{2}(T{\mathbb X}))\, \longrightarrow\,J^1(J^{1}(T{\mathbb X}))
\end{equation}
(see \eqref{e2b}); set $W$ and $W'$ in \eqref{e2b} to be
$J^{2}(T{\mathbb X})$ and $J^{1}(T{\mathbb X})$ respectively to get $f_1$. On the other hand, let
$$
f'_2\, :\, J^1(J^{2}(T{\mathbb X}))\, \longrightarrow\, J^{2}(T{\mathbb X})
$$
be the projection in \eqref{e1}. Composing $f'_2$ with the homomorphism
$J^{2}(T{\mathbb X})\, \longrightarrow\, J^1(J^{1}(T{\mathbb X}))$ in
\eqref{c} we obtain a homomorphism
$$
f_2\, :\, J^1(J^{2}(T{\mathbb X}))\, \longrightarrow\, J^1(J^{1}(T{\mathbb X}))\, .
$$
The composition of homomorphisms
$$
J^1(J^{2}(T{\mathbb X}))\, \stackrel{f_2}{\longrightarrow}\, J^1(J^{1}(T{\mathbb X}))
\, \stackrel{f_3}{\longrightarrow}\, J^{1}(T{\mathbb X})\, ,
$$
where $f_3$ is the projection in \eqref{e1},
coincides with the composition of homomorphisms
$$
J^1(J^{2}(T{\mathbb X}))\, \stackrel{f_1}{\longrightarrow}\, J^1(J^{1}(T{\mathbb X}))
\, \stackrel{f_3}{\longrightarrow}\, J^{1}(T{\mathbb X})\, ,
$$
where $f_1$ is the homomorphism in \eqref{c2}. Consequently, $f_1-f_2$ takes values in
$\text{kernel}(f_3)\,=\, J^{1}(T{\mathbb X})\otimes K_{\mathbb X}$ (see \eqref{e1}), and
hence we have the homomorphism
$$
\mu\,:=\, f_1-f_2\, :\, J^1(J^{2}(T{\mathbb X}))\, \longrightarrow\, J^{1}(T{\mathbb X})
\otimes K_{\mathbb X}\, ,
$$
where $f_1$ and $f_2$ are constructed above. This homomorphism $\mu$ is
the one in \eqref{e4}.
Let
\begin{equation}\label{eta}
\eta\, \in\, H^0({\mathbb X},\, \text{Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))\,=\, H^0({\mathbb X},\, K^{\otimes 2}_{\mathbb X}\otimes J^{3}(T{\mathbb X})^*)
\end{equation}
be a differential operator whose symbol is the constant function $1$
on ${\mathbb X}$. This means that $\eta$ gives a holomorphic
splitting of the top horizontal exact sequence in \eqref{e4}. Let
\begin{equation}\label{we}
\widehat{\eta}\, :\, J^{2}(T{\mathbb X})\, \longrightarrow\, J^{3}(T{\mathbb X})
\end{equation}
be the corresponding splitting homomorphism, meaning
\begin{itemize}
\item $\widehat{\eta}(J^{2}(T{\mathbb X}))\,=\, \text{kernel}(J^{3}(T{\mathbb X})\stackrel{\eta}{\rightarrow}
K^{\otimes 2}_{\mathbb X})$, and
\item $q\circ\widehat{\eta}\,=\, \text{Id}_{J^{2}(T{\mathbb X})}$, where $q$ is the projection in
\eqref{e4}.
\end{itemize}
{}From the commutativity of \eqref{e4} we conclude that the homomorphism
\begin{equation}\label{c3}
\lambda\circ \widehat{\eta}\, :\, J^{2}(T{\mathbb X})\, \longrightarrow\, J^1(J^{2}(T{\mathbb X}))\, ,
\end{equation}
where $\lambda$ is the homomorphism in \eqref{e4}, satisfies the equation
$$
q'\circ (\lambda\circ \widehat{\eta})\,=\, \text{Id}_{J^{2}(T{\mathbb X})}\, ,$$
where $q'$ is the projection in \eqref{e4}. Consequently, the homomorphism $\lambda\circ \widehat{\eta}$
defines a holomorphic connection on $J^{2}(T{\mathbb X})$ (see \cite{At}).
Let ${\rm Conn}(J^2(T{\mathbb X}))$ denote the space of all holomorphic connections on
$J^2(T{\mathbb X})$.
We summarize the above construction in the following lemma.
\begin{lemma}\label{lem3}
Consider the subset
$$
H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_0\, \subset\,
H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))
$$
defined by the differential operators whose symbol is the constant
function $1$ on $\mathbb X$. There is a natural map
$$
\varpi\, :\, H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_0\, \longrightarrow\,{\rm Conn}(J^2(T{\mathbb X}))
$$
that sends any $\eta$ as in
\eqref{eta} to the connection $\lambda\circ \widehat{\eta}$ in \eqref{c3}.
\end{lemma}
We shall describe the image of the map $\varpi$ in Lemma \ref{lem3}.
Consider the two short exact sequences in Remark \ref{rem-j}. Let
\begin{equation}\label{ef}
K_{\mathbb X} \, :=\, \text{kernel}(\mu_0)\, \subset\, F_2\, :=\, \mu^{-1}_0({\mathcal O}_{\mathbb X})
\, \subset\, J^{2}(T{\mathbb X})
\end{equation}
be the filtration of holomorphic subbundles given by them,
where $\mu_0\, :\, J^{2}(T{\mathbb X})\, \longrightarrow\, J^{1}(T{\mathbb X})$ is the projection in
\eqref{e1}; see \eqref{f1}.
As before, ${\rm Conn}(J^2(T{\mathbb X}))$ denotes the space of all holomorphic connections on
$J^2(T{\mathbb X})$.
\begin{definition}\label{def1}
A holomorphic connection ${\mathcal D}\, \in\, {\rm Conn}(J^2(T{\mathbb X}))$ will be
called an \textit{oper connection} if the following three conditions hold:
\begin{itemize}
\item ${\mathcal D}(K_{\mathbb X})\,=\, F_2\otimes K_{\mathbb X}$ (see \eqref{ef}),
\item the second fundamental form of $K_{\mathbb X}$ for $\mathcal D$, which, by the first
condition, is a holomorphic section of $\text{Hom}(K_{\mathbb X},\, {\mathcal O}_{\mathbb
X})\otimes K_{\mathbb X}\,=\, {\mathcal O}_{\mathbb X}$ (see \eqref{j1}), coincides with the constant function
$1$ on ${\mathbb X}$, and
\item the holomorphic section of $\text{Hom}(F_2/K_{\mathbb X},\,
J^{2}(T{\mathbb X})/F_2)\otimes K_{\mathbb X}\,=\, {\mathcal O}_{\mathbb X}$ that gives the
second fundamental form of $F_2$ for $\mathcal D$ --- see \eqref{j2} --- coincides with the constant function
$1$ on ${\mathbb X}$.
\end{itemize}
\end{definition}
See \cite{BeDr} for general opers; the oper connections in Definition \ref{def1} are
$\text{GL}(3, {\mathbb C})$--opers on $\mathbb X$.
\begin{lemma}\label{lemm0}
Take any $\eta\, \in\, H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_0$ (see Lemma \ref{lem3}), and let
$$
\varpi(\eta)\, \in \, {\rm Conn}(J^2(T{\mathbb X}))
$$
be the holomorphic connection on $J^2(T{\mathbb X})$ given by $\eta$ in Lemma
\ref{lem3}. Then $\varpi(\eta)$ is an oper connection.
\end{lemma}
\begin{proof}
Take a projective structure $\mathcal P$ on $\mathbb X$. Let
$$\delta({\mathcal P})\, \in\, H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_0$$ be the differential operator corresponding to $\mathcal P$
in Proposition \ref{prop1}(2). Let
$$
{\mathbb D}(\mathcal P)\, \in\, {\rm Conn}(J^2(T{\mathbb X}))
$$
be the connection in Proposition \ref{prop1}(1) corresponding to $\mathcal P$.
It can be shown that
\begin{itemize}
\item $\varpi(\delta({\mathcal P}))\,=\,{\mathbb D}(\mathcal P)$, and
\item ${\mathbb D}(\mathcal P)$ is an oper connection.
\end{itemize}
Indeed, from the proof of Proposition \ref{prop1} we know that it suffices to prove this
for the unique standard projective structure on ${\mathbb P}({\mathbb V})$, for which both
statements are straightforward.
Next note that $H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\, K^{\otimes
2}_{\mathbb X}))_0$ is an affine space for the complex vector space $H^0({\mathbb X},\,
\text{Hom}(J^2(T{\mathbb X}),\, K^{\otimes 2}_{\mathbb X}))$. So $\eta$ in the statement of the
lemma is of the form
\begin{equation}\label{the}
\eta\,=\, \delta({\mathcal P})+\theta\, ,
\end{equation}
where $\theta\, \in\, H^0({\mathbb X},\, \text{Hom}(J^2(T{\mathbb X}),\,
K^{\otimes 2}_{\mathbb X}))$. The composition of homomorphisms
$$
J^2(T{\mathbb X})\, \stackrel{\theta}{\longrightarrow}\, K^{\otimes 2}_{\mathbb X} \,
\stackrel{l}{\longrightarrow}\, J^{2}(T{\mathbb X}) \otimes K_{\mathbb X}\, ,
$$
where $l$ is the homomorphism in \eqref{e4}, will be denoted by $\widetilde{\theta}$. From
\eqref{the} we have
\begin{equation}\label{the2}
\varpi(\eta) - \varpi(\delta({\mathcal P}))\,=\, \varpi(\eta)- {\mathbb D}(\mathcal P)\,=\,
\widetilde{\theta}\, .
\end{equation}
Since ${\mathbb D}(\mathcal P)$ is an oper connection, from
\eqref{the2} it is straightforward to deduce that $\varpi(\eta)$ is also an oper connection.
\end{proof}
Take any ${\mathcal D}\, \in\, {\rm Conn}(J^2(T{\mathbb X}))$. Using $\mathcal D$ we shall
construct an endomorphism of the vector bundle $J^{2}(T{\mathbb X})$. For that, let
\begin{equation}\label{e5}
p_0\, :\, J^{2}(T{\mathbb X})\, \longrightarrow\, T{\mathbb X}
\end{equation}
be the composition
$J^{2}(T{\mathbb X})\, \longrightarrow\, J^{1}(T{\mathbb X}) \,
\longrightarrow\, T{\mathbb X}$
of the projections in the two short exact sequences in Remark
\ref{rem-j}. For any $x\, \in\, \mathbb X$, and $v\, \in\, J^{2}(T{\mathbb X})_x$, let
$\widetilde{v}$ be the unique section of $J^{2}(T{\mathbb X})$ defined on a simply connected open
neighborhood of $x$ such that \begin{itemize} \item $\widetilde{v}$ is flat for the connection
$\mathcal D$, and
\item $\widetilde{v}(x)\,=\, v$.
\end{itemize}
Now we have a holomorphic homomorphism
\begin{equation}\label{fd}
F_{\mathcal D}\, :\, J^{2}(T{\mathbb X})\, \longrightarrow\, J^{2}(T{\mathbb X})
\end{equation}
that sends any $v\, \in\, J^{2}(T{\mathbb X})_x$, $x\, \in\, \mathbb X$, to the element
of $J^{2}(T{\mathbb X})_x$ defined by the section $p_0(\widetilde{v})$,
where $p_0$ is the projection in \eqref{e5}, and $\widetilde v$ is constructed
as above using $v$ and $\mathcal D$.
\begin{lemma}\label{lem4}
A holomorphic connection ${\mathcal D}\,\in\,{\rm Conn}(J^2(T{\mathbb X}))$ lies in
${\rm image}(\varpi)$ (see Lemma \ref{lem3}) if and only if
\begin{itemize}
\item $\mathcal D$ is an oper connection, and
\item $F_{\mathcal D}\,=\, {\rm Id}_{J^{2}(T{\mathbb X})}$, where $F_{\mathcal D}$
is constructed in \eqref{fd}.
\end{itemize}
\end{lemma}
\begin{proof}
As in the proof of Lemma \ref{lemm0}, first take a projective structure $\mathcal P$ on
$\mathbb X$. Let $\delta({\mathcal P})$ (respectively, ${\mathbb D}(\mathcal P)$) be
the differential operator (respectively, holomorphic connection) corresponding to $\mathcal P$
as in the proof of Lemma \ref{lemm0}. We saw that $\varpi(\delta(\mathcal P))\,=\,
{\mathbb D}({\mathcal P})$ and ${\mathbb D}({\mathcal P})$ is an oper connection.
It can be shown that $F_{{\mathbb D}({\mathcal P})}\,=\,
{\rm Id}_{J^{2}(T{\mathbb X})}$, where $F_{{\mathbb D}({\mathcal P})}$
is constructed as in \eqref{fd}. Indeed, it suffices to prove this for
the unique projective structure on ${\mathbb P}({\mathbb V})$, which is straightforward.
Now take $\eta\,=\, \delta({\mathcal P})+\theta$ as in \eqref{the}. Since
$\varpi(\delta(\mathcal P))\,=\,{\mathbb D}({\mathcal P})$ is an oper connection, and
$F_{{\mathbb D}({\mathcal P})}\,=\,{\rm Id}_{J^{2}(T{\mathbb X})}$, from \eqref{the2}
it follows that $F_{\varpi (\eta)}\,=\, {\rm Id}_{J^{2}(T{\mathbb X})}$, where $F_{\varpi(\eta)}$
is constructed as in \eqref{fd} for the connection $\varpi(\eta)$; it was shown in Lemma \ref{lemm0} that
$\varpi(\eta)$ is an oper connection.
To prove the converse, take any ${\mathcal D}\,\in\,{\rm Conn}(J^2(T{\mathbb X}))$. Then
\begin{equation}\label{nb}
{\mathcal D}\,=\, {\mathbb D}(\mathcal P)+\beta\, ,
\end{equation}
where $\beta\, \in\, H^0({\mathbb X},\, {\rm End}(J^2(T{\mathbb X}))\otimes K_{\mathbb X})$.
Now assume that
\begin{itemize}
\item $\mathcal D$ is an oper connection, and
\item $F_{\mathcal D}\,=\, {\rm Id}_{J^{2}(T{\mathbb X})}$, where $F_{\mathcal D}$
is constructed in \eqref{fd}.
\end{itemize}
Since ${\mathbb D}(\mathcal P)$ also satisfies these two conditions, it follows that
there is a unique section
$$\widetilde{\beta}\, \in\, H^0({\mathbb X},\, \text{Hom}(J^2(T{\mathbb X}),\,
K^{\otimes 2}_{\mathbb X}))
$$
such that $\beta$ in \eqref{nb} coincides with the composition of homomorphisms
$$
J^2(T{\mathbb X})\, \stackrel{\widetilde\beta}{\longrightarrow}\, K^{\otimes 2}_{\mathbb X} \,
\stackrel{l}{\longrightarrow}\, J^{2}(T{\mathbb X}) \otimes K_{\mathbb X}\, ,
$$
where $l$ is the homomorphism in \eqref{e4}. Consequently, we have
$$
\varpi(\delta({\mathcal P})+\widetilde{\beta})\,=\, \mathcal D\, .
$$
This proves the lemma.
\end{proof}
\subsection{Differential operator given by projective structures}\label{sec3.2}
Given a projective structure $\mathcal P$ on a Riemann surface $\mathbb X$, recall that in Proposition
\ref{prop1}(2) we constructed an element of $H^0({\mathbb X},\, \text{Diff}^3_{\mathbb
X}(T{\mathbb X},\, K^{\otimes 2}_{\mathbb X}))_0$.
\begin{proposition}\label{prop2}
The space of all projective structures on $\mathbb X$ is in a natural bijection with the subspace
of $H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_0$ consisting of all differential operators $\delta$ satisfying
the following two conditions:
\begin{enumerate}
\item The connection on $\bigwedge^3 J^{2}(T{\mathbb X})$ induced by the connection
$\varpi(\delta)$ on $J^2(T{\mathbb X})$ (see Lemma \ref{lem3}) coincides with the trivial
connection on $\bigwedge^3 J^{2}(T{\mathbb X})$ (see Remark \ref{rem-j}).
\item If $s$ and $t$ are locally defined holomorphic sections of $T{\mathbb X}$ such that
$\delta(s)\,=\, 0\, =\, \delta(t)$, then $\delta([s,\, t])\,=\, 0$, where $[s,\, t]$ is the
usual Lie bracket of the vector fields $s$ and $t$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\textbf{P}({\mathbb X})$ denote the space of all projective structures on $\mathbb X$. Let
$$\textbf{D}({\mathbb X})\, \subset\, H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_0
$$
be the subset consisting of all differential operators $\delta$ satisfying the following conditions:
\begin{enumerate}
\item The connection on $\bigwedge^3 J^{2}(T{\mathbb X})$ induced by the connection
$\varpi(\delta)$ on $J^2(T{\mathbb X})$ coincides with the trivial
connection on $\bigwedge^3 J^{2}(T{\mathbb X})$.
\item If $s$ and $t$ are locally defined holomorphic sections of $T{\mathbb X}$ such that
$\delta(s)\,=\, 0\, =\, \delta(t)$, then $\delta([s,\, t])\,=\, 0$.
\end{enumerate}
Let ${\mathcal P}\,\in\, \textbf{P}({\mathbb X})$ be a projective structure on $\mathbb X$. Let
$$
\delta\, \in\, H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_0
$$
be the differential operator given by $\mathcal P$ (see Proposition \ref{prop1}(2)).
In view of Proposition \ref{prop1}(4), the first
of the above two conditions on $\delta$ is satisfied; also, the second
condition is satisfied because of Proposition \ref{prop1}(3). Therefore, we get a map
\begin{equation}\label{Th}
\Theta\, :\, \textbf{P}({\mathbb X})\, \longrightarrow\,\textbf{D}({\mathbb X})
\end{equation}
that sends any $\mathcal P$ to the corresponding differential operator $\delta$.
There is a natural map
\begin{equation}\label{Psi}
\Psi\, :\, \textbf{D}({\mathbb X})\, \longrightarrow\,\textbf{P}({\mathbb X})
\end{equation}
(see \cite[p.~14, (3.7)]{Bi}). To clarify, set $n\,=\,2$ in the definition of $\mathcal B$ in
\cite[p.~13]{Bi}. Then ${\mathcal B}_0$ in \cite[p.~13]{Bi} coincides with the subset of
$H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))$ consisting of all $\delta'$ such that
\begin{itemize}
\item the symbol of $\delta'$ is the constant function $1$, and
\item the holomorphic connection on $\bigwedge^3 J^{2}(T{\mathbb X})$ induced by
the connection $\varpi(\delta')$ on $J^{2}(T{\mathbb X})$
coincides with the trivial connection on $\bigwedge^3 J^{2}(T{\mathbb X})\,=\, {\mathcal O}_{\mathbb X}$.
\end{itemize}
For the maps $\Theta$ and $\Psi$ constructed in \eqref{Th} and \eqref{Psi} respectively, we have
\begin{equation}\label{e6}
\Psi\circ\Theta\,=\, \text{Id}_{\textbf{P}({\mathbb X})}\, ;
\end{equation}
this follows from the combination of the facts that
\begin{itemize}
\item the map $F$ in \cite[p.~19, (5.4)]{Bi} is a bijection,
\item the map $\Psi$ in \eqref{Psi} coincides with the composition of the map $F$ in
\cite[p.~19, (5.4)]{Bi} with the natural projection $\textbf{P}({\mathbb X})\times
H^0({\mathbb X} , \, K^{\otimes 3}_{\mathbb X})\, \longrightarrow\,
H^0({\mathbb X} , \, K^{\otimes 3}_{\mathbb X})$, and
\item $F^{-1}({\mathcal P},\, 0)\,=\, \Theta ({\mathcal P})$ for all
${\mathcal P}\, \in\, \textbf{P}({\mathbb X})$, where $\Theta$ is the map in \eqref{Th}.
\end{itemize}
{}From \eqref{e6} we conclude that the map $\Theta$ in \eqref{Th} is injective.
We will prove that the map $\Theta$ is surjective as well.
Let
$$
H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_1 \, \subset\,
H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_0
$$
be the subset consisting of all
$\eta\, \in\, H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_0$
such that the connection on $\bigwedge^3 J^{2}(T{\mathbb X})$ induced by the connection
$\varpi(\eta)$ on $J^2(T{\mathbb X})$ (see Lemma \ref{lem3}) coincides with the trivial
connection on $\bigwedge^3 J^{2}(T{\mathbb X})$ (see Remark \ref{rem-j}). Let
\begin{equation}\label{Psip}
\Psi'\, :\, H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_1 \, \longrightarrow\,\textbf{P}({\mathbb X})
\end{equation}
be the map in \cite[p.~14, (3.7)]{Bi}; recall that the map $\Psi$ in \eqref{Psi}
is the restriction of this map $\Psi'$ to the subset $\textbf{D}({\mathbb X})$ of
$H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_1$.
Take any
\begin{equation}\label{eeta}
\eta\, \in\, H^0({\mathbb X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\,
K^{\otimes 2}_{\mathbb X}))_1\, .
\end{equation}
{}From the
isomorphism $F$ in \cite[p.~19, (5.4)]{Bi} we know that there is a holomorphic section
$$
\xi\, \in\, H^0({\mathbb X},\, K^{\otimes 3}_{\mathbb X})
$$
such that
\begin{equation}\label{xi}
\eta\,=\, \Theta(\Psi'(\eta))+\xi\, ,
\end{equation}
where $\Psi'$ and $\Theta$ are the maps in \eqref{Psip} and \eqref{Th} respectively.
We now impose the following condition on $\eta$ in \eqref{eeta}:
If $s$ and $t$ are locally defined holomorphic sections of $T{\mathbb X}$ such that
$\eta(s)\,=\, 0\, =\, \eta(t)$, then $\eta([s,\, t])\,=\, 0$.
Since $\Theta(\Psi'(\eta))\, \in\, \textbf{D}({\mathbb X})$, the operator
$\Theta(\Psi'(\eta))$ in particular satisfies the above condition. Combining this with the
above condition on $\eta$, it follows that the section $\xi$ in \eqref{xi} vanishes
identically. So, we have $\eta\,=\, \Theta(\Psi'(\eta))$. This implies that
the map $\Theta$ is surjective.
\end{proof}
As before, ${\rm Conn}(J^2(T{\mathbb X}))$ denotes the space of all holomorphic
connections on $J^2(T{\mathbb X})$.
Lemma \ref{lem4} and Proposition \ref{prop2} combine to give the following:
\begin{corollary}\label{cor1}
The space of all projective structures on $\mathbb X$ is in a natural bijection with the subset
of ${\rm Conn}(J^2(T{\mathbb X}))$ defined by all connections
$\mathcal D$ satisfying the following four conditions:
\begin{enumerate}
\item $\mathcal D$ is an oper connection,
\item $F_{\mathcal D}\,=\, {\rm Id}_{J^{2}(T{\mathbb X})}$, where $F_{\mathcal D}$
is constructed in \eqref{fd},
\item the connection on $\bigwedge^3 J^{2}(T{\mathbb X})$ induced by $\mathcal D$ coincides with
the trivial connection on $\bigwedge^3 J^{2}(T{\mathbb X})$, and
\item if $s$ and $t$ are locally defined holomorphic sections of $T{\mathbb X}$ such that
the sections of $J^{2}(T{\mathbb X})$ corresponding to $s$ and $t$ are flat with respect to
${\mathcal D}$, then the section of $J^{2}(T{\mathbb X})$ corresponding to $[s,\, t]$ is also
flat with respect to ${\mathcal D}$.
\end{enumerate}
\end{corollary}
\begin{proof}
In Proposition \ref{prop1} we constructed a map from the projective structures on $\mathbb X$ to
the holomorphic connections on $J^{2}(T{\mathbb X})$. The holomorphic connections on
$J^{2}(T{\mathbb X})$ obtained this way satisfy all the four conditions in the statement of the
corollary. See Proposition \ref{prop1} for conditions (3) and (4); see the proof
of Lemma \ref{lemm0} for condition (1); see the proof of Lemma \ref{lem4} for (2).
Conversely, let $\mathcal D$ be a holomorphic connection on $J^{2}(T{\mathbb X})$
satisfying the four conditions. In view of Lemma \ref{lem4}, from the first two conditions we
conclude that ${\mathcal D}\,=\, \varpi(\delta)$
for some $\delta\, \in\, H^0({\mathbb
X},\, {\rm Diff}^3_{\mathbb X}(T{\mathbb X},\, K^{\otimes 2}_{\mathbb X}))_0$.
In view of Proposition \ref{prop2}, from the third and fourth conditions we conclude that
$\mathcal D$ corresponds to a projective structure on $\mathbb X$.
\end{proof}
\section{Projective structures and orthogonal opers}\label{sec4}
Projective structures on a Riemann surface $\mathbb X$ are precisely the $\text{PSL}(2,{\mathbb C})$--opers
on $\mathbb X$. On the other hand, we have the isomorphism $\text{PSL}(2,{\mathbb C})\,=\, \text{SO}(3,{\mathbb C})$. This
isomorphism is obtained by identifying ${\mathbb C}^3$ equipped with the standard nondegenerate symmetric form
and $\text{Sym}^2({\mathbb C}^2)$ equipped with the nondegenerate symmetric form constructed using the
standard anti-symmetric form on ${\mathbb C}^2$; this identification produces a homomorphism from
$\text{SL}(2,{\mathbb C})$ to $\text{SO}(3,{\mathbb C})$, which factors through $\text{PSL}(2,{\mathbb C})$,
producing an isomorphism of $\text{PSL}(2,{\mathbb C})$ with $\text{SO}(3,{\mathbb C})$.
Therefore, projective structures on $\mathbb X$ are
precisely the $\text{SO}(3,{\mathbb C})$--opers on $\mathbb X$. In this section we shall elaborate on this
point of view.
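The isomorphism $\text{PSL}(2,{\mathbb C})\,\cong\,\text{SO}(3,{\mathbb C})$ described above can be made concrete as follows (the notation $\omega$ and $B$ here is ours, chosen for illustration). Let $\omega$ denote the standard anti-symmetric form on ${\mathbb C}^2$. It induces the symmetric form
$$
B(u\cdot v,\, w\cdot z)\,=\, \omega(u,w)\,\omega(v,z)+\omega(u,z)\,\omega(v,w)\, ,
\ \ u,\, v,\, w,\, z\,\in\, {\mathbb C}^2\, ,
$$
on $\text{Sym}^2({\mathbb C}^2)$, where $u\cdot v$ denotes the symmetric product. This form $B$ is nondegenerate, and the natural action of $\text{SL}(2,{\mathbb C})$ on $\text{Sym}^2({\mathbb C}^2)$ preserves $B$ because $\text{SL}(2,{\mathbb C})$ preserves $\omega$. The resulting homomorphism $\text{SL}(2,{\mathbb C})\,\longrightarrow\, \text{SO}(3,{\mathbb C})$ has kernel $\{\pm{\rm Id}\}$, and it produces the isomorphism $\text{PSL}(2,{\mathbb C})\,\stackrel{\sim}{\longrightarrow}\, \text{SO}(3,{\mathbb C})$.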
A holomorphic $\text{SO}(3,{\mathbb C})$--bundle on a Riemann surface $\mathbb X$ consists of a holomorphic vector
bundle $W$ of rank three on $\mathbb X$ together with a holomorphic section $B_W\, \in\, H^0({\mathbb X},\,
\text{Sym}^2(W^*))$, such that
\begin{itemize}
\item $\bigwedge^3 W$ is identified with ${\mathcal O}_{\mathbb X}$ by a given isomorphism,
\item $B_W$ is fiberwise nondegenerate, and
\item the given isomorphism of $\bigwedge^3 W$ with ${\mathcal O}_{\mathbb X}$ takes the
bilinear form on $\bigwedge^3 W$ induced by $B_W$ to the standard constant bilinear form on
${\mathcal O}_{\mathbb X}$ (the corresponding quadratic form takes the section of ${\mathcal O}_{\mathbb X}$
given by the constant function $1$ on $\mathbb X$ to the function $1$).
\end{itemize}
A \textit{filtered} $\text{SO}(3,{\mathbb C})$--bundle on $\mathbb X$ is a holomorphic
$\text{SO}(3,{\mathbb C})$--bundle $(W,\, B_W)$ together with a filtration of holomorphic subbundles
$$
F^W_1\, \subset\, F^W_2\, \subset\, W
$$
such that
\begin{enumerate}
\item $F^W_1$ is holomorphically identified with $K_{\mathbb X}$ by a given isomorphism,
\item $B_W(F^W_1\otimes F^W_1)\, =\, 0$,
\item $F^W_2/F^W_1$ is holomorphically identified with ${\mathcal O}_{\mathbb X}$ by a given isomorphism,
\item $B_W(F^W_1\otimes F^W_2)\, =\, 0$; in other words, $(F^W_1)^\perp\, =\, F^W_2$.
\end{enumerate}
Note that the first and third conditions together imply that $W/F^W_2$ is holomorphically identified with
$\bigwedge^3 W \otimes (K_{\mathbb X})^* \,=\, T\mathbb X$.
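Spelling out this identification: the filtration induces a canonical isomorphism
$$
\bigwedge\nolimits^3 W\,\cong\, F^W_1\otimes (F^W_2/F^W_1)\otimes (W/F^W_2)\,\cong\,
K_{\mathbb X}\otimes (W/F^W_2)
$$
using the isomorphisms in the first and third conditions, and hence
$$
W/F^W_2\,\cong\, \bigwedge\nolimits^3 W\otimes (K_{\mathbb X})^*\,\cong\, T{\mathbb X}\, ,
$$
because $\bigwedge^3 W$ is identified with ${\mathcal O}_{\mathbb X}$.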
A \textit{holomorphic connection} on a filtered $\text{SO}(3,{\mathbb C})$--bundle $(W,\, B_W,\, \{F^W_i\}_{i=1}^2)$
is a holomorphic connection $D_W$ on $W$ such that
\begin{itemize}
\item the holomorphic connection $D_W$ preserves the bilinear form $B_W$ on $W$,
\item the holomorphic connection on $\bigwedge^3 W\,=\, {\mathcal O}_{\mathbb X}$ induced by $D_W$ coincides
with the holomorphic
connection on ${\mathcal O}_{\mathbb X}$ given by the de Rham differential $d$,
\item $D_W(F^W_1)\,=\, F^W_2\otimes K_{\mathbb X}$, and
\item the second fundamental form of $F^W_1$ for $D_W$, which is a holomorphic section of
$\text{Hom}(K_{\mathbb X},\, {\mathcal O}_{\mathbb X})\otimes K_{\mathbb X}\,=\, {\mathcal O}_{\mathbb
X}$ (see \eqref{j1}), coincides with the section of ${\mathcal O}_{\mathbb
X}$ given by the constant function $1$ on $\mathbb X$.
\end{itemize}
An $\text{SO}(3,{\mathbb C})$--\textit{oper} on ${\mathbb X}$ is a filtered $\text{SO}(3,{\mathbb C})$--bundle
$(W,\, B_W,\, \{F^W_i\}_{i=1}^2)$ equipped with a holomorphic connection $D_W$.
The above conditions imply that
the holomorphic section of $\text{Hom}({\mathcal O}_{\mathbb X},\, T{\mathbb X})\otimes K_{\mathbb X}\,=\,
{\mathcal O}_{\mathbb X}$ that gives the second fundamental form of $F^W_2$ for $D_W$ --- see \eqref{j2} --- coincides
with the one given by the constant function $1$. See \cite{BeDr} for more on
$\text{SO}(3,{\mathbb C})$--opers.
Recall from \eqref{eB} that $J^2(T{\mathbb P}(\mathbb V))$ is equipped with the bilinear form $B_0$, and
it has the filtration $\{F^{\mathbb P}_i\}_{i=1}^2$ constructed in \eqref{f1}. From Section \ref{se2.4} we conclude
that $$(J^2(T{\mathbb P}(\mathbb V)),\, B_0,\, \{F^{\mathbb P}_i\}_{i=1}^2,\, \mathbb{D}_0)$$ is an
$\text{SO}(3,{\mathbb C})$--oper on ${\mathbb P}(\mathbb V)$, where $\mathbb{D}_0$
is the holomorphic connection on $J^2(T{\mathbb P}(\mathbb V))$ constructed in \eqref{d}.
Consider the action of $\text{PGL}({\mathbb V})$ on $J^2(T{\mathbb P}(\mathbb V))$ given by the action
of $\text{PGL}({\mathbb V})$ on ${\mathbb P}(\mathbb V)$. Both $B_0$ and the filtration $\{F^{\mathbb P}_i\}_{i=1}^2$
are clearly preserved by this action of $\text{PGL}({\mathbb V})$ on $J^2(T{\mathbb P}(\mathbb V))$. As noted
in the proof of Proposition \ref{prop1}(1), the connection $\mathbb{D}_0$ is preserved by the
action of $\text{PGL}({\mathbb V})$ on $J^2(T{\mathbb P}(\mathbb V))$.
Let $(W,\, B_W,\, \{F^W_i\}_{i=1}^2,\, D_W)$ be an $\text{SO}(3,{\mathbb C})$--oper on ${\mathbb X}$.
Denote by $P(W)$ the projective bundle on $\mathbb X$ that parametrizes the lines in the fibers of $W$. Let
\begin{equation}\label{pw}
{\mathbf P}_W\, \subset\, P(W)
\end{equation}
be the ${\mathbb C}{\mathbb P}^1$--bundle on $\mathbb X$ that parametrizes the isotropic lines for $B_W$
(the lines on which the corresponding quadratic form vanishes). So the given condition
$B_W(F^W_1\otimes F^W_1)\, =\, 0$ implies that the line subbundle $F^W_1\, \subset\, W$
produces a holomorphic section
$$
s_W\, :\, {\mathbb X}\, \longrightarrow\, {\mathbf P}_W
$$
of the ${\mathbb C}{\mathbb P}^1$--bundle in \eqref{pw}.
The connection $D_W$ produces a holomorphic connection on $P(W)$, which, in turn,
induces a holomorphic connection on ${\mathbf P}_W$; this holomorphic
connection on ${\mathbf P}_W$ will be denoted by $\mathcal{H}_W$. It is straightforward to check that
the triple $({\mathbf P}_W,\, \mathcal{H}_W,\, s_W)$ defines a projective structure on $\mathbb X$ (the
condition for the triple to define a projective structure is recalled in Section \ref{se2.1}).
Conversely, let $\mathcal P$ be a projective structure on $\mathbb X$. Take a
holomorphic coordinate atlas $\{(U_i,\, \phi_i)\}_{i\in I}$ in the equivalence class defining
$\mathcal P$. Using $\phi_i$,
\begin{itemize}
\item the bilinear form $B_0$ on $J^2(T{\mathbb P}(\mathbb V))\vert_{\phi_i(U_i)}$ constructed in
\eqref{eB} produces a bilinear form $B_{\mathcal P}(i)$ on $J^2(T{\mathbb X})\vert_{U_i}$,
\item the filtration $\{F^{\mathbb P}_j\}_{j=1}^2$ of $J^2(T{\mathbb P}(\mathbb V))\vert_{\phi_i(U_i)}$
constructed in \eqref{f1} produces a filtration $\{F^{\mathcal P}_j(i)\}_{j=1}^2$ of
$J^2(T{\mathbb X})\vert_{U_i}$, and
\item the holomorphic connection ${\mathbb D}_0\vert_{\phi_i(U_i)}$ in
\eqref{d} on $J^2(T{\mathbb P}(\mathbb V))\vert_{\phi_i(U_i)}$ produces a holomorphic connection
${\mathbb D}_{\mathcal P}(i)$ on $J^2(T{\mathbb X})\vert_{U_i}$.
\end{itemize}
Since $B_0$, $\{F^{\mathbb P}_j\}_{j=1}^2$ and ${\mathbb D}_0$ are all $\text{PGL}({\mathbb V})$--equivariant, each of
the locally defined structures $\{B_{\mathcal P}(i)\}_{i\in I}$, $\{\{F^{\mathcal P}_j(i)\}_{j=1}^2\}_{i\in I}$
and $\{{\mathbb D}_{\mathcal P}(i)\}_{i\in I}$ patch together compatibly to define
\begin{itemize}
\item a holomorphic nondegenerate symmetric bilinear form $B_{\mathcal P}$ on $J^2(T{\mathbb X})$,
\item a filtration $\{F^{\mathcal P}_j\}_{j=1}^2$ of holomorphic subbundles of $J^2(T{\mathbb X})$, and
\item a holomorphic connection ${\mathbb D}_{\mathcal P}$ on $J^2(T{\mathbb X})$.
\end{itemize}
Since $(J^2(T{\mathbb P}(\mathbb V)),\, B_0,\, \{F^{\mathbb P}_i\}_{i=1}^2,\, \mathbb{D}_0)$ is an
$\text{SO}(3,{\mathbb C})$--oper on ${\mathbb P}(\mathbb V)$, we conclude that
$$(J^2(T{\mathbb X}),\, B_{\mathcal P},\, \{F^{\mathcal P}_j\}_{j=1}^2,\, {\mathbb D}_{\mathcal P})$$
is an $\text{SO}(3,{\mathbb C})$--oper on ${\mathbb X}$.
It is straightforward to check that the above two constructions, namely from $\text{SO}(3,
{\mathbb C})$--opers on ${\mathbb X}$ to projective structures on ${\mathbb X}$ and vice versa, are inverses
of each other.
The above construction of an $\text{SO}(3,{\mathbb C})$--oper on ${\mathbb X}$ from $\mathcal P$ has the
following alternative description.
Let $\gamma\, :\, {\mathbf P}_{\mathcal P}\, \longrightarrow\, {\mathbb X}$ be a holomorphic ${\mathbb
C}{\mathbb P}^1$--bundle, ${\mathcal H}_{\mathcal P}$ a holomorphic connection on ${\mathbf P}_{\mathcal P}$
and $s_{\mathcal P}$ a holomorphic section of $\gamma$, such that the triple $$(\gamma,\, {\mathcal H}_{\mathcal P},
\, s_{\mathcal P})$$ gives the projective structure $\mathcal P$ (see Section \ref{se2.1}). Let
$$
W\, :=\, \gamma_* T_\gamma\, \longrightarrow\, {\mathbb X}
$$
be the direct image, where $T_\gamma$ is defined in \eqref{tga}. For any $x\, \in\, \mathbb X$, the
fiber $W_x$ is identified with $H^0(\gamma^{-1}(x), \, T(\gamma^{-1}(x)))$, so $W_x$ is a Lie
algebra isomorphic to $\mathfrak{sl}(2,{\mathbb C})$; the Lie algebra structure is given by the
Lie bracket of vector fields. Let $B_W$ denote the Killing form on $W$.
If $\mathbb V$ is a rank two holomorphic
vector bundle on $\mathbb X$ such that ${\mathbb P}({\mathbb V})\,=\, {\mathbf P}_{\mathcal P}$, then
$W\,=\, {\rm ad}({\mathbb V})\, \subset\, \text{End}({\mathbb V})$ (the subalgebra bundle of trace zero endomorphisms).
It should be clarified that although $\mathbb V$ is not uniquely determined by ${\mathbf P}_{\mathcal P}$, any
two choices of $\mathbb V$ with ${\mathbb P}({\mathbb V})\,=\, {\mathbf P}_{\mathcal P}$ differ by tensoring
with a holomorphic line bundle on $\mathbb X$. As a consequence, $\text{End}({\mathbb V})$ and ${\rm ad}({\mathbb V})$
are uniquely determined by ${\mathbf P}_{\mathcal P}$.
The bilinear form $B_W$ coincides with the bilinear form on ${\rm ad}({\mathbb V})$ given by the Killing form of the endomorphism algebra (constructed using the trace).
Since the image $s_{\mathcal P}({\mathbb X})\, \subset\, {\mathbf P}_{\mathcal P}$
of the section $s_{\mathcal P}$ is a divisor, the
holomorphic vector bundle $W$ has a filtration of holomorphic subbundles
\begin{equation}\label{of2}
F^{\mathbb P}_1\, :=\,
\gamma_* (T_\gamma\otimes {\mathcal O}_{{\mathbf P}_{\mathcal P}}(-2s_{\mathcal P}({\mathbb X})))
\, \subset\, F^{\mathbb P}_2\, :=\,
\gamma_* (T_\gamma\otimes {\mathcal O}_{{\mathbf P}_{\mathcal P}}(-s_{\mathcal P}({\mathbb X})))
\, \subset\, \gamma_* T_\gamma\,=:\, W\, .
\end{equation}
Recall from Section \ref{se2.1} that the homomorphism $\widehat{ds_{\mathcal P}}\, :\,
T{\mathbb X}\, \longrightarrow\, (s_{\mathcal P})^*T_\gamma$ (see \eqref{dc2}) is an
isomorphism. Therefore, $\widehat{ds_{\mathcal P}}$ gives a holomorphic isomorphism between $T\mathbb X$ and
the normal bundle ${\mathbf N}\,=\, N_{s_{\mathcal P}({\mathbb X})}$ of the divisor $s_{\mathcal
P}({\mathbb X})\,\subset\, {\mathbf P}_{\mathcal P}$. On the other hand, by the Poincar\'e
adjunction formula, ${\mathbf N}^*$ is identified with the restriction ${\mathcal O}_{{\mathbf
P}_{\mathcal P}}(-s_{\mathcal P}({\mathbb X}))\vert_{s_{\mathcal P}({\mathbb X})}$ of the
holomorphic line bundle ${\mathcal O}_{{\mathbf P}_{\mathcal P}}(-s_{\mathcal P}({\mathbb X}))$
to the divisor $s_{\mathcal P}({\mathbb X})$; see \cite[p.~146]{GH} for the Poincar\'e adjunction
formula. Therefore, the pulled back holomorphic line bundle
\begin{equation}\label{lb}
(s_{\mathcal P})^*(T_\gamma\otimes
{\mathcal O}_{{\mathbf P}_{\mathcal P}}(-2s_{\mathcal P}({\mathbb X})))
\end{equation}
is identified with $T\mathbb X\otimes K^{\otimes 2}_{\mathbb X}\,=\, K_{\mathbb X}$.
Since the line bundle in \eqref{lb} is canonically identified with $F^{\mathbb P}_1$ in \eqref{of2},
we conclude that $F^{\mathbb P}_1$ is identified with $K_{\mathbb X}$.
The quotient line bundle $F^{\mathbb P}_2/F^{\mathbb P}_1$ in \eqref{of2} is identified with
$$
(s_{\mathcal P})^*(T_\gamma\otimes
{\mathcal O}_{{\mathbf P}_{\mathcal P}}(-s_{\mathcal P}({\mathbb X})))\, .
$$
Since $(s_{\mathcal P})^*({\mathcal O}_{{\mathbf P}_{\mathcal P}}(-s_{\mathcal P}({\mathbb X})))$
is identified with $K_{\mathbb X}$, it follows that $F^{\mathbb P}_2/F^{\mathbb P}_1$ is
identified with ${\mathcal O}_{\mathbb X}$.
For any $x\, \in\, \mathbb X$, the subspace $(F^{\mathbb P}_1)_x\, \subset\, W_x$ is a nilpotent subalgebra, and
$(F^{\mathbb P}_2)_x\, \subset\, W_x$ is the unique Borel subalgebra containing $(F^{\mathbb P}_1)_x$.
Hence we have $B_W(F^{\mathbb P}_1\otimes F^{\mathbb P}_1)\,=\, 0$ and
$(F^{\mathbb P}_1)^\perp \,=\, F^{\mathbb P}_2$.
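These two properties can be checked directly in a fiber. Take the standard basis $e,\, h,\, f$ of
$\mathfrak{sl}(2,{\mathbb C})$, normalized so that $(F^{\mathbb P}_1)_x\,=\, {\mathbb C}\cdot e$ and
$(F^{\mathbb P}_2)_x\,=\, {\mathbb C}\cdot e\oplus {\mathbb C}\cdot h$. The Killing form of
$\mathfrak{sl}(2,{\mathbb C})$ is $B(u,\, v)\,=\, 4\,\text{trace}(uv)$, so
$$
B(e,\, e)\,=\, 0\, ,\ \ B(e,\, h)\,=\, 0\, ,\ \ B(e,\, f)\,=\, 4\, \not=\, 0\, ,
$$
showing that ${\mathbb C}\cdot e$ is isotropic and that its orthogonal complement is the Borel
subalgebra ${\mathbb C}\cdot e\oplus {\mathbb C}\cdot h$.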
Consequently, $(W,\, B_W,\, \{F^{\mathbb P}_i\}_{i=1}^2)$ is a filtered $\text{SO}(3,{\mathbb
C})$--bundle on $\mathbb X$. The holomorphic connection ${\mathcal H}_{\mathcal P}$ on ${\mathbf
P}_{\mathcal P}$ produces a holomorphic connection
\begin{equation}\label{j-1}
{\mathbb D}_{\mathcal P}
\end{equation}
on $W$. Indeed,
from the fact that $\text{Aut}({\mathbb C}{\mathbb P}^1)\,=\, \text{PSL}(2, {\mathbb C})$ it follows
that the ${\mathbb C}{\mathbb P}^1$--bundle ${\mathbf P}_{\mathcal P}$ gives a holomorphic principal
$\text{PSL}(2, {\mathbb C})$--bundle ${\mathbf P}_{\mathcal P}(\text{PSL}(2, {\mathbb C}))$
on $\mathbb X$, and ${\mathcal H}_{\mathcal P}$ produces a
holomorphic connection on ${\mathbf P}_{\mathcal P}(\text{PSL}(2, {\mathbb C}))$. This
holomorphic connection on ${\mathbf P}_{\mathcal P}(\text{PSL}(2, {\mathbb C}))$ produces a
holomorphic connection on the vector bundle $W$ associated to ${\mathbf P}_{\mathcal P}(\text{PSL}(2, {\mathbb C}))$
for the adjoint action of $\text{PSL}(2, {\mathbb C})$ on its Lie algebra
$\mathfrak{sl}(2,{\mathbb C})$; this connection on $W$ is denoted by ${\mathbb D}_{\mathcal P}$.
The connection ${\mathbb D}_{\mathcal P}$ in \eqref{j-1} is in fact a
holomorphic connection on the filtered $\text{SO}(3,{\mathbb C})$--bundle $(W,\, B_W,\,
\{F^{\mathbb P}_i\}_{i=1}^2)$. The resulting $\text{SO}(3,{\mathbb C})$--oper $(W,\, B_W,\, \{F^{\mathbb
P}_i\}_{i=1}^2,\, {\mathbb D}_{\mathcal P})$ coincides with the one constructed earlier from $\mathcal
P$.
\section{Branched projective structures and logarithmic connections}\label{se5}
\subsection{Branched projective structure}\label{se5.1}
Let $X$ be a connected Riemann surface. Fix a nonempty finite subset
\begin{equation}\label{e7}
{\mathbb S}\, :=\, \{x_1, \, \cdots ,\, x_d\}\, \subset\, X
\end{equation}
of $d$ distinct points. For each $x_i$, $1\, \leq\, i\, \leq\, d$, fix an integer $n_i\, \geq\, 1$.
Let
\begin{equation}\label{e0n}
S\, :=\, \sum_{i=1}^d n_i\cdot x_i
\end{equation}
be the effective divisor on $X$.
The group of all holomorphic automorphisms of ${\mathbb C}{\mathbb P}^1$ is the M\"obius group
$\text{PGL}(2,{\mathbb C})$. Any
$$
\begin{pmatrix}
a & b\\
c& d
\end{pmatrix} \, \in\, \text{SL}(2,{\mathbb C})
$$
acts on ${\mathbb C}{\mathbb P}^1\,=\, {\mathbb C}\cup\{\infty\}$ as
$z\, \longmapsto\, \frac{az+b}{cz+d}$; the center ${\mathbb Z}/2\mathbb Z$ of
$\text{SL}(2,{\mathbb C})$ acts trivially,
thus producing an action of $\text{PGL}(2,{\mathbb C})\,=\, \text{SL}(2,{\mathbb C})/({\mathbb Z}
/2\mathbb Z)$ on ${\mathbb C}{\mathbb P}^1$.
A branched projective structure on $X$ with branching type $S$ (defined in \eqref{e0n})
is given by data
\begin{equation}\label{de1}
\{(U_j,\, \phi_j)\}_{j\in J}\, ,
\end{equation}
where
\begin{enumerate}
\item $U_j\, \subset\, X$ is a connected open subset with $\# (U_j\bigcap {\mathbb S})\, \leq\, 1$ such that
$\bigcup_{j\in J} U_j\,=\, X$,
\item $\phi_j\, :\, U_j\,\longrightarrow\, {\mathbb C}{\mathbb P}^1$ is a holomorphic map
which is an immersion on the complement $U_j\setminus (U_j\bigcap {\mathbb S})$,
\item if $U_j\bigcap {\mathbb S}\,=\, \{x_i\}$, then $\phi_j$ is of degree $n_i+1$ and totally ramified at $x_i$,
while $\phi_j$ is an embedding if $U_j\bigcap {\mathbb S}\,=\, \emptyset$, and
\item for every $j,\, j'\, \in\, J$ with $U_j\bigcap U_{j'}\,\not=\, \emptyset$, and
every connected component $U\, \subset\, U_j\bigcap U_{j'}$, there is an
element $f^U_{j,j'}\, \in \, \text{PGL}(2,{\mathbb C})$, such that $\phi_{j}\, =\, f^U_{j,j'}\circ
\phi_{j'}$ on $U$.
\end{enumerate}
Two such data $\{(U_j,\, \phi_j)\}_{j\in J}$ and $\{(U'_j,\, \phi'_j)\}_{j\in J'}$ satisfying the above
conditions are called \textit{equivalent} if their union $\{(U_j,\, \phi_j)\}_{j\in J}
\bigcup \{(U'_j,\, \phi'_j)\}_{j\in J'}$ also satisfies the above
conditions.
A \textit{branched projective structure} on $X$ with branching type $S$ is an equivalence
class of data $\{(U_j,\, \phi_j)\}_{j\in J}$ satisfying the above conditions. This definition
was introduced in \cite{Ma1}, \cite{Ma2} (see also \cite{BD} for the more general notion of a branched Cartan geometry).
We will now describe an equivalent formulation of the definition of a branched projective structure.
Consider a triple $(\gamma,\, {\mathcal H},\, s)$, where
\begin{itemize}
\item $\gamma\, :\, {\mathbf P}\, \longrightarrow\, X$ is a holomorphic ${\mathbb C}{\mathbb P}^1$--bundle,
\item ${\mathcal H}\, \subset\, T{\mathbf P}$ is a holomorphic connection (as before,
$T{\mathbf P}$ is the holomorphic tangent bundle and ${\mathcal H}\oplus \text{kernel}(d\gamma)
\,=\, T{\mathbf P}$), and
\item $s\, :\, X\, \longrightarrow\,{\mathbf P}$ is a holomorphic section of
$\gamma$,
\end{itemize}
such that the divisor for the homomorphism $\widehat{ds}$ in \eqref{dc2} coincides with $S$
in \eqref{e0n}.
This triple $(\gamma,\, {\mathcal H},\, s)$ gives a branched projective structure on $X$
with branching type $S$.
Two such triples $(\gamma_1,\, {\mathcal H}_1,\, s_1)$ and $(\gamma_2,\, {\mathcal H}_2,\, s_2)$ are
called equivalent if there is a holomorphic isomorphism
$$
\mathbb{I}\, :\, {\mathbf P}_1\, \longrightarrow\, {\mathbf P}_2
$$
such that
\begin{itemize}
\item $\gamma_1\,=\, \gamma_2\circ\mathbb{I}$,
\item $d\mathbb{I} ({\mathcal H}_1)\,=\, \mathbb{I}^*{\mathcal H}_2$, where $d\mathbb{I}\, :\,
T{\mathbf P}_1\, \longrightarrow\, \mathbb{I}^*T{\mathbf P}_2$ is the differential of the map
$\mathbb{I}$, and
\item $\mathbb{I}\circ s_1\,=\, s_2$.
\end{itemize}
Two equivalent triples produce the same branched projective structure on $X$. More precisely,
this map from the equivalence classes of triples to the
branched projective structures on $X$ with branching type $S$ is both injective and surjective.
More details on this can be found in \cite[Section 2.1]{BDG}.
For convenience of exposition, we will assume that we are in the generic situation where the
divisor $S$ in \eqref{e0n} is reduced. In other words, $n_i\,=\, 1$ for all $1\, \leq\, i\,\leq\, d$,
and
\begin{equation}\label{a1}
S\, =\, \sum_{i=1}^d x_i\, .
\end{equation}
\subsection{Logarithmic connections}\label{se5.2}
Let $Y$ be a connected Riemann surface and ${\mathcal S}\, =\, \sum_{i=1}^m y_i$ be
an effective divisor on $Y$, where $y_1,\, \cdots,\, y_m$ are $m$ distinct points of $Y$.
The holomorphic cotangent bundle of $Y$ will
be denoted by $K_Y$. For any point $y\, \in\, \mathcal S$, the
fiber $(K_Y\otimes {\mathcal O}_Y({\mathcal S}))_y$ is identified with $\mathbb C$ by sending a
meromorphic $1$-form, defined around $y$ with a pole of order at most one at $y$,
to its residue at $y$. In other words, for any holomorphic coordinate function
$z$ on $Y$ defined around the point $y$, with $z(y)\,=\, 0$, consider the isomorphism
\begin{equation}\label{ry}
R_y\, :\, (K_Y\otimes {\mathcal O}_Y({\mathcal S}))_y\, \longrightarrow\, {\mathbb C}\, ,
\ \ c\cdot \frac{dz}{z} \, \longmapsto\, c\, .
\end{equation}
This isomorphism is in fact independent of the choice of the above coordinate function $z$.
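This independence can be verified as follows: if $w$ is another holomorphic coordinate function
around $y$ with $w(y)\,=\, 0$, then $w\,=\, z\cdot u$ with $u(y)\, \not=\, 0$, and
$$
\frac{dw}{w}\,=\, \frac{dz}{z} + \frac{du}{u}\, ;
$$
since $\frac{du}{u}$ is holomorphic around $y$, it lies in the subsheaf $K_Y\, \subset\,
K_Y\otimes {\mathcal O}_Y({\mathcal S})$, and hence its image in the fiber
$(K_Y\otimes {\mathcal O}_Y({\mathcal S}))_y$ vanishes. Consequently, $c\cdot \frac{dw}{w}$ and
$c\cdot \frac{dz}{z}$ have the same image in this fiber.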
Let $W$ be a holomorphic vector bundle on $Y$.
A \textit{logarithmic connection} on $W$ singular over $\mathcal S$ is a holomorphic
differential operator of order one
$$
D\, :\, W\, \longrightarrow\, W\otimes K_Y\otimes {\mathcal O}_Y({\mathcal S})
$$
such that $D(fs) \,=\, f D(s) + s\otimes df$ for all locally defined holomorphic functions $f$
on $Y$ and all locally defined holomorphic sections $s$ of $W$. This means that
$$
D\, \in\, H^0(Y,\,\text{Diff}^1_Y(W,\, W\otimes K_Y\otimes {\mathcal O}_Y({\mathcal S})))
$$
such that the symbol of $D$
is the holomorphic section of $\text{End}(W)\otimes{\mathcal O}_Y({\mathcal S})$ given
by $\text{Id}_W\otimes 1$, where $1$ is the constant function $1$ on $Y$.
Note that a logarithmic connection $D$ defines a holomorphic connection on the restriction of $W$
to the complement $Y\setminus \mathcal S$. The logarithmic connection $D$ is called an extension
of this holomorphic connection on $W\vert_{Y\setminus \mathcal S}$.
For a logarithmic connection $D_0$ on $W$ singular over $\mathcal S$, and a point $y\, \in\,
{\mathcal S}$, consider the composition of homomorphisms
$$
W\, \stackrel{D_0}{\longrightarrow}\, W\otimes K_Y\otimes {\mathcal O}_Y({\mathcal S})
\, \stackrel{\text{Id}_W\otimes
R_y}{\longrightarrow}\, W_y\otimes{\mathbb C}\,=\, W_y\, ,
$$
where $R_y$ is the homomorphism in \eqref{ry}. This composition of homomorphisms vanishes
on the subsheaf $W\otimes {\mathcal O}_Y(-y)\, \subset\, W$, and hence it produces a homomorphism
$$
\text{Res}(D_0,y)\, :\, W/(W\otimes {\mathcal O}_Y(-y))\,=\, W_y\, \longrightarrow\, W_y\, .
$$
This endomorphism $\text{Res}(D_0,y)$ of $W_y$ is called the \textit{residue} of the
logarithmic connection $D_0$ at the point $y$; see \cite[p.~53]{De}.
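As a basic example, if $W\,=\, {\mathcal O}_Y^{\oplus r}$ near $y$, and, in a holomorphic
coordinate $z$ around $y$ with $z(y)\,=\, 0$,
$$
D_0\,=\, d + A(z)\frac{dz}{z}\, ,
$$
where $A$ is a holomorphic function with values in $r\times r$ matrices, then
$\text{Res}(D_0,y)$ is the endomorphism of $W_y\,=\, {\mathbb C}^r$ given by the matrix $A(0)$.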
The notion of second fundamental form in Section \ref{se2.4} extends to the set-up of logarithmic
connections.
Let $W$ be a holomorphic vector bundle on $Y$ equipped with a logarithmic connection
$D$ singular over $\mathcal S$. Let $W'\,\subset\, W$ be a holomorphic subbundle.
Then the composition of homomorphisms
$$
W\, \stackrel{D}{\longrightarrow}\, W\otimes K_Y\otimes {\mathcal O}_Y({\mathcal S})\,
\stackrel{q_{W'}\otimes{\rm Id}}{\longrightarrow}\,
(W/W')\otimes K_Y\otimes{\mathcal O}_Y({\mathcal S})\, ,
$$
where $q_{W'}\, :\, W\,\longrightarrow\,W/W'$ is the natural quotient map, defines a holomorphic
section of $\text{Hom}(W',\, (W/W'))\otimes K_Y\otimes{\mathcal O}_Y({\mathcal S})$, which is called the
\textit{second fundamental form} of
$W'$ for $D$. If $W''\, \subset\, W$ is a holomorphic subbundle containing $W'$ such that
$D(W')\, \subset\, W''\otimes K_Y\otimes{\mathcal O}_Y({\mathcal S})$, then the second
fundamental form of $W'$ for $D$ is given by a section
\begin{equation}\label{j1n}
\zeta_1\, \in\,
H^0(Y,\, \text{Hom}(W',\, W''/W')\otimes K_Y\otimes{\mathcal O}_Y({\mathcal S}))
\end{equation}
using the natural inclusion map
$$
H^0(Y,\,\text{Hom}(W',\,W''/W')\otimes K_Y\otimes{\mathcal O}_Y({\mathcal S}))\,\hookrightarrow
\, H^0(Y,\,\text{Hom}(W',\,W/W')\otimes K_Y\otimes{\mathcal O}_Y({\mathcal S}))\, .$$
The second fundamental form of this subbundle $W''$ for $D$ is given by a section
\begin{equation}\label{j2n}
\zeta_2\, \in\,
H^0(Y,\,\text{Hom}(W''/W',\, W/W'')\otimes K_Y\otimes{\mathcal O}_Y({\mathcal S}))
\end{equation}
through the natural inclusion map
$$H^0(Y,\,\text{Hom}(W''/W',\, W/W'')\otimes K_Y\otimes{\mathcal O}_Y({\mathcal S}))\, \hookrightarrow\,
H^0(Y,\,\text{Hom}(W'',\, W/W'')\otimes K_Y\otimes{\mathcal O}_Y({\mathcal S}))\, .$$
\subsection{Logarithmic connection from a branched projective structure}\label{5.3}
We now use the notation of Section \ref{se5.1}.
Let
\begin{equation}\label{e8}
{\mathbb T}\, :=\, (TX)\otimes {\mathcal O}_X(S)
\end{equation}
be the holomorphic line bundle on $X$, where $S$ is the divisor in \eqref{a1}. From the isomorphism
in \eqref{ry} it follows that the fiber ${\mathcal O}_X(S)_y$ is identified with $T_yX$ for every
$y\,\in\, \mathbb S$ in \eqref{e7}. For any point $y\, \in\, \mathbb S$, let
\begin{equation}\label{e9}
F^1_y\,:=\, ((TX)\otimes {\mathcal O}_X(S)\otimes (T^*X)^{\otimes 2})_y\,=\,
{\mathbb C}\, \subset\, F^2_y\, \subset\, J^2({\mathbb T})_y
\end{equation}
be the filtration of subspaces of the fiber $J^2({\mathbb T})_y$ over $y$ constructed as in \eqref{ef};
more precisely, $F^1_y$ is the kernel of the projection $\alpha_y\, :\, J^2({\mathbb T})_y\,
\longrightarrow\, J^1({\mathbb T})_y$ (see \eqref{e1}), and $$F^2_y\,=\, \alpha^{-1}_y({\mathbb T}_y\otimes
(K_X)_y) \,=\, \alpha^{-1}_y({\mathcal O}_X(S)_y)$$
(see \eqref{e1}). In other words, $F^2_y$ is the kernel of the composition of homomorphisms
$$
J^2({\mathbb T})_y\, \stackrel{\alpha_y}{\longrightarrow}\, J^1({\mathbb T})_y\,
\longrightarrow\, {\mathbb T}_y
$$
(see \eqref{e1}). Note that $((TX)\otimes{\mathcal O}_X(S)\otimes (T^*X)^{\otimes 2})_y\,=\,
{\mathbb C}$, because ${\mathcal O}_X(S)_y\,=\, T_yX$.
The complement $X\setminus {\mathbb S}$ will be denoted by $\mathbb X$, where $\mathbb S$
is the subset in \eqref{e7}.
Let $P$ be a branched projective structure on $X$ of branching type $S$, where $S$ is the divisor
in \eqref{a1}. So $P$ gives a projective structure on $\mathbb X\,:=\, X\setminus {\mathbb S}$;
this projective structure on $\mathbb X$ will be denoted by $\mathcal P$. As shown in Proposition
\ref{prop1}(1), the projective structure $\mathcal P$ produces a holomorphic connection ${\mathbb
D} ({\mathcal P})$ on the vector bundle $J^2(T{\mathbb X})$.
\begin{proposition}\label{prop3}
The above holomorphic connection ${\mathbb D} ({\mathcal P})$ on $J^2(T{\mathbb X})$ extends to
a logarithmic connection ${\mathbb D} (P)$ on $J^2({\mathbb T})$, where $\mathbb T$ is
the holomorphic line bundle in \eqref{e8}.
For any $x_i\, \in\, \mathbb S$ in \eqref{e7}, the eigenvalues of the residue
${\rm Res}({\mathbb D} (P),x_i)$ are $\{-2,\, -1,\, 0\}$.
The eigenspace for the eigenvalue $-2$ of ${\rm Res}({\mathbb D} (P),x_i)$
is the line $F^1_{x_i}$ in \eqref{e9}. The eigenspace for the eigenvalue $-1$ of
${\rm Res}({\mathbb D} (P),x_i)$ is contained in the subspace $F^2_{x_i}$ in \eqref{e9}.
\end{proposition}
\begin{proof}
This proposition follows from some general properties of logarithmic connections which
we shall explain first.
Let $Y$ be a Riemann surface and $y_0\, \in\, Y$ a point; let $\iota\, :\, y_0\,
\hookrightarrow \, Y$ be the inclusion map. Take a holomorphic vector bundle ${\mathcal W}$ on $Y$, and
let ${\mathcal W}_{y_0}\, \longrightarrow\, Q\, \longrightarrow\, 0$ be a quotient of the fiber of ${\mathcal W}$
over the point $y_0$. Let
\begin{equation}\label{ed}
0\, \longrightarrow\, V\, \stackrel{\beta}{\longrightarrow}\,{\mathcal W} \, \longrightarrow\,
{\mathcal Q}\, :=\, \iota_*Q\,\longrightarrow\, 0
\end{equation}
be a short exact sequence of coherent analytic sheaves on $Y$;
so the sheaf $\mathcal Q$ is supported on the reduced point $y_0$. Let
\begin{equation}\label{ed1}
0\, \longrightarrow\, \text{kernel}(\beta(y_0))\, \longrightarrow\, V_{y_0}
\, \stackrel{\beta(y_0)}{\longrightarrow}\, {\mathcal W}_{y_0} \, \longrightarrow\,
\text{cokernel}(\beta(y_0))\,=\, Q \, \longrightarrow\, 0
\end{equation}
be the exact sequence of vector spaces obtained by restricting the exact sequence in
\eqref{ed} to the point $y_0$. It can be shown that there is a canonical isomorphism
\begin{equation}\label{ed2}
\text{kernel}(\beta(y_0))\, \stackrel{\sim}{\longrightarrow}\, Q\otimes (T_{y_0}Y)^*\, .
\end{equation}
To prove this, take any $v\, \in\, \text{kernel}(\beta(y_0))$, and let $\widetilde v$ be
a holomorphic section of $V$ defined around $y_0$ such that ${\widetilde v}(y_0)\,=\, v$.
The locally defined section $\beta({\widetilde v})$ of ${\mathcal W}$ vanishes at $y_0$, so its $1$-jet at $y_0$
gives an element of ${\mathcal W}_{y_0}\otimes (T_{y_0}Y)^*$; this element of
${\mathcal W}_{y_0}\otimes (T_{y_0}Y)^*$ will be denoted by $v_1$. Let $v'\, \in\, Q\otimes (T_{y_0}Y)^*$
denote the image of $v_1$ under the natural projection ${\mathcal W}_{y_0}\otimes (T_{y_0}Y)^*
\,\longrightarrow\, Q\otimes (T_{y_0}Y)^*$. It is
straightforward to check that $v'$ does not depend on the choice of the above extension $\widetilde v$
of $v$. Consequently, we get a homomorphism as in \eqref{ed2}. This homomorphism is in fact an isomorphism.
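This isomorphism can be illustrated in a basic example: take ${\mathcal W}\,=\, {\mathcal O}_Y$ and
$V\,=\, {\mathcal O}_Y(-y_0)$, with $\beta$ the natural inclusion, so that $Q\,=\, {\mathcal W}_{y_0}
\,=\, {\mathbb C}$ and $\beta(y_0)\,=\, 0$. In a holomorphic coordinate $z$ around $y_0$ with
$z(y_0)\,=\, 0$, a locally defined holomorphic section of $V$ is a function of the form $z\cdot g$,
and the above construction sends its value in $V_{y_0}$ to the $1$-jet
$$
d(zg)\big\vert_{y_0}\,=\, g(y_0)\, dz\, \in\, Q\otimes (T_{y_0}Y)^*\, ,
$$
recovering the standard identification of the fiber ${\mathcal O}_Y(-y_0)_{y_0}$ with
$(T_{y_0}Y)^*$.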
Let $\nabla^{\mathcal W}$ be a logarithmic connection on ${\mathcal W}$ singular over $y_0$. Then
$\nabla^{\mathcal W}$ induces
a logarithmic connection on the subsheaf $V$ in \eqref{ed} if and only if the residue
$\text{Res}(\nabla^{\mathcal W},y_0)\, \in\, \text{End}({\mathcal W}_{y_0})$ preserves the subspace
$\beta(y_0)(V_{y_0})\, \subset\, {\mathcal W}_{y_0}$ in \eqref{ed1}.
Now assume that $\nabla^{\mathcal W}$ induces
a logarithmic connection $\nabla^1$ on $V$. Then $\text{Res}(\nabla^1,y_0)$ preserves the
subspace $\text{kernel}(\beta(y_0)) \, \subset\, V_{y_0}$ in \eqref{ed1}, and the
endomorphism of $$V_{y_0}/\text{kernel}(\beta(y_0))\,=\, \beta(y_0)(V_{y_0})$$ induced
by $\text{Res}(\nabla^1,y_0)$ coincides with the restriction of
$\text{Res}(\nabla^{\mathcal W},y_0)$ to $\beta(y_0)(V_{y_0})$. Let $\text{Res}(\nabla^{\mathcal W},y_0)_Q\, \in\,
\text{End}(Q)$ be the endomorphism induced by $\text{Res}(\nabla^{\mathcal W},y_0)$.
The restriction of
$\text{Res}(\nabla^1,y_0)$ to $\text{kernel}(\beta(y_0)) \, \subset\, V_{y_0}$ coincides
with $\text{Id}+\text{Res}(\nabla^{\mathcal W},y_0)_Q$; from \eqref{ed2} it follows that
$$\text{End}(\text{kernel}(\beta(y_0)))\,=\, \text{End}(Q)\, ,$$
so $\text{Res}(\nabla^{\mathcal W},y_0)_Q$ gives an endomorphism of $\text{kernel}(\beta(y_0))$.
Conversely, let $\nabla^V$ be a logarithmic connection on $V$ singular over $y_0$. Then $\nabla^V$ induces
a logarithmic connection on the holomorphic vector bundle $\mathcal W$ in \eqref{ed} if and only if the residue
$\text{Res}(\nabla^V,y_0)\, \in\, \text{End}(V_{y_0})$ preserves the subspace
$\text{kernel}(\beta(y_0)) \, \subset\, V_{y_0}$ in \eqref{ed1}.
Now assume that $\nabla^V$ induces a logarithmic connection $\nabla'$ on ${\mathcal W}$. Then $\nabla'$
gives the logarithmic connection $\nabla^V$ on $V$. Consequently, the residues of
$\nabla^V$ and $\nabla'$ are related in the fashion described above.
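As a basic illustration of these residue relations, take ${\mathcal W}\,=\, {\mathcal O}_Y$ and
$V\,=\, {\mathcal O}_Y(-y_0)\, \subset\, {\mathcal W}$, so that $\text{kernel}(\beta(y_0))\,=\,
V_{y_0}$ and $Q\,=\, {\mathbb C}$. The de Rham differential $\nabla^{\mathcal W}\,=\, d$ is a
logarithmic connection singular over $y_0$ with $\text{Res}(\nabla^{\mathcal W},y_0)\,=\, 0$, and it
induces a logarithmic connection $\nabla^1$ on $V$. In a holomorphic coordinate $z$ around $y_0$
with $z(y_0)\,=\, 0$, the local frame $z$ of $V$ satisfies
$$
\nabla^1 (z)\,=\, dz\,=\, z\cdot \frac{dz}{z}\, ,
$$
so $\text{Res}(\nabla^1,y_0)\,=\, \text{Id}\,=\, \text{Id} + \text{Res}(\nabla^{\mathcal W},y_0)_Q$,
in accordance with the above.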
Take a point $x_i\, \in\, {\mathbb S}$ (see \eqref{e7}). Let $x_i\, \in\, U\, \subset\, X$ be a
sufficiently small contractible open neighborhood of $x_i$ in $X$, in particular,
$U\bigcap {\mathbb S}\,=\, \{x_i\}$. Let
$$
{\mathbb D}_1\, :=\, \{z\, \in\, {\mathbb C}\, \mid\, |z| \, <\, 1\}\, \subset\, \mathbb C
$$
be the unit disk. Fix a biholomorphism
$$
\widetilde{\gamma}\, :\, {\mathbb D}_1\, \longrightarrow\, U
$$
such that $\widetilde{\gamma}(0)\,=\, x_i$. Let
\begin{equation}\label{e10}
\gamma\, :\, U\, \longrightarrow\, {\mathbb D}_1\, ,\ \ x\, \longmapsto\,
(\widetilde{\gamma}^{-1}(x))^2
\end{equation}
be the branched covering map. Then using $\gamma$, the branched projective structure on $U$, given by the
branched projective structure $P$ on $X$ of branching type $S$, produces a usual (unbranched) projective
structure on ${\mathbb D}_1$; this projective structure on ${\mathbb D}_1$ will be denoted by $P_1$.
Now substituting $({\mathbb D}_1,\, P_1)$ in place of $({\mathbb X},\, {\mathcal P})$ in
Proposition \ref{prop1}(1), we get a holomorphic connection ${\mathbb D}(P_1)$ on $J^2(T{\mathbb
D}_1)$. The holomorphic vector
bundle $\gamma^*J^2(T{\mathbb D}_1)$ over $U$, where $\gamma$ is the map in \eqref{e10},
is equipped with the holomorphic connection $\gamma^*{\mathbb D}(P_1)$.
The differential $d\gamma\, :\, TU\, \longrightarrow\, \gamma^*T{\mathbb D}_1$ induces
an isomorphism ${\mathbb T}\vert_U\, \stackrel{\sim}{\longrightarrow}\, \gamma^*T{\mathbb D}_1$, where
${\mathbb T}$ is the line bundle in \eqref{e8}. This, in turn, produces an isomorphism
\begin{equation}\label{is}
J^2({\mathbb T})\vert_U\, \stackrel{\sim}{\longrightarrow}\, J^2(\gamma^*T{\mathbb D}_1)\, .
\end{equation}
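In local coordinates, the isomorphism ${\mathbb T}\vert_U\, \stackrel{\sim}{\longrightarrow}\,
\gamma^*T{\mathbb D}_1$ can be seen directly: writing $z\,=\, \widetilde{\gamma}^{-1}$ for the
coordinate on $U$ and $w$ for the coordinate on ${\mathbb D}_1$, the map $\gamma$ is given by
$w\,=\, z^2$, so
$$
d\gamma\Big(\frac{\partial}{\partial z}\Big)\,=\, 2z\, \frac{\partial}{\partial w}\, .
$$
Thus $d\gamma\, :\, TU\, \longrightarrow\, \gamma^*T{\mathbb D}_1$ vanishes to order one exactly at
$x_i$, and this zero is absorbed by the twist ${\mathbb T}\vert_U\,=\, (TX\otimes{\mathcal
O}_X(S))\vert_U$, which makes the induced map an isomorphism.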
On the other hand, there is a natural homomorphism $\gamma^*J^2(T{\mathbb D}_1)
\, \longrightarrow\, J^2(\gamma^*T{\mathbb D}_1)$. Combining this with the inverse of the
isomorphism in \eqref{is} we get a homomorphism
\begin{equation}\label{p1}
\gamma^*J^2(T{\mathbb D}_1) \, \longrightarrow\, J^2({\mathbb T})\vert_U\, .
\end{equation}
This homomorphism is an isomorphism over $U\setminus\{x_i\}$.
Apply the criterion stated above, for extending a logarithmic connection on
the vector bundle $V$ in \eqref{ed} to a logarithmic connection on
the vector bundle ${\mathcal W}$, to the homomorphism in \eqref{p1} and
the holomorphic connection
$\gamma^*{\mathbb D}(P_1)$ on $\gamma^*J^2(T{\mathbb D}_1)$. We conclude that
$\gamma^*{\mathbb D}(P_1)$ induces a logarithmic connection on
$J^2({\mathbb T})\vert_U$.
The first statement of the proposition follows from this.
The other two statements of the proposition follow from the earlier mentioned properties
of residues of the logarithmic connections on the vector bundles $V$ and $\mathcal W$ in \eqref{ed} that
are induced by each other.
\end{proof}
\section{Branched projective structures and branched ${\rm SO}(3,{\mathbb C})$-opers}\label{sec6}
\subsection{Branched ${\rm SO}(3,{\mathbb C})$-opers}\label{se6.1}
We shall now use the terminology in Section \ref{sec4}.
\begin{definition}\label{def2}
Let $(W,\, B_W)$ be a holomorphic $\text{SO}(3,{\mathbb C})$--bundle on a connected Riemann surface $X$.
A \textit{branched filtration} on $(W,\, B_W)$, of type $S$ (see \eqref{a1}),
is a filtration of holomorphic subbundles
\begin{equation}\label{of}
F^W_1\, \subset\, F^W_2\, \subset\, W
\end{equation}
such that
\begin{itemize}
\item $F^W_1$ is holomorphically identified with $K_X\otimes{\mathcal O}_X(-S)$
by a given isomorphism,
\item $B_W(F^W_1\otimes F^W_1)\, =\, 0$,
\item $F^W_2/F^W_1$ is holomorphically identified with ${\mathcal O}_X$ by a given isomorphism,
\item $B_W(F^W_1\otimes F^W_2)\, =\, 0$ (equivalently, $(F^W_1)^\perp\, =\, F^W_2$).
\end{itemize}
\end{definition}
Compare the above definition with the definition of a filtration of $(W,\, B_W)$ given in Section \ref{sec4}.
They differ only at the first condition.
The above conditions imply that
\begin{equation}\label{w2}
W/F^W_2\,=\, (\bigwedge\nolimits ^3 W) \otimes (K_X\otimes{\mathcal O}_X(-S))^* \,=\,
TX\otimes{\mathcal O}_X(S)\, .
\end{equation}
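The identification in \eqref{w2} is obtained as follows: the filtration \eqref{of} gives
$$
\bigwedge\nolimits^3 W\,=\, F^W_1\otimes (F^W_2/F^W_1)\otimes (W/F^W_2)\, ,
$$
so the given identifications $F^W_1\,=\, K_X\otimes{\mathcal O}_X(-S)$ and
$F^W_2/F^W_1\,=\, {\mathcal O}_X$ yield
$$
W/F^W_2\,=\, \Big(\bigwedge\nolimits^3 W\Big)\otimes (K_X\otimes{\mathcal O}_X(-S))^*\, ;
$$
since $\bigwedge^3 W\,=\, {\mathcal O}_X$ for a holomorphic $\text{SO}(3,{\mathbb C})$--bundle
(see Section \ref{sec4}), this is $TX\otimes{\mathcal O}_X(S)$.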
A \textit{branched filtered} $\text{SO}(3,{\mathbb C})$--bundle, of type $S$, is a holomorphic $\text{SO}(3,
{\mathbb C})$--bundle $(W,\, B_W)$ equipped with a branched filtration $\{F^W_i\}_{i=1}^2$ of type $S$ as in \eqref{of}.
\begin{definition}\label{def3}
A \textit{branched holomorphic connection} on a branched filtered $\text{SO}(3,{\mathbb C})$--bundle
$(W,\, B_W,\, \{F^W_i\}_{i=1}^2)$, of type $S$, is a holomorphic connection $D_W$ on $W$ such that
\begin{itemize}
\item the holomorphic connection $D_W$ preserves the bilinear form $B_W$ on $W$,
\item the holomorphic connection on $\bigwedge^3 W\,=\, {\mathcal O}_X$ induced by $D_W$ coincides
with the holomorphic connection on ${\mathcal O}_X$ given by the de Rham differential $d$,
\item $D_W(F^W_1)\,\subset\, F^W_2\otimes K_X$, and
\item the second fundamental form of $F^W_1$ for $D_W$, which is a holomorphic section of
$$\text{Hom}(K_X\otimes{\mathcal O}_X(-S),\, {\mathcal O}_X)\otimes K_X
\,=\, {\mathcal O}_X(S)
$$ (see \eqref{j1}), coincides with the section of ${\mathcal O}_X(S)$ given by the constant function
$1$ on $X$.
\end{itemize}
\end{definition}
\begin{definition}\label{def4}
A \textit{branched} $\text{SO}(3,{\mathbb C})$--\textit{oper}, of type $S$, on $X$ is a
branched filtered $\text{SO}(3,{\mathbb C})$--bundle
$$(W,\, B_W,\, \{F^W_i\}_{i=1}^2)\, ,$$ of type $S$, equipped with a branched holomorphic connection $D_W$.
\end{definition}
Since the divisor $S$ is fixed, we will omit mentioning it explicitly.
The above conditions imply that
the holomorphic section of $$\text{Hom}({\mathcal O}_X,\, TX\otimes{\mathcal O}_X(S))\otimes K_X\,=\,
{\mathcal O}_X(S)$$ that gives the second fundamental form of $F^W_2$ for $D_W$ --- see \eqref{j2} --- coincides
with the section of ${\mathcal O}_X(S)$ given by the constant function $1$ on $X$.
\subsection{Branched projective structures are branched $\text{SO}(3,{\mathbb C})$-opers}
Take a branched $\text{SO}(3,{\mathbb C})$--oper $(W,\, B_W,\, \{F^W_i\}_{i=1}^2, \, D_W)$ on $X$ of type $S$.
Let $P(W)$ be the projective bundle on $X$ that parametrizes the lines in the fibers of $W$. As in
\eqref{pw},
$$
P(W)\, \supset\, {\mathbf P}_W\, \stackrel{\gamma}{\longrightarrow} \, X
$$
is the ${\mathbb C}{\mathbb P}^1$--bundle on $X$ that parametrizes the isotropic lines for $B_W$.
The line subbundle $F^W_1\, \subset\, W$, being isotropic, produces a holomorphic section
\begin{equation}\label{sw}
s_W\, :\, X\, \longrightarrow\, {\mathbf P}_W
\end{equation}
of the projection $\gamma$.
The connection $D_W$ produces a holomorphic connection on $P(W)$, and this connection on $P(W)$
induces a holomorphic connection on ${\mathbf P}_W$; recall that a
holomorphic connection on ${\mathbf P}_W$ is a holomorphic line subbundle of $T{\mathbf P}_W$ transversal
to the relative tangent bundle $T_\gamma$ for the projection $\gamma$. Let
\begin{equation}\label{nec}
\mathcal{H}_W\, \subset\, T{\mathbf P}_W
\end{equation}
be the holomorphic connection on ${\mathbf P}_W$ given by $D_W$.
\begin{lemma}\label{lem5}
The triple $({\mathbf P}_W,\, \mathcal{H}_W,\, s_W)$ (see \eqref{sw} and \eqref{nec}) defines a
branched projective structure on $X$ of branching type $S$.
\end{lemma}
\begin{proof}
Recall that the last condition in the definition of a branched holomorphic connection on
$(W,\, B_W,\, \{F^W_i\}_{i=1}^2)$ (see Definition \ref{def3})
says that the second fundamental form of $F^W_1$ for $D_W$ is the
holomorphic section of
$$\text{Hom}(K_X\otimes{\mathcal O}_X(-S),\, {\mathcal O}_X)\otimes K_X
\,=\, {\mathcal O}_X(S)
$$
given by the constant function $1$ on $X$. On the other hand, the divisor for the second fundamental form of
$F^W_1$ for $D_W$ coincides with the divisor for the homomorphism $$\widehat{ds_W}\, :\, TX
\, \longrightarrow\, (s_W)^*T_\gamma$$ (see \eqref{dc2}). Consequently,
the divisor for the homomorphism $\widehat{ds_W}$ is $S$. Hence
$({\mathbf P}_W,\, \mathcal{H}_W,\, s_W)$ defines a
branched projective structure on $X$ of branching type $S$; see Section \ref{se5.1}.
\end{proof}
Now let $\mathcal P$ be a branched projective structure on $X$ of branching type $S$.
Let $$\gamma\, :\, {\mathbf P}_{\mathcal P}\, \longrightarrow\, X$$ be a holomorphic ${\mathbb
C}{\mathbb P}^1$--bundle, ${\mathcal H}_{\mathcal P}$ a holomorphic connection on ${\mathbf P}_{\mathcal P}$
and $s_{\mathcal P}$ a holomorphic section of $\gamma$, such that the triple
\begin{equation}\label{tr}
(\gamma,\, {\mathcal H}_{\mathcal P}, \, s_{\mathcal P})
\end{equation}
gives the branched projective structure $\mathcal P$; see Section \ref{se5.1}. Define the
holomorphic vector bundle of rank three on $X$
\begin{equation}\label{ew}
W\, :=\, \gamma_* T_\gamma\, \longrightarrow\, X\, ,
\end{equation}
where $T_\gamma\, \subset\, T{\mathbf P}_{\mathcal P}$ is the relative holomorphic tangent bundle
for the projection $\gamma$ (as in \eqref{tga}). If $s$ and $t$ are locally defined holomorphic
sections of $T_\gamma\, \subset\, T{\mathbf P}_{\mathcal P}$, then the Lie bracket
$[s,\, t]$ is also a section of $T_\gamma$.
Consequently, each fiber of $W\, =\, \gamma_* T_\gamma$ is a Lie algebra
isomorphic to $\mathfrak{sl}(2,{\mathbb C})$. Let
\begin{equation}\label{ew1}
B_W\, \in\, H^0(X,\, \text{Sym}^2(W^*))
\end{equation}
be the holomorphic section given by the fiberwise Killing form on $W$.
As shown in Section \ref{sec4}, the pair $(W,\, B_W)$ can also be constructed by choosing a rank two holomorphic
vector bundle $\mathbb V$ on $X$ such that ${\mathbb P}({\mathbb V})\,=\, {\mathbf P}_{\mathcal P}$; then $W\,=\,
{\rm ad}({\mathbb V})\, \subset\, \text{End}({\mathbb V})$ (the subalgebra bundle of trace zero endomorphisms) and
$B_W$ is constructed using the trace map of endomorphisms.
Exactly as in \eqref{of2}, construct the filtration
\begin{equation}\label{ot3}
F^{\mathbb P}_1\, :=\,
\gamma_* (T_\gamma\otimes {\mathcal O}_{{\mathbf P}_{\mathcal P}}(-2s_{\mathcal P}(X)))
\, \subset\, F^{\mathbb P}_2\, :=\,
\gamma_* (T_\gamma\otimes {\mathcal O}_{{\mathbf P}_{\mathcal P}}(-s_{\mathcal P}(X)))
\, \subset\, \gamma_* T_\gamma\,=:\, W
\end{equation}
of $W$.
\begin{lemma}\label{lem6}
The triple $(W,\, B_W,\, \{F^{\mathbb P}_i\}_{i=1}^2)$ constructed in \eqref{ew}, \eqref{ew1}, \eqref{ot3}
is a branched filtered ${\rm SO}(3,{\mathbb C})$--bundle of type $S$ (see \eqref{a1}).
\end{lemma}
\begin{proof}
The holomorphic line bundle $F^{\mathbb P}_1$ in \eqref{ot3} admits a canonical isomorphism
\begin{equation}\label{l1}
F^{\mathbb P}_1\, \stackrel{\sim}{\longrightarrow}\, (s_{\mathcal P})^*(T_\gamma\otimes
{\mathcal O}_{{\mathbf P}_{\mathcal P}}(-2s_{\mathcal P}(X)))\, ,
\end{equation}
because the restriction of $T_\gamma\otimes {\mathcal O}_{{\mathbf P}_{\mathcal P}}(-2s_{\mathcal P}(X))$
to any fiber of $\gamma$ is holomorphically trivializable, and the evaluation, at a given point, of
the global sections of a holomorphically trivializable bundle is an isomorphism.
The quotient bundle $F^{\mathbb P}_2/F^{\mathbb P}_1$ admits a canonical isomorphism
\begin{equation}\label{l2}
F^{\mathbb P}_2/F^{\mathbb P}_1\, \stackrel{\sim}{\longrightarrow}\,
(s_{\mathcal P})^*(T_\gamma\otimes
{\mathcal O}_{{\mathbf P}_{\mathcal P}}(-s_{\mathcal P}(X)))\, ,
\end{equation}
which is again constructed by evaluating the holomorphic sections of
$T_\gamma\otimes {\mathcal O}_{{\mathbf P}_{\mathcal P}}(-s_{\mathcal P}(X))\vert_{\gamma^{-1}(x)}$
at the point $s_{\mathcal P}(x)$ for every $x\, \in\, X$.
Now, by the Poincar\'e adjunction formula, $(s_{\mathcal P})^*
{\mathcal O}_{{\mathbf P}_{\mathcal P}}(-s_{\mathcal P}(X))$
is identified with the dual bundle $(s_{\mathcal P})^*{\mathbf N}^*$, where
${\mathbf N}\,=\, N_{s_{\mathcal P}(X)}$ is the normal bundle of
the divisor $s_{\mathcal P}(X)\, \subset\, {\mathbf P}_{\mathcal P}$. On the other hand,
$\mathbf N$ is canonically identified with the restriction
$T_\gamma\vert_{s_{\mathcal P}(X)}$, because
the divisor $s_{\mathcal P}(X)$ is transversal to the fibration $\gamma$. Consequently,
$(s_{\mathcal P})^*{\mathcal O}_{{\mathbf P}_{\mathcal P}}(-s_{\mathcal P}(X))$
is identified with $(s_{\mathcal P})^*T^*_\gamma$.
Recall from Section \ref{se5.1} that the divisor for the homomorphism $\widehat{ds_{\mathcal P}}\, :\,
TX\, \longrightarrow\, (s_{\mathcal P})^* T_\gamma$ (see
\eqref{dc2}) coincides with $S$. Consequently, $\widehat{ds_{\mathcal P}}$ identifies $(TX)
\otimes {\mathcal O}_X(S)$ with $(s_{\mathcal P})^*T_\gamma$. Therefore, the
line bundle $(s_{\mathcal P})^*(T_\gamma\otimes
{\mathcal O}_{{\mathbf P}_{\mathcal P}}(-2s_{\mathcal P}(X)))$
in \eqref{l1} is identified with $K_X\otimes {\mathcal O}_X(-S)$,
and the line bundle $(s_{\mathcal P})^*(T_\gamma\otimes
{\mathcal O}_{{\mathbf P}_{\mathcal P}}(-s_{\mathcal P}(X)))$
in \eqref{l2} is identified with ${\mathcal O}_X$.
{}From the above descriptions of $F^{\mathbb P}_1$ and $F^{\mathbb P}_2$ it follows that
for each point $x\, \in\, X$, the fiber $(F^{\mathbb P}_1)_x\, \subset\, W_x$ is a nilpotent
subalgebra of the Lie algebra $W_x$, and $(F^{\mathbb P}_2)_x\, \subset\, W_x$ is the unique
Borel subalgebra of $W_x$ containing $(F^{\mathbb P}_1)_x$. These imply that $B_W(F^{\mathbb
P}_1\otimes F^{\mathbb P}_1)\, =\, 0$, and $(F^{\mathbb P}_1)^\perp\,=\, F^{\mathbb P}_2$.
Hence $(W,\, B_W,\, \{F^{\mathbb P}_i\}_{i=1}^2)$ is a
branched filtered ${\rm SO}(3,{\mathbb C})$--bundle of type $S$.
\end{proof}
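As a consistency check, the degrees of the graded pieces of the filtration \eqref{ot3} determined in the above proof add up to $\deg W$. Writing $d$ for the degree of the divisor $S$, the identifications \eqref{l1} and \eqref{l2} give
$$
\deg F^{\mathbb P}_1\,=\, \deg K_X - d\, , \qquad \deg (F^{\mathbb P}_2/F^{\mathbb P}_1)\,=\, 0\, ,
$$
and $\deg W\,=\, 0$ because the determinant of $W\,=\, {\rm ad}({\mathbb V})$ is trivial. Consequently $\deg (W/F^{\mathbb P}_2)\,=\, -\deg K_X + d$, which is the degree of $(TX)\otimes{\mathcal O}_X(S)$, in accordance with the identification of $W/F^{\mathbb P}_2$ with $(TX)\otimes{\mathcal O}_X(S)$ recalled in Section \ref{se6.3}.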
Consider the holomorphic connection ${\mathcal H}_{\mathcal P}$ in \eqref{tr} on
the holomorphic ${\mathbb C}{\mathbb P}^1$--bundle ${\mathbf P}_{\mathcal P}$. It produces
a holomorphic connection on the direct image $W\, =\, \gamma_* T_\gamma$; see \eqref{j-1}. This
holomorphic connection on $W$ will be denoted by ${\mathbb D}_{\mathcal P}$.
\begin{lemma}\label{lem7}
The above connection ${\mathbb D}_{\mathcal P}$ on $W$ is a branched holomorphic connection
on the branched filtered ${\rm SO}(3,{\mathbb C})$--bundle $(W,\, B_W,\, \{F^{\mathbb P}_i\}_{i=1}^2)$
in Lemma \ref{lem6}. In other words, $$(W,\, B_W,\, \{F^{\mathbb P}_i\}_{i=1}^2,\, {\mathbb D}_{\mathcal P})$$
is a branched ${\rm SO}(3,{\mathbb C})$--oper.
\end{lemma}
\begin{proof}
The connection ${\mathbb D}_{\mathcal P}$ preserves the bilinear form $B_W$ on $W$,
because the action of $\text{PSL}(2,{\mathbb C})$ on $\text{Sym}^2({\mathbb C}^2)$ preserves
the symmetric bilinear form on $\text{Sym}^2({\mathbb C}^2)$ given by the standard
symplectic form on ${\mathbb C}^2$. Also, the holomorphic connection on $\bigwedge^3 W$ induced by
${\mathbb D}_{\mathcal P}$ coincides with the holomorphic
connection on ${\mathcal O}_X$ given by the de Rham differential $d$, because the action of
$\text{PSL}(2,{\mathbb C})$ on $\bigwedge^3 \text{Sym}^2({\mathbb C}^2)$ is the trivial action.
Since any holomorphic connection on a holomorphic bundle over a Riemann surface is automatically
integrable, the ${\mathbb C}{\mathbb P}^1$--bundle ${\mathbf P}_{\mathcal P}$ is locally isomorphic to
the trivial holomorphic
${\mathbb C}{\mathbb P}^1$--bundle, and ${\mathcal H}_{\mathcal P}$ in \eqref{tr} is locally
holomorphically isomorphic to the trivial connection on the trivial holomorphic
${\mathbb C}{\mathbb P}^1$--bundle. So $W$ is locally
holomorphically isomorphic to the trivial vector bundle whose fibers are quadratic polynomials in
one variable, and ${\mathbb D}_{\mathcal P}$ is the trivial connection on this locally defined trivial
holomorphic vector bundle. With respect to these trivializations, and a suitable pair $(U,\, \phi)$ as
in \eqref{de1} compatible with the projective structure $\mathcal P$, the section $s_{\mathcal P}$ in
\eqref{tr} around any point $x_i\, \in\, \mathbb S$ is of the form $z\, \longmapsto (z,\, z^2)$, where
$z$ is a holomorphic function around $x_i$ with $z(x_i)\,=\, 0$.
In view of the above observations, a straightforward computation shows that
\begin{itemize}
\item ${\mathbb D}_{\mathcal P}(F^{\mathbb P}_1)\,\subset\, F^{\mathbb P}_2\otimes K_X$, and
\item the second fundamental form of $F^{\mathbb P}_1$ for ${\mathbb D}_{\mathcal P}$, which is a
holomorphic section of
$$\text{Hom}(K_X\otimes{\mathcal O}_X(-S),\, {\mathcal O}_X)\otimes K_X
\,=\, {\mathcal O}_X(S)\, ,
$$
coincides with the section of ${\mathcal O}_X(S)$ given by the constant function $1$ on $X$.
\end{itemize}
This completes the proof.
\end{proof}
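The triviality of the action of $\text{PSL}(2,{\mathbb C})$ on $\bigwedge^3 \text{Sym}^2({\mathbb C}^2)$, used in the above proof, can be checked on weights: the diagonal torus of $\text{SL}(2,{\mathbb C})$ acts on $\text{Sym}^2({\mathbb C}^2)$ with weights
$$
2\, ,\ \ 0\, ,\ \ -2\, ,
$$
so it acts on the line $\bigwedge^3 \text{Sym}^2({\mathbb C}^2)$ with weight $2+0+(-2)\,=\,0$; since $\text{SL}(2,{\mathbb C})$ admits no nontrivial characters, the action on this line is indeed trivial.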
The above construction of a branched projective structure on $X$ from a
branched ${\rm SO}(3,{\mathbb C})$--oper (see Lemma \ref{lem5}), and the construction of a
branched ${\rm SO}(3,{\mathbb C})$--oper from a branched projective structure (see Lemma \ref{lem7}),
are clearly inverses of each other.
We summarize the constructions done in this subsection in the following theorem.
\begin{theorem}\label{thm1}
There is a natural bijective correspondence between the branched projective structures on $X$
with branching type $S$ and the branched ${\rm SO}(3,{\mathbb C})$--opers on $X$ of type $S$.
\end{theorem}
\subsection{Logarithmic connection from branched $\text{SO}(3,{\mathbb C})$-opers}\label{se6.3}
In Proposition \ref{prop3} we constructed a logarithmic connection on $J^2({\mathbb T})$ from a branched
projective structure on $X$, where ${\mathbb T}\,=\,
(TX)\otimes {\mathcal O}_X(S)$ (see \eqref{e8}). On the other hand, Theorem
\ref{thm1} identifies branched projective structures on $X$ with branched ${\rm SO}(3,{\mathbb C})$--opers
on $X$. Thus a branched ${\rm SO}(3,{\mathbb C})$--oper gives a logarithmic connection on $J^2({\mathbb T})$.
Now we shall give a direct construction of the logarithmic connection on $J^2({\mathbb T})$
associated to a branched ${\rm SO}(3,{\mathbb C})$--oper on $X$.
Let
\begin{equation}\label{e11}
F^1_{\mathbb T}\, \subset\, F^2_{\mathbb T}\, \subset\, J^2({\mathbb T})
\end{equation}
be the filtration of holomorphic subbundles constructed as in \eqref{ef}; so,
$F^1_{\mathbb T}$ is the kernel of the natural projection
\begin{equation}\label{e14}
\textbf{c}_1\, :\, J^2({\mathbb T})\, \longrightarrow\, J^1({\mathbb T})
\end{equation}
(see \eqref{e1}), and $F^2_{\mathbb T}$
is the kernel of the homomorphism
\begin{equation}\label{e13}
\textbf{c}_2\, :\, J^2({\mathbb T})\, \longrightarrow\, {\mathbb T}
\end{equation}
obtained by composing $J^2({\mathbb T})\,\stackrel{\textbf{c}_1}{\longrightarrow}\,
J^1({\mathbb T})$ with the natural homomorphism $J^1({\mathbb T})\, \longrightarrow\, {\mathbb T}$
in \eqref{e1}. Note that the restriction of the filtration in \eqref{e11}
to any point $y\, \in\, \mathbb S$ coincides with the filtration in \eqref{e9}. We have
\begin{equation}\label{f1n}
F^1_{\mathbb T}\,=\, {\mathbb T}\otimes K^2_X\,=\, K_X\otimes{\mathcal O}_X(S)\, , \ \
F^2_{\mathbb T}/F^1_{\mathbb T} \,=\, {\mathbb T}\otimes K_X\,=\, {\mathcal O}_X(S)\, ,\ \
J^2({\mathbb T})/F^2_{\mathbb T}\,=\, {\mathbb T}\, .
\end{equation}
Let
\begin{equation}\label{tr2}
(W,\, B_W,\, \{F^W_i\}_{i=1}^2, \, D_W)
\end{equation}
be a branched $\text{SO}(3,{\mathbb C})$--oper on $X$ of type $S$. Recall from \eqref{w2} and \eqref{e8}
that $W/F^W_2\,=\, TX\otimes {\mathcal O}_X(S)\, =\, {\mathbb T}$.
Let
\begin{equation}\label{q0}
q_0\,:\, W \, \longrightarrow\, {\mathbb T}\,=\,W/F^W_2
\end{equation}
be the quotient map.
Take any point $x\, \in\, X$ and any $w\, \in\, W_x$. Let $\widetilde{w}$ be the unique holomorphic section of
$W$, defined on a simply connected open neighborhood of $x$, such that
$\widetilde{w}$ is flat with respect to the holomorphic connection $D_W$ in \eqref{tr2}, and
\begin{equation}\label{e12}
\widetilde{w}(x)\,=\, w\, .
\end{equation}
Let
\begin{equation}\label{Phi}
\Phi\, :\, W \, \longrightarrow\, J^2({\mathbb T})
\end{equation}
be the homomorphism that sends any $w\, \in\, W_x$, $x\, \in\, X$, to the element of
$J^2({\mathbb T})_x$ given by the restriction of the section $q_0(\widetilde{w})$ to the second order
infinitesimal neighborhood of $x$, where $q_0$ is the homomorphism in \eqref{q0} and $\widetilde{w}$
is constructed as above from $w$. This construction is similar to the construction of
the homomorphisms $\psi_j$ in \eqref{e0}.
\begin{proposition}\label{prop4}\mbox{}
\begin{enumerate}
\item For the homomorphism $\Phi$ in \eqref{Phi},
$$\Phi(F^W_1)\, \subset\, F^1_{\mathbb T}\ \ \ \text{ and } \ \ \ \Phi(F^W_2)\, \subset\, F^2_{\mathbb T}\, ,$$ where
$\{F^i_{\mathbb T}\}_{i=1}^2$ and $\{F^W_i\}_{i=1}^2$ are the filtrations in \eqref{e11} and \eqref{tr2} respectively.
\item The homomorphism $\Phi$ takes the holomorphic connection $D_W$ in \eqref{tr2}
to a logarithmic connection on $J^2({\mathbb T})$ whose singular locus is $\mathbb S$ in \eqref{e7}.
The logarithmic connection on $J^2({\mathbb T})$ induced by $D_W$ will be denoted by ${\mathbb D}_J$.
\item For any $x_i\, \in\, \mathbb S$ (see \eqref{e7}), the residue ${\rm Res}({\mathbb D}_J, x_i)$
of ${\mathbb D}_J$ at $x_i$ has eigenvalues $\{-2,\, -1,\, 0\}$.
\item The eigenspace of ${\rm Res}({\mathbb D}_J, x_i)$ for the eigenvalue $-2$ is the line
$(F^1_{\mathbb T})_{x_i}\, \subset\, J^2({\mathbb T})_{x_i}$ in \eqref{e11}.
The eigenspace of ${\rm Res}({\mathbb D}_J, x_i)$ for the eigenvalue $-1$ is contained in the
subspace $(F^2_{\mathbb T})_{x_i}\, \subset\, J^2({\mathbb T})_{x_i}$ in \eqref{e11}.
\end{enumerate}
\end{proposition}
\begin{proof}
To prove statement (1), first take any $x\, \in\, X$ and any $w\, \in\, (F^W_2)_x$.
Since $q_0(w)\,=\, 0$, where $q_0$ is the projection in \eqref{q0}, from \eqref{e12} it
follows immediately that $\textbf{c}_2\circ\Phi(w)\,=\, 0$, where $\textbf{c}_2$ is the homomorphism
in \eqref{e13}. This implies that $\Phi(F^W_2)\, \subset\, F^2_{\mathbb T}$. To complete the proof of statement (1),
it remains to show that $\Phi(F^W_1)\, \subset\, F^1_{\mathbb T}$.
Take any $x\, \in\, X$ and any $w\, \in\, (F^W_1)_x$. From the given
condition that $D_W(F^W_1)\, \subset\, F^W_2\otimes K_X$ it follows that $\textbf{c}_1\circ\Phi(w)\,=\,
0$, where $\textbf{c}_1$ is the homomorphism in \eqref{e14}. More precisely, the restriction
$\Phi\vert_{F^W_1}$ coincides with the natural inclusion homomorphism
$$
F^W_1\,=\, K_X\otimes {\mathcal O}_X(-S)\, \hookrightarrow\, K_X\otimes {\mathcal O}_X(S)\,=\,
F^1_{\mathbb T}\, ;
$$
see \eqref{f1n} for $K_X\otimes{\mathcal O}_X(S)\,=\,
F^1_{\mathbb T}$ and Definition \ref{def2} for $F^W_1\,=\, K_X\otimes{\mathcal O}_X(-S)$.
Consequently, we have
$\Phi(F^W_1)\, \subset\, F^1_{\mathbb T}$. This proves (1).
Denote $X\setminus {\mathbb S}$ by $\mathbb X$.
We note that the restriction
$$
\Phi\vert_{\mathbb X}\, :\, W\vert_{\mathbb X} \, \longrightarrow\, J^2({\mathbb T})\vert_{\mathbb X}
\,=\, J^2(T{\mathbb X})
$$
is a holomorphic isomorphism.
{}From Theorem \ref{thm1} we know that the branched
$\text{SO}(3,{\mathbb C})$--oper $(W,\, B_W,\, \{F^W_i\}_{i=1}^2, \, D_W)$
in \eqref{tr2} defines a branched projective structure on $X$ of type $S$. Let $\mathcal P$ denote the
projective structure on ${\mathbb X}\,=\, X\setminus \mathbb S$ given by this
branched projective structure on $X$. Using Proposition
\ref{prop1}(1), the projective structure $\mathcal P$ yields a holomorphic connection ${\mathbb
D} ({\mathcal P})$ on $J^2(T{\mathbb X})$. This holomorphic connection ${\mathbb
D} ({\mathcal P})$ evidently coincides with the holomorphic connection ${\mathbb D}_J\vert_{\mathbb X}$
on $J^2({\mathbb T})\vert_{\mathbb X}\,=\, J^2(T{\mathbb X})$ given by the connection $D_W$
using the isomorphism $\Phi\vert_{\mathbb X}$. Consequently, the statements
(2), (3) and (4) in the proposition follow from Proposition \ref{prop3} and statement (1).
\end{proof}
Proposition \ref{prop4} yields the following:
\begin{corollary}\label{cor-1}
For the logarithmic connection ${\mathbb D}_J$ on $J^2({\mathbb T})$ in Proposition \ref{prop4}(2),
$$
{\mathbb D}_J(F^1_{\mathbb T})\, = \, F^2_{\mathbb T}\otimes K_X\otimes{\mathcal O}_X(S)\, ,
$$
and
$$
{\mathbb D}_J(F^2_{\mathbb T})\, =\, J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(S)\, ,
$$
where $\{F^i_{\mathbb T}\}_{i=1}^2$ is the filtration in \eqref{e11}.
\end{corollary}
\begin{proof}
{}From Proposition \ref{prop4}(2) we know that the logarithmic connection
${\mathbb D}_J$ is given by the connection $D_W$
using $\Phi$. Consequently, the corollary follows from Proposition \ref{prop4}(1) and the
properties, given in Proposition \ref{prop4}(3) and Proposition \ref{prop4}(4),
of the residue of the logarithmic connection ${\mathbb D}_J$.
\end{proof}
Recall the second fundamental form for a logarithmic connection
defined in Section \ref{se5.2}. Consider the logarithmic connection
${\mathbb D}_J$ on $J^2({\mathbb T})$ in Proposition \ref{prop4}(2)
and the subbundles $F^1_{\mathbb T},\, F^2_{\mathbb T}$ in \eqref{e11}.
Let
$$
\textbf{S}({\mathbb D}_J, F^1_{\mathbb T}) \ \ \ \text{ and }\ \ \
\textbf{S}({\mathbb D}_J, F^2_{\mathbb T})
$$
be the second fundamental forms of $F^1_{\mathbb T}$ and $F^2_{\mathbb T}$ respectively for ${\mathbb D}_J$.
{}From Corollary \ref{cor-1} and \eqref{j1n} we know that
\begin{equation}\label{h-1}
\textbf{S}({\mathbb D}_J, F^1_{\mathbb T})\, \in\, H^0(X,\, \text{Hom}(F^1_{\mathbb T},\,
F^2_{\mathbb T}/F^1_{\mathbb T})\otimes K_X\otimes{\mathcal O}_X(S))\,=\, H^0(X,\, {\mathcal O}_X(S))\, ;
\end{equation}
see \eqref{f1n}. From Corollary \ref{cor-1} and \eqref{j2n} we have
\begin{equation}\label{h-2}
\textbf{S}({\mathbb D}_J, F^2_{\mathbb T})\, \in\, H^0(X,\, \text{Hom}(F^2_{\mathbb T}/F^1_{\mathbb T},\,
J^2({\mathbb T})/F^2_{\mathbb T})\otimes K_X\otimes{\mathcal O}_X(S))\,=\, H^0(X,\, {\mathcal O}_X(S))\, ;
\end{equation}
see \eqref{f1n}.
\begin{lemma}\label{lem8}
The second fundamental forms ${\bf S}({\mathbb D}_J, F^1_{\mathbb T})$ and
${\bf S}({\mathbb D}_J, F^2_{\mathbb T})$, in \eqref{h-1} and \eqref{h-2} respectively,
coincide with the section of ${\mathcal O}_X(S)$
given by the constant function $1$ on $X$.
\end{lemma}
\begin{proof}
As in the proof of Proposition \ref{prop4}, consider the branched projective structure on $X$ of type $S$
given by the $\text{SO}(3,{\mathbb C})$--oper $(W,\, B_W,\, \{F^W_i\}_{i=1}^2, \, D_W)$
in \eqref{tr2} using Theorem \ref{thm1}. It defines a projective structure on
${\mathbb X}\,=\, X\setminus \mathbb S$. Now from the statement (1) in Corollary \ref{cor1} we conclude
that the restrictions to ${\mathbb X}$ of both ${\bf S}({\mathbb D}_J, F^1_{\mathbb T})$ and
${\bf S}({\mathbb D}_J, F^2_{\mathbb T})$ coincide with the section of ${\mathcal O}_{\mathbb X}$
given by the constant function $1$ on ${\mathbb X}$. Hence the second fundamental forms
${\bf S}({\mathbb D}_J, F^1_{\mathbb T})$ and
${\bf S}({\mathbb D}_J, F^2_{\mathbb T})$ coincide with the section of ${\mathcal O}_X(S)$
given by the constant function $1$ on $X$, because ${\mathbb X}$ is a dense open subset of $X$.
\end{proof}
\subsection{A twisted symmetric form}\label{se6.4}
We continue with the set-up of Section \ref{se6.3}.
Using the homomorphism $\Phi$ in \eqref{Phi}, the nondegenerate symmetric form $B_W$ on
$W$ in \eqref{tr2} produces a nondegenerate symmetric form
\begin{equation}\label{bj}
\mathbb{B}_J\, \in\, H^0(X,\, \text{Sym}^2(J^2({\mathbb T})^*)\otimes{\mathcal O}_X(2S))\, .
\end{equation}
This follows from the fact that the image of the following composition of homomorphisms
$$
J^2({\mathbb T})^* \, \stackrel{\Phi^*}{\longrightarrow}\, W^*\, \stackrel{\sim}{\longrightarrow}\,
W \, \stackrel{\Phi}{\longrightarrow}\, J^2({\mathbb T})
$$
coincides with the subsheaf $J^2({\mathbb T})\otimes{\mathcal O}_X(-2S) \, \subset\,
J^2({\mathbb T})$; the above isomorphism
$W^*\, \stackrel{\sim}{\longrightarrow}\, W$ is given by the nondegenerate symmetric form $B_W$.
The logarithmic connection ${\mathbb D}_J$ on $J^2({\mathbb T})$ in Proposition \ref{prop4}(2), and the
canonical logarithmic connection on ${\mathcal O}_X(2S)$ defined by the de Rham differential, together
define a logarithmic connection on the vector bundle $\text{Sym}^2(J^2({\mathbb T})^*)
\otimes{\mathcal O}_X(2S)$. This logarithmic connection on $\text{Sym}^2(J^2({\mathbb T})^*)
\otimes{\mathcal O}_X(2S)$ will be denoted by $\text{Sym}^2({\mathbb D}_J)'$.
\begin{proposition}\label{prop5}\mbox{}
\begin{enumerate}
\item The form $\mathbb{B}_J$ in \eqref{bj} is covariant constant with respect to the above
logarithmic connection ${\rm Sym}^2({\mathbb D}_J)'$ on the vector bundle
${\rm Sym}^2(J^2({\mathbb T})^*)\otimes{\mathcal O}_X(2S)$.
\item For the subbundles $F^1_{\mathbb T},\, F^2_{\mathbb T}$ in \eqref{e11},
$$
\mathbb{B}_J(F^1_{\mathbb T}\otimes F^1_{\mathbb T})\,=\, 0\ \ \text{ and }\ \
(F^1_{\mathbb T})^\perp \,=\, F^2_{\mathbb T}\, ,
$$
where $(F^1_{\mathbb T})^\perp\, \subset\, J^2({\mathbb T})$ is the subbundle orthogonal to
$F^1_{\mathbb T}$ with respect to $\mathbb{B}_J$.
\end{enumerate}
\end{proposition}
\begin{proof}
This proposition can be proved exactly as Lemma \ref{lem8} is proved. Indeed, from
the statement (1) in Corollary \ref{cor1} we know that both statements of the proposition
hold over ${\mathbb X}\,=\, X\setminus \mathbb S$. Hence the proposition follows.
\end{proof}
Consider the logarithmic connection ${\mathbb D}_J$ in Proposition \ref{prop4}(2) and the twisted
holomorphic symmetric
bilinear form $\mathbb{B}_J$ in \eqref{bj} on the vector bundle
$J^2({\mathbb T})$ constructed from the branched $\text{SO}(3,{\mathbb C})$--oper
$(W,\, B_W,\, \{F^W_i\}_{i=1}^2, \, D_W)$ in \eqref{tr2}. We will now show that $(W,\, B_W,\, \{F^W_i\}_{i=1}^2,
\, D_W)$ can be reconstructed back from this pair
\begin{equation}\label{cp}
(\mathbb{B}_J,\, {\mathbb D}_J)\, .
\end{equation}
For each point $x_i \in \, \mathbb S$ (see \eqref{e7}), let $L_i\, \subset\, J^2({\mathbb T})_{x_i}$ be the
eigenspace for the eigenvalue $0$ of the residue ${\rm Res}({\mathbb D}_J, x_i)$. Let $\mathcal F$ be the
holomorphic vector bundle on $X$ that fits in the short exact sequence
\begin{equation}\label{g1}
0\, \longrightarrow\, {\mathcal F} \, \longrightarrow\, J^2({\mathbb T})\, \longrightarrow\,
\bigoplus_{i=1}^d J^2({\mathbb T})_{x_i}/L_i \, \longrightarrow\, 0
\end{equation}
of coherent analytic sheaves on $X$. Recall the criterion for a logarithmic connection on the vector
bundle $\mathcal W$ in \eqref{ed} to induce a logarithmic connection on the vector
bundle $V$ in \eqref{ed}. Applying this criterion to \eqref{g1}
it follows that ${\mathbb D}_J$ induces a logarithmic connection
${\mathbb D}'_J$ on $\mathcal F$. Moreover, the eigenvalues of the residue ${\rm Res}({\mathbb D}'_J, x_i)$
at $x_i\, \in\, {\mathbb S}$ are $(-1,\, 0,\, 0)$; this again follows from the expression for the residue of the
logarithmic connection induced on the vector bundle $V$ in \eqref{ed} by a logarithmic connection on
$\mathcal W$, in terms of the residue of the latter connection.
For each point $x_i \in \, \mathbb S$, let $M_i\, \subset\, {\mathcal F}_{x_i}$ be the
eigenspace for the eigenvalue $0$ of the residue ${\rm Res}({\mathbb D}'_J, x_i)$. Let $\mathcal E$ be the
holomorphic vector bundle on $X$ that fits in the short exact sequence
\begin{equation}\label{g2}
0\, \longrightarrow\, {\mathcal E} \, \longrightarrow\, {\mathcal F}\, \longrightarrow\,
\bigoplus_{i=1}^d {\mathcal F}_{x_i}/M_i \, \longrightarrow\, 0
\end{equation}
of coherent analytic sheaves on $X$. From the above mentioned criterion
it follows that the logarithmic connection ${\mathbb D}'_J$ induces a logarithmic connection
${\mathbb D}''_J$ on $\mathcal E$. Moreover, the eigenvalues of the residue ${\rm Res}({\mathbb D}''_J, x_i)$
at any $x_i\, \in\, {\mathbb S}$ are $(0,\, 0,\, 0)$.
Using the homomorphism $\Phi$ in \eqref{Phi}, consider $W$ as a subsheaf of $J^2({\mathbb T})$.
On the other hand, from \eqref{g1} and \eqref{g2} we have ${\mathcal E}\, \subset\,
{\mathcal F}\, \subset\,J^2({\mathbb T})$, using which ${\mathcal E}$ will be considered as
a subsheaf of $J^2({\mathbb T})$.
It is straightforward to check that ${\mathcal E}\, \subset\, J^2({\mathbb T})$
coincides with the subsheaf $W$ of $J^2({\mathbb T})$. This identification between ${\mathcal E}$ and $W$ takes
${\mathbb D}''_J$ to the holomorphic connection $D_W$ in \eqref{tr2}. In particular,
the logarithmic connection ${\mathbb D}''_J$ is actually a nonsingular connection on ${\mathcal E}$.
The nondegenerate twisted symmetric form $\mathbb{B}_J$ on $J^2({\mathbb T})$ in \eqref{bj}
produces a twisted symmetric form on the subsheaf ${\mathcal E}\, \subset\, J^2({\mathbb T})$.
The above identification between ${\mathcal E}$ and $W$ takes this twisted symmetric form on $\mathcal E$
given by $\mathbb{B}_J$ to $B_W$ in \eqref{tr2}. In particular, the twisted symmetric form on $\mathcal E$
given by $\mathbb{B}_J$ is nondegenerate and there is no nontrivial twisting.
The filtration $\{F^W_i\}_{i=1}^2$ of $W\,=\, {\mathcal E}$ in \eqref{tr2}
is given by the filtration $\{F^i_{\mathbb T}\}_{i=1}^2$ of $J^2({\mathbb T})$ in \eqref{e11}. In other words,
$F^W_i$ is the unique holomorphic subbundle of ${\mathcal E}$ such that the space of holomorphic sections
of $F^W_i$ over any open subset $U\, \subset\, X$ is the space of holomorphic sections $s$ of ${\mathcal E}\vert_U$
such that $s\vert_{U\cap (X\setminus \mathbb S)}$ is a section of $F^i_{\mathbb T}$ over
$U\cap (X\setminus \mathbb S)$.
This way we recover the branched $\text{SO}(3,{\mathbb C})$--oper
$(W,\, B_W,\, \{F^W_i\}_{i=1}^2, \, D_W)$ in \eqref{tr2} from the pair $(\mathbb{B}_J,\, {\mathbb D}_J)$
in \eqref{cp} constructed from it.
\section{A characterization of branched $\text{SO}(3,{\mathbb C})$-opers}\label{sec7}
Let ${\mathfrak g}\,=\, \text{Lie}(\text{SO}(3,{\mathbb C}))$ be the Lie algebra of
$\text{SO}(3,{\mathbb C})$. We will need a property of ${\mathfrak g}$ which is formulated below.
Up to conjugacy, there is only one nonzero nilpotent element in ${\mathfrak g}$. Indeed,
this follows immediately from the fact that ${\mathfrak g}\,=\, \mathfrak{sl}(2, {\mathbb C})$.
Let
$$
A\, \in\, {\mathfrak g}\,=\, \text{Lie}(\text{SO}(3,{\mathbb C}))
$$
be a nonzero nilpotent element. From the above observation we know that $\dim A({\mathbb C}^3)\,=\, 2$.
Therefore, if $B\, \in\, {\mathfrak g}$ is a nilpotent element such that
$B(V_0)\,=\, 0$, where $V_0\, \subset\, {\mathbb C}^3$ is some subspace of dimension two, then
\begin{equation}\label{b1}
B\,=\, 0\, .
\end{equation}
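To make this concrete, identify ${\mathbb C}^3$ with $\text{Sym}^2({\mathbb C}^2)$, with basis $x^2,\, xy,\, y^2$, and take the nonzero nilpotent element of $\mathfrak{sl}(2,{\mathbb C})$ sending $y$ to $x$ and $x$ to $0$; acting as a derivation on $\text{Sym}^2({\mathbb C}^2)$, it is given by the matrix
$$
A\,=\, \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 2\\ 0 & 0 & 0\end{pmatrix}\, ,
$$
whose image is spanned by $x^2$ and $xy$, so indeed $\dim A({\mathbb C}^3)\,=\, 2$. A nilpotent $B$ with $B(V_0)\,=\, 0$ for a two-dimensional subspace $V_0$ has rank at most one, while any nonzero nilpotent element of $\mathfrak g$ is conjugate to $A$ and hence has rank two; this gives \eqref{b1}.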
Take a pair
\begin{equation}\label{tp}
({\mathbb B},\, {\mathbf D})\, ,
\end{equation}
where
\begin{itemize}
\item ${\mathbb B}\, \in\, H^0(X,\, \text{Sym}^2(J^2({\mathbb T})^*)\otimes{\mathcal O}_X(2S))$ is a
fiberwise nondegenerate symmetric bilinear form on $J^2({\mathbb T})$ with values in $\mathcal{O}_X(2S)$, and
\item ${\mathbf D}$ is a logarithmic connection on $J^2({\mathbb T})$ singular over $S$,
\end{itemize}
such that the following five conditions hold:
\begin{enumerate}
\item For the subbundles $F^1_{\mathbb T},\, F^2_{\mathbb T}$ in \eqref{e11},
$$
\mathbb{B}(F^1_{\mathbb T}\otimes F^1_{\mathbb T})\,=\, 0\ \ \text{ and }\ \
(F^1_{\mathbb T})^\perp \,=\, F^2_{\mathbb T}\, ,
$$
where $(F^1_{\mathbb T})^\perp\, \subset\, J^2({\mathbb T})$ is the subbundle orthogonal to
$F^1_{\mathbb T}$ with respect to $\mathbb{B}$.
\item The section $\mathbb{B}$ is covariant constant with respect to the
logarithmic connection on the vector bundle ${\rm Sym}^2(J^2({\mathbb T})^*)\otimes{\mathcal O}_X(2S)$ induced by
${\mathbf D}$ and the logarithmic connection on $\mathcal{O}_X(2S)$ given by the de Rham differential $d$.
\item ${\mathbf D}(F^1_{\mathbb T})\, = \, F^2_{\mathbb T}\otimes K_X\otimes{\mathcal O}_X(S)$ and
${\mathbf D}(F^2_{\mathbb T})\, =\, J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(S)$.
\item For any $x_i\, \in\, \mathbb S$ (see \eqref{e7}), the residue ${\rm Res}({\mathbf D}, x_i)$
of ${\mathbf D}$ at $x_i$ has eigenvalues $\{-2,\, -1,\, 0\}$.
\item The eigenspace of ${\rm Res}({\mathbf D}, x_i)$ for the eigenvalue $-2$ is the line
$(F^1_{\mathbb T})_{x_i}\, \subset\, J^2({\mathbb T})_{x_i}$ in \eqref{e11}.
The eigenspace of ${\rm Res}({\mathbf D}, x_i)$ for the eigenvalue $-1$ is contained in the
subspace $(F^2_{\mathbb T})_{x_i}\, \subset\, J^2({\mathbb T})_{x_i}$.
\end{enumerate}
In other words, the pair $({\mathbb B},\, {\mathbf D})$ satisfies all properties obtained in
Proposition \ref{prop4}, Corollary \ref{cor-1}, Lemma \ref{lem8} and Proposition \ref{prop5} for
the pair in \eqref{cp} corresponding to the branched $\text{SO}(3,{\mathbb C})$--oper
$(W,\, B_W,\, \{F^W_i\}_{i=1}^2, \, D_W)$ in \eqref{tr2}. However, this does not ensure that
$({\mathbb B},\, {\mathbf D})$ defines a branched $\text{SO}(3,{\mathbb C})$--oper. The reason for this
is that the logarithmic connection ${\mathbf D}$ might possess local monodromy around some points
of the subset $\mathbb S$ in \eqref{e7}. On the other hand, if the local monodromy of ${\mathbf D}$
around every point of $\mathbb S$ is trivial, then it can be shown that $({\mathbb B},\, {\mathbf D})$ defines
a branched $\text{SO}(3,{\mathbb C})$--oper (see Remark \ref{reml}).
Take a point $x_i\, \in \, \mathbb S$. Since the eigenvalues of ${\rm Res}({\mathbf D}, x_i)$
are $\{-2,\, -1,\, 0\}$ (condition (4) above), the local monodromy of ${\mathbf D}$ around
$x_i$ is unipotent (meaning $1$ is the only eigenvalue). Let
\begin{equation}\label{el}
L^i_2,\, L^i_1,\, L^i_0\, \subset\, J^2({\mathbb T})_{x_i}
\end{equation}
be the eigenspaces of ${\rm Res}({\mathbf D}, x_i)$ for the eigenvalues $-2,\, -1,\, 0$ respectively, so
\begin{equation}\label{el0}
L^i_2\oplus L^i_1\oplus L^i_0\, =\, J^2({\mathbb T})_{x_i}\, .
\end{equation}
Using
the logarithmic connection $\mathbf D$, we will construct a homomorphism
\begin{equation}\label{el2}
\varphi_i\, \in \, \text{Hom}(L^i_1\otimes (K_X)_{x_i},\, L^i_2\otimes (K^{\otimes 2}_X)_{x_i})
\,=\, \text{Hom}(L^i_1,\, L^i_2\otimes (K_X)_{x_i})\, .
\end{equation}
Take any $$v\, \in\, L^i_1\otimes (K_X)_{x_i}\, \subset\, (J^2({\mathbb T})\otimes K_X)_{x_i}$$
(see \eqref{el}).
Note that ${\mathcal O}_X(-x_i)_{x_i}\,=\, (K_X)_{x_i}$ (see \eqref{ry}).
Let $\widetilde{v}$ be a holomorphic section of $J^2({\mathbb T})\otimes {\mathcal O}_X(-x_i)$ defined on
some open neighborhood $U$ of $x_i$ such that
\begin{equation}\label{ch1}
\widetilde{v}(x_i)\,=\, v\, ;
\end{equation}
here the identification ${\mathcal O}_X(-x_i)_{x_i}\,=\, (K_X)_{x_i}$ is used. Fix
the open subset $U$ such that $U\cap {\mathbb S}\,=\, \{x_i\}$.
In particular, $\widetilde{v}$ is a holomorphic section of $J^2({\mathbb T})$ over $U$, and we have
$$
{\mathbf D}(\widetilde{v})\, \in\, H^0(U,\, J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(S))
\,=\, H^0(U,\, J^2({\mathbb T})\otimes K_X\otimes {\mathcal O}_X(x_i))\, .
$$
We will show that ${\mathbf D}(\widetilde{v})$ lies in the image of the natural inclusion map
$$H^0(U,\, J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(-x_i))\, \hookrightarrow\,
H^0(U,\, J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(x_i))\, .$$
For that, first note that the section $\widetilde{v}$ can be expressed as
$$
\widetilde{v}\,=\, f\cdot s_1+ s_2\, ,
$$
where
\begin{itemize}
\item $f$ is a holomorphic function on $U$ with $f(x_i)\,=\, 0$,
\item $s_1\, \in\, H^0(U,\, J^2({\mathbb T}))$ with $s_1(x_i)\, \in\, L^i_1$ (see \eqref{el}), and
\item $s_2\, \in\, H^0(U,\, J^2({\mathbb T}))$, and it vanishes at $x_i$ of order at least two.
\end{itemize}
Now consider the section
$$
{\mathbf D}(\widetilde{v})\,=\, {\mathbf D}(fs_1)+{\mathbf D}(s_2)\, .
$$
Since $s_2$ vanishes at $x_i$ of order at least two, it follows that
$${\mathbf D}(s_2)\,\in\, H^0(U,\, J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(-x_i))\, .$$
Consequently, to prove that
\begin{equation}\label{h1}
{\mathbf D}(\widetilde{v})\,\in \, H^0(U,\, J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(-x_i))\,
\hookrightarrow\, H^0(U,\, J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(x_i))\, ,
\end{equation}
it suffices to show that ${\mathbf D}(fs_1)\,\in \,H^0(U,\, J^2({\mathbb T})\otimes K_X
\otimes{\mathcal O}_X(-x_i))$.
The Leibniz rule for ${\mathbf D}$ says that
\begin{equation}\label{h2}
{\mathbf D}(fs_1)\,=\, f{\mathbf D}(s_1)+df\otimes s_1\, .
\end{equation}
Since $s_1(x_i)\, \in\, L^i_1$, and $L^i_1$ is the eigenspace of ${\rm Res}({\mathbf D}, x_i)$
for the eigenvalue $-1$, from \eqref{h2} it follows that $${\mathbf D}(fs_1)\,\in\,
H^0(U,\, J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(-x_i))\, .$$
Indeed, if we consider $f{\mathbf D}(s_1)$ and $df\otimes s_1$ as sections of
$(J^2({\mathbb T})\otimes K_X)\vert_U$, then $f{\mathbf D}(s_1)(x_i)\,=\, -v$ by the residue condition, and
$(df\otimes s_1)(x_i)\,=\, v$ by \eqref{ch1}. These imply that the section
$$
{\mathbf D}(fs_1) \,\in\, H^0(U,\, J^2({\mathbb T})\otimes K_X)
$$
vanishes at $x_i$, making it a section of $(J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(-x_i))\vert_U$.
Since ${\mathbf D}(fs_1)\,\in\,
H^0(U,\, J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(-x_i))$, we conclude
that \eqref{h1} holds.
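The cancellation used above can also be seen in a local coordinate; the following is a minimal sketch, assuming the standard local normal form of a logarithmic connection. In a holomorphic coordinate $z$ centered at $x_i$, write ${\mathbf D}\,=\, d + R\,\frac{dz}{z} + \Omega$, where $\Omega$ is a holomorphic endomorphism valued $1$-form and $R$ is the residue matrix, and write $f\,=\, zu$ with $u$ holomorphic. The Leibniz rule then gives
$$
{\mathbf D}(fs_1)(x_i)\,=\, \big(u(0)\, R\, s_1(0) + u(0)\, s_1(0)\big)\otimes dz
\,=\, u(0)\,(R+1)\, s_1(0)\otimes dz\,=\, 0\, ,
$$
since $s_1(x_i)$ is an eigenvector of $R\,=\, {\rm Res}({\mathbf D}, x_i)$ for the eigenvalue $-1$.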
Using the decomposition in \eqref{el0}, the fiber $(J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(-S))_{x_i}$
decomposes as
$$
(J^2({\mathbb T})\otimes K_X\otimes{\mathcal O}_X(-S))_{x_i}
$$
\begin{equation}\label{s2}
\,=\, ((L^i_2\otimes K_X\otimes{\mathcal O}_X(-S))_{x_i})\oplus
((L^i_1\otimes K_X\otimes{\mathcal O}_X(-S))_{x_i})\oplus ((L^i_0\otimes K_X\otimes{\mathcal O}_X(-S))_{x_i})\, .
\end{equation}
For the section ${\mathbf D}(\widetilde{v})$ in \eqref{h1}, let
\begin{equation}\label{s3}
{\mathbf D}(\widetilde{v})^i_2\, \in\, (L^i_2\otimes K_X\otimes{\mathcal O}_X(-S))_{x_i}\,=\, L^i_2\otimes
(K^{\otimes 2}_X)_{x_i}
\end{equation}
be the component of ${\mathbf D}(\widetilde{v})(x_i)$
for the decomposition in \eqref{s2}; recall from \eqref{ry} that
we have ${\mathcal O}_X(-S)_{x_i}\,=\, (K_X)_{x_i}$.
We will now show that the element ${\mathbf D}(\widetilde{v})^i_2\, \in\, L^i_2\otimes
(K^{\otimes 2}_X)_{x_i}$ in \eqref{s3} is
independent of the choice of the section $\widetilde{v}$ in \eqref{ch1} that extends $v$.
To prove this, take any
$$
\widehat{v}\, \in\, H^0(U,\, J^2({\mathbb T})\otimes {\mathcal O}_X(-x_i))
$$
such that $\widehat{v}(x_i)\,=\, v$. So the section $\widetilde{v}-\widehat{v}$ of
$J^2({\mathbb T})\vert_U$ vanishes at $x_i$ of order at least two. Therefore, we can write
$$
\widetilde{v}-\widehat{v}\,=\, f_2s_2+ f_1s_1+f_0s_0\, ,
$$
where $f_0,\, f_1,\, f_2$ are holomorphic functions on $U$ vanishing at $x_i$ of order at least two,
and $s_2(x_i)\, \in\, L^i_2$, $s_1(x_i)\, \in\, L^i_1$, $s_0(x_i)\, \in\, L^i_0$.
It is straightforward
to check that ${\mathbf D}(f_1s_1)$ and ${\mathbf D}(f_0s_0)$ do not contribute to the component
$L^i_2\otimes (K^{\otimes 2}_X)_{x_i}$ in \eqref{s2}; as before, ${\mathcal O}_X(-S)_{x_i}$
is identified with $(K_X)_{x_i}$. Therefore, to prove that
${\mathbf D}(\widetilde{v})^i_2$ in \eqref{s3} is
independent of the choice of the section $\widetilde{v}$, it suffices to show that
${\mathbf D}(f_2s_2)$ also does not contribute to the component
$L^i_2\otimes (K^{\otimes 2}_X)_{x_i}$ in \eqref{s2}. But this follows from the facts that
$s_2(x_i)\, \in\, L^i_2$, and $L^i_2$ is the eigenspace of
${\rm Res}({\mathbf D}, x_i)$ for the eigenvalue $-2$. Hence we conclude that ${\mathbf D}
(\widetilde{v})^i_2$ is independent of the choice of $\widetilde{v}$.
Now we construct the homomorphism $\varphi_i$ in \eqref{el2} by sending any $v\, \in\,
L^i_1\otimes (K_X)_{x_i}$ (as in \eqref{ch1}) to the element ${\mathbf D}(\widetilde{v})^i_2$ in \eqref{s3}
constructed from $v$.
\begin{theorem}\label{thm2}
The pair $({\mathbb B},\, {\mathbf D})$ in \eqref{tp} defines a
branched ${\rm SO}(3,{\mathbb C})$--oper if and only if
$\varphi_i\,=\, 0$ for every $x_i\, \in\, \mathbb S$, where $\varphi_i$ is the homomorphism
in \eqref{el2}.
\end{theorem}
\begin{proof}
We first invoke the algorithm, described at the end of Section \ref{se6.4}, to recover
a branched $\text{SO}(3,{\mathbb C})$--oper from the corresponding logarithmic connection and
the twisted bilinear form on $J^2({\mathbb T})$.
Let $\mathcal F$ be the
holomorphic vector bundle on $X$ that fits in the short exact sequence of coherent analytic sheaves on $X$
\begin{equation}\label{t1}
0\, \longrightarrow\, {\mathcal F} \, \longrightarrow\, J^2({\mathbb T})\, \longrightarrow\,
\bigoplus_{i=1}^d J^2({\mathbb T})_{x_i}/L^i_0 \, \longrightarrow\, 0\, ,
\end{equation}
where $L^i_0$ is the eigenspace in \eqref{el}.
Applying the criterion for a logarithmic connection on the vector
bundle $\mathcal W$ in \eqref{ed} to induce a logarithmic connection on the vector
bundle $V$ in \eqref{ed}, the logarithmic connection ${\mathbf D}$
on $J^2({\mathbb T})$ induces a logarithmic connection
${\mathbf D}'$ on $\mathcal F$; the eigenvalues of the residue ${\rm Res}({\mathbf D}', x_i)$
of ${\mathbf D}'$ at any $x_i\, \in\, {\mathbb S}$ are $(-1,\, 0,\, 0)$.
For each point $x_i\, \in \, \mathbb S$, let $M_i\, \subset\, {\mathcal F}_{x_i}$ be the
eigenspace for the eigenvalue $0$ of the residue ${\rm Res}({\mathbf D}', x_i)$. Let $\mathcal E$ be the
holomorphic vector bundle on $X$ that fits in the short exact sequence
\begin{equation}\label{t2}
0\, \longrightarrow\, {\mathcal E} \, \longrightarrow\, {\mathcal F}\, \longrightarrow\,
\bigoplus_{i=1}^d {\mathcal F}_{x_i}/M_i \, \longrightarrow\, 0
\end{equation}
of coherent analytic sheaves on $X$. Applying the above mentioned criterion
we conclude that the logarithmic connection ${\mathbf D}'$ induces a logarithmic connection
${\mathbf D}''$ on $\mathcal E$; the eigenvalues of the residue ${\rm Res}({\mathbf D}'', x_i)$
of ${\mathbf D}''$ at $x_i\, \in\, {\mathbb S}$ are $(0,\, 0,\, 0)$.
Although all the eigenvalues of ${\rm Res}({\mathbf D}'', x_i)$ are zero, the residue
${\rm Res}({\mathbf D}'', x_i)$ need not vanish in general; it can be
a nilpotent endomorphism. We shall investigate the
residue ${\rm Res}({\mathbf D}'', x_i)$.
Let
$$
\iota\,:\, {\mathcal E} \hookrightarrow\, J^2({\mathbb T})
$$
be the inclusion map obtained from the injective homomorphisms in \eqref{t1} and \eqref{t2}.
Since $\iota$ is an isomorphism over ${\mathbb X}\,=\, X\setminus\mathbb S$,
any holomorphic subbundle $V$ of $J^2({\mathbb T})$ generates a holomorphic subbundle $\widetilde{V}$
of ${\mathcal E}$. This $\widetilde{V}$ is uniquely determined by the condition that
the space of holomorphic sections
of $\widetilde{V}$ over any open subset $U\, \subset\, X$ is the space of
holomorphic sections $s$ of ${\mathcal E}\vert_U$ such that $s\vert_{U\cap (X\setminus{\mathbb S})}$
is a section of $V$ over $U\cap (X\setminus{\mathbb S})$.
Let $\widetilde{F}^1_{\mathbb T}$ and $\widetilde{F}^2_{\mathbb T}$ be the holomorphic subbundles
of ${\mathcal E}$ corresponding to the holomorphic subbundles $F^1_{\mathbb T}$ and
$F^2_{\mathbb T}$ of $J^2({\mathbb T})$ in \eqref{e11}.
It is straightforward to check that for any point $x_i\, \in\, \mathbb S$,
\begin{equation}\label{z1}
{\rm Res}({\mathbf D}'', x_i)((\widetilde{F}^1_{\mathbb T})_{x_i})\,=\, 0\ \
\text{ and }\ \ {\rm Res}({\mathbf D}'', x_i)((\widetilde{F}^2_{\mathbb T})_{x_i})\, \subseteq\,
(\widetilde{F}^1_{\mathbb T})_{x_i}\, .
\end{equation}
Moreover, the homomorphism $(\widetilde{F}^2_{\mathbb T})_{x_i}/(\widetilde{F}^1_{\mathbb T})_{x_i}
\,\longrightarrow\, (\widetilde{F}^1_{\mathbb T})_{x_i}$ induced by ${\rm Res}({\mathbf D}'', x_i)$
coincides with the homomorphism $\varphi_i$ in \eqref{el2}.
If $({\mathbb B},\, {\mathbf D})$ in \eqref{tp} defines a
branched ${\rm SO}(3,{\mathbb C})$--oper, then ${\mathbf D}''$ is a nonsingular connection, meaning
${\rm Res}({\mathbf D}'', x_i)\,=\, 0$ for every $x_i\, \in\, \mathbb S$, and consequently,
$\varphi_i\,=\, 0$ for all $x_i\, \in\, \mathbb S$.
Conversely, if $\varphi_i\,=\, 0$ for all $x_i\, \in\, \mathbb S$, then using \eqref{z1} it follows
that ${\rm Res}({\mathbf D}'', x_i)((\widetilde{F}^2_{\mathbb T})_{x_i})\, =\,0$. Now from
\eqref{b1} we conclude that ${\rm Res}({\mathbf D}'', x_i)\,=\, 0$. Hence the
logarithmic connection ${\mathbf D}''$ is nonsingular. Therefore, $({\mathbb B},\,
{\mathbf D})$ defines a branched ${\rm SO}(3,{\mathbb C})$--oper. This completes the proof.
\end{proof}
\begin{remark}\label{reml}
Assume that the local monodromy of the logarithmic connection ${\mathbf D}$ around every point of
$\mathbb S$ is trivial. Then the local monodromy of the logarithmic connection
${\mathbf D}''$ around every point of $\mathbb S$ is trivial, because the monodromies of
${\mathbf D}$ and ${\mathbf D}''$ coincide. Consider the following four facts:
\begin{enumerate}
\item For every point $x_i\, \in\, \mathbb S$, the eigenvalues of ${\rm Res}({\mathbf D}'', x_i)$
are $(0,\, 0,\, 0)$.
\item The local monodromy of ${\mathbf D}''$ around every $x_i\, \in\, \mathbb S$ is trivial.
\item The local monodromy, around $x_i\, \in\, \mathbb S$,
of any logarithmic connection $D^0$ lies in the conjugacy class of $\exp (-2\pi\sqrt{-1}{\rm Res}(D^0, x_i))$.
\item $\exp (-2\pi\sqrt{-1} A)\, \not=\, I$ for any nonzero nilpotent complex matrix $A$.
\end{enumerate}
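Fact (4) can be verified directly (a sketch, assuming $A^{n}\,=\,0$): we have
$$
\exp(-2\pi\sqrt{-1}A) - I \,=\, -2\pi\sqrt{-1}\, A\Big(I + \sum_{k=2}^{n-1}\frac{(-2\pi\sqrt{-1})^{k-1}}{k!}\, A^{k-1}\Big)\, ,
$$
and the factor in parentheses is unipotent, hence invertible, so the left-hand side vanishes only when $A\,=\,0$.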
These together imply that ${\rm Res}({\mathbf D}'', x_i)\,=\, 0$ for every point $x_i\, \in\, \mathbb S$.
Hence ${\mathbf D}''$ is a nonsingular connection, and $({\mathbb B},\, {\mathbf D})$ in \eqref{tp} defines a
branched ${\rm SO}(3,{\mathbb C})$--oper.
\end{remark}
\section*{Acknowledgements}
This work has been supported by the French government through the UCAJEDI Investments in the
Future project managed by the National Research Agency (ANR) with the reference number
ANR2152IDEX201. The first-named author is partially supported by a J. C. Bose Fellowship, and
the School of Mathematics, TIFR, is supported by 12-R$\&$D-TFR-5.01-0500.
\section{Conclusion and future work}
\label{sec:conclusion}
We have presented SmartQueue\xspace, a deep reinforcement learning query scheduler that seeks to maximize buffer hit rates in database management systems. While simple, SmartQueue\xspace was able to provide substantial improvements over naive and simple heuristics, suggesting that cache-aware deep learning powered query schedulers are a promising research direction. SmartQueue\xspace is only an early prototype, and in the future we plan to conduct a full experimental study of SmartQueue\xspace. In general, we believe the following areas of future work are promising.
\vspace{1mm} \noindent \textbf{Neural network architecture.} While effective in our initial experiments, a fully connected neural network is likely not the correct inductive bias~\cite{inductive_bias_ml} for this problem. A fully connected neural network is not likely to innately carry much useful information for query scheduling~\cite{inductive_bias_rl}, nor is there much of an intuitive connection between a fully-connected architecture and the query scheduling problem~\cite{inductive_bias_dl}. The first layer of our network learns one linear combination per neuron of the entire input. These linear combinations would have to be extremely sparse to learn features like ``the query reads this block, which is cached.'' Other network architectures -- like locally connected neural networks~\cite{locally-connected} -- may provide significant benefit.
\vspace{1mm} \noindent \textbf{SLAs.} Improving raw workload latency is helpful, but often applications have much more complex performance requirements (e.g., some queries are more important than others). Integrating query priorities and customizable Service Level Agreements (SLAs) into SmartQueue\xspace by modifying the reward signal could result in a buffer-aware and SLA-compliant scheduler.
\vspace{1mm} \noindent \textbf{Query optimization.} Different query plans may perform differently with different buffer states. Integrating SmartQueue\xspace into the query optimizer -- so that query plans can be selected to maximize buffer usage -- may provide significant performance gains.
\vspace{1mm} \noindent \textbf{Buffer management.} SmartQueue\xspace only considers query ordering, and assumes that the buffer management policy is opaque. A larger system could consider both query ordering and buffer management, choosing to evict or hold buffered blocks based on future queries. Such a system could represent an end-to-end query scheduling and buffer management policy.
\section{Preliminary Results} \label{sec:experiments}
Here, we present preliminary experiments demonstrating that SmartQueue\xspace
can generate query orderings that increase the buffer hit ratio and improve query execution times compared with alternative non-learned schedulers.
\paragraph*{Experimental Setup} Our experimental study used workloads generated using the 99 query templates of the TPC-DS benchmark~\cite{tpcds}. We deployed a database with a size of 49GB on a single-node server with 4 cores and 32GB of RAM. For our experiments, we generated $1,000$ random query instances out of these 99 templates and placed them in a random order in the execution queue. The benchmark includes 165 tables and indexes, and the number of blocks for each of these ranged between $100$ and $1,300,000$. However, after downsizing both the query vector and buffer state bitmaps, our representation vectors have a size of $165 \times 1,000$, including index tables.
We run our experiments on PostgreSQL~\cite{url-postgres} with a shared buffer pool size of 2GB.\footnote{We configured PostgreSQL to bypass the OS filesystem cache. In future work, multiple levels of caching should be considered.} For each query, we collect its query plan without executing the query by using the \texttt{EXPLAIN} command.
SmartQueue\xspace uses a fully-connected neural network. Our DRL agent was implemented with Keras~\cite{keras} and uses 2 hidden layers with 128 neurons each. We also use an adaptive learning rate optimization algorithm (Adam~\cite{adam}), and our loss function is the mean squared error.
In our study, we compare SmartQueue\xspace with two alternative scheduling approaches. \emph{First-Come-First-Served (FCFS)} simply executes queries in the order they appear in the queue. \emph{Greedy} employs a simple heuristic to identify the query with the best expected hit ratio given the current contents of the buffer pool. Specifically, for each queued query it calculates the dot product of the buffer state bitmap with the data requests bitmap, essentially estimating the probability of a buffer hit for each data block request. We then order all queries based on the sum of these probabilities over all blocks and execute the query with the highest sum. Following the execution, the new buffer state is calculated and the heuristic is applied again until the queue is empty. This greedy approach focuses on short-term buffer-hit improvements.
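For illustration, the Greedy ranking step can be sketched in a few lines of Python (a minimal sketch; the bitmap shapes and function names are our own, not part of the actual implementation):

```python
import numpy as np

def greedy_pick(buffer_bitmap, query_bitmaps):
    """Return the index of the queued query with the highest expected
    number of buffer hits, estimated as the dot product between the
    buffer-state bitmap and each query's data-request bitmap."""
    scores = [float(np.sum(buffer_bitmap * q)) for q in query_bitmaps]
    return int(np.argmax(scores))

# Toy example: 2 relations x 4 block slots; entry 1 = cached/requested.
buffer_state = np.array([[1, 1, 0, 0],
                         [0, 0, 1, 0]])
queries = [np.array([[0, 0, 1, 1], [0, 0, 0, 0]]),   # requests uncached blocks
           np.array([[1, 1, 0, 0], [0, 0, 1, 0]])]   # requests cached blocks
print(greedy_pick(buffer_state, queries))  # -> 1
```

After the chosen query runs, the buffer bitmap is recomputed and the ranking is repeated, exactly as in the heuristic above.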
\begin{figure*}[t]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/e2_buffer.pdf}
\caption{Average buffer hit ratio}
\label{fig:e2_buffer}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/e2_latency.pdf}
\caption{Query execution time}
\label{fig:e2_latency}
\end{subfigure}
\caption{SmartQueue\xspace's effectiveness (buffer hit ratio and query completion rate) with increasing training sets. }
\label{fig:no_split}
\end{figure*}
\paragraph*{\bf Effectiveness}
First, we demonstrate that SmartQueue\xspace improves its effectiveness as it collects more experience. In this set of experiments, we placed all $1,000$ queries in the queue and started scheduling them using SmartQueue\xspace. In the beginning our agent makes arbitrary scheduling decisions, but as it schedules more queries, SmartQueue\xspace collects more experience from its past actions and starts improving its policy. To demonstrate this, we evaluated the learned model at different stages of its training. Figure~\ref{fig:e2_buffer} and Figure~\ref{fig:e2_latency} show how the model performs as we increase the number of training queries. In Figure~\ref{fig:e2_buffer}, we measure the average buffer hit ratio when scheduling our $1,000$ queries and compare it with the buffer hit ratio of FCFS and Greedy (which is not affected by the number of training queries). We observe that the DRL agent improves the buffer hit ratio as it schedules more queries. It outperforms the buffer hit ratio of the other two heuristics, eventually converging to a ratio that is $65\%$ higher than FCFS and $35\%$ higher than Greedy.
In addition, Figure~\ref{fig:e2_latency} shows the number of executed queries over time. The results demonstrate that the DRL-guided scheduling of SmartQueue\xspace allows our approach to execute the workload of $1,000$ queries around $42\%$ faster than Greedy and $55\%$ faster than FCFS. This indicates that SmartQueue\xspace can effectively capture the relationship between the buffer pool state and data access patterns, and leverage it to better utilize the buffer pool and improve its query scheduling decisions.
\paragraph*{\bf Adaptability to new queries}
Next, we study SmartQueue\xspace's ability to adapt to unseen queries. For these experiments, we trained SmartQueue\xspace by first scheduling $950$ random queries out of 79 TPC-DS templates. We then test the model over 50 random queries out of the 20 previously unseen TPC-DS templates. Figure~\ref{fig:e1_buffer} demonstrates how the average buffer hit ratio of the testing queries is affected as SmartQueue\xspace collects experience from scheduling more training queries. The graph shows that the average buffer hit ratio of the testing queries increases from 0.2 (when SmartQueue\xspace is untrained) to 0.64 (when SmartQueue\xspace has scheduled all 950 queries). Furthermore, SmartQueue\xspace outperforms FCFS and Greedy after having scheduled fewer than $500$ queries.
Finally, Figure~\ref{fig:e1_latency} shows that the query latency of our testing queries keeps decreasing (and eventually outperforms FCFS and Greedy) as SmartQueue\xspace is trained on more queries. Our approach enables unseen queries to be eventually executed $11\%$ faster than with FCFS and $22\%$ faster than with Greedy. These results indicate that the query scheduling policy can adapt to new query templates, leading to significant performance and resource sharing improvements.
\begin{figure*}[t]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/e1_buffer.pdf}
\caption{Average buffer hit ratio }
\label{fig:e1_buffer}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/e1_latency.pdf}
\caption{Query execution time}
\label{fig:e1_latency}
\end{subfigure}
\caption{Buffer hit ratio and latency improvement on unseen query templates and increasing training queries. }
\label{fig:train_test_split}
\end{figure*}
\paragraph*{\bf Overhead}
We also measured the training and inference time. Our proof-of-concept prototype needed 240 mins to incorporate 950 queries into our agent (so on average the training overhead is roughly $15$ seconds per query). This time does not include the execution time of the queries. This training overhead can potentially be reduced by offloading training to another thread, introducing early stopping, or re-using previous network weights to get a good ``starting point''. There is no training overhead for FCFS and Greedy. The inference time of SmartQueue\xspace is $3.12$ seconds, while the inference time is $2.52$ seconds for Greedy and $0.0012$ seconds for FCFS.
\section{Introduction}
Query scheduling, the problem of deciding which of a set of queued queries to execute next, is an important and challenging task in modern database systems. Query scheduling can have a significant impact on query performance and resource utilization, while it may need to account for a wide range of considerations, such as cached data sets, available resources (e.g., memory), per-query performance goals, query prioritization, or inter-query dependencies (e.g., correlated data access patterns).
In this work, we attempt to address the query scheduling problem by leveraging overlapping data access requests. Smart query scheduling policies can take advantage of such overlaps, allowing queries to share cached data, whereas naive scheduling policies may induce unnecessary disk reads. For example, consider three queries $q_1, q_2, q_3$ which need to read disk blocks $(b_1, b_2)$, $(b_4, b_5)$, and $(b_2, b_3)$ respectively. If the DBMS's buffer pool (i.e., the component of the database engine that caches data blocks) can only cache two blocks at once, executing the queries in the order $[q_1, q_2, q_3]$ will result in reading 6 blocks from disk. However, if the queries are executed in the order $[q_1, q_3, q_2]$, then only 5 blocks will be read from disk, as $q_2$ will use the cached $b_2$. Since buffer pool hits can be orders of magnitude faster than cache misses, such savings could be substantial.
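The savings in this example can be checked with a tiny buffer simulation (for illustration only; it assumes a simple LRU eviction policy):

```python
from collections import OrderedDict

def count_disk_reads(query_order, block_reads, buffer_size):
    """Simulate an LRU buffer pool and count blocks read from disk."""
    buffer = OrderedDict()  # keys are cached block ids, least recently used first
    disk_reads = 0
    for q in query_order:
        for block in block_reads[q]:
            if block in buffer:
                buffer.move_to_end(block)        # buffer hit: refresh LRU order
            else:
                disk_reads += 1                  # buffer miss: read from disk
                if len(buffer) >= buffer_size:
                    buffer.popitem(last=False)   # evict least recently used block
                buffer[block] = True
    return disk_reads

reads = {"q1": ["b1", "b2"], "q2": ["b4", "b5"], "q3": ["b2", "b3"]}
print(count_disk_reads(["q1", "q2", "q3"], reads, buffer_size=2))  # -> 6
print(count_disk_reads(["q1", "q3", "q2"], reads, buffer_size=2))  # -> 5
```

The second ordering saves one disk read because $b_2$ is still cached when $q_3$ runs.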
In reality, designing a query scheduler that is aware of the current buffer pool is a complex task. First, the exact data block read set of a query is not known ahead of time, and is dependent on data and query plan parameters (e.g., index lookups). Second, a smart scheduler must balance short-term rewards (e.g., executing a query that will take advantage of the current buffer state) against long-term strategy (e.g., selecting queries that keep the most important blocks cached). One could imagine many simple heuristics, such as greedily selecting the next query with the highest expected buffer usage, to solve this problem. However, hand-designing a policy that handles the complexity of the entire problem, including different buffer sizes, shifting query workloads, heterogeneous data types (e.g., index files vs. base relations), and the balance between short-term gains and long-term strategy, is much more difficult.
Here, we showcase a prototype of SmartQueue\xspace, a deep reinforcement learning (DRL) system that automatically learns to maximize buffer hits in an adaptive fashion. Given a set of queued queries, SmartQueue\xspace combines a simple representation of the database's buffer state, the expected reads of queries, and deep Q-learning model to order queued queries in a way that garners long-term increases in buffer hits.
SmartQueue\xspace is fully learned, and requires minimal tuning. SmartQueue\xspace custom-tailors itself to the user's queries and database, and learns policies that are significantly better than naive or simple heuristics. In terms of integrating SmartQueue\xspace into an existing DBMS, our prototype only requires access to the execution plan for each incoming query (to assess likely reads) and the current state of the DBMS buffer pool (i.e., its cached data blocks).
We present our system model and formalized our learning task in Section~\ref{s:model}. We present preliminary experimental results from a proof-of-concept prototype implementation in Section~\ref{sec:experiments}, related work in Section~\ref{s:related}, and in Section~\ref{sec:conclusion} we highlight directions for future work.
\section{The SmartQueue Model}\label{s:model}
SmartQueue\xspace is a learned query scheduler that automatically learns how to order the execution of queries to minimize disk access requests. The core of SmartQueue\xspace includes a deep reinforcement learning (DRL) agent~\cite{deep_rl} that learns a query scheduling policy through continuous interactions with its environment, i.e., the database and the incoming queries. This DRL agent is not a static model, instead it \emph{continuously} learns from its past scheduling decisions and \emph{adapts} to new data access and caching patterns. Furthermore, as we discuss below, using a DRL model allows us to define a reward function and scheduling policy that captures long-term benefits vs short-term gains in disk access.
Our system model is depicted in Figure~\ref{fig:system_model}. Incoming user queries are placed into an execution queue and SmartQueue\xspace decides their order of execution. For each query execution, the database collects the required \emph{data blocks} of each input base relation, where a data block is the smallest data unit used by the database engine. Data block requests are first resolved by the buffer pool. Blocks found in the buffer (\emph{buffer hits}) are returned for processing, while the rest of the blocks (\emph{buffer misses}) are read from disk and placed into the buffer pool (after possible block evictions). Higher buffer hit rates (and hence lower disk access rates) can enormously impact query execution times but require strategic query scheduling, as the execution order affects the data blocks cached in the buffer pool.
One tempting solution to this challenge is a greedy scheduler that executes the query that will re-use the maximum number of cached data blocks. While this simple approach yields short-term benefits, it ignores the long-term impact of each choice. Specifically, while the next query for execution will maximally utilize the buffer pool contents, it will also lead to newly cached data blocks, which will affect future queries. A greedy approach fails to identify whether these newly cached blocks could be of any benefit to the queries that have not yet been scheduled.
SmartQueue\xspace addresses this problem by training a deep reinforcement learning agent to make scheduling decisions that maximize long term benefits. Specifically, it uses a model that simultaneously estimates and tries to improve a weighted average between short-term buffer hits and the long-term impact of query scheduling choices. In the next paragraphs, we discuss the details of our approach: (a) the input features vector that capture data access requests (\emph{Query Bitmap}) and buffer state (\emph{Buffer Bitmap}), and (b) the formalized DRL task.
\paragraph*{Buffer Bitmap} One input to the DRL model is the state of the buffer pool, namely which blocks are currently cached in memory. The buffer state $B$ is represented by a bitmap where rows represent base relations and columns represent data blocks. The $(i,j)$ entry is set to 1 if the $j$-th block of relation $i$ is cached in the buffer pool and to zero otherwise. Since the number of blocks of a relation can be very high and differs across relations, each row vector $F_i$ is downsized by averaging over fixed-size windows of its block entries. Specifically, if $D_i$ is the downsized row of a relation $i$ and $F_i$ is the full-size row, we have:
\begin{equation}
D_{ij} \,=\, \frac{1}{w_i} \sum_{k=j w_i}^{(j+1) w_i - 1} F_{ik}\, , \qquad w_i \,=\, \lfloor |F_i| / |D_i| \rfloor
\end{equation}
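This downsizing step can be sketched as follows (a minimal sketch assuming disjoint averaging windows of width $w_i = \lfloor |F_i|/|D_i| \rfloor$; the function name is illustrative):

```python
import numpy as np

def downsize_row(full_row, target_len):
    """Average disjoint windows of a relation's block bitmap so that
    every relation maps to a fixed-length feature row."""
    w = max(1, len(full_row) // target_len)
    return np.array([full_row[j * w:(j + 1) * w].mean()
                     for j in range(target_len)])

# An 8-block bitmap downsized to 4 entries (window width w = 2).
row = np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=float)
print(downsize_row(row, 4).tolist())  # -> [1.0, 0.0, 0.5, 0.0]
```

Each downsized entry is then the fraction of its window's blocks that are cached (or requested, for the query vector).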
\paragraph*{Query Vector} The second input to the DRL model is the data block requests of each query in the queue. Specifically, given a query $q$, we generate a vector that indicates the data blocks to be accessed by $q$ for each base relation in the database. To implement this, SmartQueue\xspace collects the query plan of $q$, and approximates the probability of each table's data block being accessed. Our approach handles requests for index files and base relations similarly, as both types of blocks will be cached into the buffer pool. The query vector is downsized in the same way as the buffer bitmap.
Full table scans of a base relation $i$ indicate that all data blocks of the given relation will be accessed, and therefore each cell of the $i$-th row vector has the value 1. For indexed table scans, we calculate the number of tuples to be accessed based on the selectivity of the index scan. If the index scan feeds a loop-based operator (i.e., a nested loop join), the selectivity is adapted accordingly to account for any iterations over the relation. We assume the relation is uniformly stored across data blocks; therefore, if $x\%$ of the tuples of a base relation are to be selected by an indexed operation, we set the access probability of each data block of the relation to $x\%$. Similarly, we assume that the indexed operation reads $x\%$ of the index's blocks. We note that much more sophisticated probabilistic models could be used, but for this preliminary work we use this simple approximation.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/system-model.pdf}
\vspace{-2mm}
\caption{SmartQueue\xspace's system model}
\label{fig:system_model}
\vspace{-5mm}
\end{figure}
\paragraph*{Deep Q-Learning} SmartQueue\xspace uses deep Q-learning~\cite{deep_learning} in order to decide which query to execute next. As with any deep reinforcement learning system, SmartQueue\xspace is an agent that operates over a set of states $S$ (buffer pool states) and a set of actions $A$ per state (candidate queries to execute next). SmartQueue\xspace models the problem of query scheduling as a Markov Decision Process (MDP)~\cite{rl_book}: by picking one query from the queue to execute, the agent transitions from the current buffer pool state to a new one (i.e., the set of cached data blocks changes). Executing a new query on the current buffer state provides the agent with a reward. In our case, the reward of an action is the buffer hit ratio of the executed query, calculated as $\frac{\mbox{\textit{buffer hits}} }{\mbox{\textit{total block requests}}}$.
The goal of the agent is to learn a \emph{scheduling policy} that maximizes its total reward. This is a continuous learning process: as more queries arrive and the agent makes more scheduling decisions, it collects more information (i.e., the context of each decision and its reward) and adapts its policy accordingly.
The scheduling policy is expressed as a function $Q(S_t,A_t)$ that outputs a \emph{Q-value} for taking an action $A_t$ (i.e., a query to execute next) on a buffer state $S_t$. Given a state $S_t$ and an action $A_t$, the Q-value $Q(S_t, A_t)$ is calculated by adding the maximum reward attainable from future buffer states to the reward for achieving the current buffer state, effectively influencing the current scheduling decision by the potential future reward. This potential reward is a weighted sum of the expected buffer hit ratios of all future scheduling decisions starting from the current buffer state. Formally, after each action $A_t$ on a state $S_t$, the agent learns a new policy $Q^{new}(S_t, A_t)$ defined as:
\begin{equation}
Q^{new}(S_t, A_t) \,=\, Q(S_t, A_t)+\alpha\big[R_{t} + \gamma \max_{a}{Q(S_{t+1},a)} - Q(S_t, A_t)\big]
\end{equation}
The parameter $\gamma$ is the discount factor which weighs the contribution of short-term vs. long-term rewards. Adjusting the value of $\gamma$ will diminish (e.g., favor choosing queries that will make use of the current buffer state) or increase (e.g., favor choosing queries that will allow long-term increased usage of the buffer) the contribution of future rewards. The parameter $\alpha$ is the learning rate or step size. This simply determines to what extent newly acquired information overrides old information: a low learning rate implies that new information should be treated skeptically, and may be appropriate when a workload is mostly stable but contains some outliers. A high learning rate implies that new information is more fully trusted, and may be appropriate when query workloads smoothly change over time. Since the above is a recursive equation, it starts with making arbitrary assumptions for all $Q$-values (and hence arbitrary initial scheduling decisions). However, as more experience is collected through the execution of incoming queries, the network likely converges to the optimal policy~\cite{dqn}.
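The temporal-difference update above can be sketched in a few lines (illustrative only; a tabular dictionary stands in for the neural network's Q-function, and the state/action names are made up):

```python
def q_update(q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One temporal-difference update of a tabular Q-function `q`
    (a dict keyed by (state, action) pairs, defaulting to 0)."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
# Reward 0.8 = buffer hit ratio observed after scheduling a query.
q_update(q, "s0", "run_q3", reward=0.8, next_state="s1", actions=["run_q2"])
print(round(q[("s0", "run_q3")], 3))  # -> 0.08
```

With `gamma` near 1 the cached-block benefit of future scheduling decisions dominates; with `gamma` near 0 only the immediate buffer hit ratio matters, mirroring the discussion of $\gamma$ above.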
\section{Introduction}
Query optimization is an important task for database management systems. Work on query optimization has a long history~\cite{systemr}. Despite decades of study, the most important elements of query optimization -- cardinality estimation and cost modeling -- have proven difficult to crack~\cite{howgood}.
Several recent works~\cite{deep_card_est, deep_card_est2, qo_state_rep, rejoin, sanjay_wat, neo, learn_cost} have applied machine learning techniques to these stubborn problems. While all of these new solutions demonstrate remarkable results, they suffer from fundamental limitations that prevent them from being integrated into a real-world DBMS. Most notably, these techniques (including those coming from authors of this paper) suffer from three main drawbacks.
\begin{enumerate}
\item{\textbf{Data}: most proposed machine learning techniques require an impractical amount of training data. For example, ML-powered cardinality estimators require gathering precise cardinalities from the underlying database, a prohibitively expensive operation in practice (this is why we wish to estimate cardinalities in the first place). Reinforcement learning techniques must process thousands of queries before outperforming traditional optimizers.}
\item{\textbf{Changes}: while performing an expensive training operation once may already be impractical, changes in data or schema may make the situation worse. Learned cardinality estimation techniques must be retrained when data changes, or risk becoming stale. Many proposed reinforcement learning techniques assume that both the data and the schema remain constant.}
\item{\textbf{Catastrophe}: learning techniques can outperform traditional optimizers on average, but often perform catastrophically (e.g., 100x regression in query performance) in the tail, especially when training data is sparse. While some approaches offer statistical guarantees of their dominance in the average case~\cite{skinnerdb}, such failures, even if rare, are unacceptable in many real world applications.}
\end{enumerate}
We propose a new class of learned query optimizers designed to remedy these problems based on multiplexing a family of simple query optimizers. Early results indicate that our approach can (1) outperform traditional query optimizers with minimal training data ($\approx 100$ query executions), (2) maintain this advantage even in the presence of data and schema changes, and (3) \emph{never} incur a catastrophic execution. Our new design rests on the observation that \textbf{writing a good query optimizer is hard, but writing a simple query optimizer is comparatively easy.} For example, writing a query optimizer to generate left-deep join trees with index nested loop joins (INLJ) or maximally-parallel join trees with hash joins (HJ) is a matter of transcription from a textbook. While such simple optimizers still require cardinality estimates, simple optimizers require radically less complexity in their cost models and plan enumerators.
At a high level, our approach assumes a family of simple optimizers and treats each as an arm in a contextual multi-armed bandit problem. Our system learns a model that predicts which simple optimizer will lead to good performance for a particular query. When a query arrives, our system selects a simple optimizer, executes the resulting query plan, and observes a reward. Over time, our system refines its model to more accurately predict which simple optimizer will most benefit an incoming query. For example, our system can learn to use a left-deep INLJ plan for highly selective queries, and a bushy HJ plan for less selective queries. We assume that no simple optimizer ever generates a catastrophic query plan, and thus our system can never select a catastrophic plan.
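As a toy illustration of the bandit framing (the arm names, class, and epsilon-greedy policy below are our own simplification; the actual system learns a contextual model over plan features rather than per-arm averages):

```python
import random

class OptimizerMultiplexer:
    """Epsilon-greedy selection among a family of simple optimizers.

    Each arm is one simple optimizer (e.g., "inlj" or "hj"); we track a
    running mean reward per arm and pick greedily, exploring with
    probability epsilon. This is a hypothetical sketch, not the paper's
    contextual-bandit model.
    """

    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.means = {a: 0.0 for a in self.arms}

    def select(self):
        # Explore occasionally; otherwise exploit the best mean reward.
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.means[a])

    def update(self, arm, reward):
        # Incremental running-mean update after observing a reward.
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]
```

With epsilon set to zero the multiplexer degenerates to pure exploitation, which is how it would behave once the learned predictions are trusted.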
The core of our approach is a learned tree convolution model~\cite{tree_conv}. Upon receiving a new query, we generate query plans from each simple optimizer. Each node in the query plan tree is represented by a vector containing an estimated cardinality and a one-hot encoding of the query operator. Then, a tree convolutional neural network (TCNN) is used to predict the performance of each query plan. Note that the TCNN's predictions only need to be precise enough to choose a good simple optimizer.
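The per-node featurization described above might be sketched as follows (the operator vocabulary and function name are illustrative; the resulting vectors are what a TCNN would consume):

```python
# Illustrative operator vocabulary; the real set depends on the plan
# operators emitted by the simple optimizers.
OPERATORS = ["SeqScan", "IndexScan", "HashJoin", "NestedLoopJoin"]

def encode_node(operator, est_cardinality):
    """Encode one plan-tree node as [estimated cardinality] + one-hot(operator)."""
    one_hot = [1.0 if operator == o else 0.0 for o in OPERATORS]
    return [float(est_cardinality)] + one_hot
```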
We show preliminary results using the JOB dataset~\cite{howgood} on PostgreSQL in Figure~\ref{fig:wall}. Each boxplot shows the distribution of regret, the difference between the query time achieved and the time achieved by the optimal simple optimizer. The left side shows PostgreSQL (note that the PostgreSQL optimizer never beats the optimal simple optimizer), and the right side shows 25 iterations (of 50 queries each) of our system. Over time, our system outperforms the PostgreSQL optimizer.
Compared to other learned systems, our system requires very little training data: good results are observed after seeing only 100 queries. Because of our reliance on a family of simple query optimizers, our system never produces catastrophic query plans. Finally, because each simple optimizer is capable of handling schema changes and shifts in data distribution, there is hope that our system can handle these changes as well. Ongoing work involves testing this assumption, as well as performing a full experimental analysis of our system.
\section{Related work} \label{s:related}
Prior work on query scheduling has focused on query parallelism~\cite{q-cop}, elastic cloud databases~\cite{azar, cost_wait, leitner, wisedb-cidr, pmax, sqlvm, nashdb}, meeting SLAs~\cite{icbs,sla-tree,wisedb-vldb,slos,perfenforce_demo,sloorchestrator,activesla,smartsla}, or cluster scheduling~\cite{decima,opennebula,step}. In terms of buffer pools and caching, most prior work has focused on smart cache management~\cite{cache-augment,cache-tables} (i.e., assuming the query order is fixed and choosing which blocks to evict or replace), or on (memory) cache-aware algorithms~\cite{mem-cache}. Here, we take a flipped approach, in which we assume the buffer management policy is fixed and the query order may be modified (e.g., batch processing).
More broadly, work on learned indexes follows recent trends in integrating machine learning components into systems~\cite{pillars}, especially database systems. Machine learning techniques have also been applied to query optimization~\cite{neo, skinnerdb, qo_state_rep}, cardinality estimation~\cite{deep_card_est2, naru, plan_loss}, cost modeling~\cite{learn_cost}, data integration~\cite{termite, deep_entity}, tuning~\cite{ml_tuning}, and security~\cite{sql_embed}.
\section{Conclusion}
We have presented a dynamic matching network that predicts dense correspondences by composing hypercolumn features using a small set of relevant layers from a CNN.
The state-of-the-art performance of the proposed method indicates that the use of dynamic multi-layer features in a trainable architecture is crucial for robust visual correspondence.
We believe that our approach may prove useful for other domains involving correspondence such as image retrieval, object tracking, and action recognition.
We leave this to future work.
\section{Experiments}
In this section we compare our method to the state of the art and discuss the results.
The code and the trained model are available online at our project page.
\smallbreak
\noindent \textbf{Feature extractor networks.} As the backbone networks for feature extraction, we use ResNet-50 and ResNet-101~\cite{he2016deep}, which contain 49 and 100 conv layers in total (excluding the last FC layer), respectively. Since features from adjacent layers are strongly correlated, we extract the base block from the \texttt{conv1} maxpool and intermediate blocks from layers with residual connections (before ReLU). These amount to 17 and 34 feature blocks (layers) in total for ResNet-50 and ResNet-101, respectively. Following related work~\cite{choy2016universal,han2017scnet,kim2018recurrent,lee2019sfnet,min2019hyperpixel,rocco17geocnn,rocco18weak,rocco2018neighbourhood,paul2018attentive,huang2019dynamic}, we freeze the backbone network parameters during training for fair comparison.
\smallbreak
\noindent \textbf{Datasets.} Experiments are done on four benchmarks for semantic correspondence: PF-PASCAL~\cite{ham2018proposal}, PF-WILLOW~\cite{ham2016proposal}, Caltech-101~\cite{li2006one}, and SPair-71k~\cite{min2019spair}. PF-PASCAL and PF-WILLOW consist of keypoint-annotated image pairs: 1,351 pairs from 20 categories and 900 pairs from 4 categories, respectively.
Caltech-101~\cite{li2006one} contains segmentation-annotated 1,515 pairs from 101 categories. SPair-71k~\cite{min2019spair} is a more challenging large-scale dataset recently introduced in~\cite{min2019hyperpixel}, consisting of keypoint-annotated 70,958 image pairs from 18 categories with diverse view-point and scale variations.
\smallbreak
\noindent \textbf{Evaluation metrics.} As an evaluation metric for PF-PASCAL, PF-WILLOW, and SPair-71k, the probability of correct keypoints (PCK) is used. The PCK value given a set of predicted and ground-truth keypoint pairs $\mathcal{P}=\{(\hat{\mathbf{p}}'_{m}, \ \mathbf{p}'_{m})\}_{m=1}^{M}$ is measured by $\mathrm{PCK}(\mathcal{P}) = \frac{1}{M}\sum_{m=1}^{M} \mathbbm{1} [\norm{\hat{\mathbf{p}}'_{m} - \mathbf{p}'_{m}} \leq \alpha_{\tau} \max{(w_{\tau}, h_{\tau})} ]$.
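As a concrete reading of the formula above, PCK can be computed as in this sketch (function name and argument layout are ours):

```python
import math

def pck(keypoint_pairs, alpha, width, height):
    """Probability of correct keypoints (PCK).

    keypoint_pairs: list of ((x_pred, y_pred), (x_gt, y_gt)) tuples.
    A prediction counts as correct when its Euclidean distance to the
    ground truth is at most alpha * max(width, height), where (width,
    height) is the size of the target image or bounding box.
    """
    threshold = alpha * max(width, height)
    correct = sum(
        1
        for pred, gt in keypoint_pairs
        if math.hypot(pred[0] - gt[0], pred[1] - gt[1]) <= threshold
    )
    return correct / len(keypoint_pairs)
```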
As an evaluation metric for the Caltech-101 benchmark, the label transfer accuracy (LT-ACC)~\cite{liu2009nonparam} and the intersection-over-union (IoU)~\cite{Everingham2010} are used.
Running time (average time per pair) for each method is measured using its authors' code on a machine with an Intel i7-7820X CPU and an NVIDIA Titan-XP GPU.
\smallbreak
\noindent \textbf{Hyperparameters.} The layer selection rate $\mu$ and the channel reduction factor $\rho$ are determined by grid search using the validation split of PF-PASCAL. As a result, we set $\mu=0.5$ and $\rho=8$ in our experiments if not specified otherwise.
The threshold $\delta_{\mathrm{thres}}$ in Eq.(\ref{weighting_term}) is set to be $\text{max}(w_{\tau}, h_{\tau})/10$.
\begin{table}[!t]
\caption{\label{tab:hpftable} Performance on SPair-71k dataset in accuracy (per-class PCK with $\alpha_{\text{bbox}}=0.1$). TR represents transferred models trained on PF-PASCAL while FT denotes fine-tuned (trained) models on SPair-71k.}
\begin{center}
\scalebox{0.6}{
\begin{tabular}{cclccccccccccccccccccc}
\toprule
Sup. & \multicolumn{2}{c}{Methods} & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & dog & horse & mbike & person & plant & sheep & train & tv & all\\
\midrule
\multirow{4}{*}{self} & TR
& CNNGeo$_\textrm{res101}$~\cite{rocco17geocnn} & 21.3 & 15.1 & 34.6 & 12.8 & 31.2 & 26.3 & 24.0 & 30.6 & 11.6 & 24.3 & 20.4 & 12.2 & 19.7 & 15.6 & 14.3 & 9.6 & 28.5 & 28.8 & 18.1 \\
& FT & CNNGeo$_\textrm{res101}$~\cite{rocco17geocnn} & 23.4 & 16.7 & 40.2 & 14.3 & 36.4 & 27.7 & 26.0 & 32.7 & 12.7 & 27.4 & 22.8 & 13.7 & 20.9 & 21.0 & 17.5 & 10.2 & 30.8 & 34.1 & 20.6 \\
& TR & A2Net$_\textrm{res101}$~\cite{paul2018attentive} & 20.8 & 17.1 & 37.4 & 13.9 & 33.6 & {29.4} & {26.5} & 34.9 & 12.0 & 26.5 & 22.5 & 13.3 & 21.3 & 20.0 & 16.9 & 11.5 & 28.9 & 31.6 & 20.1 \\
& FT & A2Net$_\textrm{res101}$~\cite{paul2018attentive} & 22.6 & {18.5} & 42.0 & {16.4} & {37.9} & {30.8} & {26.5} & 35.6 & 13.3 & 29.6 & 24.3 & 16.0 & 21.6 & {22.8} & {20.5} & 13.5 & 31.4 & {36.5} & 22.3 \\
\midrule
\multirow{6}{*}{weak} & TR & WeakAlign$_\textrm{res101}$~\cite{rocco18weak} & \underline{23.4} & 17.0 & 41.6 & 14.6 & \underline{37.6} & \textbf{28.1} & \underline{26.6} & 32.6 & 12.6 & 27.9 & 23.0 & 13.6 & 21.3 & 22.2 & 17.9 & 10.9 & {31.5} & 34.8 & 21.1 \\
& FT & WeakAlign$_\textrm{res101}$~\cite{rocco18weak} & 22.2 & 17.6 & 41.9 & \underline{15.1} & \textbf{38.1} & \underline{27.4} & \textbf{27.2} & 31.8 & 12.8 & 26.8 & 22.6 & 14.2 & 20.0 & 22.2 & 17.9 & 10.4 & \underline{32.2} & 35.1 & 20.9 \\
& TR & NC-Net$_\textrm{res101}$~\cite{rocco2018neighbourhood} & \textbf{24.0} & 16.0 & {45.0} & 13.7 & 35.7 & 25.9 & 19.0 & \underline{50.4} & {14.3} & {32.6} & {27.4} & \underline{19.2} & {21.7} & 20.3 & 20.4 & {13.6} & \textbf{33.6} & \underline{40.4} & {26.4} \\
& FT & NC-Net$_\textrm{res101}$~\cite{rocco2018neighbourhood} & 17.9 & 12.2 & 32.1 & 11.7 & 29.0 & 19.9 & 16.1 & 39.2 & 9.9 & 23.9 & 18.8 & 15.7 & 17.4 & 15.9 & 14.8 & 9.6 & 24.2 & 31.1 & 20.1 \\\cline{2-22}\\[-2.3ex]
& TR & {DHPF$_\mathrm{res101}$ (ours)} & 21.5 & \textbf{21.8} & \textbf{57.2} & 13.9 & 34.3 & 23.1 & 17.3 & \underline{50.4} & \textbf{17.4} & \underline{34.8} & \textbf{36.2} & \textbf{19.7} & \underline{24.3} & \textbf{32.5} & \textbf{22.2} & \underline{17.6} & 30.9 & 36.5 & \textbf{28.5} \\
& FT & {DHPF$_\mathrm{res101}$ (ours)} & 17.5 & \underline{19.0} & \underline{52.5} & \textbf{15.4} & 35.0 & 19.4 & 15.7 & \textbf{51.9} & \underline{17.3} & \textbf{37.3} & \underline{35.7} & \textbf{19.7} & \textbf{25.5} & \underline{31.6} & \underline{20.9} & \textbf{18.5} & 24.2 & \textbf{41.1} & \underline{27.7} \\
\midrule
\multirow{3}{*}{strong} & FT & HPF$_\mathrm{res101}$~\cite{min2019hyperpixel} & \underline{25.2} & {18.9} & {52.1} & \underline{15.7} & \underline{38.0} & \underline{22.8} & \underline{19.1} & \underline{52.9} & {17.9} & \underline{33.0} & {32.8} & \underline{20.6} & \underline{24.4} & {27.9} & 21.1 & \underline{15.9} & \underline{31.5} & \underline{35.6} & \underline{28.2} \\\cline{2-22}\\[-2.3ex]
& TR & {DHPF$_\mathrm{res101}$ (ours)} & 22.6 & \underline{23.0} & \underline{57.7} & 15.1 & 34.1 & 20.5 & 14.7 & 48.6 & \underline{19.5} & 31.9 & \underline{34.5} & 19.6 & 23.0 & \underline{30.0} & \underline{22.9} & 15.5 & 28.2 & 30.2 & 27.4 \\
& FT & {DHPF$_\mathrm{res101}$ (ours)} & \textbf{38.4} & \textbf{23.8} & \textbf{68.3} & \textbf{18.9} & \textbf{42.6} & \textbf{27.9} & \textbf{20.1} & \textbf{61.6} & \textbf{22.0} & \textbf{46.9} & \textbf{46.1} & \textbf{33.5} & \textbf{27.6} & \textbf{40.1} & \textbf{27.6} & \textbf{28.1} & \textbf{49.5} & \textbf{46.5} & \textbf{37.3} \\
\bottomrule
\end{tabular}}
\end{center}
\end{table}
\subsection{Results and comparisons}
First, we train both our strongly- and weakly-supervised models on the PF-PASCAL~\cite{ham2018proposal} dataset and test on three standard benchmarks: PF-PASCAL (test split), PF-WILLOW, and Caltech-101. The evaluations on PF-WILLOW and Caltech-101 verify transferability.
In training, we use the same splits of PF-PASCAL proposed in~\cite{han2017scnet} where training, validation, and test sets respectively contain 700, 300, and 300 image pairs. Following \cite{rocco18weak,rocco2018neighbourhood}, we augment the training pairs by horizontal flipping and swapping. Table~\ref{tab:main_table} summarizes our result and those of recent methods~\cite{ham2016proposal,han2017scnet,kim2018recurrent,kim2017dctm,min2019hyperpixel,rocco17geocnn,rocco18weak,rocco2018neighbourhood,paul2018attentive}.
Second, we train our model on the SPair-71k dataset~\cite{min2019spair} and compare it to other recent methods~\cite{min2019hyperpixel,rocco17geocnn,rocco18weak,rocco2018neighbourhood,paul2018attentive}.
Table~\ref{tab:hpftable} summarizes the results.
\smallbreak
\noindent \textbf{Strongly-supervised regime.} As shown in the bottom sections of Table~\ref{tab:main_table} and~\ref{tab:hpftable}, our strongly-supervised model clearly outperforms the previous state of the art by a significant margin. It achieves 5.9\%, 3.2\%, and 9.1\% points of PCK ($\alpha_{\text{img}}=0.1$) improvement over the current state of the art~\cite{min2019hyperpixel} on PF-PASCAL, PF-WILLOW, and SPair-71k, respectively, and the improvement increases further with a more strict evaluation threshold, \emph{e.g.}, more than 15\% points of PCK with $\alpha_{\text{img}}=0.05$ on PF-PASCAL. Even with a smaller backbone network (ResNet-50) and smaller selection rate ($\mu = 0.4$), our method achieves competitive performance with the smallest running time on the standard benchmarks of PF-PASCAL, PF-WILLOW, and Caltech-101.
\begin{figure}[!t]
\begin{center}
\scalebox{0.33}{
\centering
\includegraphics{figures/layer_selection.pdf}
}
\caption{Analysis of layer selection on PF-PASCAL dataset (a) PCK vs. running time with varying selection rate $\mu$ (b) Category-wise layer selection frequencies (x-axis: candidate layer index, y-axis: category) of the strongly-supervised model with different backbones: ResNet-101 (left) and ResNet-50 (right) (c) ResNet-101 layer selection frequencies of strongly (left) and weakly (right) supervised models at different layer selection rates $\mu$. Best viewed in electronic form.}
\label{fig:layer_selection}
\end{center}
\end{figure}
\smallbreak
\noindent \textbf{Weakly-supervised regime.} As shown in the middle sections of Table~\ref{tab:main_table} and~\ref{tab:hpftable}, our weakly-supervised model also achieves the state of the art in the weakly-supervised regime.
In particular, our model shows more reliable transferability compared to strongly-supervised models, outperforming both the weakly~\cite{huang2019dynamic} and strongly-supervised~\cite{min2019hyperpixel} state of the art by 6.4\% and 5.8\% points of PCK, respectively, on PF-WILLOW. On the Caltech-101 benchmark, our method is comparable to the best among the recent methods.
Note that unlike other benchmarks, the evaluation metric of Caltech-101 is indirect (\emph{i.e.}, accuracy of mask transfer).
On the SPair-71k dataset, where image pairs have large view-point and scale differences, the methods of~\cite{rocco18weak,rocco2018neighbourhood} as well as ours do not successfully learn in the weakly-supervised regime; the fine-tuned models (FT) all underperform the transferred models (TR) trained on PF-PASCAL. This result reveals that current weakly-supervised objectives are all vulnerable to large variations, which calls for further research.
\smallbreak
\noindent \textbf{Effect of layer selection rate $\mu$~\cite{veit2018convolutional}.}
The plot in Fig.~\ref{fig:layer_selection}a shows PCK and running time of our models trained with different layer selection rates $\mu$. It shows that smaller selection rates in training lead to faster running time in testing, at the cost of some accuracy, by encouraging the model to select a smaller number of layers. The selection rate $\mu$ can thus be used for speed-accuracy trade-off.
\begin{figure}[!t]
\centering
\begin{minipage}[b]{0.58\textwidth}
\includegraphics[width=\textwidth]{figures/histogram.pdf}
\caption{\label{fig:num_of_layer_used_histo}Frequencies over the numbers of selected layers with different selection rates $\mu$ (x-axis: the number of selected layers, y-axis: frequency). Best viewed in electronic form.}
\end{minipage}
\hfill
\begin{minipage}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figures/spair_qualitative.pdf}
\caption{\label{fig:qualitative_spair}Example results on SPair-71k dataset. The source images are warped to the target ones using resultant correspondences.}
\end{minipage}
\end{figure}
\begin{figure}[t]
\centering
\scalebox{0.6}{
\includegraphics{figures/pascal_qualitative.pdf}
}
\caption{Example results on PF-PASCAL~\cite{ham2018proposal}: (a) source image, (b) target image and (c) DHPF (ours), (d) WeakAlign~\cite{rocco18weak}, (e) A2Net~\cite{paul2018attentive}, (f) NC-Net~\cite{rocco2018neighbourhood}, and (g) HPF~\cite{min2019hyperpixel}.}
\label{fig:vis_pfpascal}
\end{figure}
\smallbreak
\noindent \textbf{Analysis of layer selection patterns.}
Category-wise layer selection patterns in Fig.~\ref{fig:layer_selection}b show that each group of animal, vehicle, and man-made object categories shares its own distinct selection patterns.
The model with a small rate ($\mu=0.3$) tends to select the most relevant layers only while the model with larger rates ($\mu>0.3$) tends to select more complementary layers as seen in Fig.\ref{fig:layer_selection}c.
For each $\mu \in \{0.3, 0.4, 0.5\}$ in Fig.\ref{fig:layer_selection}c, the network tends to select low-level features for vehicle and man-made object categories while it selects mostly high-level features for animal category.
We conjecture that it is because low-level (geometric) features such as lines, corners and circles appear more often in the vehicle and man-made classes compared to the animal classes.
Figure~\ref{fig:num_of_layer_used_histo} plots the frequencies over the numbers of selected layers with different selection rates $\mu$, where vehicles tend to require more layers than animals and man-made objects.
\smallbreak
\noindent \textbf{Qualitative results.}
Some challenging examples on SPair-71k~\cite{min2019spair} and PF-PASCAL~\cite{ham2018proposal} are shown in Fig.\ref{fig:qualitative_spair} and \ref{fig:vis_pfpascal}, respectively: using the keypoint correspondences, a TPS transformation~\cite{donato2002approximate} is applied to the source image to align it with the target image.
The object categories of the pairs in Fig.\ref{fig:vis_pfpascal} are in order of table, potted plant, and tv.
Alignment results of each pair demonstrate the robustness of our model against major challenges in semantic correspondences such as large changes in view-point and scale, occlusion, background clutters, and intra-class variation.
\smallbreak
\noindent \textbf{Ablation study.}
We also conduct an ablation study to see the impacts of major components: Gumbel layer gating (GLG), conv feature transformation (CFT), probabilistic Hough matching (PHM), keypoint importance weight $\omega_m$, and layer selection loss $\mathcal{L}_{\mathrm{sel}}$.
All the models are trained with strong supervision and evaluated on PF-PASCAL.
Since the models with a PHM component have no training parameters, they are directly evaluated on the test split.
Table~\ref{tab:ablation_study} summarizes the results.
It reveals that, among all components, CFT in the dynamic gating module is the most significant in boosting performance and speed; without the feature transformation and its channel reduction, our models do not learn successfully in our experiments and even fail to achieve faster per-pair inference time.
The result of `w/o $\omega_m$' reveals the effect of the keypoint weight $\omega_m$ in Eq.(\ref{one_hot_gce}) by replacing it with uniform weights for all $m$, \emph{i.e.}, $\omega_m = 1$; putting less weights on easy examples helps in training the model by focusing on hard examples.
The result of `w/o $\mathcal{L}_{\mathrm{sel}}$' shows the performance of the model using $\mathcal{L}_{\mathrm{match}}$ only in training; performance drops with slower running time, demonstrating the effectiveness of the layer selection constraint in terms of both speed and accuracy.
With all the components jointly used, our model achieves the highest PCK measure of $90.7\%$. Even with the smaller backbone network, ResNet-50, the model still outperforms previous state of the art and achieves real-time matching as well as described in Fig.\ref{fig:layer_selection} and Table \ref{tab:main_table}.
\begin{table}[!t]
\begin{minipage}{.51\linewidth}
\caption{\label{tab:ablation_study}Ablation study on PF-PASCAL. (GLG: Gumbel layer gating with selection rates $\mu$, CFT: conv feature transformation)}
\begin{center}
\begin{tabular}{ccccccc}
\toprule
\multicolumn{3}{c}{Module} & \multicolumn{3}{c}{PCK ($\alpha_{\text{img}}$)} & time \\
GLG & CFT & PHM & $0.05$ & $0.1$ & $0.15$ & ({\em ms}) \\
\midrule
0.5 & \ding{51} & \ding{51} & 75.7 & 90.7 & 95.0 & 58 \\
0.4 & \ding{51} & \ding{51} & 73.6 & 90.4 & 95.3 & 51 \\
0.3 & \ding{51} & \ding{51} & 73.1 & 88.7 & 94.4 & 47 \\
\midrule
& \ding{51} & \ding{51} & 70.4 & 88.1 & 94.1 & 64 \\
0.5 & & \ding{51} & 43.6 & 74.7 & 87.5 & 176 \\
0.5 & \ding{51} & & 68.3 & 86.9 & 91.6 & 57 \\
& & \ding{51} & 37.6 & 68.7 & 84.6 & 124 \\
& \ding{51} & & 68.1 & 85.5 & 91.6 & 61 \\
0.5 & & & 35.0 & 54.8 & 63.4 & 173 \\
\midrule
\multicolumn{3}{c}{ w/o $\omega_m$ } & 69.8 & 86.1 & 91.9 & 57 \\
\multicolumn{3}{c}{ w/o $\mathcal{L}_{\mathrm{sel}}$} & 68.1 & 89.2 & 93.5 & 56 \\
\bottomrule
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}{.48\linewidth}
\caption{\label{tab:ablation_study_gating}Comparison to soft layer gating on PF-PASCAL.}
\begin{center}
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{Gating function} & \multicolumn{3}{c}{PCK ($\alpha_{\text{img}}$)} & time \\
& $0.05$ & $0.1$ & $0.15$ & ({\em ms}) \\
\midrule
Gumbel$_{\mu=0.5}$ & 75.7 & 90.7 & 95.0 & 58 \\
\midrule
sigmoid & 71.1 & 88.2 & 92.8 & 74 \\
sigmoid$_{\mu=0.5}$ & 72.1 & 87.8 & 93.3 & 75 \\
sigmoid + $\ell1$& 65.9 & 87.2 & 91.0 & 60 \\
\bottomrule
\end{tabular}
\end{center}
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/soft_sparse.pdf}
\end{center}
\captionof{figure}{\label{fig:vis_layer_soft_sparse} ResNet-101 layer selection frequencies for `sigmoid' (left), `sigmoid$_{\mu=0.5}$' (middle), and `sigmoid + $\ell1$' (right) gating.}
\end{minipage}
\end{table}
\smallbreak
\noindent \textbf{Computational complexity.}
The average feature dimensions of our model before correlation computation are 2089, 3080, and 3962 for each $\mu \in \{0.3, 0.4, 0.5\}$ while those of recent methods \cite{min2019hyperpixel,lee2019sfnet,rocco2018neighbourhood,huang2019dynamic} are respectively 6400, 3072, 1024, 1024. The dimension of the hyperimage is relatively small as GLG efficiently prunes irrelevant features and CFT effectively maps features onto a smaller subspace, thus being more practical in terms of speed and accuracy as demonstrated in Table~\ref{tab:main_table} and \ref{tab:ablation_study}. Although \cite{rocco2018neighbourhood,huang2019dynamic} use lighter feature maps compared to ours, a series of 4D convolutions heavily increases the time and memory complexity of the network, making them expensive for practical use (31ms (ours) vs. 261ms \cite{rocco2018neighbourhood,huang2019dynamic}).
\subsection{Comparison to soft layer gating} \label{sec:softgating}
The Gumbel gating function in our dynamic layer gating can be replaced with conventional soft gating using sigmoid.
We have investigated different types of soft gating as follows:
(1) `sigmoid': The MLP of dynamic gating at each layer predicts a scalar input for sigmoid and the transformed feature block pairs are weighted by the sigmoid output.
(2) `sigmoid$_{\mu=0.5}$': In training the `sigmoid' gating, the layer selection loss $\mathcal{L}_\mathrm{sel}$ with $\mu=0.5$ is used to encourage the model to increase diversity in layer selection.
(3) `sigmoid + $\ell1$': In training the `sigmoid' gating, the $\ell1$ regularization on the sigmoid output is used to encourage the soft selection result to be sparse.
Table~\ref{tab:ablation_study_gating} summarizes the results and Fig. \ref{fig:vis_layer_soft_sparse} compares their layer selection frequencies.
While the soft gating modules provide decent results, all of them perform worse than the proposed Gumbel layer gating in both accuracy and speed.
The slower per-pair inference time of `sigmoid' and `sigmoid$_{\mu=0.5}$' indicates that {\em soft} gating is not effective in skipping layers due to its non-zero gating values. We find that the sparse regularization of `sigmoid + $\ell1$' recovers the speed, but only at the cost of a significant drop in accuracy.
The accuracy drop of soft gating may result from its {\em deterministic} behavior during training, which prohibits exploring diverse combinations of features at different levels.
In contrast, Gumbel gating during training enables the network to try a large number of different combinations of multi-level features, which helps to learn better gating.
Our experiments also show that {\em discrete} layer selection along with {\em stochastic} learning in searching the best combination is highly effective for learning to establish robust correspondences in terms of both accuracy and speed.
\section{Dynamic hyperpixel flow}
\label{methods}
Given two input images to match, a pretrained convolutional network extracts a series of intermediate feature blocks for each image. The architecture we propose in this section, {\em dynamic hyperpixel flow}, learns to select a small number of layers (feature blocks) on the fly and composes effective features for reliable matching of the images. Figure~\ref{fig:architecture} illustrates the overall architecture.
In this section, we describe the proposed method in four steps: (i) multi-layer feature extraction, (ii) dynamic layer gating, (iii) correlation computation and matching, and (iv) training objective.
\begin{figure*}[t]
\begin{center}
\scalebox{0.41}{
\centering
\includegraphics{figures/architecture.pdf}
}
\caption{The overall architecture of Dynamic Hyperpixel Flow (DHPF).}
\label{fig:architecture}
\end{center}
\end{figure*}
\subsection{Multi-layer feature extraction}
We adopt as a feature extractor a convolutional neural network pretrained on a large-scale classification dataset, \emph{e.g.}, ImageNet~\cite{deng2009imagenet}, which is commonly used in most related methods~\cite{choy2016universal,han2017scnet,kim2018recurrent,kim2017dctm,lee2019sfnet,min2019hyperpixel,rocco17geocnn,rocco18weak,rocco2018neighbourhood,paul2018attentive,huang2019dynamic}.
Following the work on hypercolumns~\cite{hariharan2015hypercolumns}, however, we view the layers of the convolutional network as a non-linear counterpart of image pyramids and extract a series of multiple features along intermediate layers~\cite{min2019hyperpixel}.
Let us assume the backbone network contains $L$ feature extracting layers.
Given two images $I$ and $I'$, source and target, the network generates two sets of $L$ intermediate feature blocks.
We denote the two sets of feature blocks by $\mathbf{B} = \{\mathbf{b}_l \}_{l=0}^{L-1}$ and $\mathbf{B}' = \{\mathbf{b}'_l\}_{l=0}^{L-1}$, respectively, and call the earliest blocks, $\mathbf{b}_0$ and $\mathbf{b}'_0$, {\em base} feature blocks.
As in Fig.~\ref{fig:architecture}, each pair of source and target feature blocks at layer $l$ is passed to the $l$-th layer gating module as explained next.
\subsection{Dynamic layer gating}
Given $L$ feature block pairs $\{ (\mathbf{b}_l, \mathbf{b}'_l) \}_{l=0}^{L-1}$, $L$ layer gating modules learn to select relevant feature block pairs and transform them for establishing robust correspondences.
As shown in the top of Fig.~\ref{fig:architecture}, the module has two branches, one for layer gating and the other for feature transformation.
\smallbreak
\noindent \textbf{Gumbel layer gating.}
The first branch of the $l$-th layer gating module takes the $l$-th pair of feature blocks $(\mathbf{b}_l, \mathbf{b}'_l)$ as an input and performs global average pooling on two feature blocks to capture their channel-wise statistics. Two average pooled features of size $1 \times 1 \times c_l$ from $\mathbf{b}_l$ and $\mathbf{b}_l' $ are then added together to form a vector of size $c_l$. A multi-layer perceptron (MLP) composed of two fully-connected layers with ReLU non-linearity takes the vector and predicts a relevance vector $\mathbf{r}_l$ of size 2 for gating, whose entries indicate the scores for selecting or skipping (`on' or `off') the $l$-th layer, respectively. We can simply obtain a gating decision using argmax over the entries, but this na\"ive gating precludes backpropagation since argmax is not differentiable.
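The gate's input construction, global average pooling of both feature blocks followed by element-wise addition, can be sketched in plain Python (the nested-list layout and function name are ours; a real implementation would use framework tensors):

```python
def pooled_descriptor(src_block, trg_block):
    """Global-average-pool two h x w x c feature blocks and add the
    resulting c-dimensional vectors, forming the gate MLP's input."""

    def gap(block):
        h, w, c = len(block), len(block[0]), len(block[0][0])
        return [
            sum(block[i][j][k] for i in range(h) for j in range(w)) / (h * w)
            for k in range(c)
        ]

    return [s + t for s, t in zip(gap(src_block), gap(trg_block))]
```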
To make the layer gating trainable and effective, we adopt the Gumbel-max trick~\cite{gumbel1954statistical} and its continuous relaxation~\cite{eric2017categorical,maddison2017concrete}.
Let $\mathbf{z}$ be a sequence of i.i.d. Gumbel random noise and let $Y$ be a discrete random variable with $K$-class categorical distribution $\mathbf{u}$, \emph{i.e.}, $p(Y=y) \propto u_y$ and $y \in \{0,...,K-1\}$.
Using the Gumbel-max trick~\cite{gumbel1954statistical}, we can reparameterize sampling $Y$ to $y = \argmax_{k \in \{0,...,K-1\}}(\log u_k + z_k)$.
To approximate the argmax in a differentiable manner, the continuous relaxation~\cite{eric2017categorical,maddison2017concrete} of the Gumbel-max trick replaces the argmax operation with a softmax operation.
By expressing a discrete random sample $y$ as a one-hot vector $\mathbf{y}$, a sample from the Gumbel-softmax can be represented by $\mathbf{\hat{y}} = \text{softmax}((\log \mathbf{u} + \mathbf{z})/\tau)$, where $\tau$ denotes the temperature of the softmax.
In our context, the discrete random variable obeys a Bernoulli distribution, \emph{i.e.}, $y \in \{0,1\}$, and the predicted relevance scores represent the log probability distribution for `on' and `off', \emph{i.e.}, $\log{\mathbf{u}} = \mathbf{r}_l$. Our Gumbel-softmax gate thus has a form of
\begin{align}
\mathbf{\hat{y}}_l = \text{softmax}(\mathbf{r}_l + \mathbf{z}_l), \label{eq:gumbel-softmax}
\end{align}
where $\mathbf{z}_l$ is a pair of i.i.d. Gumbel random samples and the softmax temperature $\tau$ is set to 1.
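A minimal, framework-free sketch of one draw from this gate (in practice this runs inside a deep learning framework with autodiff; the helper name is ours):

```python
import math
import random

def gumbel_softmax_gate(scores, tau=1.0, rng=random):
    """Return softmax((scores + z) / tau) with z i.i.d. Gumbel(0, 1) noise.

    scores plays the role of the relevance vector r_l; with two entries the
    result is a pair of relaxed 'on'/'off' probabilities.
    """
    noise = [-math.log(-math.log(rng.random())) for _ in scores]
    logits = [(s + z) / tau for s, z in zip(scores, noise)]
    peak = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - peak) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]
```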
\smallbreak
\noindent \textbf{Convolutional feature transformation.} The second branch of the $l$-th layer gating module takes the $l$-th pair of feature blocks $(\mathbf{b}_l, \mathbf{b}'_l)$ as an input and transforms each feature vector over all spatial positions while reducing its dimension by $\frac{1}{\rho}$; we implement it using $1\times1$ convolutions, \emph{i.e.}, position-wise linear transformations, followed by ReLU non-linearity.
This branch is designed to transform the original feature block of size $h_l \times w_l \times c_l$ into a more compact and effective representation of size $h_l \times w_l \times \frac{c_l}{\rho}$ for our training objective. We denote the pair of transformed feature blocks by $(\bar{\mathbf{b}_l}, \bar{\mathbf{b}'_l})$.
Note that if $l$-th Gumbel gate chooses to skip the layer, then the feature transformation of the layer can be also ignored thus reducing the computational cost.
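Since a $1\times1$ convolution is just a position-wise linear map, the transformation branch can be sketched as follows (the explicit weight matrix and nested-list layout are illustrative; the real branch is a learned conv layer):

```python
def conv1x1_relu(block, weight):
    """Apply a position-wise linear map (a 1x1 convolution) followed by ReLU.

    block:  h x w x c nested lists.
    weight: c_out x c matrix, with c_out = c / rho for channel reduction.
    """

    def relu(x):
        return x if x > 0.0 else 0.0

    return [
        [
            [relu(sum(w * v for w, v in zip(row, vec))) for row in weight]
            for vec in cols
        ]
        for cols in block
    ]
```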
\smallbreak
\noindent \textbf{Forward and backward propagations.} During training, we use the {\em straight-through} version of the Gumbel-softmax estimator~\cite{eric2017categorical}: forward passes proceed with discrete samples by argmax whereas backward passes compute gradients of the softmax relaxation of Eq.(\ref{eq:gumbel-softmax}). In the forward pass, the transformed feature pair $(\bar{\mathbf{b}_l}, \bar{\mathbf{b}'_l})$ is simply multiplied by 1 (`on') or 0 (`off') according to the gate's discrete decision $\mathbf{y}$.
While the Gumbel gate always makes discrete decision $\mathbf{y}$ in the forward pass, the continuous relaxation in the backward pass allows gradients to propagate through softmax output $\hat{\mathbf{y}}$, effectively updating both branches, the feature transformation and the relevance estimation, regardless of the gate's decision.
Note that this stochastic gate with random noise increases the diversity of samples and is thus crucial in preventing mode collapse in training. At test time, we simply use deterministic gating by argmax without Gumbel noise~\cite{eric2017categorical}.
As discussed in Sec.~\ref{sec:softgating}, we found that the proposed hard gating trained with Gumbel softmax is superior to conventional soft gating with sigmoid in terms of both accuracy and speed.
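For illustration, the straight-through behavior of a single gate can be sketched in a few lines of NumPy. This is a minimal sketch only: the function name and the two-way (`on'/`off') logit layout are our own conventions, and no autograd engine is modeled, so the softmax relaxation that would carry gradients in the backward pass is simply returned alongside the hard decision.

```python
import numpy as np

def gumbel_gate(logits, tau=1.0, train=True, rng=None):
    """Hard gating decision for one layer.

    logits : length-2 relevance scores for ('on', 'off').
    Returns (y_hard, y_soft): the discrete one-hot decision used in the
    forward pass and the softmax relaxation kept for gradients.
    """
    z = np.asarray(logits, dtype=float)
    if train:  # perturb logits with Gumbel noise during training
        rng = np.random.default_rng() if rng is None else rng
        u = rng.uniform(1e-12, 1.0, size=z.shape)
        z = z - np.log(-np.log(u))
    z = z / tau
    e = np.exp(z - z.max())
    y_soft = e / e.sum()                                  # continuous relaxation (backward)
    y_hard = np.eye(len(z))[int(np.argmax(y_soft))]       # discrete decision (forward)
    return y_hard, y_soft
```

With `train=False` the gate degenerates to a deterministic argmax over the raw logits, matching the test-time behavior described above.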
\subsection{Correlation computation and matching}
The output of gating is a set of selected layer indices, $S = \{s_1, s_2, ..., s_N\}$. We construct a {\em hyperimage} $\mathbf{H}$ for each image by concatenating transformed feature blocks of the selected layers along channels with upsampling:
$\mathbf{H} = \big[ \zeta(\bar{\mathbf{b}_{s_1}}), \zeta(\bar{\mathbf{b}_{s_2}}), ..., \zeta(\bar{\mathbf{b}_{s_N}}) \big]$,
where $\zeta$ denotes a function that spatially upsamples the input feature block to the size of $\mathbf{b}_0$, the {\em base} block. Note that the number of selected layers $N$ is fully determined by the gating modules. If all layers are off, then we use the base feature block by setting $S = \{0\}$. We associate with each spatial position $p$ of the hyperimage the corresponding image coordinates and hyperpixel feature~\cite{min2019hyperpixel}. Let us denote by $\mathbf{x}_p$ the image coordinate of position $p$, and by $\mathbf{f}_p$ the corresponding feature, \emph{i.e.}, $\mathbf{f}_p = \mathbf{H}({\mathbf{x}_p})$.
The hyperpixel at position $p$ in the hyperimage is defined as
$ \mathbf{h}_p = (\mathbf{x}_p, \mathbf{f}_p)$. Given source and target images, we obtain two sets of hyperpixels, $\mathcal{H}$ and $\mathcal{H}'$.
In order to reflect geometric consistency in matching, we adapt probabilistic Hough matching (PHM)~\cite{cho2015unsupervised,han2017scnet} to hyperpixels, similar to~\cite{min2019hyperpixel}. The key idea of PHM is to re-weight appearance similarity by Hough space voting to enforce geometric consistency. In our context, let $\mathcal{D}=(\mathcal{H}, \mathcal{H}')$ be the pair of hyperpixel sets, and $m=(\mathbf{h},\mathbf{h}')$ be a match where $\mathbf{h}$ and $\mathbf{h}'$ are respectively elements of $\mathcal{H}$ and $\mathcal{H}'$. Given a Hough space $\mathcal{X}$ of possible offsets (image transformations) between the two hyperpixels, the confidence for match $m$, $p(m|\mathcal{D})$, is computed as
$p(m|\mathcal{D}) \propto p(m_\mathrm{a})\sum_{\mathbf{x}\in \mathcal{X}}p(m_\mathrm{g}|\mathbf{x})\sum_{m \in \mathcal{H} \times \mathcal{H}'}p(m_\mathrm{a})p(m_\mathrm{g}|\mathbf{x}),$
where $p(m_\mathrm{a})$ represents the confidence for appearance matching and $p(m_\mathrm{g}|\mathbf{x})$ is the confidence for geometric matching with an offset $\mathbf{x}$, measuring how close the offset induced by $m$ is to $\mathbf{x}$. By sharing the Hough space $\mathcal{X}$ for all matches, PHM efficiently computes match confidence with good empirical performance~\cite{cho2015unsupervised,ham2016proposal,han2017scnet,min2019hyperpixel}. In this work, we compute appearance matching confidence using hyperpixel features by
$p(m_\mathrm{a}) \propto \text{ReLU}\Big( \frac{\mathbf{f}_p \cdot \mathbf{f}_p'}{\norm{\mathbf{f}_p} \norm{\mathbf{f}_p'}} \Big)^2,$
where the squaring has the effect of suppressing smaller matching confidences.
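The appearance term is straightforward to vectorize; the following NumPy sketch (function name assumed, proportionality constant omitted) computes the squared-ReLU cosine similarity for all hyperpixel pairs at once.

```python
import numpy as np

def appearance_confidence(F_src, F_trg, eps=1e-8):
    """p(m_a) up to normalization: squared ReLU of the cosine similarity
    between every source and every target hyperpixel feature.

    F_src : (n, c) source features;  F_trg : (n_prime, c) target features.
    """
    Fs = F_src / (np.linalg.norm(F_src, axis=1, keepdims=True) + eps)
    Ft = F_trg / (np.linalg.norm(F_trg, axis=1, keepdims=True) + eps)
    cos = Fs @ Ft.T                     # (n, n_prime) cosine similarities
    return np.maximum(cos, 0.0) ** 2    # ReLU, then square to suppress weak matches
```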
On the output $|\mathcal{H}| \times |\mathcal{H}'|$ correlation matrix of PHM, we perform soft mutual nearest neighbor filtering~\cite{rocco2018neighbourhood} to suppress noisy correlation values and denote the filtered matrix by $\mathbf{C}$.
\smallbreak
\noindent \textbf{Dense matching and keypoint transfer.}
From the correlation matrix $\mathbf{C}$, we establish hyperpixel correspondences by assigning to each source hyperpixel $\mathbf{h}_i$ the target hyperpixel $\hat{\mathbf{h}}'_{j}$ with the highest correlation. Since the spatial resolutions of the hyperimages are the same as those of the base feature blocks, which are relatively high in most cases (\emph{e.g.}, $1/4$ of the input image with ResNet-101 as the backbone), such hyperpixel correspondences produce quasi-dense matches.
Furthermore, given a keypoint $\mathbf{p}_m$ in the source image, we can easily predict its corresponding position $\hat{\mathbf{p}}'_m$ in the target image by transferring the keypoint using its nearest hyperpixel correspondence. In our experiments, we collect all correspondences of neighbor hyperpixels of keypoint $\mathbf{p}_m$ and use the geometric average of their individual transfers as the final prediction $\hat{\mathbf{p}}'_m$~\cite{min2019hyperpixel}.
This consensus keypoint transfer method improves accuracy by refining mis-localized predictions of individual transfers.
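A rough sketch of this transfer scheme is given below; for simplicity it uses a plain arithmetic mean of the individual transfers instead of the geometric average used above, and all names are illustrative.

```python
import numpy as np

def transfer_keypoint(C, src_pos, trg_pos, kp, k=4):
    """Transfer a source keypoint using its k nearest hyperpixels.

    C : (n, n_prime) filtered correlation matrix.
    src_pos, trg_pos : (n, 2) / (n_prime, 2) hyperpixel image coordinates.
    kp : (2,) source keypoint coordinate.
    """
    nn = np.argmax(C, axis=1)                           # quasi-dense correspondences
    nbr = np.argsort(np.linalg.norm(src_pos - kp, axis=1))[:k]
    # each neighbor hyperpixel proposes a target position via its own offset
    proposals = trg_pos[nn[nbr]] + (kp - src_pos[nbr])
    return proposals.mean(axis=0)                       # consensus prediction
```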
\begin{figure}[!tbp]
\centering
\begin{minipage}[b]{0.49\textwidth}
\subfloat[Strongly-supervised loss.]{
\includegraphics[width=\textwidth]{figures/strong_sup_loss.pdf}
}
\end{minipage}
\hfill
\begin{minipage}[b]{0.49\textwidth}
\subfloat[Weakly-supervised loss.]{
\includegraphics[width=\textwidth]{figures/weak_sup_loss.pdf}
}
\end{minipage}
\caption{Matching loss computation using (a) keypoint annotations (strong supervision) and (b) image pairs only (weak supervision). Best viewed in electronic form.}
\label{fig:loss_computation}
\end{figure}
\subsection{Training objective}
We propose two objectives to train our model using different degrees of supervision: strongly-supervised and weakly-supervised regimes.
\smallbreak
\noindent \textbf{Learning with strong supervision.}
In this setup, we assume that keypoint match annotations are given for each training image pair, as in~\cite{choy2016universal,han2017scnet,min2019hyperpixel}; each image pair is annotated with a set of coordinate pairs $\mathcal{M}=\{(\mathbf{p}_m, \mathbf{p}'_m)\}_{m=1}^{M}$, where $M$ is the number of match annotations.
To compare the output of our network with ground-truth annotations, we convert the annotations into the form of a discrete correlation matrix.
First of all, for each coordinate pair $(\mathbf{p}_m, \mathbf{p}'_m)$, we identify their nearest position indices $(k_m, k_m')$ in hyperimages.
On the one hand, given the set of identified match index pairs $\{ (k_m, k_m') \}_{m=1}^{M}$, we construct a ground-truth matrix $\mathbf{G} \in \{0,1\}^{M\times |\mathcal{H}'|}$ by assigning the one-hot vector representation of $k_m'$ to the $m$-th row of $\mathbf{G}$.
On the other hand, we construct $\hat{\mathbf{C}} \in \mathbb{R}^{M \times |\mathcal{H}'|}$ by assigning the $k_m$-th row of $\mathbf{C}$ to the $m$-th row of $\hat{\mathbf{C}}$.
We apply softmax to each row of the matrix $\hat{\mathbf{C}}$ after normalizing it to have zero mean and unit variance.
Figure~\ref{fig:loss_computation}a illustrates the construction of $\hat{\mathbf{C}}$ and $\mathbf{G}$.
Corresponding rows between $\hat{\mathbf{C}}$ and $\mathbf{G}$ can now be compared as categorical probability distributions.
We thus define the strongly-supervised matching loss as the sum of cross-entropy values between them:
\begin{align}
\label{one_hot_gce}
\mathcal{L}_\mathrm{match} = - \frac{1}{M} \sum_{m=1}^{ M} \omega_{m} \sum_{j = 1}^{|\mathcal{H}'|}\mathbf{G}_{m j}\log\hat{\mathbf{C}}_{mj},
\end{align}
where $\omega_{m}$ is an importance weight for the $m$-th keypoint. The keypoint weight $\omega_{m}$ helps training by reducing the effect of the corresponding cross-entropy term if the Euclidean distance between the predicted keypoint $\hat{\mathbf{p}}'_m$ and the target keypoint $\mathbf{p}'_m$ is smaller than some threshold distance $\delta_\mathrm{thres}$:
\begin{align}
\label{weighting_term}
\omega_{m} =
\begin{dcases}
(\norm{\hat{\mathbf{p}}'_m-\mathbf{p}'_m} / \delta_{\mathrm{thres}})^2 & \text{if} \quad \norm{\hat{\mathbf{p}}'_m-\mathbf{p}'_m} < \delta_{\mathrm{thres}}, \\
1 & \text{otherwise.}
\end{dcases}
\end{align}
The proposed objective for strongly-supervised learning can also be used for self-supervised learning with synthetic pairs~\cite{rocco17geocnn,paul2018attentive}\footnote{For example, we can obtain keypoint annotations for free by forming a synthetic pair by applying random geometric transformation (\emph{e.g.}, affine or TPS~\cite{donato2002approximate}) on an image and then sampling some corresponding points between the original image and the warped image using the transformation applied.}, which typically results in trading off the cost of supervision against the generalization performance.
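Putting Eq.~(\ref{one_hot_gce}) and Eq.~(\ref{weighting_term}) together, the strongly-supervised loss can be sketched as follows (a minimal NumPy version; function and argument names are ours, and the row standardization and softmax follow the description above).

```python
import numpy as np

def strong_match_loss(C_hat, G, pred_kp, gt_kp, delta=10.0, eps=1e-12):
    """Keypoint-weighted cross-entropy between the row-wise softmax of the
    standardized correlation rows C_hat and the one-hot ground truth G.

    C_hat, G : (M, H_prime);  pred_kp, gt_kp : (M, 2).
    """
    Z = (C_hat - C_hat.mean(1, keepdims=True)) / (C_hat.std(1, keepdims=True) + eps)
    E = np.exp(Z - Z.max(1, keepdims=True))
    P = E / E.sum(1, keepdims=True)                    # row-wise softmax
    d = np.linalg.norm(pred_kp - gt_kp, axis=1)
    w = np.where(d < delta, (d / delta) ** 2, 1.0)     # importance weight omega_m
    ce = -(G * np.log(P + eps)).sum(1)                 # per-keypoint cross-entropy
    return float((w * ce).mean())
```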
\smallbreak
\noindent \textbf{Learning with weak supervision.} In this setup, we assume that only image-level labels are given for each image pair as either positive (the same class) or negative (different class), as in~\cite{huang2019dynamic,rocco2018neighbourhood}.
Let us denote the correlation matrix of a positive pair by $\mathbf{C_{+}}$ and that of a negative pair by $\mathbf{C_{-}}$.
For $\mathbf{C} \in \mathbb{R}^{|\mathcal{H}| \times |\mathcal{H}'|}$, we define its correlation entropy as
$s(\mathbf{C}) = -\frac{1}{|\mathcal{H}|}\sum_{i=1}^{|\mathcal{H}|}\sum_{j=1}^{|\mathcal{H}'|}\phi(\mathbf{C})_{ij}\log\phi(\mathbf{C})_{ij}$
where $\phi(\cdot)$ denotes row-wise L1-normalization. Higher correlation entropy indicates less distinctive correspondences between the two images. As illustrated in Fig.~\ref{fig:loss_computation}b, assuming that the positive images are likely to contain more distinctive correspondences, we encourage low entropy for positive pairs and high entropy for negative pairs.
The weakly-supervised matching loss is formulated as
\begin{align}
\mathcal{L}_\mathrm{match} = \frac{s(\mathbf{C_{+}}) + s(\mathbf{C_{+}^{\top}})} {s(\mathbf{C_{-}}) + s(\mathbf{C_{-}^{\top}})}.
\end{align}
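The correlation entropy $s(\mathbf{C})$ and the weakly-supervised loss admit a direct implementation; the sketch below (names assumed) normalizes rows, accumulates the entropy, and forms the positive-to-negative ratio in both matching directions.

```python
import numpy as np

def corr_entropy(C, eps=1e-12):
    """s(C): mean row entropy after row-wise L1 normalization."""
    P = C / (C.sum(axis=1, keepdims=True) + eps)
    return float(-(P * np.log(P + eps)).sum() / C.shape[0])

def weak_match_loss(C_pos, C_neg):
    """Encourage low entropy (distinctive matches) for positive pairs and
    high entropy for negative pairs, in both matching directions."""
    return (corr_entropy(C_pos) + corr_entropy(C_pos.T)) / \
           (corr_entropy(C_neg) + corr_entropy(C_neg.T))
```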
\smallbreak
\noindent \textbf{Layer selection loss.}
Following the work of~\cite{veit2018convolutional}, we add a soft constraint in our training objective to encourage the network to select each layer at a certain rate:
$\mathcal{L}_\mathrm{sel} = \sum_{l=0}^{L-1} (\Bar{z}_l - \mu)^2$,
where $\Bar{z}_l$ is the fraction of image pairs within a mini-batch for which the $l$-th layer is selected and $\mu$ is a hyperparameter for the target selection rate.
This improves training by increasing diversity in layer selection and, as will be seen in our experiments, allows us to trade off between accuracy and speed in testing.
Finally, the training objective of our model is defined as the combination of the matching loss (either strong or weak) and the layer selection loss: $\mathcal{L} = \mathcal{L_{\mathrm{match}}} + \mathcal{L_{\mathrm{sel}}}$.
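Both the selection term and the final combination are simple to compute; a sketch follows, where the default target rate mu=0.3 is an arbitrary placeholder rather than a value used in the paper.

```python
import numpy as np

def layer_selection_loss(z_bar, mu=0.3):
    """z_bar[l]: fraction of pairs in the mini-batch selecting layer l;
    mu: target selection rate (hyperparameter, value assumed here)."""
    z = np.asarray(z_bar, dtype=float)
    return float(((z - mu) ** 2).sum())

def total_loss(match_loss, z_bar, mu=0.3):
    """L = L_match + L_sel (match_loss is either the strong or weak loss)."""
    return match_loss + layer_selection_loss(z_bar, mu)
```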
\section{Related work}
\noindent \textbf{Feature representation for semantic correspondence.}
Early approaches~\cite{bristow2015dense,cho2015unsupervised,ham2016proposal,kim2013deformable,liu2011sift,taniai2016joint,yang2017object} tackle the problem of visual correspondence using hand-crafted descriptors such as HOG~\cite{dalal2005histograms} and SIFT~\cite{lowe2004sift}.
Since these lack high-level image semantics, the corresponding methods have difficulty with significant changes in background, viewpoint, deformations, and instance-specific patterns.
The advent of convolutional neural networks (CNNs)~\cite{he2016deep,krizhevsky2012imagenet} has led to a paradigm shift from these hand-crafted representations to deep features and boosted performance in visual correspondence~\cite{fathy2018hierarchical,novotny2017anchornet,zhou2016learning}.
Most approaches~\cite{choy2016universal,han2017scnet,kim2017fcss,rocco2018neighbourhood} learn to predict correlation scores between local regions in an input image pair, and some recent methods~\cite{jeon2018parn,kanazawa2016warpnet,kim2018recurrent,rocco17geocnn,rocco18weak,paul2018attentive} cast this task as an image alignment problem in which a model learns to regress global geometric transformation parameters.
All typically adopt a CNN pretrained on image classification as their backbone, and make predictions based on features from its final convolutional layer.
While some methods~\cite{long2014convnets,zeiler2014visual} have demonstrated the advantage of using different CNN layers to capture low-level to high-level patterns, leveraging multiple layers of a deep network has remained largely unexplored in correspondence problems.
\smallbreak
\noindent \textbf{Multi-layer neural features.}
To capture different levels of information distributed over all intermediate layers, Hariharan \emph{et al.} propose the hypercolumn~\cite{hariharan2015hypercolumns}, a vector of multiple intermediate convolutional activations lying above a pixel for fine-grained localization. Attempts at integrating multi-level neural features have addressed object detection and segmentation~\cite{kong2016hypernet,lin2017feature,liu2018receptive}. In the area of visual correspondence, only a few methods~\cite{min2019hyperpixel,novotny2017anchornet,ufer2017deep} attempt to use multi-layer features. Unlike ours, however, these models use static features extracted from CNN layers that are chosen manually~\cite{novotny2017anchornet,ufer2017deep} or by greedy search~\cite{min2019hyperpixel}. While the use of hypercolumn features on the task of semantic visual correspondence has recently been explored by Min~\emph{et al.} \cite{min2019hyperpixel}, the method predefines hypercolumn layers by a greedy selection procedure, \emph{i.e.}, beam search, using a validation dataset.
In this work, we clearly demonstrate the benefit of a dynamic and learnable architecture in both strongly-supervised and weakly-supervised regimes, and also outperform the work of~\cite{min2019hyperpixel} by a significant margin.
\smallbreak
\noindent \textbf{Dynamic neural architectures.}
Recently, dynamic neural architectures have been explored in different domains.
In visual question answering, neural module networks~\cite{andreas2016learning,andreas2016neural} compose different answering networks conditioned on an input sentence.
In image classification, adaptive inference networks~\cite{figurnov2017spatially,srivastava2015highway,veit2018convolutional} learn to decide whether to execute or bypass intermediate layers given an input image.
Dynamic channel pruning methods~\cite{gao2018dynamic,hua2018channel} skip unimportant channels at run-time to accelerate inference. All these methods reveal the benefit of dynamic neural architectures in terms of either accuracy or speed, or both.
To the best of our knowledge, our work is the first that explores a dynamic neural architecture for visual correspondence.
Our main contribution is threefold: (1) We introduce a novel dynamic feature composition approach to visual correspondence that composes features on the fly by selecting relevant layers conditioned on images to match. (2) We propose a trainable layer selection architecture for hypercolumn composition using Gumbel-softmax feature gating. (3) The proposed method outperforms recent state-of-the-art methods on standard benchmarks of semantic correspondence in terms of both accuracy and speed.
\section{Introduction}
Visual correspondence is at the heart of image understanding with numerous applications such as object recognition, image retrieval, and 3D reconstruction~\cite{forsyth:hal-01063327}.
With recent advances in neural networks \cite{he2016deep,hu2017senet,huang2015dense,krizhevsky2012imagenet,simonyan2015vgg}, there has been significant progress in learning robust feature representation for establishing correspondences between images under illumination and viewpoint changes.
Currently, the de facto standard is to use as feature representation the output of deeply stacked convolutional layers in a trainable architecture.
Unlike in object classification and detection, however, such learned features have often achieved only modest performance gains over hand-crafted
ones~\cite{dalal2005histograms,lowe2004sift} in the task of visual correspondence~\cite{schonberger2017comparative}.
In particular, correspondence between images under large intra-class variations still remains an extremely challenging problem~\cite{choy2016universal,fathy2018hierarchical,han2017scnet,jeon2018parn,kanazawa2016warpnet,kim2018recurrent,kim2017fcss,kim2017dctm,lee2019sfnet,long2014convnets,min2019hyperpixel,novotny2017anchornet,rocco17geocnn,rocco18weak,rocco2018neighbourhood,paul2018attentive,ufer2017deep,zhou2016learning} while modern neural networks are known to excel at classification~\cite{he2016deep,huang2015dense}.
What do we miss in using deep neural features for correspondence?
Most current approaches for correspondence build on monolithic and static feature representations
in the sense that they use a specific feature layer, \emph{e.g.}, the last convolutional layer, and adhere to it regardless of the images to match.
Correspondence, however, is all about precise localization of corresponding positions, which requires visual features at different levels, from local patterns to semantics and context; in order to disambiguate a match on similar patterns, it is necessary to analyze finer details and larger context in the image.
Furthermore, relevant feature levels may vary with the images to match; the more we already know about images, the better we can decide which levels to use. In this aspect, conventional feature representations have fundamental limitations.
In this work, we introduce a novel approach to visual correspondence that dynamically composes effective features by leveraging relevant layers conditioned on the images to match.
Inspired by both multi-layer feature composition, \emph{i.e.}, hypercolumn, in object detection~\cite{hariharan2015hypercolumns,kong2016hypernet,lin2017feature,liu2018receptive} and adaptive inference architectures in classification~\cite{figurnov2017spatially,srivastava2015highway,veit2018convolutional}, we combine the best of both worlds for visual correspondence.
The proposed method learns to compose hypercolumn features on the fly by selecting a small number of relevant layers in a deep convolutional neural network.
At inference time, this dynamic architecture greatly improves matching performance in an adaptive and efficient manner.
We demonstrate the effectiveness of the proposed method on several benchmarks for semantic correspondence, \emph{i.e.}, establishing visual correspondences between images depicting different instances of the same object or scene categories, where due to large variations it may be crucial to use features at different levels.
\section{Introduction}
With the growing demand for security equipment and high-quality products, process monitoring has received tremendous attention in both academia and industry in the past decades. Fault detection, defined as the identification of abnormal operating conditions in real time, is an active topic in process monitoring. Data-driven approaches have been the mainstream for fault detection and control in recent years because they require neither a model nor a priori information~\cite{Yin1S,Yin2S}. Multivariate statistical process monitoring (MSPM) is a well-known data-driven approach, and has been widely used in complex industrial environments~\cite{Macgregor1995J,Mason2002R,Wang2018Y}.
Traditional MSPM methods, e.g., principal component analysis (PCA)~\cite{Wise1990B}, partial least squares (PLS)~\cite{Kresta_1991} and independent component analysis (ICA)~\cite{LeeJM2004}, take advantage of the Hotelling $T^2$ statistic in the principal component subspace or the squared prediction error (SPE) statistic in the residual subspace to monitor the sample stream~\cite{Qin_2003,Hotelling_1936}. Although these methods perform satisfactorily in the case of highly correlated multi-modal variables, they neglect the temporal correlation between consecutive samples. Consequently, they incur a large Type-II error (i.e., failing to reject a false null hypothesis).
To circumvent this limitation, the dynamic PCA (DPCA)~\cite{Ku_1995,YNDong2018}, the modified ICA (MICA)~\cite{CTong_2017,YWZhang2010,TongC2017} and various other recursive MSPM methods (e.g.,~\cite{Alcala_2009,LZhang_2016,ZWChen2017,Qingchao2017}) have been proposed thereafter. These methods usually add time-lagged variables in a sliding window to form a data matrix that captures the (local) dynamic characteristics of the underlying process. Compared with the traditional PCA or ICA, window-based methods distinguish better sample measurement from noise, thus offering a reliable avenue to address challenges associated with continuous processes~\cite{Wang_2010,SMZhang_2019}.
To further improve the performance of the above window-based methods, efficient extraction of high-order statistics of process variables is crucial~\cite{Choudhury_2004,Wang_2011,Shang_2017,Shang_2018,SMZhang_2019,BQZhou_2020}. Notable examples include statistics pattern analysis (SPA)~\cite{Wang_2010, Wang_2011}, recursive transformed component statistical analysis (RTCSA)~\cite{Shang_2017} and recursive dynamic transformed component statistical analysis (RDTCSA)~\cite{Shang_2018}. Different from traditional PCA and DPCA, which implicitly assume that the latent variables follow a multivariate Gaussian distribution, SPA integrates the skewness, the kurtosis, and various other high-order statistics of the process measurement in sliding windows to deal with non-Gaussian data, demonstrating superior performance over PCA and DPCA. However, SPA performs poorly in the case of incipient faults~\cite{Shang_2017}. To address this limitation, RTCSA and RDTCSA avoid dividing the projected space into a principal component subspace and a residual subspace. Instead, both methodologies take advantage of the full space to extract orthogonal transformed components (TCs), and evaluate a test statistic by incorporating the mean, the variance, the skewness, and the kurtosis of the TCs. One should note that the third- and fourth-order information is usually beneficial for detecting incipient faults~\cite{Choudhury_2004,Wang_2010,Wang_2011,Shang_2017,Shang_2018,SMZhang_2019}. Although RTCSA and RDTCSA enjoy solid mathematical foundations, the TCs from a covariance matrix only capture linear relationships among different dimensions of the measurement. Therefore, a reliable way to extract nonlinear statistics among different dimensions of measurements becomes a pivotal problem in fault detection~\cite{Jia_2016,Lv_2018,Chang_2015}.
The application of information theory to fault detection is an emerging and promising topic~\cite{Bazan_2017,XiaoZhao_2017}. Although there are a few early efforts that attempt to shed light on fault detection with information-theoretic concepts, they simply employ (an approximation to) the MI to select a subset of the most informative variables to circumvent the curse of dimensionality (e.g.,~\cite{Verron_2008,Jiang_2011,MMRashid2012,Yu_2013,Jiang_2018,Joshi_2005}). To the best of our knowledge, there are only two exceptions that illuminate the potential of information-theoretic concepts for fault detection beyond the role of variable selection; unfortunately, neither presents a specific statistical analysis~\cite{Jiang_2018,Joshi_2005}. Therefore, the design from first principles of a fault detection method using information theory remains an open problem\footnote{Note that this work does not use the physical significance of entropy, which was initially introduced in thermodynamics. According to Boltzmann, entropy can be expressed as $S=-k\ln p$, where $k$ is the Boltzmann constant and $p$ is the thermodynamic probability. Instead, this work is based on the information entropy introduced by Shannon in 1948~\cite{shannon1948mathematical}, which measures the uncertainty of a signal source in a transmission system.}. The detailed contributions of this work are multi-fold:
\begin{itemize}
\item \textbf{Novel methodology:} We construct a MI matrix to monitor the (possibly nonlinear) dynamics and the non-stationarity of the fault process. A novel fault detection method, i.e., projections of mutual information matrix (PMIM), is also developed thereafter.
\item \textbf{Novel estimator:} Unlike previous information-theoretic fault detection methods which usually use the classical Shannon entropy functional that relies heavily on the precise estimation of underlying data distributions, we suggest using the recently proposed matrix-based R{\'e}nyi's $\alpha$-entropy functional to estimate MI values. The new estimator avoids estimation of the underlying probability density function (PDF), and employs the eigenspectrum of a (normalized) symmetric positive definite (SPD) matrix. This intriguing property makes the novel estimator easily applicable to real-world complex industrial process which usually contains continuous, discrete and even mixed variables.
\item \textbf{Detection accuracy:} Experiments on both synthetic data and the benchmark Tennessee Eastman process (TEP) indicate that PMIM achieves comparable or slightly higher detection rates than state-of-the-art fault detection methods. Moreover, PMIM enjoys significantly lower false detection rate.
\item \textbf{Implementation details and reproducibility:} We elaborate the implementation details of fault detection using PMIM. We also illustrate the detectability of PMIM using the eigenspectrum of the MI matrix. For reproducible results, we provide key functions (in MATLAB 2019a) concerning PMIM in Appendix A. We also release a full demo of PMIM at \url{https://github.com/SJYuCNEL/Fault_detection_PMIM}.
\item \textbf{Interpretability:} Fault detection using PMIM can provide insights into the exact root variables that lead to the occurrence of a fault. In this sense, the result of fault detection using PMIM is interpretable, i.e., practitioners know which variable or specific sensor data causes the fault.
\end{itemize}
The remainder of this paper is organized as follows. We first introduce the definition of MI matrix and present its estimation with the matrix-based R{\'e}nyi's entropy functional in Section 2. We then describe our proposed fault detection using PMIM in Section 3, and elaborate its implementation details in Section 4. Experiments on both synthetic and TEP benchmark are performed in Section 5. We finally conclude this work and discuss future directions in Section 6.
\emph{Notations:} Throughout this paper, scalars are denoted by lowercase letters (e.g., $x$), vectors appear as lowercase boldface letters (e.g., $\mathbf{x}$), and matrices are indicated by uppercase letters (e.g., $X$). The $(i,j)$-th element of $X$ is represented by $X_{ij}$. If $X$ is a square matrix, then $X^{-1}$ denotes its inverse. $I$ stands for the identity matrix with compatible dimensions. The $i$-th row of a matrix $X$ is declared by the row vector $\mathbf{x}^i$, while the $j$-th column is indicated with the column vector $\mathbf{x}_j$. Moreover, superscript indicates time (or sample) index, subscript indicates variable index. For $\mathbf{x}\in \mathbb{R}^n$, the $\ell_p$-norm of $\mathbf{x}$ is defined as $\|\mathbf{x}\|_p\triangleq(\sum\limits_{i=1}^n|x_i|^p)^{\frac{1}{p}}$.
\section{The MI Matrix: Definition and Estimation}
\subsection{The Definition of MI matrix}
MI quantifies the nonlinear dependence between two random variables~\cite{Cover1991,Latham2009}. Therefore, given a multivariate time series (here, the fault process), an MI matrix (in a stationary environment) can be constructed by evaluating the MI between each pair of variables. Intuitively, the MI matrix can be viewed as a nonlinear extension of the classical covariance matrix. Specifically, the formal definition of the MI matrix is given as follows.
\noindent\textbf{Definition 1.} Given an $m$-dimensional (stationary) process $\wp$, let us denote by $\mathbf{x}_i$ ($i=1,2,\cdots,m$) the $i$-th dimension of the process measurement; then the MI matrix over $\wp$ is defined as:
\begin{equation}\label{eq_MIdefine}
M=\begin{bmatrix}
I(\mathbf{x}_{1}; \mathbf{x}_{1}) & I(\mathbf{x}_{1}; \mathbf{x}_{2}) & \cdots & I(\mathbf{x}_{1}; \mathbf{x}_{m})\\
I(\mathbf{x}_{2}; \mathbf{x}_{1}) & I(\mathbf{x}_{2}; \mathbf{x}_{2}) & \cdots & I(\mathbf{x}_{2}; \mathbf{x}_{m})\\
\vdots & \vdots & \ddots & \vdots \\
I(\mathbf{x}_{m}; \mathbf{x}_{1}) & I(\mathbf{x}_{m}; \mathbf{x}_{2}) & \cdots & I(\mathbf{x}_{m}; \mathbf{x}_{m})\\
\end{bmatrix} \in \mathbb{R}^{m \times m},
\end{equation}
where $I(\mathbf{x}_{i}; \mathbf{x}_{j})$ denotes MI between variables $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$.
According to Shannon information theory~\cite{shannon1948mathematical}, $I(\mathbf{x}_i;\mathbf{x}_j)$ is defined over the joint probability distribution of $\mathbf{x}_i$ and $\mathbf{x}_j$ (i.e., $p(\mathbf{x}_i,\mathbf{x}_j)$) and their respectively marginal distributions (i.e., $p(\mathbf{x}_i)$ and $p(\mathbf{x}_j)$). Specifically,
\begin{equation}
\label{eq_Shannon}
\begin{split}
I(\mathbf{x}_{i}; \mathbf{x}_{j}) &\!=\!\!\int\!\!\int\!\! p(\mathbf{x}_i,\mathbf{x}_j)\log\left(\frac{p(\mathbf{x}_i,\mathbf{x}_j)}{p(\mathbf{x}_i)p(\mathbf{x}_j)}\right)d\mathbf{x}_i d\mathbf{x}_j\\
& = - \int\!\!\left(\!\!\int\!\!p(\mathbf{x}_i,\mathbf{x}_j)d\mathbf{x}_j\!\!\right)\!\!\log p(\mathbf{x}_i)d\mathbf{x}_i\!\!-\!\!\int\!\!\left(\!\!\int\!\! p(\mathbf{x}_i,\mathbf{x}_j)d\mathbf{x}_i\!\!\right)\!\!\log p(\mathbf{x}_j)d\mathbf{x}_j \\
& ~~~~ + \int\int p(\mathbf{x}_i,\mathbf{x}_j)\log p(\mathbf{x}_i,\mathbf{x}_j)d\mathbf{x}_i d\mathbf{x}_j\\
& = - \int\!\!p(\mathbf{x}_i)\log p(\mathbf{x}_i)d\mathbf{x}_i\!-\!\!\int\!\!p(\mathbf{x}_j)\log p(\mathbf{x}_j)d\mathbf{x}_j\!+\!\!\int\!\!\int\!\! p(\mathbf{x}_i,\mathbf{x}_j)\log p(\mathbf{x}_i,\mathbf{x}_j)d\mathbf{x}_i d\mathbf{x}_j\\
&\!\!=\!\!H(\mathbf{x}_{i})\!+\!H(\mathbf{x}_{j})\!-\!H(\mathbf{x}_{i}, \mathbf{x}_{j}),
\end{split}
\end{equation}
where $H(\cdot)$ denotes the entropy and $H(\cdot, \cdot)$ denotes the joint entropy. In particular, $I(\mathbf{x}_{i}; \mathbf{x}_{i})=H(\mathbf{x}_{i})$.
Theoretically, the MI matrix is symmetric and non-negative\footnote{By applying Jensen's inequality, we have \\ $I(\mathbf{x}_i; \mathbf{x}_j)\!\!=\!\!\int\!\!\!\int p(\mathbf{x}_i, \mathbf{x}_j)\log\left(\frac{p(\mathbf{x}_i, \mathbf{x}_j)}{p(\mathbf{x}_i)p(\mathbf{x}_j)}\right)d\mathbf{x}_i d\mathbf{x}_j \geq \!\!-\!\!\log\left(\int\!\!\!\int p(\mathbf{x}_i,\mathbf{x}_j)\frac{p(\mathbf{x}_i)p(\mathbf{x}_j)}{p(\mathbf{x}_i,\mathbf{x}_j)}d\mathbf{x}_i d\mathbf{x}_j\right) \!\!=\!\! -\log\left(\int\!\!\!\int p(\mathbf{x}_i)p(\mathbf{x}_j)d\mathbf{x}_i d\mathbf{x}_j\right)=0$.}. Moreover, in the absence of any dependence between pairwise variables, the MI matrix reduces to a diagonal matrix with the entropy of each variable on the main diagonal. Interestingly, although the estimated MI matrix has been conjectured, and also observed in our application, to be positive semidefinite, this property does not always hold theoretically~\cite{Jakobsen_2014}.
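As a concrete, if crude, illustration of Definition 1, the MI matrix can be estimated with a histogram plug-in estimator of Eq.~(\ref{eq_Shannon}); the bin count below is an arbitrary choice, and Section 2.2 replaces such PDF-based estimates with the matrix-based R{\'e}nyi functional precisely because they are unreliable for continuous and mixed variables.

```python
import numpy as np

def shannon_mi(x, y, bins=8):
    """Histogram plug-in estimate of I(x; y) (in nats) for 1-D samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mi_matrix(X, bins=8):
    """M_ij = I(x_i; x_j) over the columns (variables) of X."""
    m = X.shape[1]
    return np.array([[shannon_mi(X[:, i], X[:, j], bins)
                      for j in range(m)] for i in range(m)])
```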
\subsection{Estimate MI matrix with matrix-based R{\'e}nyi's \texorpdfstring{$\alpha$}{Lg}-order entropy}
Entropy measures the uncertainty in a random variable using a single scalar quantity~\cite{Principe_2010,Cover_2017}. For a random variable (or vector) $\mathbf{x}$, with probability density function (PDF) $p(\mathbf{x})$ in a finite set $\mathbf{s}$, a natural extension of the Shannon's differential entropy is the R{\'e}nyi's $\alpha$-order entropy~\cite{Lennert_2013}:
\begin{equation}\label{eq_renyi}
H_{\alpha}(\mathbf{x}) =\frac{1}{1-\alpha}\log \int_{\mathbf{s}} p^{\alpha}(\mathbf{x})d\mathbf{x}.
\end{equation}
It is well-known that, when $\alpha\to 1$, Eq.~(\ref{eq_renyi}) reduces to the basic Shannon's differential entropy\footnote{A simple proof by applying L'H\^{o}pital's rule at $\alpha=1$ is shown in~\cite{bromiley2004shannon}.} $H(\mathbf{x})=-\int _{\mathbf{s}} p(\mathbf{x}) \log p(\mathbf{x})d\mathbf{x}$. In this perspective, R{\'e}nyi's entropy makes a one-parameter generalization of the basic Shannon definition by introducing a hyperparameter $\alpha$.
Information theory has been successfully applied to various machine learning, computer vision and signal processing tasks~\cite{Principe_2010,yu2019multivariate}. Unfortunately, the need for accurate PDF estimation in Eq.~(\ref{eq_renyi}) on continuous and complex data impedes its more widespread adoption in data-driven science. This problem becomes more severe for process monitoring, since the obtained multivariate measurements may contain both discrete and continuous variables. Moreover, there is still no universal agreement on the definition of MI between discrete and continuous variables~\cite{ross2014mutual,gao2017estimating}, let alone its precise estimation.
In this work, we use a novel estimator developed by S{\'a}nchez Giraldo \emph{et~al}.~\cite{Sanchez_2015} to estimate the MI matrix. Specifically, according to~\cite{yu2019multivariate,Sanchez_2015}, it is feasible to evaluate a quantity that resembles quantum R{\'e}nyi's entropy~\cite{Lennert_2013} in terms of the normalized eigenspectrum of the Hermitian matrix of the projected data in reproducing kernel Hilbert space (RKHS), thus estimating the entropy directly from data without PDF estimation. For completeness, we provide below S{\'a}nchez Giraldo \emph{et al}.'s definition on entropy and joint entropy.
\noindent\textbf{Definition 2.} Let $\kappa:\chi \times \chi \mapsto \mathbb{R}$ be a real valued positive definite kernel that is also infinitely divisible~\cite{Bhatia_2006}. Given $\{\mathbf{x}_{i}\}_{i=1}^{n}\in \chi$, each $\mathbf{x}_i$ can be a real-valued scalar or vector, and the Gram matrix $K$ obtained from evaluating a positive definite kernel $\kappa$ on all pairs of exemplars, that is $K=\kappa(\mathbf{x}_{i}, \mathbf{x}_{j})$, a matrix-based analogue to R{\'e}nyi's $\alpha$-entropy for a normalized positive definite matrix $A$ of size $n\times n$, such that $\tr(A)=1$, can be given by the following functional:
\begin{equation}\label{def_entropy}
H_{\alpha}(A)=\frac{1}{1-\alpha}\log_{2} \left(\tr(A^{\alpha})\right)=\frac{1}{1-\alpha}\log_{2}\left(\sum_{i=1}^{n}\lambda _{i}(A)^{\alpha}\right),
\end{equation}
where $A$ is the normalized version of $K$, i.e., $A=K/{\text{tr}(K)}$, and $\lambda _{i}(A)$ denotes the $i$-th eigenvalue of $A$.
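As an illustration of Definition 2, the following sketch (the helper names are ours; $\sigma=1$ and $\alpha=1.01$ are arbitrary illustrative choices, not values prescribed by the paper) estimates the matrix-based entropy of a 1-D sample directly from the eigenspectrum of its normalized Gram matrix:

```python
import numpy as np

def gram_rbf(x, sigma=1.0):
    """Gram matrix of a 1-D sample x under the (infinitely divisible) RBF kernel."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def matrix_renyi_entropy(K, alpha=1.01):
    """Matrix-based Renyi alpha-entropy (bits) of a Gram matrix K (Definition 2)."""
    A = K / np.trace(K)             # normalize so that tr(A) = 1
    lam = np.linalg.eigvalsh(A)     # eigenspectrum of the symmetric PSD matrix
    lam = lam[lam > 1e-12]          # discard numerically zero eigenvalues
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(0)
x = rng.normal(size=100)
H = matrix_renyi_entropy(gram_rbf(x))
```

Since the eigenvalues of $A$ sum to one, the estimate is non-negative and bounded by $\log_2 n$ for a window of $n$ samples.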
\noindent\textbf{Definition 3.} Given $n$ pairs of samples $(\mathbf{x}_{i}, \mathbf{y}_{i})_{i=1}^{n}$, where each sample contains two different types of measurements $\mathbf{x}\in\chi$ and $\mathbf{y}\in\gamma$ obtained from the same realization, and the positive definite kernels $\kappa_{1}:\chi\times\chi\mapsto\mathbb{R}$ and $\kappa_{2}:\gamma\times\gamma\mapsto\mathbb{R}$, a matrix-based analogue to R{\'e}nyi's $\alpha$-order joint entropy can be defined as:
\begin{equation}\label{def_joint_entropy}
H_{\alpha}(A,B)=H_{\alpha}\left(\frac{A \circ B}{\tr(A \circ B)}\right),
\end{equation}
where $A_{ij}=\kappa_{1}(\mathbf{x}_{i}, \mathbf{x}_{j})$, $B_{ij}=\kappa_{2}(\mathbf{y}_{i}, \mathbf{y}_{j})$, and $A\circ B$ denotes the Hadamard product between the matrices $A$ and $B$.
Given Eqs.~(\ref{def_entropy})-(\ref{def_joint_entropy}), the matrix-based R{\'e}nyi's $\alpha$-order MI $I_{\alpha}(A; B)$ in analogy of Shannon's MI is given by:
\begin{equation}\label{def_mutual}
I_{\alpha}(A;B)=H_{\alpha}(A)+H_{\alpha}(B)-H_{\alpha}(A,B).
\end{equation}
Throughout this paper, we use the Gaussian kernel $\kappa(\mathbf{x}_{i},\mathbf{x}_{j})=\exp(-\frac{\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2}}{2\sigma ^{2}})$ to obtain the Gram matrices. Obviously, Eq.~(\ref{def_mutual}) avoids real-valued PDF estimation and imposes no additional requirement on data characteristics (e.g., continuous, discrete, or mixed), which gives it great potential in our application.
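Putting Definitions 2 and 3 together, a minimal, self-contained sketch of the MI estimator in Eq.~(\ref{def_mutual}) might look as follows (helper names and the settings $\alpha=1.01$, $\sigma=1$ are our illustrative choices):

```python
import numpy as np

def gram_rbf(x, sigma=1.0):
    """Gram matrix of a 1-D sample x under the Gaussian (RBF) kernel."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def H_alpha(K, alpha=1.01):
    """Matrix-based Renyi alpha-entropy (bits) from the eigenspectrum of K/tr(K)."""
    A = K / np.trace(K)
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def renyi_mi(x, y, alpha=1.01, sigma=1.0):
    """I_alpha(A;B) = H_alpha(A) + H_alpha(B) - H_alpha(A,B); the joint term
    uses the Hadamard product of the two Gram matrices (Definition 3)."""
    A, B = gram_rbf(x, sigma), gram_rbf(y, sigma)
    return H_alpha(A, alpha) + H_alpha(B, alpha) - H_alpha(A * B, alpha)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y_dep = x ** 2 + 0.1 * rng.normal(size=200)   # nonlinearly dependent on x
y_ind = rng.normal(size=200)                  # independent of x
```

On such toy data, the estimated MI between the nonlinearly dependent pair clearly exceeds that of the independent pair, without any PDF estimation or discretization.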
\section{Fault Detection Using PMIM}
In this section, we present PMIM, a novel fault detection method by monitoring the statistics associated with the MI matrix. Given a discrete time process $\aleph=\{\mathbf{x}^1,\mathbf{x}^2,\cdots\}:\mathbf{x}^{i} \in \mathbb{R}^{1 \times m}$, at each time instant $k$, we construct a local sample matrix $X^k\in \mathbb{R}^{w\times m}$ of the following form:
\begin{equation}\label{eq_sample_matrix}
\begin{split}
X^k &=\begin{bmatrix}
\mathbf{x}^{k-w+1}\\
\mathbf{x}^{k-w+2}\\
\vdots \\
\mathbf{x}^{k}
\end{bmatrix}
=\begin{bmatrix}
x^{k-w+1}_1& x^{k-w+1}_2& \cdots& x^{k-w+1}_m\\
x^{k-w+2}_1& x^{k-w+2}_2& \cdots& x^{k-w+2}_m\\
\vdots& \vdots& \ddots& \vdots\\
x^{k}_1& x^{k}_2& \cdots& x^{k}_m
\end{bmatrix}\\
&\triangleq
\left[
\begin{array}{c;{2pt/2pt}c;{2pt/2pt}c;{2pt/2pt}c}
\mathbf{x}_1 & \mathbf{x}_2 &\cdots & \mathbf{x}_m
\end{array}
\right]
\in \mathbb{R}^{w \times m},
\end{split}
\end{equation}
where $\mathbf{x}_j$ ($1\leq j\leq m$) denotes the $j$-th dimensional variable that is characterized by $w$ realizations. Fig.~\ref{fig:windowX} illustrates $\mathbf{x}^i$, $\mathbf{x}_j$ and $X$. Each variable is mean centered and normalized to $[0,1]$ to account for different value ranges~\cite{Macgregor1995J,Mason2002R,Wang2018Y,Wise1990B,Kresta_1991}. Then the MI matrix $M$ at time instant $k$ is given by:
\begin{equation} \label{eq_MI_matrix}
M=\begin{bmatrix}
H(\mathbf{x}_{1}) & I(\mathbf{x}_{1}; \mathbf{x}_{2}) & \cdots & I(\mathbf{x}_{1}; \mathbf{x}_{m})\\
I(\mathbf{x}_{2}; \mathbf{x}_{1}) & H(\mathbf{x}_{2}) & \cdots & I(\mathbf{x}_{2}; \mathbf{x}_{m})\\
\vdots & \vdots & \ddots & \vdots \\
I(\mathbf{x}_{m}; \mathbf{x}_{1}) & I(\mathbf{x}_{m}; \mathbf{x}_{2}) & \cdots & H(\mathbf{x}_{m})
\end{bmatrix} \in \mathbb{R}^{m \times m}.
\end{equation}
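A self-contained sketch of assembling this MI matrix for one window, using the matrix-based R{\'e}nyi estimator (the helper names and the values $\alpha=1.01$, $\sigma=1$ are our illustrative choices):

```python
import numpy as np

def gram_rbf(x, sigma=1.0):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def H_alpha(K, alpha=1.01):
    A = K / np.trace(K)
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def mi_matrix(X, alpha=1.01, sigma=1.0):
    """MI matrix of one window X (w samples x m variables), cf. Eq. (eq_MI_matrix)."""
    _, m = X.shape
    grams = [gram_rbf(X[:, j], sigma) for j in range(m)]
    h = [H_alpha(G, alpha) for G in grams]
    M = np.empty((m, m))
    for i in range(m):
        M[i, i] = h[i]                                   # entropy on the diagonal
        for j in range(i + 1, m):
            joint = H_alpha(grams[i] * grams[j], alpha)  # Hadamard-product joint entropy
            M[i, j] = M[j, i] = h[i] + h[j] - joint
    return M

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))   # toy window: w = 100 samples, m = 4 variables
M = mi_matrix(X)
```

By construction the matrix is symmetric with variable entropies on the main diagonal, matching the structure above.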
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\textwidth]{matrix_organization.png}
\caption{Local sample matrix with a sliding window of size $w$.}\label{fig:windowX}
\end{figure}
The general idea of our method is that $M$ contains all the nonlinear dependencies between any pair of variables of the monitored process at time instant $k$. In a stationary environment, any quantities or statistics associated with $M$ should remain unchanged or stable. However, an abrupt fault may affect the values of one or more entries of the MI matrix, thus altering the quantities or statistics we extract from it.
Prior art suggests that those reliable quantities can be extracted from the orthogonal space spanned by eigenvectors of the sample covariance matrix (e.g.,~\cite{Wise1990B,Kresta_1991,Ku_1995,Lee_2006,Shang_2017,Shang_2018,BQZhou_2020}).
Motivated by this idea, suppose the eigenvalue decomposition of the MI matrix is given by $M=P\Lambda P^{-1}$, where $P\in\mathbb{R}^{m\times m}$ is the matrix of eigenvectors and $\Lambda = \diag(\lambda_1,\lambda_2,\cdots,\lambda_m)\in \mathbb{R}^{m \times m}$ is a diagonal matrix with the eigenvalues on the main diagonal. Then, a new representation of $X$ (denoted $T$) in the orthogonal space spanned by the column vectors of $P$ can be expressed as:
\begin{equation} \label{eq_TC}
T= X P\triangleq
\begin{bmatrix}
\mathbf{t}^{k-w+1} \\ \mathbf{t}^{k-w+2} \\ \vdots \\ \mathbf{t}^k
\end{bmatrix}
\in \mathbb{R}^{w \times m}.
\end{equation}
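Eq.~(\ref{eq_TC}) is a plain eigendecomposition-and-projection step; a minimal sketch follows (to keep the block self-contained we use a covariance matrix as a stand-in for the MI matrix $M$, and the function name is ours):

```python
import numpy as np

def transform_components(X, M):
    """Project the window X onto the eigenvectors of the symmetric matrix M,
    cf. Eq. (eq_TC): T = X P with M = P diag(lambda) P^T."""
    _, P = np.linalg.eigh(M)   # eigh: P is orthogonal for symmetric M
    return X @ P

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 4))
M = np.cov(X, rowvar=False)    # stand-in for the MI matrix in this sketch
T = transform_components(X, M)
```

Because $P$ is orthogonal, the projection preserves the Frobenius norm of the window.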
We term the column vectors of $T$ the mutual information based transform components (MI-TCs). The terminology of transform components (TCs) originates from~\cite{Wise1990B,Kresta_1991,Shang_2017} and is defined over the sample covariance matrix $C=\frac{1}{w-1}X^{T}X$. Specifically, suppose $P_{C}$ and $\Lambda_{C}$ are respectively the eigenvectors and eigenvalues of $C$, i.e., $C=P_{C}\Lambda_{C} {P_{C}}^{-1}$, then the original TCs of $X$ are given by $T_{C}=XP_{C}\in \mathbb{R}^{w\times m}$.
Compared with the MI matrix $M$, the covariance matrix $C$ only captures the linear dependence (correlation) between pairwise dimensions of the normalized measurement~\cite{Shang_2017}. By contrast, the MI matrix $M$ operates on the full PDF information between pairs of variables and makes no assumption on the joint distribution of the measurement or on the nature of the relationship between pairwise dimensions. Moreover, it can readily identify nonlinear and non-monotonic dependencies~\cite{InceRAZ2017},
which are common in industrial processes~\cite{Hotelling_1936,Shang_2017,BQZhou_2020,Dyson_1962}. See Fig.~\ref{fig:corMI} for a few concrete examples of the advantage of MI over linear correlation, in which linear correlation fails completely in quantifying nonlinear and non-monotonic effects (the bottom row).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.95\textwidth]{cormi}\\
\caption{Examples of correlation versus mutual information (MI) estimated by the classic Shannon discrete entropy functional $H(\mathbf{x})=-\sum \limits_{x\in \mathbf{x}}p(x)\log_{2}p(x)$, over 500 samples. Each panel illustrates a scatter plot of samples drawn from a particular bivariate distribution. For each example, the correlation between the two variables is shown in brown (left) and the MI is shown in red (right). The top row shows linear relationships, for which MI and correlation both detect a relationship (although on different scales). The bottom row shows a series of distributions for which the correlation is $0$, but the MI is significantly larger than $0$.}\label{fig:corMI}
\end{figure}
In each sliding window, we characterize $T$ with a detection index $\mathbf{\Theta}^{k}=[\mathbf{\mu}_{k}|\mathbf{\nu}_{k}|\mathbf{\zeta}_{k}|\mathbf{\gamma}_{k}]^{T}\in \mathbb{R}^{4m}$, which consists of the first-order statistic (i.e., the mean $\mathbf{\mu}_{k}=\mathbb{E}(\mathbf{t}^{k})$), the second-order statistic (i.e., the variance $\mathbf{\nu}_{k}=\mathbf{\sigma}_{k}^{2}=\mathbb{E}\left[({\mathbf{t}^{k}-\mathbf{\mu}_{k}})^2\right]$), the third-order statistic (i.e., the skewness $\mathbf{\zeta}_{k}=\mathbb{E}\left[\left(\frac{\mathbf{t}^{k}-\mathbf{\mu}_{k}}{\mathbf{\sigma}_{k}}\right)^3\right]$), and the fourth-order statistic (i.e., the excess kurtosis $\mathbf{\gamma}_{k}=\mathbb{E}\left[\left(\frac{\mathbf{t}^{k}-\mathbf{\mu}_{k}}{\mathbf{\sigma}_{k}}\right)^4\right]-3$).
Specifically, the empirical estimates of $\mathbf{\mu}_{k}$, $\mathbf{\nu}_{k}$, $\mathbf{\zeta}_{k}$ and $\mathbf{\gamma}_{k}$ are given by:
\begin{equation}\label{eq_mean}
\mathbf{\mu}_{k}= \frac{1}{w}\sum _{i=0}^{w-1}\mathbf{t}^{k-i}\in \mathbb{R}^{1 \times m},
\end{equation}
\begin{equation}\label{eq_variance}
\mathbf{\nu}_{k}=\frac{1}{w}\sum _{i=0}^{w-1}\left(\mathbf{t}^{k-i}-\mathbf{\mu}_{k}\right)^{2}\in \mathbb{R}^{1\times m},
\end{equation}
\begin{equation} \label{eq_skewness}
\mathbf{\zeta}_{k}=\frac{1}{w\mathbf{\sigma}_{k}^{3}}\sum _{i=0}^{w-1}\left(\mathbf{t}^{k-i}-\mathbf{\mu}_{k}\right)^{3}\in \mathbb{R}^{1 \times m},
\end{equation}
\begin{equation}\label{eq_kurtosis}
\mathbf{\gamma}_{k}=\frac{1}{w\mathbf{\sigma}_{k}^{4}}\sum _{i=0}^{w-1}\left(\mathbf{t}^{k-i}-\mathbf{\mu}_{k}\right)^{4}-3\in \mathbb{R}^{1 \times m}.
\end{equation}
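Eqs.~(\ref{eq_mean})-(\ref{eq_kurtosis}) are ordinary per-column moment estimates over the window; a sketch (the function name is ours; note the population normalization $1/w$, matching the equations above):

```python
import numpy as np

def window_statistics(T):
    """Per-column mean, variance, skewness and excess kurtosis of the TCs,
    cf. Eqs. (eq_mean)-(eq_kurtosis); all moments use 1/w normalization."""
    mu = T.mean(axis=0)
    sigma = T.std(axis=0)                 # sqrt of the 1/w variance
    nu = sigma ** 2
    z = (T - mu) / sigma                  # standardized TCs
    zeta = (z ** 3).mean(axis=0)          # skewness
    gamma = (z ** 4).mean(axis=0) - 3.0   # excess kurtosis
    return np.concatenate([mu, nu, zeta, gamma])   # Theta^k in R^{4m}

rng = np.random.default_rng(4)
T = rng.normal(size=(100, 4))             # stand-in TCs for one window (w=100, m=4)
theta = window_statistics(T)
```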
Note that $\mathbf{\mu}^{*}= \mathbb{E}\left[\mu_{k}\right]$ (the mean of the TCs under normal conditions) is used for the online calculation of the detection index. When a fault occurs, one or more of the four statistics (namely, $\mathbf{\mu}_k, \mathbf{\nu}_k, \mathbf{\zeta}_k$ and $\mathbf{\gamma}_k$) are expected to deviate significantly from their expectations.
Given $\Theta^k$, a similarity index for local sample matrix $X^k$ at time instant $k$ can be defined as:
\begin{equation}\label{similarity}
D^k=\|\Theta_{\sigma}^{-1}(\Theta^{k}-\Theta_{\mu})\|_{p},
\end{equation}
where $\Theta_{\mu}$ denotes the mean value of the detection index over the training data, and $\Theta_\sigma=\diag(\sigma_1,\sigma_2,\cdots,\sigma_{4m})$ denotes a diagonal matrix whose main diagonal consists of the standard deviation of each dimension of $\Theta^k$. An empirical method based on the training data is used to determine the upper control limit $D_{\text{cl}}$ at a given confidence level $\eta$~\cite{Wang_2010}. An online monitoring procedure is then used to quantify the dissimilarity of statistics between normal and abnormal states.
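A sketch of the similarity index in Eq.~(\ref{similarity}) together with an empirical control limit (we take the $\eta$-quantile of the training indices as one plausible reading of the empirical method; the detection indices here are random stand-ins, and all names are ours):

```python
import numpy as np

def similarity_index(theta, theta_mu, theta_sigma, p=2):
    """D = || diag(theta_sigma)^{-1} (theta - theta_mu) ||_p, cf. Eq. (similarity)."""
    return np.linalg.norm((theta - theta_mu) / theta_sigma, ord=p)

rng = np.random.default_rng(5)
Theta_train = rng.normal(size=(1000, 16))        # stand-in detection indices (4m = 16)
theta_mu = Theta_train.mean(axis=0)
theta_sigma = Theta_train.std(axis=0)
D_train = np.array([similarity_index(t, theta_mu, theta_sigma) for t in Theta_train])

eta = 0.99                                       # confidence level
D_cl = np.quantile(D_train, eta)                 # empirical upper control limit
```

By construction, roughly a fraction $1-\eta$ of the normal training indices exceed $D_{\text{cl}}$, which is the expected false-alarm budget.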
Algorithm 1 and Algorithm 2 summarize, respectively, the offline training and the online testing of our proposed PMIM.
\begin{algorithm} [!ht]
\caption{Fault detection using PMIM (training phase)}
\small
\label{PermutationAlg1}
\begin{algorithmic}[1]
\Require
Process measurements $\aleph=\{\mathbf{x}^{i}|\mathbf{x}^{i} \in \mathbb{R}^{m} \}_{i=1}^{n} $;
sliding window size $w$;
significance level $\eta$.
\Ensure
mean of the transform components (TCs) $\mathbf{\mu}^{*}$;
standard deviation $\Theta_{\sigma}$ of the detection index;
reference mean $\Theta_{\mu}$ of the detection index.
\For {$i = 1$ to $ n$}
\State Construct a local time-lagged matrix $X^{i}\in \mathbb{R}^{w\times m}$ at time instant $i$ by Eq.~(\ref{eq_sample_matrix});
\State Construct the MI matrix $M^{i}$ by Eq.~(\ref{eq_MI_matrix});
\State Obtain the TCs $T^{i}$ of $X^{i}$ by Eq.~(\ref{eq_TC});
\State Obtain the detection index $\Theta^{i}=[\mathbf{\mu}_{i}|\mathbf{\nu}_{i}|\mathbf{\zeta}_{i}|\mathbf{\gamma}_{i}]^{T}$ by Eqs.~(\ref{eq_mean})-(\ref{eq_kurtosis}).
\EndFor
\State Calculate the mean of the TCs $\mathbf{\mu}^{*}=\frac{1}{n}\sum\limits_{i=1}^{n}{\mathbf{\mu}_{i}}$, the reference mean $\Theta_{\mu}$ and the standard deviation $\Theta_{\sigma}$.
\For {$i = 1$ to $ n$}
\State $D^i=\|\Theta_{\sigma}^{-1}(\Theta^{i}-\Theta_{\mu})\|_{p}$.
\EndFor
\State Determine the control limit $D_{\text{cl}}$ at the significance level $\eta$. \\
\Return $\mathbf{\mu}^{*}$; $\Theta_{\sigma}$; $\Theta_{\mu}$; $D_{\text{cl}}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm} [!ht]
\caption{Fault detection using PMIM (testing phase)}
\small
\label{PermutationAlg2}
\begin{algorithmic}[1]
\Require
The online process measurement $\{\mathbf{x}_{\text{test}}^1,\mathbf{x}_{\text{test}}^2,\cdots\}$; sliding window size $w$;
mean of the transform components (TCs) $\mu^{*}$;
standard deviation $\Theta_{\sigma}$ of the detection index;
reference mean $\Theta_{\mu}$ of the detection index;
control limit $D_{\text{cl}}$.
\Ensure
\emph{Decision:} alarm or not.
\While {\text{End of process not reached}}
\State Construct a local time-lagged matrix $X_{\text{test}}^{i}\in \mathbb{R}^{w\times m}$ at time instant $i$ by Eq.~(\ref{eq_sample_matrix});
\State Construct the MI matrix $M_{\text{test}}^{i}$ by Eq.~(\ref{eq_MI_matrix});
\State Obtain the TCs $T_{\text{test}}^{i}$ of $ X_{\text{test}}^{i}$ by Eq.~(\ref{eq_TC});
\State Obtain the detection index $\Theta_{\text{test}}^{i}=[\mathbf{\mu}_{i}|\mathbf{\nu}_{i}|\mathbf{\zeta}_{i}|\mathbf{\gamma}_{i}]_{\text{test}}^{T}$ with the mean of the TCs $\mathbf{\mu}^{*}$;
\State Obtain the similarity index by $D_{\text{test}}^{i}=\|\Theta_{\sigma}^{-1}(\Theta_{\text{test}}^{i}-\Theta_{\mu})\|_{p}$;
\If {$D_{\text{test}}^{i} \geq D_{\text{cl}}$}
\State Alarm the occurrence of fault;
\State Identify the root variables that cause the fault;
\Else
\State $i=i+1$; Go back to Step 2.
\EndIf
\EndWhile \\
\Return \emph{Decision}
\end{algorithmic}
\end{algorithm}
\section{A Deeper Insight into the Implementation of PMIM}
In this section, we elaborate on the implementation details of PMIM. The discussion is based on a synthetic process with time-correlated dynamics~\cite{Shang_2017,Shang_2018}:
\begin{equation}
\mathbf{x}= A\mathbf{s}+\mathbf{e},
\end{equation}
where $\mathbf{x}\in \mathbb{R}^{m}$ is the vector of process measurements, $\mathbf{s}\in \mathbb{R}^{r}$ ($r<m$) contains the data sources, $\mathbf{e}\in \mathbb{R}^{m}$ is the noise, and $A\in \mathbb{R}^{m\times r}$ is the coefficient matrix, which is assumed to have full column rank~\cite{Shang_2018,Alcala_2009}. Let us assume the data sources satisfy the following relations:
\begin{equation}
s^k_i=\sum_{j=1}^{l}\beta_{i,j}v^{k-j+1}_i,
\end{equation}
where $s^k_i$ is the $i$-th variable at time $k$, $v^{k-j+1}_i$ represents the value of the $i$-th temporally independent Gaussian data source at time $k-j+1$, $\beta_{i,j}$ denotes the weight coefficient, and $l\geq 2$. Obviously, both $\mathbf{s}$ and $\mathbf{x}$ are time-correlated.
Here, the fault type of sensor bias\footnote{Other fault types, such as sensor precision degradation $\mathbf{x}^{*}=\eta \mathbf{x}$, gain degradation $\mathbf{x}^{*}=\mathbf{x}+\mathbf{\xi}_{m}\mathbf{e}^{[s]}$, additive process fault $\mathbf{x}=A(\mathbf{s}+\mathbf{\xi}_{m}\mathbf{f}^{[p]})+\mathbf{e}$ and dynamic changes $\tilde{\beta}=\beta+\triangle \beta$, can also be analyzed similarly.} is considered:
\begin{equation}
\mathbf{x}^{*}=\mathbf{x}+\mathbf{f},
\end{equation}
where $\mathbf{x}^{*}$ is the measurement under sensor bias, and $\mathbf{x}$ denotes the fault-free portion. In the following, we will show how $\mathbf{f}$ affects the matrix-based R{\'e}nyi's $\alpha$-order entropy.
The matrix-based R{\'e}nyi's $\alpha$-order entropy is a non-parametric measure of entropy. For the $p$-th variable with $w$ realizations,
we build its Gram matrix $K\in \mathbb{R}^{w\times w}$ (at time instant $k$) by projecting it into an RKHS with an infinitely divisible kernel\footnote{In this work, we simply use the radial basis function (RBF) kernel $G_{\sigma}(\cdot)=\exp(-\frac{\|\cdot\|^2}{2\sigma^2})$ as recommended in~\cite{Sanchez_2015,yu2019multivariate}.}:
\begin{equation}\label{K_normal}
\centering
K_{\mathbf{x}_p}=\begin{bmatrix}
1 &\exp\!\left(\!-\frac{(x^{k-w+1}_{p}-x^{k-w+2}_{p})^{2}}{2\sigma ^{2}}\!\right)\! &\cdots &\exp\!\left(\!-\frac{(x^{k-w+1}_{p}-x^{k}_{p})^{2}}{2\sigma ^{2}}\!\right)\! \\
\exp\!\left(\!-\frac{(x^{k-w+2}_{p}-x^{k-w+1}_{p})^{2}}{2\sigma ^{2}}\!\right)\! &1 &\cdots &\exp\!\!\left(\!\!-\frac{(x^{k-w+2}_{p}-x^{k}_{p})^{2}}{2\sigma ^{2}}\!\right)\! \\
\vdots &\vdots &\ddots &\vdots \\
\exp\!\left(\!-\frac{(x^{k}_{p}-x^{k-w+1}_{p})^{2}}{2\sigma ^{2}}\!\right)\! & \exp\!\left(\!-\frac{(x^{k}_{p}-x^{k-w+2}_{p})^{2}}{2\sigma ^{2}}\!\right)\! &\cdots &1\\
\end{bmatrix}.
\end{equation}
We normalize $K$ by its trace, i.e., $K=K/{\text{tr}(K)}$. It should be noted that the kernel-induced mapping can be understood as a means of computing high-order statistics\footnote{By the Taylor expansion of the RBF kernel, we have \\ $\kappa({x}^{i},{x}^{j})=\exp\left(-\gamma\|{x}^{i}-{x}^{j}\|^{2}\right)=\exp\left(-\gamma{x^i}^{2}\right)\exp\left(-\gamma{x^j}^{2}\right)\left(1+\frac{2\gamma{x}^{i}{x}^{j}}{1!}+\frac{(2\gamma{x}^{i}{x}^{j})^2}{2!}+\frac{(2\gamma{x}^{i}{x}^{j})^3}{3!}+\cdots\right)$, where $\gamma=\frac{1}{2\sigma^2}$.}.
Suppose the fault occurs exactly at the $p$-th variable, i.e., $\mathbf{x}_{p}^{*}=\mathbf{x}_{p}+\mathbf{f}$ and $\mathbf{f}=\{f^{k-w+1}, f^{k-w+2}, \cdots, f^{k}\}$. The $(i,j)$-th entry of the Gram matrix $K$ associated with $\mathbf{x}_{p}$ becomes:
\begin{equation}
\begin{split}
\exp\!\left(\!-\frac{||x_{p}^{i*}-x_{p}^{j*}||^{2}}{2\sigma ^{2}}\!\right)\!
&=\exp\!\left(\!-\frac{[(x_{p}^{i}+f^{i})-(x_{p}^{j}+f^{j})]^{2}}{2\sigma ^{2}}\!\right)\!\\
&=\exp\!\left(\!-\frac{[(x_{p}^{i}-x_{p}^{j})+(f^{i}-f^{j})]^{2}}{2\sigma ^{2}}\!\right)\!\\
&=\exp\!\left(\!-\frac{(x_{p}^{i}-x_{p}^{j})^{2}}{2\sigma ^{2}}\!\right)\! \exp\!\left(\!-\frac{(x_{p}^{i}-x_{p}^{j})(f^{i}-f^{j})}{\sigma ^{2}}\!\right)\! \exp\!\left(\!-\frac{(f^{i}-f^{j})^{2}}{2\sigma ^{2}}\!\right)\!,\\
\end{split}
\end{equation}
where $i$, $j$ are time indices. Therefore, the new Gram matrix $K_{\mathbf{x}_p}^*$ can be represented as:
\begin{equation}\label{K_abnormal}
K_{\mathbf{x}_p}^{*}=K_{\mathbf{x}_p}\circ K_{\langle\mathbf{x}_p,~\mathbf{f}\rangle}\circ K_{\mathbf{f}},
\end{equation}
where
\begin{tiny}
\begin{equation}
\begin{split}
&K_{\langle\mathbf{x}_p,~\mathbf{f}\rangle}=\\
&\begin{bmatrix}
1 &\exp\!\!\left(\!\!-\frac{(x_p^{k-w+1}-x_p^{k-w+2})(f^{k-w+1}-f^{k-w+2})}{\sigma ^{2}}\!\!\right)\!\! &\cdots & \exp\!\!\left(\!\!-\frac{(x_p^{k-w+1}-x_p^{k})(f^{k-w+1}-f^{k})}{\sigma ^{2}}\!\!\right)\!\! \\
\exp\!\!\left(\!\!-\frac{(x_p^{k-w+2}-x_p^{k-w+1})(f^{k-w+2}-f^{k-w+1})}{\sigma ^{2}}\!\!\right)\!\! &1 &\cdots & \exp\!\!\left(\!\!-\frac{(x_p^{k-w+2}-x_p^{k})(f^{k-w+2}-f^{k})}{\sigma ^{2}}\!\!\right)\!\! \\
\vdots &\vdots &\ddots &\vdots \\
\exp\!\!\left(\!\!-\frac{(x_p^{k}-x_p^{k-w+1})(f^{k}-f^{k-w+1})}{\sigma ^{2}}\!\!\right)\!\! & \exp\!\!\left(\!\!-\frac{(x_p^{k}-x_p^{k-w+2})(f^{k}-f^{k-w+2})}{\sigma ^{2}}\!\!\right)\!\! & \cdots & 1\\
\end{bmatrix}\\
\end{split},
\end{equation}
\end{tiny}
and
\begin{equation}\label{K_fault}
\begin{split}
K_{\mathbf{f}}&=\begin{bmatrix}
1 &\exp\!\left(\!-\frac{(f^{k-w+1}-f^{k-w+2})^{2}}{2\sigma ^{2}}\!\right)\! &\cdots & \exp\!\left(\!-\frac{(f^{k-w+1}-f^{k})^{2}}{2\sigma ^{2}}\!\right)\! \\
\exp\!\left(\!-\frac{(f^{k-w+2}-f^{k-w+1})^{2}}{2\sigma ^{2}}\!\right)\! & 1 &\cdots & \exp\!\left(\!-\frac{(f^{k-w+2}-f^{k})^{2}}{2\sigma ^{2}}\!\right)\! \\
\vdots &\vdots &\ddots & \vdots \\
\exp\!\left(\!-\frac{(f^{k}-f^{k-w+1})^{2}}{2\sigma ^{2}}\!\right)\! & \exp\!\left(\!-\frac{(f^{k}-f^{k-w+2})^{2}}{2\sigma ^{2}}\!\right)\! &\cdots & 1\\
\end{bmatrix}\\
\end{split}.
\end{equation}
In the case of incipient faults, $f^i-f^j\approx 0$, so Eq.~(\ref{K_fault}) reduces to an all-ones matrix. As a result, Eq.~(\ref{K_abnormal}) can be approximated by $K_{\mathbf{x}_p}^{*}\approx K_{\mathbf{x}_p}\circ K_{\langle\mathbf{x}_p,~\mathbf{f}\rangle}$.
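The Hadamard factorization in Eq.~(\ref{K_abnormal}) is an exact algebraic identity for the RBF kernel, and the incipient-fault approximation $K_{\mathbf{f}}\approx\mathbf{1}$ can be checked numerically (a sketch with a small synthetic fault signal; all names are ours, $\sigma=1$):

```python
import numpy as np

def gram_rbf(x, sigma=1.0):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(6)
x = rng.normal(size=50)              # fault-free variable
f = 0.01 * rng.normal(size=50)       # incipient (small-magnitude) fault signal

dx = x[:, None] - x[None, :]
df = f[:, None] - f[None, :]
K_x = gram_rbf(x)
K_cross = np.exp(-dx * df)           # cross term K_<x,f> with sigma = 1
K_f = gram_rbf(f)
K_star = gram_rbf(x + f)             # Gram matrix of the faulty variable
```

Here `K_star` equals the entrywise product of the three factors to machine precision, and every entry of `K_f` is close to one, as the incipient-fault argument predicts.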
Taking the simulation data described in Section 5.1 as an example, where $\mathbf{f}$ is induced on $\mathbf{x}_{1}$, the Gram matrices of $\mathbf{x}_{1}$ and $\mathbf{x}_{1}^{*}$, i.e., $K_{\mathbf{x}_1}$ and $K_{\mathbf{x}_1}^{*}$, are shown in Fig.~\ref{fig:V9}. As can be seen, the incipient fault $\mathbf{f}$ causes minor changes in the (normalized) Gram matrix as well as its eigenspectrum, and thus the entropy of the variable.
\begin{figure*}[!ht]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.0cm}
\centering
\subfigure[$K_{\mathbf{x}_1}$] {\includegraphics[width=.34\textwidth,height=3.5cm]{gramshow1}}\hspace{-2mm}
\subfigure[$K_{\mathbf{x}_1}^{*}$] {\includegraphics[width=.34\textwidth,height=3.5cm]{gramshow2}}\hspace{-2mm}
\subfigure[Eigenvalues] {\includegraphics[width=.32\textwidth,height=3.5cm]{gramshow3}}
\caption{The (normalized) Gram matrix and its associated eigenspectrum in normal state or under incipient fault. (a) $K_{\mathbf{x}_1}$ in normal state; (b) $K_{\mathbf{x}_1}^{*}$ under incipient fault; (c) the eigenspectrum of $K_{\mathbf{x}_1}$ and $K_{\mathbf{x}_1}^{*}$. The incipient fault causes an obvious change in eigenspectrum, and thus the entropy of data.}
\label{fig:V9}
\end{figure*}
We now discuss the change of MI between the $p$-th variable $\mathbf{x}_{p}$ and the $q$-th variable $\mathbf{x}_{q}$.
Again, suppose the fault of sensor bias occurs at the $p$-th variable $\mathbf{x}_{p}^{*}$, the difference between $I(\mathbf{x}_{p};\mathbf{x}_{q})$ and $I(\mathbf{x}_p^{*};\mathbf{x}_{q})$ is:
\begin{equation}
\begin{split}
\triangle I(\mathbf{x}_p^{*};\mathbf{x}_{q})&=I(\mathbf{x}_p^{*};\mathbf{x}_{q})-I(\mathbf{x}_{p};\mathbf{x}_{q})\\
&=[H_\alpha(A_p^{*})+H_\alpha(A_{q})-H_\alpha(A_p^{*},A_{q})]-[H_\alpha(A_{p})+H_\alpha(A_{q})- H_\alpha(A_{p},A_{q})]\\
&=H_\alpha(A_p^{*})-H_\alpha(A_p^{*},A_{q})-H_\alpha(A_{p})+ H_\alpha(A_{p},A_{q})\\
&=\frac{1}{1-\alpha}\log_2 \left( \frac{\sum\limits_{i=1}^{w}\lambda_{i}(A_p^{*})^{\alpha} \sum\limits_{i=1}^{w}\lambda_{i}\left(\frac{A_p \circ A_q}{\tr(A_p \circ A_q)}\right)^{\alpha}} {\sum\limits_{i=1}^{w}\lambda_{i}(A_{p})^{\alpha}\sum\limits_{i=1}^{w}\lambda_{i}\left(\frac{A_p^* \circ A_q}{\tr(A_p^* \circ A_q)}\right)^{\alpha}} \right),\\
\end{split}
\end{equation}
where $\lambda_{i}(A)$ denotes the $i$-th eigenvalue of matrix $A$, the normalized Gram matrix obtained from the corresponding variable.
Again, we use the simulated data described in Section 5.1 as an example, where the fault is induced in $\mathbf{x}_1$. By comparing the MI matrices under the normal and fault states, as shown in Fig.~\ref{fig:V10}, we observe that all entries related to $\mathbf{x}_1$ (the first dimensional measurement) exhibit a sudden change. For example, the MI value in $M_{12}$ is $2.51$ under the normal state, but becomes $2.67$ with the incipient fault. This result also indicates that our methodology has the potential to identify the exact fault sources by monitoring significant changes of entries in the MI matrix, which makes our detection results interpretable.
\begin{figure*}[!ht]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.0cm}
\centering
\subfigure[Normal] {\includegraphics[width=.45\textwidth]{mimatrix1}}
\subfigure[Fault] {\includegraphics[width=.45\textwidth]{mimatrix2}}
\caption{The MI matrix under (a) normal state; and (b) fault state (the fault is induced on $\mathbf{x}_1$). The entries with changed values are marked with red rectangles. Only entries that are related to $\mathbf{x}_1$ have different MI values.}
\label{fig:V10}
\end{figure*}
\section{Experiments}
In this section, experiments on both synthetic data and the real-world Tennessee Eastman process (TEP) are conducted to demonstrate the superiority of our proposed PMIM over state-of-the-art fault detection methods. We also evaluate the robustness of PMIM with respect to different hyper-parameter settings.
Two commonly used metrics, namely the fault detection rate (FDR) and the false alarm rate (FAR), are employed for performance evaluation~\cite{Yin1S,SXDing_2013,ZWChen_2017}.
The FDR is the probability that an alarm is raised when a fault actually occurs,
\begin{equation}
\text{FDR}=\text{prob}(D>D_{\text{cl}}|\text{fault $\neq$ 0}),
\end{equation}
where $D$ and $D_{\text{cl}}$ are respectively the similarity index and its corresponding control limit.
By contrast, the FAR is the percentage of samples in the normal state that are incorrectly identified as faulty,
\begin{equation}
\text{FAR}=\text{prob}(D>D_{\text{cl}}|\text{fault = 0}).
\end{equation}
Obviously, a higher FDR and a lower FAR are desirable.
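Both metrics are simple alarm-rate computations over labeled samples; a sketch (function name and toy data are ours):

```python
import numpy as np

def fdr_far(D, D_cl, fault_mask):
    """Fault detection rate and false alarm rate from similarity indices D,
    given the control limit D_cl and a boolean mask of truly faulty samples."""
    alarms = D > D_cl
    fdr = alarms[fault_mask].mean()      # alarm rate on truly faulty samples
    far = alarms[~fault_mask].mean()     # alarm rate on normal samples
    return float(fdr), float(far)

# toy example: the last 30 of 100 samples are faulty with clearly elevated indices
D = np.concatenate([np.full(70, 1.0), np.full(30, 5.0)])
fault_mask = np.arange(100) >= 70
fdr, far = fdr_far(D, 3.0, fault_mask)   # perfect separation in this toy case
```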
\subsection{Numerical Simulation}
Motivated by \cite{Alcala_2009,Shang_2017,Shang_2018}, we consider a multivariate nonlinear process generated by the following equation:
\begin{equation} \nonumber
\begin{bmatrix}
x_{1} \\
x_{2} \\
x_{3} \\
x_{4} \\
x_{5} \\
\end{bmatrix}=\begin{bmatrix}
{0.2183} & { - 0.1693} & {0.2063} \\
{ - 0.1972} & {0.2376} & {0.1736} \\
{0.9037} & { - 0.1530} & {0.6373} \\
{0.1146} & {0.9528} & { - 0.2624} \\
{0.4173} & { - 0.2458} & {0.8325} \\
\end{bmatrix} \begin{bmatrix}
{s_{1}}^2 \\
s_{2}s_{3} \\
{s_{3}}^3 \\
\end{bmatrix}+\begin{bmatrix}
e_{1} \\
e_{2} \\
e_{3} \\
e_{4} \\
e_{5} \\
\end{bmatrix},
\end{equation}
where $s$ satisfies $s^k_i=\sum_{j=1}^{l}\beta_{i,j}v^{k-j+1}_i$ with a weight matrix $\mathbf{\beta}$ given by,
\begin{equation} \nonumber
\mathbf{\beta}=\begin{bmatrix}
{0.6699} & {0.0812} & {0.5308} & {0.4527} & {0.2931} \\
{0.4071} & {0.8758} & {0.2158} & { - 0.0902} & {0.1122} \\
{0.3035} & {0.5675} & {0.3064} & {0.1316} & {0.6889} \\
\end{bmatrix},
\end{equation}
$v$ denotes three mutually independent Gaussian data sources with means $[0.3,~2.0,~3.1]^{T}$ and standard deviations $[1.0,~2.0,~0.8]^{T}$, and $e$ denotes Gaussian white noise with standard deviations
$[0.061,~0.063,~0.198,~0.176,~0.170]^{T}$. Following \cite{Shang_2017,Shang_2018}, we consider four different types of faults that cover a broad spectrum of real-life scenarios:
\begin{itemize}
\item Type I: Sensor bias $\mathbf{x}^{*}=\mathbf{x}+f$, with $f=5.6+\mathbf{e}$, where $\mathbf{e}$ is randomly chosen from $[0,~1.0]$;
\item Type II: Sensor precision degradation $\mathbf{x}^{*}=\eta \mathbf{x}$ with $\eta=0.6$;
\item Type III: Additive process fault $\mathbf{s}^{*}=\mathbf{s}+f$ with $f=1.2$;
\item Type IV: Dynamic changes $\mathbf{\tilde{\beta}}=\mathbf{\beta}+\bigtriangleup \mathbf{\beta}$ with $\bigtriangleup \beta_{3}=[-0.825,~0.061,~0.662,~-0.820,
~0.835]$, where $\mathbf{\beta}_{3}$ denotes the third row of $\mathbf{\beta}$.
\end{itemize}
The training set contains $10,000$ samples and the test set contains $4,000$ samples. All the faults are introduced after the $1,000$-th sample. For convenience, we assume the sensor fault occurs at $\mathbf{x}_{1}$ (i.e., the first dimension of the observable measurement), and the process fault occurs at $\mathbf{s}_{1}$ (i.e., the first independent data source). The empirical evaluation aims to answer the following three questions:
\begin{itemize}
\item Can MI manifest more complex dependence among different dimensions of measurement than the classical correlation coefficient?
\item Is fault detection using PMIM robust to hyper-parameter settings, and how do the hyper-parameters affect the performance of PMIM?
\item Does PMIM outperform existing state-of-the-art window-based fault detection methods?
\end{itemize}
\subsubsection{MI versus Pearson's correlation coefficient}
First, we demonstrate the advantage of MI over Pearson's correlation coefficient $\gamma$ in manifesting complex (especially nonlinear) dependencies between two variables. Intuitively, if two random variables are linearly correlated, they should have a large $\gamma^{2}$ ($\gamma^{2}> 0.6$) and a large MI\footnote{In general, $\gamma^{2}>0.3$ indicates a moderate linear dependence and $\gamma^{2}>0.6$ indicates a strong linear dependence~\cite{ratner2009correlation,Jiang_2011}. However, there is little guidance on what value of MI really constitutes an indication of strong dependence~\cite{Jiang_2011}. This is because MI is not upper bounded and different estimators usually offer different MI values. Therefore, we intuitively consider an MI value ``large'' if the corresponding $\gamma^2$ indicates a ``strong'' linear dependence (i.e., larger than $0.6$).} (but we cannot compare the value of $\gamma^2$ to the value of MI).
However, if they are related in a nonlinear fashion, they should have a large MI but a small $\gamma^{2}$ ($\gamma^{2}\leq 0.6$)~\cite{Jiang_2011}. On the other hand, two variables will never have a large $\gamma^2$ but a small MI, since linear correlation is a very special case of general dependence. Therefore, MI is always a superior metric for measuring the degree of interaction compared with Pearson's correlation coefficient. We perform a simple simulation to support this argument.
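A quick simulation in the spirit of this argument (our own toy data, with $y=x^2$ so that the relationship is nonlinear and non-monotonic; the histogram-based Shannon MI with $5$ equal-width bins mirrors the estimator used in Fig.~\ref{fig:PearsonMI}(a)):

```python
import numpy as np

def hist_mi(x, y, bins=5):
    """Shannon MI (bits) from a 2-D histogram with equal-width bins."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(7)
x = rng.uniform(-1.0, 1.0, size=2000)
y = x ** 2                                 # deterministic, non-monotonic relation

r = np.corrcoef(x, y)[0, 1]                # near zero by symmetry
mi = hist_mi(x, y)                         # clearly positive
```

Despite the relationship being deterministic, $\gamma^2$ is close to zero while the estimated MI is substantial, illustrating the small-$\gamma^2$/large-MI regime described above.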
\begin{figure*}[!t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.0cm}
\centering
\subfigure[Shannon's MI versus $\gamma^{2}$] {\includegraphics[width=.5\textwidth]{pearsonmi1}}\hspace{-4mm}
\subfigure[Matrix-based R{\'e}nyi's $\alpha$-order MI versus $\gamma^{2}$] {\includegraphics[width=.5\textwidth]{pearsonmi2}}
\caption{The comparison between Pearson's correlation coefficient $\gamma^{2}$ and mutual information estimated with (a) Shannon's discrete entropy functional by discretizing continuous variables into $5$ bins of equal width; and (b) matrix-based R{\'e}nyi's $\alpha$-order MI. The values of $\gamma^{2}$ and MI are shown in $x$-axis and $y$-axis, respectively.}\label{fig:PearsonMI}
\end{figure*}
Specifically, we select the first $4,000$ samples in the training set and compute both MI and $\gamma^{2}$ in each data window of size $100$. We finally obtain $3,601$ pairs of MI and $\gamma^{2}$ values.
We evaluate MI with both the basic Shannon discrete entropy functional and our suggested matrix-based R{\'e}nyi's $\alpha$-order entropy functional. For the Shannon entropy functional, we discretize continuous variables into $5$ bins of equal width to estimate the underlying distributions.
The values of MI ($y$-axis) and $\gamma^{2}$ ($x$-axis) are shown in the scatter plots in Fig.~\ref{fig:PearsonMI}. As can be seen, there are strong nonlinear dependencies in our simulated data. Taking Fig.~\ref{fig:PearsonMI}(b) as an example, we observe that when $\gamma^{2}=0.6$, the smallest MI is $0.37$.
As such, we consider $\text{MI}\geq 0.37$ to indicate a strong dependence. We notice that there are quite a few points in the region $0.37\leq \text{MI} \leq 1.2$ and $\gamma^{2}\leq 0.6$, suggesting that nonlinear dependence dominates for a large number of variable pairs.
Further, to quantitatively demonstrate the superiority of the MI matrix over the well-known covariance matrix for nonlinear fault detection, we substitute the MI matrix for the covariance matrix in the basic PCA-based fault detection approach. We denote this simple modification as MI-PCA, which includes both MI-PCA$_{\text{Shannon}}$ and MI-PCA$_{\text{R{\'e}nyi}}$. Both Hotelling's $T^2$ and the squared prediction error (SPE) are considered in PCA and MI-PCA. Performances in terms of FDR and FAR are shown in Fig.~\ref{fig:PCAMI}. In the case of $T^2$, MI-PCA always has higher or almost the same FDR values, but significantly smaller FAR values. In the case of SPE, although traditional PCA has a smaller FAR, its results are meaningless: its FDR is almost zero, which suggests that traditional PCA fails completely.
\begin{figure*}[!t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.0cm}
\centering
\subfigure[FDRs] {\includegraphics[width=.5\textwidth]{fdr}}\hspace{-4mm}
\subfigure[FARs] {\includegraphics[width=.5\textwidth]{far}}
\caption{Performance comparison between PCA and MI-PCA in terms of FDR (the larger the better) and FAR (the smaller the better). We replace the covariance matrix in the basic PCA-based fault detection with MI matrix estimated with both Shannon entropy (denote it MI-PCA$_{\text{Shannon}}$) and matrix based R{\'e}nyi's $\alpha$-order entropy (denote it MI-PCA$_{\text{R{\'e}nyi}}$).
We use both Hotelling $T^2$ and squared prediction error (SPE) to monitor the state of samples.}
\label{fig:PCAMI}
\end{figure*}
\subsubsection{Hyperparameter analysis}
We then present a comprehensive analysis on the effects of three hyper-parameters, namely the entropy order $\alpha$, the kernel size $\sigma$ and the length $w$ of sliding window in PMIM. We focus our discussion on the process data with time-correlated dynamic changes, i.e., fault Type V. The FDR and FAR values of our methodology with respect to different hyper-parameter settings are shown in Fig.~\ref{fig:FDR}, Fig.~\ref{fig:FDRFAR} and Fig.~\ref{fig:FAR}.
The choice of $\alpha$ is associated with the task goal. If the application requires emphasis on the tails of the distribution (rare events) or multiple modalities, $\alpha$ should be less than 2, whereas if the goal is to characterize modal behavior, $\alpha$ should be greater than 2; $\alpha=2$ provides neutral weighting \cite{yu2019multivariate,yu2019simple}. The detection performances for different values of $\alpha$ are presented in Fig.~\ref{fig:FDR}. For a comprehensive comparison, we consider $\alpha\in\{0.1,~0.2,~0.3,~0.4,~0.5,~0.6,~0.7,~0.8,~0.9,~1,$ $~1.1,~1.2,~1.3,~1.4,~1.5,~2,~3,~5\}$. Both $\ell_{\infty}$ and $\ell_{2}$ norms are assessed in the calculation of the similarity index $D$ in Eq.~(\ref{similarity}). As a common practice, we use window size $100$. As can be seen, the FDR values are always larger than $99.5\%$, which suggests that FDR is insensitive to changes of $\alpha$. On the other hand, the FAR remains stable in the range $\alpha \in[0.5,~1.2]$, but suddenly increases to $25\%$ or above when $\alpha \geq 2$. Therefore, we recommend $\alpha$ in the range $[0.5,~1.2]$ for PMIM.
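For concreteness, the matrix-based R{\'e}nyi $\alpha$-order entropy underlying these experiments can be sketched in a few lines of Python (a simplified single-block version; the function name and the eigenvalue cutoff are our choices, and the paper's actual MATLAB implementation is listed in Appendix A): build a Gaussian Gram matrix, normalize it by its trace, and evaluate the $\alpha$-power sum of its eigenvalues.

```python
import numpy as np

def renyi_entropy(X, sigma=0.5, alpha=1.01):
    """Matrix-based Renyi alpha-order entropy (in bits) of samples X
    (n x d): S_alpha = log2(sum(lambda_i**alpha)) / (1 - alpha), where
    lambda_i are eigenvalues of the trace-normalized Gaussian Gram matrix."""
    X = np.atleast_2d(X)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    A = np.exp(-sq / (2.0 * sigma ** 2))      # Gaussian Gram matrix
    A = A / np.trace(A)                       # normalize: eigenvalues sum to 1
    w = np.linalg.eigvalsh(A)
    w = w[w > 1e-12]                          # drop numerical zeros
    return float(np.log2(np.sum(w ** alpha)) / (1.0 - alpha))

# Sanity checks: n well-separated samples give a near-uniform spectrum,
# so the entropy approaches log2(n); n identical samples give a rank-one
# matrix, so the entropy approaches 0.
H_spread = renyi_entropy(10.0 * np.arange(8.0).reshape(-1, 1), sigma=0.01)
H_same = renyi_entropy(np.zeros((8, 1)))
```

The default $\alpha=1.01$ matches the value used in the comparison experiments below.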
\begin{figure*}[!t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.0cm}
\centering
\subfigure[FDRs~with~different~$\alpha$] {\includegraphics[width=.5\textwidth]{alpha2}}\hspace{-4mm}
\subfigure[FARs~with~different~$\alpha$] {\includegraphics[width=.5\textwidth]{alpha}}
\caption{Detection performances of different $\alpha$ on (a) FDRs; and (b) FARs. Both $\ell_{\infty}$ and $\ell_{2}$ norm are considered in the calculation of similarity index $D$. As a common practice, window size $100$ is used here.}
\label{fig:FDR}
\end{figure*}
The parameter $\sigma$ controls the locality of the estimator; its selection can follow Silverman's rule of thumb for density estimation~\cite{Silverman_1986} or other heuristics from a graph cut perspective (e.g., $10$ to $30$ percent of the total range of the Euclidean distances between all pairwise data points~\cite{shi2000normalized}). For example, the range from the graph cut perspective corresponds to $0.21 < \sigma< 1.33$ on the normalized data here. The detection performances for different $\sigma$ and $\alpha$ are presented in Fig.~\ref{fig:FDRFAR}. We choose $\sigma\in\{0.1,~0.2,~0.3,~0.4,~0.5,~0.6,$ $~0.7,~0.8,~0.9,~1,~5,~10,~24,~50,~100\}$ (displayed in log-scale) and $\alpha\in\{0.4,~0.5,$ $~0.6,~0.7,~0.8,~0.9,~1,~1.1,~1.2,~1.5\}$.
According to Fig.~\ref{fig:FDRFAR}, FDR is always larger than $99.20\%$, whereas FAR is relatively more sensitive to $\sigma$.
Specifically, FAR reaches its minimum value when $\sigma$ is around $0.5$; after that, FAR increases consistently for $\sigma \in[1,~100]$. To achieve higher FDR and lower FAR values, we thus recommend $\sigma$ in the range $[0.4,~1]$ for PMIM.
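The two kernel-size heuristics mentioned above can be sketched as follows (function name is ours; the Silverman formula is the standard univariate rule of thumb, which is only one of several variants):

```python
import numpy as np

def kernel_size_candidates(x, pct=(10, 30)):
    """Two common heuristics for the kernel size sigma: Silverman's
    univariate rule of thumb, 1.06 * std(x) * n**(-1/5), and the
    graph-cut heuristic of taking 10--30% of the range of pairwise
    Euclidean distances."""
    x = np.asarray(x, dtype=float)
    n = x.size
    silverman = 1.06 * np.std(x) * n ** (-1.0 / 5.0)
    dmax = np.abs(x[:, None] - x[None, :]).max()   # largest pairwise distance
    graph_cut = (pct[0] / 100.0 * dmax, pct[1] / 100.0 * dmax)
    return silverman, graph_cut

# For data normalized to [0, 1], the graph-cut band is simply [0.1, 0.3].
sil, (lo, hi) = kernel_size_candidates(np.linspace(0.0, 1.0, 100))
```

Both heuristics land comfortably inside the recommended range $[0.4,~1]$ once the data range (and hence the pairwise distances) is on the order of a few units, as for the normalized process data here.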
\begin{figure*} [!ht]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.0cm}
\centering
\subfigure[FDRs~with~different~$\sigma$] {\includegraphics[width=.5\textwidth]{sigma2}}\hspace{-4mm}
\subfigure[FARs~with~different~$\sigma$] {\includegraphics[width=.5\textwidth]{sigma1}}
\caption{Detection performances of different $\sigma$ with a fixed $\alpha$ on (a) FDRs; and (b) FARs. $\sigma\in\{0.1,~0.2,~0.3,~0.4,$ $~0.5,~0.6,~0.7,~0.8,$ $~0.9,~1,~5,~10,~24,~50,~100\}$ (displayed in log-scale). $\ell_{2}$ norm is considered in the calculation of similarity index $D$.}
\label{fig:FDRFAR}
\end{figure*}
\begin{figure*} [!ht]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.0cm}
\centering
\subfigure[FDRs~with~different~$w$] {\includegraphics[width=.51\textwidth]{w2}}\hspace{-4mm}
\subfigure[FARs~with~different~$w$] {\includegraphics[width=.51\textwidth]{w1}}
\caption{Detection performances of different $w$ on (a) FDRs; and (b) FARs. Both $\ell_{\infty}$ and $\ell_{2}$ norm are considered for scalarization in the calculation of similarity index $D$.}
\label{fig:FAR}
\end{figure*}
The local stationarity or smoothness assumption (of the underlying process) might be violated if the window size is too large. In this case, the eigenspectrum becomes stable and less sensitive to abrupt distributional changes of the underlying process, which may lead to decreased detection power, i.e., lower FDR values. On the other hand, for a very small window size, the MI estimation becomes unreliable (due to limited samples) and the local time-lagged matrix may be dominated by environmental noise, which in turn results in a high FAR value. Moreover, according to Fig.~\ref{fig:FAR}, FDR remains stable when $w\in [50,~120]$, and decreases with increasing window length when $w \geq 120$. By contrast, FAR is more sensitive to $w$ than FDR, but its changing patterns are not consistent between the $\ell_2$ and $\ell_\infty$ norms. We choose $w=100$ in the following experiments, because it strikes a good trade-off between FDR and FAR for both norms here.
\subsubsection{Comparison with state-of-the-art methods}
We compare our proposed PMIM with four state-of-the-art window based data-driven fault detection approaches, namely DPCA \cite{Ku_1995}, SPA \cite{Wang_2010}, RTCSA \cite{Shang_2017} and RDTCSA \cite{Shang_2018}. The hyperparameters of PMIM are set to $\alpha=1.01$, $\sigma=0.5$ and $w=100$.
For DPCA, $90\%$ cumulative percent variance is used to determine the number of principal components. The detection performances of all competing methods are illustrated in Table~\ref{1:TableFDR} and Table~\ref{1:TableFAR}.
{
\linespread{1}
\begin{table}[!hbpt]
\small
\centering
\begin{threeparttable}
\caption{The FDRs $(\%)$ of different methods for the numerical simulations} \label{1:TableFDR}
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{2mm}
\begin{tabular}{c||c|c||c|c||c||c||c}\hline
\toprule
\textbf{No.}& \multicolumn{2}{|c||}{\textbf{DPCA}}
&\multicolumn{2}{|c||}{\textbf{SPA}} &\textbf{RTCSA} &\textbf{RDTCSA} &\textbf{PMIM} \\ \cline{2-3}\cline{4-5}
&\footnotesize{$T^{2}$} &\footnotesize{SPE} &\footnotesize{$D_{r}$} &\footnotesize{$D_{p}$} & & & \\\hline
\midrule
1 &51.17 &\textbf{99.70} &0.80 &2.80 &88.43 &91.01 &91.57 \\ \hline
2 &21.23 &21.0 &2.40 &6.67 &82.50 &\textbf{100} &99.63 \\ \hline
3 &33.10 &\textbf{99.83} &0.77 &7.37 &96.60 &96.83 &97.50 \\ \hline
4 &81.23 &85.57 &29.13 &99.13 &99.70 &99.70 &\textbf{99.87} \\ \hline
\textbf{Aver.} &46.68 &76.53 &8.28 &29.0 &91.81 &96.89 &\textbf{97.14} \\ \hline
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item $T^2$ denotes Hotelling’s $T^2$ statistic; SPE denotes squared prediction error; $D_r$ and $D_p$ denote SPE and $T^2$ of statistics patterns (SPs) in SPA framework, respectively. For SPA, the selected statistics are mean, variance, skewness, and kurtosis. For DPCA, SPA and RDTCSA, the time lag is set to 2, 1 and 1 respectively. The window lengths are all set as the commonly used 100. For RTCSA, RDTCSA and PMIM, $\ell_2$ norm is used as scalarization. The significance level is set as 5\%.
\end{tablenotes}
\end{threeparttable}
\vspace{-.0in}
\end{table}
}
{
\linespread{1}
\begin{table}[!hbpt]
\small
\caption{The FARs $(\%)$ of different methods for the numerical simulations} \label{1:TableFAR}
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{2mm}
\begin{tabular}{c||c|c||c|c||c||c||c}\hline
\toprule
\textbf{No.} &\multicolumn{2}{|c||}{\textbf{DPCA}}
&\multicolumn{2}{|c||}{\textbf{SPA}} &\textbf{RTCSA} &\textbf{RDTCSA} &\textbf{PMIM} \\\cline{2-3}\cline{4-5}
&\footnotesize{$T^{2}$} &\footnotesize{SPE} &\footnotesize{$D_{r}$} &\footnotesize{$D_{p}$} & & & \\\hline
\midrule
1 &17.31 &18.28 &0.22 &10.32 &6.22 &3.11 &\textbf{1.78} \\ \hline
2 &20.20 &19.44 &0 &0 &4.67 &\textbf{1.44} &5.01 \\ \hline
3 &18.28 &15.53 &0 &9.54 &4.88 &3.65 &\textbf{2.77} \\ \hline
4 &19.44 &17.92 &0 &15.54 &11.88 &15.53 &\textbf{2.77} \\ \hline
\textbf{Aver.} &18.81 &17.79 &0.055 &8.85 &6.91 &5.93 &\textbf{3.08} \\ \hline
\bottomrule
\end{tabular}
\end{threeparttable}
\vspace{-.0in}
\end{table}
}
According to Table~\ref{1:TableFDR}, PMIM can effectively detect different types of faults and has the highest average detection rate. Our advantage becomes more obvious for fault Type III and fault Type V, namely the additive process fault and dynamic changes. Moreover, as demonstrated in Table~\ref{1:TableFAR}, for each test process, PMIM achieves smaller FAR values at the early stage of the normal phase. Although SPA achieves nearly zero FAR values, its FDR values are too small, which indicates that SPA can hardly identify the faults here.
This is not hard to understand. Note that SPA uses a time lag of $1$, so any two adjacent windows of data differ in only $1$ sample. The highly overlapping windows lead to highly correlated SPs, which severely deteriorates the capability of SPA~\cite{Wang_2010}.
\subsection{TEP Experiment}
As a public benchmark of chemical industrial processes, the Tennessee Eastman process (TEP) created by the Eastman Chemical Company has been widely used for multivariable process control problems \cite{Downs_1993,Ricker_1995} (see Appendix B for an introduction to the TEP process). In this application, we use the simulation data generated by the closed-loop Simulink models developed by Braatz~\cite{Ricker_1995, Chiang_2001,Russell_2012} to evaluate the effectiveness of our proposed PMIM. We use $22$ continuous process measurements (sampled with a sampling interval of 3 minutes) and $11$ manipulated variables (generated with a time delay that varies from 6 to 15 minutes) for monitoring, which constitutes a $33$-dimensional input. To obtain a reliable significance level, we generate $200$ hours of training data ($4,000$ samples in total) and $100$ hours of testing data ($2,000$ samples in total). In each test dataset, a fault occurs exactly $20$ hours after the beginning.
First, the MI matrices (with the boxplots of their diagonal vectors) of the normal state, fault 1 (step fault) and fault 14 (sticking fault) are shown in Fig.~\ref{fig:V5}. Obviously, the MI matrix remains almost the same at different time instants under the normal state. However, the occurrence of a fault leads to different joint or marginal distributions on each dimension of the input, and thus changes the entry values of the MI matrix. By comparing the boxplots of the normal and fault states, we can observe the changes of the diagonal vector, i.e., changes of entropy. Moreover, different types of faults produce different changes in the MI matrix.
\begin{figure*}[!t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.0cm}
\centering
\subfigure[Normal (t=$500$)] {\includegraphics[width=.51\textwidth]{mi1}}\hspace{-4mm}
\subfigure[Normal (t=$1,500$)] {\includegraphics[width=.51\textwidth]{mi2}}\hspace{-2mm}
\subfigure[Fault 1 (t=$1,500$)] {\includegraphics[width=.51\textwidth]{mi3}}\hspace{-4mm}
\subfigure[Fault 14 (t=$1,500$)] {\includegraphics[width=.51\textwidth]{mi4}}\hspace{-2mm}
\caption{The MI matrix of TEP under normal and fault states: (a) the MI matrix of normal state at $500$-th sampling instant; (b) the MI matrix of normal state at $1,500$-th sampling instant; (c) the MI matrix of fault 1 at $1,500$-th sampling instant; and (d) the MI matrix of fault 14 at $1,500$-th sampling instant.}
\label{fig:V5}
\end{figure*}
\begin{figure*}[!t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.0cm}
\centering
\subfigure[Fault 1] {\includegraphics[width=.49\textwidth]{contri01_1}}
\subfigure[Fault 14] {\includegraphics[width=.49\textwidth]{contri14_1}}
\caption{The means of MI matrix of TEP under fault states: (a) fault 1 (step fault); and (b) fault 14 (sticking fault). The left plot is the means of MI along each variable, and the right is their confidence interval.}
\label{fig:V51}
\end{figure*}
The mean of the MI values between one variable and all remaining variables\footnote{For the $i$-th variable, we compute the mean of $I(\mathbf{x}_1,\mathbf{x}_i),\cdots, I(\mathbf{x}_{i-1},\mathbf{x}_i), I(\mathbf{x}_{i+1},\mathbf{x}_i),\cdots, I(\mathbf{x}_m,\mathbf{x}_i)$.} is shown in Fig.~\ref{fig:V51}. As shown in Fig.~\ref{fig:V51}(a), the central box becomes wider and the $75$-th percentile becomes larger. This indicates that fault 1 is possibly a step change. In fact, fault 1 indeed induces a step change on stream 4. This change in the feed of reactants A, B and C has a global impact on the measurements. By contrast, fault 14 induces a sticking change on the reactor cooling water valve, and the most relevant variables are in dimensions $9$, $21$ and $32$~\cite{Chiang_2001}. In Fig.~\ref{fig:V51}(b), there are indeed three outliers, plotted individually using the $``+"$ symbol, corresponding to the $9$-th, $21$-st and $32$-nd dimensions.
In other words, the changes in dimensions $9$, $21$ and $32$ are exactly the driving force that leads to the changes in the MI matrix (and hence its eigenspectrum). In this sense, our PMIM also provides insights into the exact root variables that cause the fault, i.e., our fault detection using PMIM is interpretable. One should also note that interpretable results benefit problems related to fault isolation~\cite{ChenZ2016} and restoration~\cite{LiG2011}.
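The root-variable statistic from the footnote above, together with the boxplot outlier rule that marks the $``+"$ points, can be sketched as follows (function names and the $1.5\times$IQR threshold are ours; the toy MI matrix is illustrative, not TEP data):

```python
import numpy as np

def mean_mi_per_variable(M):
    """For the i-th variable, the mean MI with all remaining variables:
    the average of the off-diagonal entries of row i of the symmetric
    MI matrix M."""
    M = np.array(M, dtype=float)          # copy so the caller's M is untouched
    np.fill_diagonal(M, 0.0)              # exclude I(x_i; x_i)
    return M.sum(axis=1) / (M.shape[0] - 1)

def boxplot_outliers(means, k=1.5):
    """Indices beyond q3 + k*IQR -- the points a boxplot draws as '+'."""
    q1, q3 = np.percentile(means, [25, 75])
    return np.flatnonzero(means > q3 + k * (q3 - q1))

# Toy MI matrix: variable 2 interacts unusually strongly with the rest,
# mimicking a root variable of a localized fault.
M = np.full((6, 6), 0.1)
M[2, :] = M[:, 2] = 2.0
np.fill_diagonal(M, 1.0)
means = mean_mi_per_variable(M)
```

Applied to the fault-14 MI matrix, the flagged indices would correspond to the root dimensions ($9$, $21$, $32$) discussed above.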
Next, we use the empirical method to determine the confidence limits of the different MSPM methods under the same confidence level. Without loss of generality, the window lengths of all competing methods are set to $100$, and all the statistics mentioned in Section 3 are used here. The average FDR and FAR values of different MSPM methods on TEP are summarized in Table~\ref{1:Tableeigen2} and Table~\ref{1:Tableeigen1}, respectively.
It can be observed from Table~\ref{1:Tableeigen2} that the FDRs of RTCSA, RDTCSA, and PMIM are consistently higher than those of the other methods and remain stable across different types of faults. Moreover, our PMIM outperforms RTCSA in most cases, owing to the superiority of the MI matrix over the covariance matrix in capturing the intrinsic interactions (either linear or nonlinear) between pairwise variables. PMIM detects most of the faults. Although our method has relatively lower FDRs on step fault 5 and unknown fault 19 with $w=100$, its detection performance on both faults can be significantly improved with a larger window size $w$: the FDRs for different $w$ on faults 5 and 19 are shown in Fig.~\ref{fig:Diffw}, where $w=150$ achieves higher FDRs.
{
\linespread{1}
\begin{table}[!hbpt]
\small
\centering
\begin{threeparttable}
\caption{The FDRs $(\%)$ of different MSPM methods for TEP} \label{1:Tableeigen2}
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{2.5mm}
\begin{tabular}{l||c|c||c|c||c||c||c}\hline
\toprule
\textbf{No.} &\multicolumn{2}{c||}{\textbf{DPCA}}
&\multicolumn{2}{c||}{\textbf{SPA}} &\textbf{RTCSA} &\textbf{RDTCSA} &\textbf{PMIM} \\ \cline{2-3} \cline{4-5}
\footnotesize{\text{(fault type)}} &\footnotesize{$T^{2}$} &\footnotesize{SPE} &\footnotesize{$D_{r}$} &\footnotesize{$D_{p}$} & & & \\ \hline
\midrule
1 Step &99.91 &\textbf{99.94} &99.88 &99.81 &99.62 &99.56 &99.69 \\ \hline
2 Step &\textbf{99.19} &98.88 &99.12 &99.12 &98.50 &98.69 &98.31 \\ \hline
4 Step &11.63 &\textbf{100} &16.50 &\textbf{100} &98.38 &99.44 &99.56 \\ \hline
5 Step &14.94 &28.56 &19.50 &87.81 &\textbf{99.88} &97.25 &77.38 \\ \hline
6 Step &99.50 &\textbf{100} &13.63 &13.63 &\textbf{100} &99.94 &\textbf{100} \\ \hline
7 Step &\textbf{100} &\textbf{100} &44.12 &\textbf{100} &\textbf{100} &\textbf{100} &\textbf{100} \\ \hline
8 Random &98.88 &93.63 &\textbf{99.12} &\textbf{99.12} &97.88 &97.75 &98.62 \\ \hline
10 Random &21.69 &51.62 &59.56 &88.12 &\textbf{96.63} &37.38 &96.06 \\ \hline
11 Random &36.88 &95.44 &99.69 &\textbf{100} &96.25 &92.94 &99.0 \\ \hline
12 Random &99.38 &97.31 &99.31 &99.31 &99.38 &99.50 &\textbf{100} \\ \hline
13 Slow drift &98.56 &92.31 &98.31 &\textbf{100} &97.88 &98.0 &98.25 \\ \hline
14 Sticking &99.88 &\textbf{99.94} &\textbf{99.94} &\textbf{99.94} &99.88 &99.88 &99.88 \\ \hline
16 Unknown &15.37 &52.38 &63.56 &91.81 &\textbf{99.75} &79.31 &99.50 \\ \hline
17 Unknown &87.19 &98.31 &98.0 &\textbf{99.31} &97.81 &97.75 &97.88 \\ \hline
18 Unknown &94.56 &\textbf{95.75} &93.81 &95.56 &93.75 &93.69 &94.69 \\ \hline
19 Unknown &48.25 &49.75 &29.38 &99.62 &\textbf{100} &97.19 &78.19 \\ \hline
20 Unknown &47.38 &61.31 &96.19 &\textbf{96.75} &96.69 &95.81 &96.31 \\ \hline
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item The window lengths are all set as 100. The selected statistics are mean, variance, skewness, and kurtosis. For RTCSA, RDTCSA and PMIM, $\ell_{\infty}$ norm is used as scalarization. For DPCA and RDTCSA, the time lag is set to 2 and 1 respectively, recommended by authors~\cite{Shang_2017,Shang_2018}. The significance level is set as 2\%.
\end{tablenotes}
\end{threeparttable}
\vspace{-.0in}
\end{table}
}
{
\linespread{1}
\begin{table}[!hbpt]
\small
\caption{The average FARs $(\%)$ of different MSPM methods for TEP} \label{1:Tableeigen1}
\centering
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{2mm}
\begin{tabular}{c||c|c||c|c||c||c||c}\hline
\toprule
\textbf{FAR}& \multicolumn{2}{|c||}{\textbf{DPCA}}
& \multicolumn{2}{|c||}{\textbf{SPA}} &\textbf{RTCSA} &\textbf{RDTCSA} &\textbf{PMIM} \\ \cline{2-3} \cline{4-5}
\footnotesize{$(\%)$} &\footnotesize{$T^{2}$} &\footnotesize{SPE} &\footnotesize{$D_{r}$} &\footnotesize{$D_{p}$} & & & \\ \hline
\midrule
Normal &2.05 & 3.95 & 4.73 &5.96 &2.89 &3.63 &\textbf{1.18} \\ \hline
\bottomrule
\end{tabular}
\vspace{-.0in}
\end{table}
}
\begin{figure}[!hbpt]
\centering
\includegraphics[width=0.5\textwidth]{diffw}\\
\caption{Detection performances in terms of FDR of different $w$ for fault 5 and 19 in TEP. $w\in\{80,~100,~120,$ $~150,180,200\}$. Fault 5 is marked by red, fault 19 is marked by blue.}
\label{fig:Diffw}
\end{figure}
According to Table~\ref{1:Tableeigen1}, all the methods achieve favorable FARs, approaching the theoretical minimum value, i.e., the significance level used. Moreover, our FAR is lower than those of RTCSA and RDTCSA. This result confirms the superiority of MI in capturing the intrinsic interactions. On the other hand, a detection delay is inevitable owing to the use of sliding windows, a common drawback of window-based MSPM methods. Taking fault 1 for instance, the detection performances in terms of FAR, FDR and TFDR (we define the FDR value in the transition phase\footnote{A transitional phase can be regarded as a connection process between its two neighboring stable phases, in which the window contains both normal and abnormal samples.} as TFDR; the higher the better) of RTCSA, RDTCSA and PMIM are illustrated in Fig.~\ref{fig:DTE}. Our proposed PMIM has the lowest FAR and the highest TFDR, which indicates that PMIM is more sensitive to fault 1 than RTCSA and RDTCSA. The detection delay of the proposed method is only 4 samples, which is acceptable for window-based approaches.
\begin{figure}[hbpt]
\centering
\includegraphics[width=0.5\textwidth]{dte}\\
\caption{Detection performances of TCSA methods for fault 1 in TEP. TFDR refers to the FDR value in transition phase. The higher TFDR, the better performance of the used methodology. The methods of RTCSA, RDTCSA and PMIM are marked by blue, red and yellow respectively.}
\label{fig:DTE}
\end{figure}
\begin{figure}[hbpt]
\centering
\includegraphics[width=1\textwidth]{te21}\\
\caption{Detection performances of TCSA methods for fault 21 in TEP. The occurrence of the fault corresponds to the $61$-st (RTCSA, PMIM) / $60$-th (RDTCSA) measurement, marked by the black line. The FDR values in the transition phase are marked in pink. The green line indicates the first sample detected as a fault instant.}
\label{fig:F21}
\end{figure}
To demonstrate the effectiveness of our proposed PMIM on more general data, we use the benchmark data of the base model, which can be downloaded from \url{http://web.mit.edu/braatzgroup/links.html}. $960$ samples are used as test data. The fault is induced after $8$ hours, which corresponds to the $161$-st sample. Because the length of the sliding window is $100$, the fault occurs at time index $61$ (for RTCSA and PMIM) and $60$ (for RDTCSA). Taking fault $21$ as an example, the detection performances of RTCSA, RDTCSA and our PMIM are shown in Fig.~\ref{fig:F21}. The FARs of the three competing methods are $1.67\%$ (RTCSA), $27.87\%$ (RDTCSA) and $0$ (PMIM). Obviously, our method has the lowest FAR in this example. RTCSA detects a fault at the $85$-th sample, which corresponds to a detection delay of $24$ samples. By contrast, our PMIM detects a fault at time index $69$, with a detection delay of only $8$ samples. RDTCSA fails in this example, because it alarms a fault at time index $42$ ($18$ samples ahead of the occurrence of the fault), which is a false detection.
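The delay and FAR figures quoted above follow directly from a binary alarm sequence and the known fault onset; a small Python sketch of that bookkeeping (the function name and the toy sequence are ours, mimicking the fault-21 numbers for PMIM):

```python
import numpy as np

def delay_and_far(alarms, fault_idx):
    """From a 0/1 alarm sequence and the known fault onset index, return
    the detection delay in samples (None if never detected) and the FAR
    in percent over the normal phase before onset."""
    alarms = np.asarray(alarms)
    far = 100.0 * float(alarms[:fault_idx].mean())   # false alarms pre-onset
    hits = np.flatnonzero(alarms[fault_idx:])        # alarms after onset
    delay = int(hits[0]) if hits.size else None
    return delay, far

# Onset at index 60, first alarm 8 samples later, no alarms before onset:
delay, far = delay_and_far([0] * 68 + [1] * 12, fault_idx=60)
```

An alarm raised before the onset index (as for RDTCSA here) counts toward the FAR rather than the delay.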
\section{Conclusion}
This work presents a new information-theoretic method for fault detection. Before our work, most information-theoretic fault detection methods merely used mutual information (MI) as a dependence measure to select the most informative dimensions, so as to circumvent the curse of dimensionality. Distinct from these efforts, our method does not perform feature selection. Instead, we construct an MI matrix to quantify all nonlinear dependencies between pairwise dimensions of the data.
We introduced the matrix-based R{\'e}nyi's $\alpha$-order mutual information estimator to estimate the MI value in each entry of the MI matrix. The new estimator avoids density estimation and is well-suited for complex industrial processes. By monitoring different orders of statistics associated with the transformed components of the MI matrix, we demonstrated that our method is able to quickly detect distributional changes of the underlying process, and to identify the root variables that cause the fault. We compared our method with four state-of-the-art fault detection methods on both synthetic data and the real-world Tennessee Eastman process. Empirical results suggest that our method improves the fault detection rate (FDR) and significantly reduces the false alarm rate (FAR). We also presented a thorough analysis of the effects of the hyper-parameters (e.g., window length $w$ and kernel width $\sigma$) on the performance of our method, and illuminated how they control the trade-off between FAR and FDR.
Finally, one should note that the MI matrix is a powerful tool to analyze and discover pairwise interactions in high-dimensional multivariate time series in signal processing, economics and other scientific disciplines. Unfortunately, most of its properties, characteristics, and practical advantages are still largely unknown. This work is a first step towards understanding the value of non-parametric dependence measures (especially the MI matrix) in monitoring industrial processes. We will continue working along this direction to improve the performance of our method and also theoretically explore its fundamental properties.
\section*{Acknowledgment}
This work was supported by the National Natural Science Foundation of China under Grant 61751304, 61933013, 62003004; and the Henan Provincial Science and Technology Research Foundation of China under Grant 202102210125.
\section*{Appendix A}
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\lstdefinestyle{mystyle}{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
basicstyle=\ttfamily\footnotesize,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2
}
\lstset{style=mystyle}
For reproducible results, we provide the key functions (in MATLAB $2019$a) of the proposed PMIM. Specifically, ``mutual\_information\_estimation.m" estimates the matrix-based R{\'e}nyi's $\alpha$-order mutual information (Eq.~\ref{def_mutual}), in which ``gaussianMatrix.m" evaluates the kernel-induced Gram matrix (Eq.~\ref{K_normal}). ``MI\_matrix.m" obtains a series of mutual information matrices, one at each time instant $k$. ``MITCSA.m" computes the similarity index (Eq.~\ref{similarity}).
\lstinputlisting[language=Octave]{mutual_information_estimation.m}
\lstinputlisting[language=Octave]{guassianMatrix.m}
\lstinputlisting[language=Octave]{MI_matrix.m}
\lstinputlisting[language=Octave]{MITCSA.m}
\section*{Appendix B}
Tennessee Eastman process (TEP) created by the Eastman Chemical Company is designed to provide an actual industrial process for evaluating process control strategies\cite{Downs_1993,Ricker_1995}. It is composed of five major unit operations including a chemical reactor, a product condenser, a recycle compressor, a vapor-liquid separator and a product stripper. Fig.~\ref{fig:TEP} shows its schematic. 21 types of identified faults are listed in Table~\ref{1:Tableeigen3}. In this work, $33$ different variables ($22$ process measurements and $11$ manipulated measurements) constitute the input of PMIM, as listed in Table~\ref{1:Tableeigen4}. In this sense, the MI matrix in TEP is of size $33\times 33$.
\begin{figure}[hbpt]
\centering
\includegraphics[width=1.1\textwidth]{tep}\\
\caption{The schematic of TEP.}
\label{fig:TEP}
\end{figure}
{
\linespread{1}
\begin{table}[!hbpt]
\small
\centering
\begin{threeparttable}
\caption{Descriptions of process faults in TEP} \label{1:Tableeigen3}
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{2.5mm}
\begin{tabular}{c|c|c}\hline
\toprule
\textbf{No.} &\textbf{Description} &\textbf{Type} \\ \hline
\midrule
1 & A/C feed ratio, B composition constant (Stream 4) & Step \\ \hline
2 & B composition, A/C ratio constant (Stream 4) & Step \\ \hline
3 & D feed temperature (Stream 2) & Step \\ \hline
4 & Reactor cooling water inlet temperature & Step \\ \hline
5 & Condenser cooling water inlet temperature & Step \\ \hline
6 & A feed loss (Stream 1) & Step \\ \hline
7 & C header pressure loss- reduced availability (Stream 4) & Step \\ \hline
8 & A, B, C feed composition (Stream 4) & Random variation \\ \hline
9 & D feed temperature (Stream 2) & Random variation \\ \hline
10 & C feed temperature (Stream 4) & Random variation \\ \hline
11 & Reactor cooling water inlet temperature & Random variation \\ \hline
12 & Condenser cooling water inlet temperature & Random variation \\ \hline
13 & Reaction kinetics slow & Slow drift \\ \hline
14 & Reactor cooling water valve & Sticking \\ \hline
15 & Condenser cooling water valve & Sticking \\ \hline
16 & Unknown (deviations of heat transfer within stripper (heat exchanger)) & Unknown \\ \hline
17 & Unknown (deviations of heat transfer within reactor) & Unknown \\ \hline
18 & Unknown (deviations of heat transfer within condenser) & Unknown \\ \hline
19 & Unknown & Unknown \\ \hline
20 & Unknown & Unknown \\ \hline
21 & The valve for Stream 4 was fixed at the steady state position & Constant position \\ \hline
\bottomrule
\end{tabular}
\end{threeparttable}
\vspace{-.0in}
\end{table}
}
{
\linespread{1}
\begin{table}[!hbpt]
\small
\begin{threeparttable}
\caption{Monitoring variables in TEP} \label{1:Tableeigen4}
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{2.5mm}
\begin{tabular}{c c||c c}\hline
\toprule
\textbf{No.} &\textbf{Manipulated measurements} &\textbf{No.} &\textbf{Continuous measurements} \\ \hline
\midrule
1 &D feed flow valve (stream 2) &6 &Reactor feed rate (stream 6) \\ \hline
2 &E feed flow valve (stream 3) &7 &Reactor pressure \\ \hline
3 &A feed flow valve (stream 1) &8 &Reactor level \\ \hline
4 &total feed flow valve (stream 4) &9 &Reactor temperature \\ \hline
5 &compressor recycle valve &10 &Purge rate (stream 9) \\ \hline
6 &purge valve (stream 9) &11 &Product separator temperature \\ \hline
7 &separator pot liquid flow valve (stream 10) &12 &Product separator level \\ \hline
8 &stripper liquid product flow valve (stream 11) &13 &Product separator pressure \\ \hline
9 &stripper steam valve &14 &Product separator underflow \\ \hline
10 &reactor cooling water flow &15 &Stripper level \\ \hline
11 &condenser cooling water flow &16 &Stripper pressure \\ \hline
\multicolumn{2}{c||}{\footnotesize{\textbf{Sampling interval: 6 mins}}} &17 &Stripper underflow \\ \cline{1-2} \hline
\textbf{No.} &\textbf{Continuous measurements} &18 &Stripper temperature \\ \hline
1 &A feed (stream 1) &19 &Stripper steam flow \\ \hline
2 &D feed (stream 2) &20 &Compressor work \\ \hline
3 &E feed (stream 3) &21 &Reactor cooling water outlet temperature \\ \hline
4 &A and C feed (stream 4) &22 &Separator cooling water outlet temperature \\ \hline
5 &Recycle flow (stream 4) &\multicolumn{2}{c}{\footnotesize{\textbf{Sampling interval: 3 mins}}} \\ \cline{3-4} \hline
\bottomrule
\end{tabular}
\end{threeparttable}
\vspace{-.0in}
\end{table}
}
\section{Introduction}
Swiss-system tournaments have received rapidly increasing attention in recent years and are implemented in various professional and amateur tournaments in, e.g., badminton, bridge, chess, e-sports and card games. A Swiss-system tournament is a non-eliminating tournament format that features a predetermined number of rounds of competition. Assuming an even number of participants, each player plays exactly one other player in each round, and two players play each other at most once in a tournament. The number of rounds is predetermined and publicly announced. The actual planning of a round usually depends on the results of the previous rounds, so as to generate matches that are as attractive as possible, and highly depends on the considered sport. Typically, players with the same number of wins in the matches played so far are paired, if possible.
Tournament designers usually agree on the fact that one should have at least $\log (n)$ rounds in a tournament with $n$ participants to ensure that there cannot be multiple players without a loss in the final rankings. \citet{appleton1995may} even mentions playing $\log (n) + 2$ rounds, so that a player may lose once and still win the tournament.
In this work, we examine a bound on the number of rounds that can be \emph{guaranteed} by tournament designers. Since the schedule of a round depends on the results of previous rounds, it might happen that at some point in the tournament, there is no next round that fulfills the constraint that two players play each other at most once in a tournament. This raises the question of how many rounds a tournament organizer can announce before the tournament starts while being sure that this number of rounds can always be scheduled. We provide bounds that are \emph{independent} of the results of the matches and the detailed rules for the setup of the rounds.
We model the feasible matches of a tournament with $n$ participants as an undirected graph with $n$ vertices. A match that is feasible in the tournament corresponds to an edge in the graph. Assuming an even number of participants, one round of the tournament corresponds to a perfect matching in the graph. After playing one round we can delete the corresponding perfect matching from the set of edges to keep track of the matches that are still feasible. We can guarantee the existence of a next round in a Swiss-system tournament if there is a perfect matching in the graph. The largest number of rounds that a tournament planner can guarantee is equal to the largest number of perfect matchings that a greedy algorithm is guaranteed to delete from the complete graph. Greedily deleting perfect matchings models the fact that rounds cannot be preplanned or adjusted later in time.
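This edge-deletion view can be made concrete in a few lines. The following Python sketch (not part of the formal exposition; the round chosen is illustrative) tracks the feasibility graph of $K_6$ as one round is played:

```python
from itertools import combinations

# Feasibility graph of K_6: every pair of the n = 6 players may still meet.
n = 6
feasible = {frozenset(p) for p in combinations(range(n), 2)}

# One round = a perfect matching: every player appears in exactly one match.
round_1 = [(0, 1), (2, 3), (4, 5)]
assert sorted(v for m in round_1 for v in m) == list(range(n))

# Playing the round deletes its edges from the feasibility graph.
feasible -= {frozenset(m) for m in round_1}
assert frozenset((0, 1)) not in feasible  # this match cannot be repeated
```

A next round is guaranteed to exist exactly when the remaining edge set still contains a perfect matching.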
Interestingly, the results imply that infeasibility issues can arise in some state-of-the-art rules for table-tennis tournaments in Germany. There is a predefined amateur tournament series with more than 1000 tournaments per year that \emph{guarantees} the 9 to 16 participants 6 rounds in a Swiss-system tournament~\citep{httvCup}. We can show that a tournament with 10 participants might become infeasible after round 5, even if these rounds are scheduled according to the official tournament rules. Infeasible means that no matching of the players who have not played before is possible anymore, so no rule-consistent next round exists. For more details, see \citet{Kuntz:Thesis:2020}. Remark~\ref{rem:extend} shows that tournament designers could \emph{extend} the lower bound from 5 to 6 by choosing the fifth round carefully.
We generalize the problem to the famous social golfer problem in which not $2$, but $k\geq 3$ players compete in each match of the tournament, see~\citet{csplib:prob010}. We still assume that each pair of players can meet at most once during the tournament. A famous example of this setting is Kirkman's schoolgirl problem \citep{kirkman1850note}, in which fifteen girls walk to school in rows of three for seven consecutive days such that no two girls walk in the same row twice.
In addition to the theoretical interest in this question, designing golf tournaments with a fixed size of the golf groups that share a hole is a common problem in the state-of-the-art design of golf tournaments, see e.g.,~\citet{golf}. Another application of the social golfer problem is the Volleyball Nations League. Here, 16 teams play a round-robin tournament. To simplify the organisation, they repeatedly meet in groups of four at a single location and play all matches within the group. Planning which teams to group together and invite to a single location is an example of the social golfer problem; see \citet{volleyball}.
In graph-theoretic terms, a round in the social golfer problem corresponds to a set of vertex-disjoint cliques of size $k$ that contains every vertex of the graph exactly once. In graph theory, a feasible round of the social golfer problem is called a clique-factor.
We address the question of how many rounds can be guaranteed if clique-factors, where each clique has a size of $k$, are greedily deleted from the complete graph, i.e., without any preplanning.
A closely related problem is the Oberwolfach problem. In the Oberwolfach problem, we seek to find seating assignments for multiple diners at round tables in such a way that two participants sit next to each other exactly once. Half-jokingly, we use the fact that seatings at Oberwolfach seminars are assigned greedily, and study the greedy algorithm for this problem. Instead of deleting clique-factors, the algorithm now iteratively deletes a set of vertex-disjoint cycles that contains every vertex of the graph exactly once. Such a set is called a cycle-factor. We restrict attention to the special case of the Oberwolfach problem in which all cycles have the same length $k$. We analyze how many rounds can be guaranteed if cycle-factors, in which each cycle has length $k$, are greedily deleted from the complete graph.
\subsection*{Our Contribution} Motivated by applications in sports, the social golfer problem, and the Oberwolfach problem, we study the greedy algorithm that iteratively deletes a clique, respectively cycle, factor in which all cliques/cycles have a fixed size $k$, from the complete graph. We prove the following main results for complete graphs with $n$ vertices for $n$ divisible by $k$.
\begin{itemize}
\item We can always delete $\lfloor n/(k(k-1))\rfloor$ clique-factors in which all cliques have a fixed size $k$ from the complete graph. In other words, the greedy procedure guarantees a schedule of $\lfloor n/(k(k-1))\rfloor$ rounds for the social golfer problem.
This provides a simple polynomial time $\frac{k-1}{2k^2-3k-1}$-approximation algorithm.
\item The bound of $\lfloor n/(k(k-1))\rfloor$ is tight, in the sense that it is the best possible bound we can guarantee for our greedy algorithm. To be more precise, we show that a tournament exists in which we can choose the first $\lfloor n/(k(k-1))\rfloor$ rounds in such a way that no additional feasible round exists. If a well-known conjecture by \citet{chen1994equitable} in graph theory is true (the conjecture is proven to be true for $k\leq 4$), then this is the unique example (up to symmetries) for which no additional round exists after $\lfloor n/(k(k-1))\rfloor$ rounds. In this case, we observe that for $n>k(k-1)$ we can always pick a different clique-factor in the last round such that an additional round can be scheduled.
\item We can always delete $\lfloor (n+4)/6\rfloor$ cycle-factors in which all cycles have a fixed size $k$, where $k\geq 3$, from the complete graph. This implies that our greedy approach guarantees to schedule $\lfloor (n+4)/6\rfloor$ rounds for the Oberwolfach problem. Moreover, the greedy algorithm can be implemented so that it is a polynomial time $\frac{1}{3+\epsilon}$-approximation algorithm for the Oberwolfach problem for any fixed $\epsilon>0$.
\item If El-Zahar's conjecture \citep{el1984circuits} is true (the conjecture is proven to be true for $k\leq 5$), we can increase the number of cycle-factors that can be deleted to $\lfloor (n+2)/4\rfloor$ for $k$ even and $\lfloor (n+2)/4-n/4k\rfloor$ for $k$ odd. Additionally, we show that this bound is essentially tight by distinguishing three different cases. In the first two cases, the bound is tight, i.e., an improvement would immediately disprove El-Zahar's conjecture. In the last case, a gap of one round remains.
\end{itemize}
\section{Preliminaries}
We follow standard notation in graph theory, and for two graphs $G$ and $H$ we define an $H$-factor of $G$ as a union of vertex-disjoint copies of $H$ that contains every vertex of $G$ exactly once.
For some graph $H$ and $n \in\mathbb{N}_{\geq 2}$, a \emph{tournament} with $r$ rounds is defined as a tuple $T=(H_1,\ldots, H_r)$ of $H$-factors of the complete graph $K_n$ such that each edge of $K_n$ is in at most one $H$-factor. The \emph{feasibility graph} of a tournament $T=(H_1,\ldots, H_r)$ is a graph $G = K_n \backslash \bigcup_{i \leq r} H_i$ that contains all edges that are in none of the $H$-factors.
If the feasibility graph of a tournament $T$ is empty, we call $T$ a \emph{complete tournament}.
Motivated by Swiss-system tournaments and the importance of greedy algorithms in real-life optimization problems, we study the greedy algorithm that starts with an empty tournament and iteratively extends the current tournament by an arbitrary $H$-factor in every round until no $H$-factor remains in the feasibility graph. We refer to Algorithm \ref{algo:greedy} for a formal description.
\vspace{\baselineskip}
\begin{algorithm}[H]
\SetAlgoLined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{number of vertices $n$ and a graph $H$}
\Output{tournament $T$}
$G \leftarrow K_n$\\
$i \leftarrow 1$\\
\While{there is an $H$-factor $H_i$ in $G$}{
delete $H_i$ from $G$\\
$i \leftarrow i+1$\\
}
\Return $T=(H_1,\dots,H_{i-1})$
\caption{Greedy tournament scheduling}
\label{algo:greedy}
\end{algorithm}
\vspace{\baselineskip}
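For $H=K_2$, a minimal executable sketch of Algorithm \ref{algo:greedy} looks as follows; this Python illustration is not part of the formal exposition, and the backtracking matching search merely stands in for an arbitrary choice of $H$-factor, so it is practical only for small $n$:

```python
from itertools import combinations

def find_perfect_matching(vertices, edges):
    """Backtracking search for a perfect matching; returns None if none exists."""
    if not vertices:
        return []
    v = min(vertices)
    for u in sorted(vertices - {v}):
        if frozenset((v, u)) in edges:
            rest = find_perfect_matching(vertices - {v, u}, edges)
            if rest is not None:
                return [(v, u)] + rest
    return None

def greedy_tournament(n):
    """Algorithm 1 with H = K_2: delete perfect matchings from K_n until stuck."""
    edges = {frozenset(p) for p in combinations(range(n), 2)}
    rounds = []
    while (m := find_perfect_matching(set(range(n)), edges)) is not None:
        rounds.append(m)
        edges -= {frozenset(e) for e in m}
    return rounds

rounds = greedy_tournament(6)
assert len(rounds) >= 3  # at least n/2 rounds are always possible (Section 3)
```

The guarantee of at least $n/2$ rounds holds no matter which perfect matching is chosen in each iteration; the deterministic choice above is only for reproducibility.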
\subsection*{Greedy Social Golfer Problem}
In the greedy social golfer problem we consider tournaments with $H=K_k$, for $k\geq 2$, where $K_k$ is the complete graph with $k$ vertices and all $\frac{k(k-1)}{2}$ edges. The greedy social golfer problem asks for the minimum number of rounds of a tournament computed by Algorithm \ref{algo:greedy}, as a function of $n$ and $k$. The solution of the greedy social golfer problem is a guarantee on the number of $K_k$-factors that can be iteratively deleted from the complete graph without any preplanning.
For sports tournaments this corresponds to $n$ players being assigned to rounds with matches of size $k$ such that each player is in exactly one match per round and each pair of players meets at most once in the tournament.
\subsection*{Greedy Oberwolfach Problem}
In the greedy Oberwolfach problem we consider tournaments with $H=C_k$, for $k\geq 3$, where $C_k$ is the cycle graph with $k$ vertices and $k$ edges. The greedy Oberwolfach problem asks for the minimum number of rounds calculated by Algorithm~\ref{algo:greedy}, given $n$ and $k$. This corresponds to a guarantee on the number of $C_k$-factors that can always be iteratively deleted from the complete graph without any preplanning.\\
Observe that for $k= 3$, both problems are equivalent.
To avoid trivial cases, we assume throughout the paper that $n$ is divisible by $k$. This is a necessary condition for the existence of a \emph{single} round. Usually, in real-life sports tournaments, additional dummy players are added to the tournament if $n$ is not divisible by $k$. The influence of dummy players on the tournament planning strongly depends on the sport. There are sports, like e.g.,\ golf or karting where matches can still be played with less than $k$ players, or others where the match needs to be cancelled if one player is missing, for example beach volleyball or tennis doubles. Thus, the definition of a best possible round if $n$ is not divisible by $k$ depends on the application. We exclude the analysis of this situation from this work to ensure a broad applicability of our results and focus on the case $n \equiv 0 \mod k$.
\subsection{Related Literature}
For matches with $k=2$ players, \cite{rosa1982premature} studied the question of whether a given tournament can be extended to a round-robin tournament. This question was later solved by \cite{Csaba2016} for sufficiently large graphs. They showed that even if we apply the greedy algorithm for the first $n/2-1$ rounds, the tournament can be extended to a complete tournament by choosing all subsequent rounds carefully.
\cite{cousins1975maximal} asked the question of how many rounds can be guaranteed to be played in a Swiss-system tournament for the special case $k=2$. They showed that $\frac{n}{2}$ rounds can be guaranteed. Our result of $\left\lfloor\frac{n}{k(k-1)}\right\rfloor$ rounds for the social golfer problem is a natural generalization of this result. \cite{rees1991spectrum} investigated in more detail after how many rounds a Swiss-system tournament can get stuck.
For a match size of $k\geq 2$ players, the original \emph{social golfer problem} with $n\geq 2$ players asks whether a complete tournament with $H=K_k$ exists. For $H=K_2$, such a complete tournament coincides with a round-robin tournament. Round-robin tournaments are known to exist for every even number of players. Algorithms to calculate such schedules are known for more than a century due to \citet{schurig1886}. For a more recent survey on round-robin tournaments, we refer to \cite{rasmussen2008round}.
For $H=K_k$ and $k\geq 2$, complete tournaments are also known as resolvable balanced incomplete block designs (resolvable-BIBDs). To be precise, a \emph{resolvable-BIBD} with parameters $(n,k,1)$ is a collection of subsets (blocks) of a finite set $V$ with $|V|=n$ elements with the following properties:
\begin{enumerate}
\item Every pair of distinct elements $u,v$ from $V$ is contained in exactly one block.
\item Every block contains exactly $k$ elements.
\item The blocks can be partitioned into rounds $R_1, R_2, \ldots , R_r$ such that each element of $V$ is contained in exactly one block of each round.
\end{enumerate}
Notice that a round in a resolvable-BIBD corresponds to an $H$-factor in the social golfer problem.
Similar to the original social golfer problem, a resolvable-BIBD consists of $(n-1)/(k-1)$ rounds. For the existence of a resolvable-BIBD the conditions $n \equiv 0 \mod{k}$ and $n-1 \equiv 0 \mod{k-1}$ are clearly necessary. For $k=3$, \citet{ray1971solution} proved that these two conditions are also sufficient. Later, \citet{hanani1972resolvable} proved the same result for $k=4$. In general, these two conditions are not sufficient (one of the smallest exceptions being $n=45$ and $k=5$), but \citet{ray1973existence} showed that they are asymptotically sufficient, i.e., for every $k$ there exists a constant $c(k)$ such that the two conditions are sufficient for every $n$ larger than $c(k)$. These results immediately carry over to the existence of a \emph{complete} tournament with $n$ players and $H=K_k$.
Closely related to our problem is the question of the existence of graph factorizations. An $H$-factorization of a graph $G$ is a collection of $H$-factors that exactly covers the whole graph $G$. For an overview of graph-theoretic results, we refer to \cite{yuster2007combinatorial}. \cite{condon2019bandwidth} studied the problem of maximizing the number of $H$-factors when choosing rounds carefully. For our setting, their results imply that in a sufficiently large graph one can always schedule rounds such that the number of edges remaining in the feasibility graph is an arbitrarily small fraction of all edges. Notice that this result assumes that we are able to preplan the whole tournament. In contrast, we plan rounds of a tournament in an online fashion depending on the results of previous rounds.
In greedy tournament scheduling, Algorithm~\ref{algo:greedy} greedily adds one round after another to the tournament, and thus \emph{extends} a given tournament step by step. The study of the existence of another feasible round in a given tournament with $H=K_k$ is related to the existence of an equitable graph-coloring. Given some graph $G=(V,E)$, an $\ell$-coloring
is a function $f: V \rightarrow \{1, \ldots, \ell \}$, such that $f(u) \neq f(v)$ for all edges $(u,v) \in E$. An \emph{equitable $\ell$-coloring} is an $\ell$-coloring, where the number of vertices in any two color classes differs by at most one, i.e., $|\{ v | f(v)=i \}| \in \{ \left\lfloor \frac{n}{\ell} \right\rfloor , \left\lceil \frac{n}{\ell} \right\rceil\}$ for every color $i \in \{1, \ldots , \ell \}$.
To relate equitable colorings of graphs to the study of the extendability of tournaments, we consider the complement graph $\bar{G}$ of the feasibility graph $G=(V,E)$, as defined by
$\bar{G}=K_n \backslash E$. Notice that a color class in an equitable coloring of the vertices of $\bar{G}$ is equivalent to a clique in $G$. In an equitable coloring of $\bar{G}$ with $\frac{n}{k}$ colors, each color class has the same size, which is $k$. Thus, finding an equitable $\frac{n}{k}$-coloring in $\bar{G}$ is equivalent to finding a $K_k$-factor in $G$ and thus an extension of the tournament. Questions on the existence of an equitable coloring depending on the vertex degrees in a graph have already been considered by \citet{erdos1964problem}, who posed a conjecture on the existence of equitable colorings in low-degree graphs, which was later proven by \citet{hajnal1970proof}. Their proof was simplified by \citet{kierstead2010fast}, who also gave a polynomial time algorithm to find an equitable coloring. In general graphs, deciding the existence of clique-factors with clique size equal to $3$ \citep[][Sec.~3.1.2]{garey1979computers} and at least $3$ \citep{kirkpatrick1978completeness,kirkpatrick1983complexity,hell1984packings} is known to be NP-hard.
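The correspondence can be checked mechanically: a partition of the vertices into classes of size $k$ is an equitable $\frac{n}{k}$-coloring of $\bar{G}$ precisely when every class is a clique of the feasibility graph $G$. A small Python sketch with an illustrative instance ($H=K_2$, so cliques are single edges); this is a didactic check, not an algorithm from the paper:

```python
from itertools import combinations

def is_clique_factor(classes, feasible, n, k):
    """True iff the classes partition {0,..,n-1} into cliques of size k of the
    feasibility graph, i.e. form an equitable n/k-coloring of its complement."""
    if sorted(v for c in classes for v in c) != list(range(n)):
        return False
    return all(
        len(c) == k and all(frozenset(p) in feasible for p in combinations(c, 2))
        for c in classes
    )

# Feasibility graph of K_6 after one round (0,1), (2,3), (4,5) has been played.
n, k = 6, 2
played = {frozenset(m) for m in [(0, 1), (2, 3), (4, 5)]}
feasible = {frozenset(p) for p in combinations(range(n), 2)} - played

assert is_clique_factor([[0, 2], [1, 4], [3, 5]], feasible, n, k)       # valid next round
assert not is_clique_factor([[0, 1], [2, 4], [3, 5]], feasible, n, k)   # (0,1) already played
```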
The maximization variant of the social golfer problem for $n$ players and $H=K_k$ asks for a schedule which lasts as many rounds as possible. It is mainly studied in the constraint programming community using heuristic approaches \citep{dotu2005scheduling, triska2012effective,triska2012improved, liu2019social}. Our results give lower bounds for the maximization variant using a very simple greedy algorithm.
For $n$ players and table sizes $k_1, \ldots, k_{\ell}$ with $n=k_1 + \ldots +k_{\ell}$, the (classical) \emph{Oberwolfach problem} can be stated as follows. Defining $\tilde{H} = \bigcup_{i\leq \ell} C_{k_i}$, the problem asks for the existence of a tournament of $n$ players with $H=\tilde{H}$ which has $(n-1)/2$ rounds. Note that the Oberwolfach problem does not ask to construct such a tournament but only for its existence. While the general problem is still open, several special cases have been solved. Assuming $k=k_1=\ldots=k_{\ell}$, \citet{alspach1989oberwolfach} showed existence for all odd $k$ and all odd $n$ with $n \equiv 0 \mod{k}$. For $k$ even, \citet{alspach1989oberwolfach} and \citet{hoffman1991existence} analyzed a slight modification of the Oberwolfach problem and showed that there is a tournament such that the corresponding feasibility graph $G$ is not empty, but equal to a perfect matching, for all even $n$ with $n \equiv 0 \mod{k}$.
Recently, the Oberwolfach problem was solved for large $n$, see \cite{glock2021resolution}, and for small $n$, see \cite{salassa}.
\citet{liu2003equipartite} studied a variant of the Oberwolfach problem in bipartite graphs and gave conditions under which the existence of a complete tournament is guaranteed.
A different optimization problem inspired by finding feasible seating arrangements subject to some constraints is given by \cite{estanislaomeunier}.
The question of extendability of a given tournament with $H=C_k$ corresponds to the covering of the feasibility graph with cycles of length $k$. Covering graphs with cycles has been studied since \citet{petersen1891theorie}. The problem of finding a set of cycles of arbitrary lengths covering a graph (if one exists) is polynomially solvable \citep{edmonds1970matching}. However, if certain cycle lengths are forbidden, the problem is NP-complete \citep{hell1988restricted}.
\subsection{Example}
Consider the example of a tournament with $n=6$ and $H=K_2$ depicted in Figure \ref{fig:exa}. The coloring of the edges in the graph on the left represents three rounds $H_1, H_2, H_3$. The first round $H_1$ is depicted by the set of red edges. Each edge corresponds to a match. In the second round, all blue edges are played. The third round $H_3$ consists of all green edges. After these three rounds, the feasibility graph $G$ of the tournament is depicted on the right side of the figure. We cannot feasibly schedule a next round as there is no perfect matching in $G$. Equivalently, we can observe that the tournament with $3$ rounds cannot be extended, since there is no equitable $3$-coloring in $\bar{G}$, which is depicted on the left of Figure~\ref{fig:exa}.
\begin{figure}[t]
\begin{minipage}{0.55\textwidth}
\centering
\begin{tikzpicture}
\draw[thick,red] (-2,1) -- (-2,-1);
\draw[thick,green!50!black] (-2,1) -- (0,-2);
\draw[thick,blue] (-2,1) -- (2,-1);
\draw[thick,blue] (0,2) -- (-2,-1);
\draw[thick,red] (0,2) -- (0,-2);
\draw[thick,green!50!black] (0,2) -- (2,-1);
\draw[thick,green!50!black](2,1) -- (-2,-1);
\draw[thick,blue] (2,1) -- (0,-2);
\draw[thick,red] (2,1) -- (2,-1);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-1){};
\node at (0,-2.5) {};
\end{tikzpicture}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\begin{tikzpicture}
\draw[thick] (-2,1) -- (0,2);
\draw[thick] (-2,1) -- (2,1);
\draw[thick] (0,2) -- (2,1);
\draw[thick] (-2,-1) -- (0,-2);
\draw[thick] (-2,-1) -- (2,-1);
\draw[thick] (0,-2) -- (2,-1);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-1){};
\node at (0,-2.5){};
\end{tikzpicture}
\end{minipage}
\caption{Consider a tournament with 6 participants and $H=K_2$. The left figure corresponds to three rounds, where each color denotes the matches of one round. The right figure depicts the feasibility graph after these three rounds.}\label{fig:exa}
\end{figure}
On the other hand, there is a tournament with $n=6$ and $H=K_2$ that consists of $5$ rounds. The corresponding graph is depicted in Figure~\ref{fig:com}. Since this is a complete tournament, the example is a resolvable-BIBD with parameters $(6,2,1)$. The vertices of the graph correspond to the finite set $V$ of the BIBD and the colors in the figure correspond to the rounds in the BIBD. Note that these examples show that there is a complete tournament with $n=6$ and $H=K_2$ in which $5$ rounds are played, while the greedy algorithm can get stuck after $3$ rounds. In the remainder of the paper, we aim for best possible bounds on the number of rounds that can be guaranteed by using the greedy algorithm.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\draw[thick,red] (-2,1) -- (0,2);
\draw[thick] (-2,1) -- (2,1);
\draw[very thick,yellow!90!black] (0,2) -- (2,1);
\draw[thick,red] (-2,-1) -- (0,-2);
\draw[thick] (-2,-1) -- (2,-1);
\draw[very thick,yellow!90!black] (0,-2) -- (2,-1);
\draw[very thick,yellow!90!black] (-2,1) -- (-2,-1);
\draw[thick,green!50!black] (-2,1) -- (0,-2);
\draw[thick,blue] (-2,1) -- (2,-1);
\draw[thick,blue] (0,2) -- (-2,-1);
\draw[thick] (0,2) -- (0,-2);
\draw[thick,green!50!black] (0,2) -- (2,-1);
\draw[thick,green!50!black](2,1) -- (-2,-1);
\draw[thick,blue] (2,1) -- (0,-2);
\draw[thick,red] (2,1) -- (2,-1);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-1){};
\end{tikzpicture}
\caption{A complete tournament with 6 players and 5 rounds, in which each color represents the matches of a round.}\label{fig:com}
\end{figure}
\subsection{Outline}
The paper is structured as follows. We start with the analysis of Swiss-system tournaments to demonstrate our main ideas. To be more precise, Section \ref{sec:war} considers the setting of greedy tournament scheduling with $H=K_2$. Section \ref{sec:gsgp} then generalizes the main results for the greedy social golfer problem. Lastly, in Section \ref{sec:gop}, we obtain lower and upper bounds on the number of rounds for the greedy Oberwolfach problem.
\section{Warmup: Perfect Matchings}\label{sec:war}
Most sports tournaments consist of matches between two competing players. We therefore first consider the special case
of a tournament with $H=K_2$.
In this setting, the greedy social golfer problem boils down to iteratively deleting perfect matchings from the complete graph. Recall that Propositions \ref{prop:k=2} and \ref{prop:k=2l}, and Corollary \ref{corr:ndivbyfour} were already shown by \cite{cousins1975maximal}. For completeness, we have added the proofs.
First, we use Dirac's theorem to show that we can always greedily delete at least $\frac{n}{2}$ perfect matchings from the complete graph. Recall that we assume $n$ to be even to guarantee the existence of a single perfect matching.
\begin{proposition}\label{prop:k=2}
For each even $n\in\mathbb{N}$ and $H=K_2$, Algorithm~\ref{algo:greedy} outputs a tournament with at least $\frac{n}{2}$ rounds.
\end{proposition}
\begin{proof}
Algorithm~\ref{algo:greedy} starts with an empty tournament and extends it by one round in every iteration.
To show that Algorithm~\ref{algo:greedy} runs for at least $\frac{n}{2}$ iterations, we consider the feasibility graph of the corresponding tournament. Recall that the degree of each vertex in a complete graph with $n$ vertices is $n-1$. In each round, the algorithm deletes a perfect matching and thus the degree of every vertex is decreased by $1$. As long as at most $\frac{n}{2}-1$ rounds have been played, the degree of every vertex is at least $\frac{n}{2}$. By Dirac's theorem \citep{dirac1952some}, a Hamiltonian cycle exists. The existence of a Hamiltonian cycle implies the existence of a perfect matching by taking every second edge of the Hamiltonian cycle. So after at most $\frac{n}{2}-1$ rounds, the tournament can be extended and the algorithm does not terminate.
\end{proof}
Second, we prove that the bound of Proposition \ref{prop:k=2} is tight by showing that there are tournaments that cannot be extended after $\frac{n}{2}$ rounds.
\begin{proposition}\label{prop:k=2l}
There are infinitely many $n \in \mathbb{N}$ for which there exists a tournament that cannot be extended after $\frac{n}{2}$ rounds.
\end{proposition}
\begin{proof}
Choose $n$ such that $\frac{n}{2}$ is odd. We describe the chosen tournament by perfect matchings in the feasibility graph $G$. Given a complete graph with $n$ vertices, we partition the vertices into a set $A$ with $|A|=\frac{n}{2}$ and $V\setminus A$ with $|V\setminus A|=\frac{n}{2}$. We denote the players in $A$ by $1,\ldots, \frac{n}{2}$ and the players in $V\setminus A$ by $\frac{n}{2}+1,\ldots,n$.
In each round $r=1,\ldots,\frac{n}{2}$, player $i+\frac{n}{2}$ is scheduled in a match with player $i+r-1$ (modulo $\frac{n}{2}$) for all $i=1,\ldots,\frac{n}{2}$. After deleting these $\frac{n}{2}$ perfect matchings, the feasibility graph $G$ consists of two disjoint complete graphs of size $\frac{n}{2}$, as every player in $A$ has played against every player in $V\setminus A$. Given that $\frac{n}{2}$ is odd, no perfect matching exists and hence the tournament cannot be extended.
\end{proof}
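The rotation schedule from the proof can be verified computationally for small cases. In the Python sketch below players are 0-indexed (unlike the proof), so round $r$ pairs player $i+\frac{n}{2}$ with player $(i+r) \bmod \frac{n}{2}$:

```python
from itertools import combinations

def stuck_tournament(n):
    """Rotation schedule of n/2 rounds pairing A = {0,..,n/2-1} with the rest."""
    h = n // 2
    return [[(i + h, (i + r) % h) for i in range(h)] for r in range(h)]

def has_perfect_matching(vertices, edges):
    """Backtracking test for a perfect matching; fine for small n."""
    if not vertices:
        return True
    v = min(vertices)
    return any(
        frozenset((v, u)) in edges and has_perfect_matching(vertices - {v, u}, edges)
        for u in vertices - {v}
    )

for n in (6, 10):  # any n with n/2 odd works
    feasible = {frozenset(p) for p in combinations(range(n), 2)}
    for rnd in stuck_tournament(n):
        matches = {frozenset(m) for m in rnd}
        assert matches <= feasible    # no pair meets twice
        feasible -= matches
    # what remains is two disjoint cliques of odd size n/2: no perfect matching
    assert not has_perfect_matching(set(range(n)), feasible)
```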
A natural follow-up question is to characterize those feasibility graphs that can be extended after $\frac{n}{2}$ rounds. Proposition \ref{prop:cha} answers this question and we essentially show that the provided example is the only graph structure that cannot be extended after $\frac{n}{2}$ rounds.
\begin{proposition}\label{prop:cha}
Let $T$ be a tournament of $\frac{n}{2}$ rounds with feasibility graph $G$ and its complement $\bar{G}$. Then $T$ cannot be extended if and only if $\bar{G} = K_{\frac{n}{2},\frac{n}{2}}$ and $\frac{n}{2}$ is odd.
\end{proposition}
Before we prove the proposition we present a result by \citet{chen1994equitable}, which the proof makes use of.
\subsubsection*{Chen-Lih-Wu theorem \citep{chen1994equitable}.}
Let $G$ be a connected graph with maximum degree $\Delta(G) \geq \frac{n}{2}$. If $G$ is different from $K_m$ and $K_{2m+1,2m+1}$ for all $m\geq 1$, then $G$ is equitable $\Delta(G)$-colorable.
\begin{proof}[Proof of Proposition \ref{prop:cha}.]
If the complement of the feasibility graph $\bar{G} = K_{\frac{n}{2},\frac{n}{2}}$ with $\frac{n}{2}$ odd, we are exactly in the situation of the proof of Proposition~\ref{prop:k=2l}. To show equivalence, assume that either $\bar{G} \neq K_{\frac{n}{2},\frac{n}{2}}$ or $\frac{n}{2}$ even.
By using the Chen-Lih-Wu Theorem, we show that in this case $\bar{G}$ is equitable $\frac{n}{2}$-colorable.
After $\frac{n}{2}$ rounds, we have $\Delta(\bar{G})=\frac{n}{2}$. We observe that $\bar{G}= K_n$ if and only if $n=2$ and in this case $\bar{G} = K_{1,1}$, a contradiction. Thus all conditions of the Chen-Lih-Wu theorem are fulfilled, and $\bar{G}$ is equitable $\frac{n}{2}$-colorable. An equitable $\frac{n}{2}$-coloring in $\bar{G}$ corresponds to a perfect matching in $G$ and hence implies that the tournament is extendable.
\end{proof}
\begin{corollary}
\label{corr:ndivbyfour}
For each even $n \in \mathbb{N}$ divisible by four and $H=K_2$, Algorithm~\ref{algo:greedy} outputs a tournament with at least $\frac{n}{2}+1$ rounds.
\end{corollary}
\begin{remark}
\label{rem:extend}
By selecting the perfect matching in round $\frac{n}{2}$ carefully, there always exists a tournament with $\frac{n}{2}+1$ rounds.
\end{remark}
\begin{proof}
After $\frac{n}{2}-1$ rounds of a tournament $T$, the degree of every vertex in $G$ is at least $\frac{n}{2}$. By Dirac's theorem \citep{dirac1952some}, there is a Hamiltonian cycle in $G$. This implies that two edge-disjoint perfect matchings exist: one that takes every even edge of the Hamiltonian cycle and one that takes every odd edge of the Hamiltonian cycle. If we first extend $T$ by taking every even edge of the Hamiltonian cycle and then extend $T$ by taking every odd edge of the Hamiltonian cycle, we have a tournament of $\frac{n}{2}+1$ rounds.
\end{proof}
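Splitting a Hamiltonian cycle into its two edge-disjoint perfect matchings, as used in the proof, is straightforward; a short illustrative sketch:

```python
def two_matchings(cycle):
    """Split a Hamiltonian cycle on an even number of vertices into its two
    edge-disjoint perfect matchings (even-indexed and odd-indexed cycle edges)."""
    n = len(cycle)
    assert n % 2 == 0
    edges = [(cycle[i], cycle[(i + 1) % n]) for i in range(n)]
    return edges[0::2], edges[1::2]

m_even, m_odd = two_matchings([0, 1, 2, 3, 4, 5])
assert m_even == [(0, 1), (2, 3), (4, 5)]
assert m_odd == [(1, 2), (3, 4), (5, 0)]
```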
\section{The Greedy Social Golfer Problem}\label{sec:gsgp}
We generalize the previous results to $k\geq 3$. This means we analyze tournaments with $n$ participants and $H=K_k$. Depending on $n$ and $k$, we provide tight bounds on the number of rounds that can be scheduled greedily, i.e., by using Algorithm~\ref{algo:greedy}.
Remember that we assume that $n$ is divisible by $k$.
\begin{theorem}\label{thm:k>2}
For each $n \in \mathbb{N}$ and $H= K_k$, Algorithm~\ref{algo:greedy} outputs a tournament with at least $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds.
\end{theorem}
Before we continue with the proof, we first state a result from graph theory. In our proof, we will use the Hajnal-Szemerédi theorem and adapt it such that it applies to our setting.
\subsubsection*{Hajnal-Szemerédi Theorem \citep{hajnal1970proof}.}
Let $G$ be a graph with $n\in\mathbb{N}$ vertices and maximum vertex degree $\Delta(G)\leq \ell-1$. Then $G$ is equitable $\ell$-colorable.
\begin{proof}[Proof of Theorem \ref{thm:k>2}.]
We start by proving the lower bound on the number of rounds. Assume for sake of contradiction that there are $n \in \mathbb{N}$ and $k \in \mathbb{N}$ such that the greedy algorithm for $H=K_k$ terminates with a tournament $T$ with $r\leq \lfloor\frac{n}{k(k-1)}\rfloor -1$ rounds. We will use the feasibility graph $G$ corresponding to $T$. Recall that the degree of a vertex in a complete graph with $n$ vertices is $n-1$. For each $K_k$-factor $(H_1, \dots, H_r)$, every vertex loses $k-1$ edges. Thus, every vertex in $G$ has degree
\[n-1 - r(k-1) \geq n-1-\left(\Big\lfloor\frac{n}{k(k-1)}\Big\rfloor -1\right)(k-1) \geq n-1-\frac{n}{k}+k-1\;.\]
We observe that each vertex in the complement graph $\bar{G}$ has degree at most $\frac{n}{k} - k + 1$. Using the Hajnal-Szemerédi theorem with $\ell = \frac{n}{k}$, we obtain an equitable $\frac{n}{k}$-coloring of $\bar{G}$ in which all color classes have size $k$. Since there are no edges between vertices of the same color class in $\bar{G}$, each color class forms a clique in $G$. Thus, there exists a $K_k$-factor in $G$, which contradicts the assumption that Algorithm \ref{algo:greedy} terminated.
This implies that $r>\lfloor\frac{n}{k(k-1)}\rfloor -1$, i.e., the total number of rounds is at least $\lfloor\frac{n}{k(k-1)}\rfloor$.
\end{proof}
\begin{remark}
\citet{kierstead2010fast} showed that finding a clique-factor can be done in polynomial time if the minimum vertex degree is at least $\frac{n(k-1)}{k}$.
\end{remark}
Let OPT be the maximum possible number of rounds of a tournament. We conclude that the greedy algorithm is a constant-factor approximation algorithm for the social golfer problem.
\begin{corollary}
Algorithm~\ref{algo:greedy} outputs at least $\frac{1}{k}\text{OPT}-1$ rounds for the social golfer problem. Thus it is a $\frac{k-1}{2k^2-3k-1}$-approximation algorithm for the social golfer problem.
\end{corollary}
\begin{proof}
The first statement follows directly from Theorem \ref{thm:k>2} and the fact that $\text{OPT} \leq \frac{n-1}{k-1}$.
For proving the second statement, we first consider the case $n\leq 2k(k-1)-k$. Note that the algorithm always outputs at least one round, while OPT is upper bounded by $\frac{n-1}{k-1} \leq \frac{2k (k-1)- k-1}{k-1}=\frac{2k^2-3k-1}{k-1}$. This implies the claimed approximation factor.
For $n\geq 2k(k-1)-k$, observe that the greedy algorithm guarantees to output $\lfloor \frac{n}{k(k-1)} \rfloor$ rounds in polynomial time by Theorem \ref{thm:k>2}. This yields
\begin{align*}
\frac{\left \lfloor \frac{n}{k(k-1)}\right\rfloor}{\frac{n-1}{k-1}} &\geq \frac{\frac{n - \left(k (k-1)-k\right)}{k(k-1)}}{\frac{n-1}{k-1}}\geq \frac{\frac{2k(k-1)-k - k (k-1)+k}{k(k-1)}}{\frac{2k(k-1)-k-1}{k-1}}=\frac{k-1}{2k^2-3k-1},
\end{align*}
where the first inequality follows since $n$ is divisible by $k$ and hence rounding down loses at most $\frac{k(k-1)-k}{k(k-1)}$, and the second inequality follows since the term is increasing in $n$ for $k \geq 3$.
\end{proof}
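As a sanity check, the inequality chain above can be verified numerically. The following sketch (our own illustration, not part of the paper) checks, for small $k$ and all $n$ divisible by $k$, that the guaranteed number of greedy rounds is at least the claimed fraction of the upper bound $\text{OPT}\leq\frac{n-1}{k-1}$; exact rational arithmetic avoids floating-point rounding:

```python
from fractions import Fraction

def guaranteed_rounds(n, k):
    """Lower bound on greedy rounds: floor(n / (k(k-1))), but at least 1,
    since one K_k-factor always exists when k divides n."""
    return max(1, n // (k * (k - 1)))

def claimed_ratio(k):
    """The approximation factor (k-1)/(2k^2 - 3k - 1) from the corollary."""
    return Fraction(k - 1, 2 * k * k - 3 * k - 1)

checks = []
for k in range(3, 8):
    for n in range(k, 60 * k + 1, k):  # n divisible by k
        opt_upper = Fraction(n - 1, k - 1)
        checks.append(guaranteed_rounds(n, k) >= claimed_ratio(k) * opt_upper)
```

Equality occurs, e.g., at $n=2k(k-1)-k$, matching the case split in the proof.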
Our second main result on greedy tournament scheduling with $H=K_k$ shows that the bound of Theorem \ref{thm:k>2} is tight.
\begin{theorem}
There are infinitely many $n \in \mathbb{N}$ for which there exists a tournament that cannot be extended after $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds.
\label{lowerboundexample_k>2}
\end{theorem}
\begin{proof}
We construct a tournament with $n=j(k(k-1))$ participants for some $j$ large enough to be chosen later. We will define necessary properties of $j$ throughout the proof and argue in the end that there are infinitely many possible integral choices for $j$. The tournament we will construct has $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds and we will show that it cannot be extended. Note that $\lfloor\frac{n}{k(k-1)}\rfloor = \frac{n}{k(k-1)}$.
The proof is based on a step-by-step modification of the feasibility graph $G$. We will start with the complete graph $K_n$ and describe how to delete $\frac{n}{k(k-1)}$ $K_k$-factors such that the resulting graph does not contain a $K_k$-factor. This is equivalent to constructing a tournament with $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds that cannot be extended.
Given a complete graph with $n$ vertices, we partition the vertex set $V$ into two sets, a set $A$ with $\ell=\frac{n}{k}+1$ vertices and a set $V \setminus A$ with $n-\ell$ vertices. We will choose all $\frac{n}{k(k-1)}$ $K_k$-factors in such a way that no edge $\{a,b\}$ with $a\in A$ and $b\notin A$ is deleted, i.e., each $K_k$ is either entirely in $A$ or entirely in $V\setminus A$. We will explain below that this is possible. Since a vertex in $A$ has $\frac{n}{k}$ neighbours in $A$ and $k-1$ of them are deleted in every $K_k$-factor, all edges within $A$ are deleted after deleting $\frac{n}{k(k-1)}$ $K_k$-factors.
We first argue that after deleting these $\frac{n}{k(k-1)}$ $K_k$-factors, no further $K_k$-factor exists. Assume that there exists another $K_k$-factor. Since all edges within $A$ are deleted, each vertex in $A$ forms a clique with $k-1$ vertices of $V \setminus A$. However, since $(k-1)\cdot(\frac{n}{k}+1)>\frac{(k-1)n}{k}-1=|V \setminus A|$, there are not enough vertices in $V \setminus A$, a contradiction to the existence of the $K_k$-factor.
It remains to show that there are $\frac{n}{k(k-1)}$ $K_k$-factors that do not contain an edge $\{a,b\}$ with $a\in A$ and $b \notin A$. We start by showing that $\frac{n}{k(k-1)}$ $K_k$-factors can be found within $A$. \citet{ray1973existence} showed that given $k'\geq 2$ there exists a constant $c(k')$ such that if $n'\geq c(k')$ and $n' \equiv k' \mod k'(k'-1)$, then a resolvable-BIBD with parameters $(n',k',1)$ exists.
By choosing $k'=k$ and $n' = \ell$ with $j= \lambda \cdot k +1$ for some $\lambda \in \mathbb{N}$ large enough, we establish $\ell\geq c(k)$, where $c(k)$ is defined by \citet{ray1973existence}, and we get
\[|A| = \ell = \frac{n}{k}+1 = j(k-1)+1 = (\lambda k +1)(k-1) + 1 = k+ \lambda k (k-1)\;.\]
Thus, a resolvable-BIBD with parameters $(\ell,k,1)$ exists, and there is a complete tournament for $\ell$ players with $H=K_k$, i.e., we can find $\frac{n}{k(k-1)}$ $K_k$-factors in $A$.
It remains to show that we can also find $\frac{n}{k(k-1)}$ $K_k$-factors in $V \setminus A$. We define a tournament that we call the \emph{shifting tournament} as follows. We arbitrarily write the names of the players in $V \setminus A$ into a table of size $k\times (n-\ell)/k$. Each column of the table corresponds to a $K_k$, and the table as a whole corresponds to a $K_k$-factor in $V \setminus A$. By rearranging the players we obtain a sequence of tables, each corresponding to a $K_k$-factor: to construct the next table from the preceding one, for each row $i$, all players move $i-1$ steps to the right (modulo $(n-\ell)/k$).
We claim that this procedure results in $\frac{n}{k(k-1)}$ $K_k$-factors that do not share an edge. First, notice that the relative shift per round between two players in rows $i \neq i'$ is at most $k-1$, with equality for rows $1$ and $k$. Moreover, $(n-\ell)/k$ is not divisible by $k-1$: indeed, $n/k$ is divisible by $k-1$ by definition, whereas $\ell/k$ is not, since $\frac{\ell}{k(k-1)}=\frac{1}{k-1}+\lambda$ is not integral. Thus, a player in row $1$ can only meet a player in row $k$ again after at least $2\frac{n-\ell}{k(k-1)}$ rounds.
Since $2\frac{n-\ell}{k(k-1)}\geq\frac{n}{k(k-1)}$ if $n\geq \frac{2k}{k-2}$, the condition is satisfied for $n$ sufficiently large.
Similarly, we have to check that two players in two rows with a relative distance of at most $k-2$ do not meet more than once. Since $\frac{n-\ell}{k(k-2)}\geq\frac{n}{k(k-1)}$ if $n\geq k^2-k$, the condition is also satisfied for $n$ sufficiently large.
Observe that there are infinitely many $n$ and $\ell$ such that $\ell=\frac{n}{k}+1$, $n$ is divisible by $k(k-1)$, and $\ell \equiv k \mod{k(k-1)}$; thus the result follows for all sufficiently large such $n$.
\end{proof}
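The shifting-tournament construction used in the proof can be implemented in a few lines. The sketch below is our own rendering (function name and 0-indexed row/column conventions are ours); for one concrete parameter choice, the test verifies that no pair of players meets twice:

```python
from itertools import combinations

def shifting_rounds(k, cols, num_rounds):
    """Rounds of the shifting tournament: players sit in a k x cols table,
    player (row i, column c) is labelled i*cols + c.  Rows are 0-indexed,
    so row i shifts i steps per round (matching 'i-1 steps' for 1-indexed
    rows in the text).  In round t, the groups are the table columns."""
    rounds = []
    for t in range(num_rounds):
        round_groups = []
        for col in range(cols):
            # the player now in (row i, column col) started in column col - t*i
            group = [i * cols + (col - t * i) % cols for i in range(k)]
            round_groups.append(group)
        rounds.append(round_groups)
    return rounds

# k = 3 with 5 columns: the table width 5 is not divisible by k - 1 = 2,
# and indeed the 5 rounds below turn out to be pairwise edge-disjoint.
rounds = shifting_rounds(3, 5, 5)
pairs_seen = set()
duplicates = 0
for rnd in rounds:
    for group in rnd:
        for pair in combinations(sorted(group), 2):
            if pair in pairs_seen:
                duplicates += 1
            pairs_seen.add(pair)
```

With 5 rounds of 5 groups and 3 pairs per group, all $75$ scheduled pairs are distinct.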
We turn our attention to the problem of characterizing tournaments that are not extendable after $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds. Assuming the Equitable $\Delta$-Coloring Conjecture (E$\Delta$CC) is true, we give an exact characterization of the feasibility graphs of tournaments that cannot be extended after $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds. The existence of an instance not fulfilling these conditions would immediately disprove the E$\Delta$CC.
Furthermore, this characterization allows us to guarantee $\lfloor\frac{n}{k(k-1)}\rfloor+1$ rounds in every tournament when the last two rounds are chosen carefully.
\subsubsection*{Equitable $\Delta$-Coloring Conjecture \citep{chen1994equitable}.}
Let $G$ be a connected graph with maximum degree $\Delta(G) \leq \ell$. Then $G$ is not equitable $\ell$-colorable if and only if one of the following three cases occurs:
\begin{enumerate}
\item[(i)] $G=K_{\ell+1}$.
\item[(ii)] $\ell=2$ and $G$ is an odd cycle.
\item[(iii)] $\ell$ is odd and $G=K_{\ell,\ell}$.
\end{enumerate}
The conjecture was first stated by \citet{chen1994equitable} and is proven for $|V|=k\cdot\ell$ and $k=2,3,4$: see the Chen-Lih-Wu theorem for $k=2$ and \citet{kierstead2015refinement} for $k=3,4$. Both results make use of Brooks' theorem \citep{brooks1941coloring}. For $k>4$, the conjecture is still open.
\begin{proposition}
If E$\Delta$CC{} is true, a tournament with $\lfloor\frac{n}{k(k-1)} \rfloor$ rounds cannot be extended if and only if $K_{\frac{n}{k}+1}$ is a subgraph of the complement graph $\bar{G}$.
\label{prop:charOneRoundMore}
\end{proposition}
Before we start the proof, we state a claim that we will need later.
\begin{claim}
\label{claim:connectedcomponents}
Let $G$ be a graph with $|G|$ vertices and let $m$ be such that $|G| \equiv 0 \mod{m}$. Given an equitable $m$-coloring for every connected component $G_i$ of $G$, there is an equitable $m$-coloring for $G$.
\end{claim}
\begin{proof}
Let $G$ consist of connected components $G_1, \dots, G_c$. In every connected component $G_i$, $i \in \{1, \dots, c\}$, there are $\ell_i \equiv |G_i| \mod{m}$ \emph{large color classes}, i.e., color classes with $\lfloor\frac{|G_i|}{m}\rfloor +1$ vertices and $m-\ell_i$ \emph{small color classes}, i.e., color classes with $\lfloor\frac{|G_i|}{m}\rfloor$ vertices. First note that from $\ell_i \equiv|G_i| \mod{m}$, it follows that $(\sum \ell_i) \equiv (\sum |G_i|) \equiv |G| \equiv 0 \mod{m}$, i.e., $\sum \ell_i$ is divisible by $m$.
In the remainder of the proof, we argue that we can recolor the color classes of the connected components with new colors $\{1,\ldots, m\}$ such that the result is an equitable coloring. The proof is inspired by McNaughton's wrap-around rule \citep{mcnaughton1959scheduling} from scheduling theory. Pick some connected component $G_i$ and assign its $\ell_i$ large color classes to the new colors $\{1, \dots, \ell_i\}$. Choose some next connected component $G_j$ with $j\neq i$ and assign its $\ell_j$ large color classes to the new colors $\{(\ell_{i} + 1) \mod m, \dots, (\ell_{i} + \ell_j) \mod{m} \}$. Proceed analogously with the remaining connected components. Note that $\ell_i<m$ for all $i \in \{1, \dots, c\}$, so we assign at most one large color class of each component to every new color. Finally, for each connected component, we add one of its small color classes to every new color that did not receive a large color class of this component in the described procedure.
Each new color class contains exactly $\frac{\sum \ell_i}{m}$ large color classes and $c - \frac{\sum \ell_i}{m}$ small color classes and has thus the same number of vertices.
This gives us an $m$-equitable coloring of $G$.
\end{proof}
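The wrap-around recoloring in the claim can be phrased as a short merging routine. The sketch below is our own rendering of the argument (names are hypothetical); it merges equitable $m$-colorings of the components into one equitable $m$-coloring:

```python
def merge_equitable_colorings(component_colorings, m):
    """Merge equitable m-colorings of the connected components of a graph
    whose total number of vertices is divisible by m into one equitable
    m-coloring, via the wrap-around assignment from the claim above.
    Each component coloring is a list of m (possibly empty) color classes."""
    merged = [[] for _ in range(m)]
    start = 0  # next new color that should receive a large class
    for classes in component_colorings:
        total = sum(len(c) for c in classes)
        ell = total % m  # number of large color classes in this component
        by_size = sorted(classes, key=len, reverse=True)
        large, small = by_size[:ell], by_size[ell:]
        used = [(start + t) % m for t in range(ell)]
        for color, cls in zip(used, large):
            merged[color].extend(cls)
        # every new color not hit by a large class gets a small class
        free = [c for c in range(m) if c not in used]
        for color, cls in zip(free, small):
            merged[color].extend(cls)
        start = (start + ell) % m
    return merged

# Two components: 4 vertices (one large class) and 5 vertices (two large
# classes); together 9 = 3 * 3 vertices, so classes of size 3 must result.
merged = merge_equitable_colorings(
    [[{0, 1}, {2}, {3}], [{4, 5}, {6, 7}, {8}]], m=3)
```

Each new color receives exactly $\frac{\sum \ell_i}{m}$ large classes, as in the counting step of the proof.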
\begin{proof}[Proof of Proposition~\ref{prop:charOneRoundMore}.]
First, assume that $K_{\frac{n}{k}+1}$ is a subgraph of $\bar{G}$, the complement of the feasibility graph. We will show that the tournament is not extendable. Any two vertices of this $K_{\frac{n}{k}+1}$ are non-adjacent in $G$, so each clique of an additional round can contain at most one of its $\frac{n}{k}+1$ vertices. Since a round consists of only $\frac{n}{k}$ cliques, the tournament is not extendable.
Second, assume that the tournament cannot be extended after $\lfloor \frac{n}{k(k-1)} \rfloor$ rounds. We will show that, given E$\Delta$CC{}, $K_{\frac{n}{k}+1}$ is a subgraph of $\bar{G}$. If there were an equitable $\frac{n}{k}$-coloring for every connected component, then by \Cref{claim:connectedcomponents} there would be an equitable $\frac{n}{k}$-coloring of $\bar{G}$ and thus also a next round of the tournament, contradicting the assumption. Thus, there is a connected component $\bar{G}_i$ that is not equitable $\frac{n}{k}$-colorable.
After $\lfloor \frac{n}{k(k-1)} \rfloor$ rounds, the degree of all vertices $v$ in $\bar{G}_i$ is $\Delta(\bar{G})=(k-1)\lfloor\frac{n}{k(k-1)}\rfloor \leq \frac{n}{k}$. By E$\Delta$CC for $\ell =\frac{n}{k}$, one of the following three cases occur: (i) $\bar{G}_i=K_{\frac{n}{k} + 1}$, or (ii) $\frac{n}{k}=2$ and $\bar{G}_i$ is an odd cycle, or (iii) $\frac{n}{k}$ is odd and $\bar{G}_i=K_{\frac{n}{k}, \frac{n}{k}}$. We will show that (ii) and (iii) cannot occur, thus $\bar{G}_i=K_{\frac{n}{k} + 1}$, which will finish the proof.
Assume that (ii) occurs, i.e., $\frac{n}{k}=2$ and $\bar{G}_i$ is an odd cycle. Since we assume $k\geq 3$ in this section, an odd cycle can only be formed from a union of complete graphs $K_k$ if there is only one round with $k=3$. Thus, we have that $n=6$. In this case, (ii) reduces to (i) because $\bar{G}_i = K_{3} = K_{\frac{n}{k}+1}$.
Next, assume that (iii) occurs, i.e., $\frac{n}{k}$ is odd and $\bar{G}_i=K_{\frac{n}{k}, \frac{n}{k}}$. Given that $k \geq 3$, we will derive a contradiction. Since $k\geq 3$, every clique of size $k$ contains an odd cycle. This implies $\bar{G}_i$ contains an odd cycle, contradicting that $\bar{G}_i$ is bipartite.
\end{proof}
Note that any tournament with $H=K_k$ and $\lfloor \frac{n}{k(k-1)}\rfloor$ rounds that does not satisfy the condition in \Cref{prop:charOneRoundMore} would disprove the E$\Delta$CC{}.
\begin{proposition}
Let $n>k(k-1)$. If E$\Delta$CC{} is true, then by choosing round $\lfloor \frac{n}{k(k-1)}\rfloor$ carefully, there always exists a tournament with $\lfloor \frac{n}{k(k-1)}\rfloor + 1$ rounds.
\end{proposition}
\begin{proof}
A tournament with $\lfloor \frac{n}{k(k-1)}\rfloor$ rounds is either extendable or by \Cref{prop:charOneRoundMore}, at least one connected component of the complement of the feasibility graph is equal to $K_{\frac{n}{k}+1}$. In the former case, we are done. So assume the latter case. Denote the connected components that are equal to $K_{\frac{n}{k}+1}$
by $\bar{G}_1, \ldots , \bar{G}_c$. First, we shorten the tournament by eliminating the last round and then extend it by two other rounds.
First of all, notice that in order to end up with $\bar{G}_i=K_{\frac{n}{k}+1}$ after $\lfloor \frac{n}{k(k-1)}\rfloor$ rounds, all matches of the vertices in $\bar{G}_i$ need to be scheduled entirely inside $\bar{G}_i$. The reason is that $\bar{G}_i$ has $\frac{n^2}{2k^2}+ \frac{n}{2k}$ edges, which is the maximum number of edges that can arise from $\lfloor \frac{n}{k(k-1)}\rfloor$ rounds with $\frac{n}{k}+1$ players.
Clearly, the last round of the original tournament corresponds to a $K_k$-factor in the feasibility graph of the shortened tournament. By the assumed structure of the feasibility graph, all cliques $K_k$ are either completely within $\bar{G}_i$, $i \in \{1,\dots,c\}$ or completely within $V \setminus \bigcup_{i \in \{ 1, \ldots , c\}} \bar{G}_i$. Thus, for each $i \in \{1,\dots,c\}$, all edges between $\bar{G}_i$ and $V \setminus \bar{G}_i$ are not present in the complement of the feasibility graph.
If $c=1$, select a vertex $v_1 \in \bar{G}_1$ and $v_2 \in V \setminus \bar{G}_1$. Exchange these vertices to get a $K_k$-factor with which the shortened tournament is extended. More precisely, $v_1$ is paired with the former clique of $v_2$ and vice versa, while all remaining cliques stay the same.
Since $k<\frac{n}{k}+1$ by assumption, this ensures that there is no set of $\frac{n}{k}+1$ vertices for which we have only scheduled matches within this group. Thus, after extending the tournament, no connected component in the complement of the feasibility graph corresponds to $K_{\frac{n}{k}+1}$.
By \Cref{prop:charOneRoundMore}, the tournament can be extended to have $\lfloor \frac{n}{k(k-1)}\rfloor + 1$ rounds.
If $c>1$, we select a vertex $v_i$ from each $\bar{G}_i$ for $i \in \{ 1,\ldots , c\}$.
We exchange the vertices in a cycle to form new cliques, i.e., $v_i$ is now paired with the vertices in the old clique of $v_{i+1}$ for all $i \in \{1, \dots, c\}$, where $v_{c+1}=v_1$. By adding this new $K_k$-factor, we again ensure that there is no set of $\frac{n}{k}+1$ vertices for which we have only scheduled matches within this group. By applying \Cref{prop:charOneRoundMore} we can extend the tournament for another round, which finishes the proof.
\end{proof}
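The cyclic exchange of representatives used in the case $c>1$ can be sketched as follows (our own illustration; the list representation and function name are hypothetical):

```python
def rotate_representatives(cliques, reps):
    """Cyclic exchange from the proof above: representative reps[i] of
    cliques[i] joins the old clique of reps[i+1] (indices mod c), replacing
    reps[i+1] there, so every representative moves to the next clique while
    all clique sizes stay k."""
    c = len(cliques)
    return [sorted((set(cliques[(i + 1) % c]) - {reps[(i + 1) % c]})
                   | {reps[i]})
            for i in range(c)]

# three cliques of size 3 with chosen representatives 0, 3 and 6
new_cliques = rotate_representatives([[0, 1, 2], [3, 4, 5], [6, 7, 8]],
                                     [0, 3, 6])
```

After the rotation, no clique coincides with an old one, which is what breaks the $K_{\frac{n}{k}+1}$ components in the complement.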
\section{The Greedy Oberwolfach Problem} \label{sec:gop}
In this section, we consider tournaments with $H=C_k$ for $k \geq 3$. Depending on the number of participants $n$ and the cycle length $k$, we derive bounds on the number of rounds that can be scheduled greedily in such a tournament.
Before we continue with the theorem, we first state two graph-theoretic results and a conjecture.
\subsubsection*{Aigner-Brandt Theorem \citep{aigner1993embedding}.}
Let $G$ be a graph with $n\in\mathbb{N}$ vertices and minimum degree $\delta(G) \geq \frac{2n-1}{3}$. Then $G$ contains any graph $H$ with at most $n$ vertices and maximum degree $\Delta(H)\leq 2$ as a subgraph.
\subsubsection*{Alon-Yuster Theorem \citep{alon1996h}.}
For every $\epsilon>0$ and for every $k\in\mathbb{N}$, there exists an $n_0=n_0(\epsilon,k)$ such that for every graph $H$ with $k$ vertices and for every $n>n_0$, any graph $G$ with $nk$ vertices and minimum degree $\delta(G)\geq \left(\frac{\chi(H)-1}{\chi(H)}+\epsilon\right)nk$ has an $H$-factor that can be computed in polynomial time. Here, $\chi(H)$ denotes the chromatic number of $H$, i.e., the smallest possible number of colors for a vertex coloring of $H$.
\subsubsection*{El-Zahar's Conjecture \citep{el1984circuits}.}
Let $G$ be a graph with $n=k_1+\ldots+k_{\ell}$. If $\delta(G)\geq \lceil \frac{1}{2} k_1 \rceil + \ldots + \lceil \frac{1}{2} k_\ell \rceil$, then G contains $\ell$ vertex disjoint cycles of lengths $k_1, \ldots, k_\ell$.
El-Zahar's Conjecture is proven to be true for $k_1=\ldots=k_{\ell}=3$ \citep{corradi1963maximal}, $k_1=\ldots=k_{\ell}=4$ \citep{wang2010proof}, and $k_1=\ldots=k_{\ell}=5$ \citep{wang2012disjoint}.
\begin{theorem}\label{thm:obe1}
Let $H=C_k$. Algorithm~\ref{algo:greedy} outputs a tournament with at least
\begin{enumerate}
\item $\lfloor\frac{n+4}{6}\rfloor$ rounds for all $n \in \mathbb{N}$\;,
\item $\lfloor\frac{n+2}{4}-\epsilon\cdot n \rfloor$ rounds for $n$ large enough, $k$ even and for fixed $\epsilon>0$\;,
\end{enumerate}
If El-Zahar's conjecture is true, the number of rounds improves to $\lfloor\frac{n+2}{4}\rfloor$ for $k$ even and $\lfloor\frac{n+2}{4}-\frac{n}{4k}\rfloor$ for $k$ odd and all $n \in \mathbb{N}$.
\end{theorem}
\begin{proof}
\textbf{Statement 1.} Recall that Algorithm \ref{algo:greedy} starts with the empty tournament and the corresponding feasibility graph is the complete graph, where the degree of every vertex is $n-1$. In each iteration of the algorithm, a $C_k$-factor is deleted from the feasibility graph and thus every vertex loses $2$ edges.
We observe that as long as the constructed tournament has at most $\lfloor\frac{n-2}{6}\rfloor$ rounds, the degree of every vertex in the feasibility graph is at least $n-1-2\lfloor\frac{n-2}{6}\rfloor \geq n-1-\frac{n-2}{3} = \frac{2n-1}{3}$.
Since a $C_k$-factor with $n$ vertices has maximum degree $2$, by the Aigner-Brandt theorem $G$ contains a $C_k$-factor. It follows that the algorithm runs for another iteration. In total, the number of rounds of the tournament is at least $\lfloor\frac{n-2}{6}\rfloor+1 = \lfloor\frac{n+4}{6}\rfloor$.
\textbf{Statement 2.} Assume $k$ is even. We have that the chromatic number $\chi(C_k)=2$. As long as Algorithm~\ref{algo:greedy} runs for at most $\lfloor\frac{n-2}{4}-\epsilon\cdot n\rfloor$ iterations, the degree of every vertex in the feasibility graph is at least $n-1-2 \cdot \lfloor\frac{n-2}{4}-\epsilon\cdot n\rfloor\geq n-1-\frac{n-2}{2}+2\epsilon\cdot n = \frac{n}{2}+2\epsilon\cdot n$. Hence by the Alon-Yuster theorem with $k'=k$, $n'=\frac{n}{k}$ and $\epsilon'=2\epsilon$, a $C_k$-factor exists for $n$ large enough and thus another iteration is possible. This implies that Algorithm~\ref{algo:greedy} is guaranteed to construct a tournament with $\lfloor\frac{n-2}{4}-\epsilon\cdot n\rfloor + 1$ rounds.
\textbf{Statement El-Zahar, $k$ even.} As long as Algorithm~\ref{algo:greedy} runs for at most $\lfloor\frac{n-2}{4}\rfloor$ iterations, the degree of every vertex in the feasibility graph is at least $n-1-2 \cdot \lfloor\frac{n-2}{4}\rfloor\geq n-1-\frac{n-2}{2} = \frac{n}{2}$. Hence from El-Zahar's conjecture with $k_1 = k_2 = \dots = k_\ell = k$ and $\ell=\frac{n}{k}$, we can deduce that a $C_k$-factor exists as $\frac{k}{2}\cdot \frac{n}{k}=\frac{n}{2}$, and thus another iteration is possible. This implies that Algorithm~\ref{algo:greedy} is guaranteed to construct a tournament with $\lfloor\frac{n-2}{4}\rfloor + 1$ rounds.
\textbf{Statement El-Zahar, $k$ odd.} As long as Algorithm~\ref{algo:greedy} runs for at most $\lfloor\frac{n-2}{4}-\frac{n}{4k}\rfloor$ iterations, the degree of every vertex in the feasibility graph is at least $n-1-\frac{n-2}{2}+\frac{n}{2k} = \frac{n}{2}+ \frac{n}{2k}$. Hence from El-Zahar's conjecture with $k_1 = k_2 = \dots = k_\ell = k$ and $\ell=\frac{n}{k}$, we can deduce that a $C_k$-factor exists as $\frac{k+1}{2}\cdot \frac{n}{k}=\frac{n}{2}+ \frac{n}{2k}$, and thus the constructed tournament can be extended by one more round. This implies that the algorithm outputs a tournament with at least $\lfloor\frac{n-2}{4}-\frac{n}{4k}\rfloor +1$ rounds.
\end{proof}
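The degree bookkeeping behind these statements is easy to double-check mechanically. The following sketch (an illustration we added; exact fractions avoid floating-point issues) verifies the vertex-degree thresholds used in the proof for a range of $n$ and odd $k$:

```python
import math
from fractions import Fraction as F

checks = []
for n in range(3, 500):
    # Statement 1: after at most floor((n-2)/6) rounds, every vertex still
    # has degree >= (2n-1)/3, the Aigner-Brandt threshold.
    checks.append(n - 1 - 2 * ((n - 2) // 6) >= F(2 * n - 1, 3))
    # El-Zahar, k even: after at most floor((n-2)/4) rounds, degree >= n/2.
    checks.append(n - 1 - 2 * ((n - 2) // 4) >= F(n, 2))
    # El-Zahar, k odd: after at most floor((n-2)/4 - n/(4k)) rounds,
    # degree >= n/2 + n/(2k) = (k+1)/2 * n/k.
    for k in range(3, 12, 2):
        r = math.floor(F(n - 2, 4) - F(n, 4 * k))
        checks.append(n - 1 - 2 * r >= F(n, 2) + F(n, 2 * k))
```

Each check mirrors one "as long as the algorithm runs for at most ... iterations" step above.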
\begin{proposition}\label{prop:obe2}
Let $H=C_k$ for fixed $k$. Algorithm~\ref{algo:greedy} can be implemented such that it runs in polynomial time for at least
\begin{enumerate}
\item $\lfloor\frac{n+2}{4}-\epsilon\cdot n \rfloor$ rounds, for $k$ even, and fixed $\epsilon>0$, or stops if, in the case of small $n$, no additional round is possible\;,
\item $\lfloor\frac{n+3}{6}-\epsilon \cdot n\rfloor$ rounds, for $k$ odd, and fixed $\epsilon>0$\;.
\end{enumerate}
\end{proposition}
\begin{proof}
\textbf{Case 1.} Assume $k$ is even. By the Alon-Yuster theorem, analogously to Statement 2 of Theorem~\ref{thm:obe1}, the first $\lfloor\frac{n+2}{4}-\epsilon\cdot n \rfloor$ rounds exist and can be computed in polynomial time, given that $n>n_0$ for some $n_0$ that depends on $\epsilon$ and $k$. Since $\epsilon$ and $k$ are assumed to be constant, $n_0$ is constant. By enumerating all possibilities in the case $n\leq n_0$, we can bound the running time for all $n \in \mathbb{N}$ by a polynomial function in $n$. Note that the Alon-Yuster theorem only implies existence of $\lfloor\frac{n+2}{4}-\epsilon \cdot n\rfloor$ rounds if $n>n_0$, so it might be that the algorithm stops earlier, but in polynomial time, for $n\leq n_0$.
\textbf{Case 2.} Assume $k$ is odd. First note that the existence of the first $\lfloor\frac{n-3}{6}-\epsilon\cdot n\rfloor \leq \lfloor \frac{n+4}{6} \rfloor$ rounds follows from Theorem~\ref{thm:obe1}. Observe that for odd cycles $C_k$ the chromatic number is $\chi(C_k)=3$. As long as Algorithm~\ref{algo:greedy} runs for at most $\lfloor\frac{n-3}{6}-\epsilon\cdot n\rfloor$ iterations, the degree of every vertex in the feasibility graph is at least $n-1-2 \cdot \lfloor\frac{n-3}{6}-\epsilon\cdot n\rfloor\geq n-1-\frac{n-3}{3}+2\epsilon\cdot n = \frac{2n}{3}+2\epsilon\cdot n$. Hence by the Alon-Yuster theorem with $\epsilon'=2\epsilon$, there is an $n_0$ dependent on $k$ and $\epsilon$ such that a $C_k$-factor can be computed in polynomial time for all $n>n_0$. Since $\epsilon$ and $k$ are assumed to be constant, $n_0$ is constant. By enumerating all possibilities for $n\leq n_0$ we can bound the running time of the algorithm by a polynomial function in $n$ for all $n \in \mathbb{N}$. Hence, the algorithm outputs at least $\lfloor\frac{n-3}{6}-\epsilon\cdot n\rfloor +1 = \lfloor\frac{n+3}{6}-\epsilon\cdot n\rfloor$ rounds in polynomial time.
\end{proof}
\begin{corollary}
For any fixed $\epsilon>0$, Algorithm~\ref{algo:greedy} is a $\frac{1}{3+\epsilon} $-approximation algorithm for the Oberwolfach problem.
\end{corollary}
\begin{proof}
Fix $\epsilon > 0$.
\textbf{Case 1.} If $n \geq \frac{12}{\epsilon} +6$, we choose $\epsilon '= \frac{1}{(3+ \epsilon) \frac{12}{\epsilon}}$ and use Proposition~\ref{prop:obe2} with $\epsilon'$. We observe
\begin{align*}&\left\lfloor\frac{n+3}{6}-\frac{1}{(3+ \epsilon) \frac{12}{\epsilon}}\cdot n\right\rfloor \cdot (3 + \epsilon) \geq \left(\frac{n-3}{6}-\frac{1}{(3+ \epsilon) \frac{12}{\epsilon}}\cdot n\right) \cdot (3 + \epsilon) \\
= &\left(\frac{(n-3)(3 + \epsilon)}{6}-\frac{\epsilon}{12}\cdot n\right) = \left(\frac{n-3}{2} + \frac{(n-3) \epsilon}{6} -\frac{\epsilon}{12}\cdot n\right)\\ = &\left(\frac{n-3}{2} + \frac{(2 n \epsilon -6 \epsilon)}{12} -\frac{\epsilon n}{12}\right) = \frac{n-1}{2} + \frac{\epsilon( n -6)-12}{12} \geq \frac{n-1}{2} \geq \text{OPT}\;.
\end{align*}
\textbf{Case 2.} If $n < \frac{12}{\epsilon} +6$, then $n$ is a constant and we can find a cycle-factor in each round by enumeration. By the Aigner-Brandt theorem, the algorithm outputs $\lfloor \frac{n+4}{6}\rfloor \geq \frac{n-1}{6}$ rounds. Together with $\text{OPT}\leq \frac{n-1}{2}$, this implies an approximation factor of $\frac{1}{3}> \frac{1}{3 + \epsilon}$.
\end{proof}
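The chain of inequalities in Case 1 can also be checked numerically for a concrete $\epsilon$. In the sketch below (our own check, with $\epsilon=1/2$ as an arbitrary fixed choice), exact rational arithmetic confirms that the guaranteed number of rounds is at least $\text{OPT}/(3+\epsilon)$ with $\text{OPT}\leq\frac{n-1}{2}$:

```python
import math
from fractions import Fraction as F

eps = F(1, 2)                              # any fixed eps > 0
eps_prime = 1 / ((3 + eps) * (12 / eps))   # eps' chosen as in the proof
threshold = int(12 / eps) + 6              # Case 1 applies for n >= 12/eps + 6

violations = []
for n in range(threshold, threshold + 300):
    # rounds guaranteed by Proposition (Case 2, k odd) with eps'
    rounds = math.floor(F(n + 3, 6) - eps_prime * n)
    # the corollary claims rounds >= ((n-1)/2) / (3 + eps)
    if rounds * (3 + eps) < F(n - 1, 2):
        violations.append(n)
```

The test confirms that no $n$ in the checked range violates the bound.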
In the rest of the section, we show that the bound corresponding to El-Zahar's conjecture presented in Theorem \ref{thm:obe1} is essentially tight. Through a case distinction, we provide matching examples showing that the bounds obtained from El-Zahar's conjecture are tight in two of the three cases; for $k$ even but not divisible by $4$, an additive gap of one round remains. Note that this implies that any improvement of the lower bound via an example by just one round (or by two for $k$ even but not divisible by $4$) would disprove El-Zahar's conjecture.
\begin{theorem}
There are infinitely many $n \in \mathbb{N}$ for which there exists a tournament with $H=C_k$ that is not extendable after
\begin{enumerate}
\item $\lfloor\frac{n+2}{4}-\frac{n}{4k}\rfloor$ rounds if $k$ is odd\;,
\item $\lfloor\frac{n+2}{4}\rfloor$ rounds if $k \equiv 0 \mod{4}$\;,
\item $\lfloor\frac{n+2}{4}\rfloor+ 1$ rounds if $k \equiv 2 \mod{4}$\;.
\end{enumerate}
\end{theorem}
\begin{proof}
\textbf{Case 1.} Assume $k$ is odd. Let $n =2k\sum_{j=0}^i k^j$ for some integer $i\in\mathbb{N}$.
We construct a tournament with $n$ participants and $H=C_k$. To do so, we start with the empty tournament and partition the vertex set of the feasibility graph into two disjoint sets $A$ and $B$ with $A \cup B = V$, $|A| = \frac{n}{2}-\frac{n}{2k}+1= (k-1)\sum_{j=0}^i k^j+1=k^{i+1}$, and $|B|= \frac{n}{2}+\frac{n}{2k}-1$. We observe that $|A|\leq|B|$, since $\frac{n}{2k}\geq 1$.
We construct a tournament such that in the feasibility graph all edges between vertices in $A$ are deleted.
To do so, we use a result of \citet{alspach1989oberwolfach}, who showed that there is a solution for the Oberwolfach problem for all odd $k$ with $n \equiv 0 \mod{k}$ and $n$ odd.
Observe that $|A| \equiv 0 \mod{k}$, thus $|B| \equiv 0 \mod{k}$. Furthermore, $|A|-1$ is even, and since $n$ is even, the same holds for $|B|-1$. By using the equivalence of the Oberwolfach problem to complete tournaments, there exists a complete tournament within $A$ and within $B$.
We combine these complete tournaments to a tournament for the whole graph with $\min\{|A|-1, |B|-1\}/2 = \frac{|A|-1}{2} = \frac{n}{4}-\frac{n}{4k}$ rounds. Since $|A|$ is odd, the number of rounds is integral.
Considering the feasibility graph of this tournament, there are no edges between vertices in $A$. Thus, every cycle of length $k$ can cover at most $\frac{k-1}{2}$ vertices of $A$, and the $\frac{n}{k}$ cycles of a $C_k$-factor can cover at most $\frac{n}{k}\cdot \frac{k-1}{2}=\frac{n}{2}-\frac{n}{2k}<|A|$ vertices of $A$. Hence, there is no $C_k$-factor in the feasibility graph, i.e., we constructed a tournament with $\frac{n}{4}-\frac{n}{4k}=\lfloor\frac{n+2}{4}-\frac{n}{4k}\rfloor$ rounds that cannot be extended.
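For the smallest instance of this construction ($k=3$, $i=0$, hence $n=6$ and $|A|=3$), the obstruction can be confirmed by brute force. The sketch below (our own check; since $C_3=K_3$, a $C_3$-factor is exactly a partition into triangles) removes all edges inside $A$ from $K_6$ and verifies that no triangle partition remains:

```python
from itertools import combinations

n, A = 6, {0, 1, 2}
# feasibility graph after the constructed tournament: K_6 minus the edges in A
edges = {frozenset(e) for e in combinations(range(n), 2)
         if not set(e) <= A}

def has_triangle_factor(vertices):
    """Brute force: can `vertices` be partitioned into triangles of the
    feasibility graph?"""
    if not vertices:
        return True
    v = min(vertices)
    for pair in combinations(sorted(vertices - {v}), 2):
        tri = {v, *pair}
        if all(frozenset(e) in edges for e in combinations(tri, 2)):
            if has_triangle_factor(vertices - tri):
                return True
    return False

extendable = has_triangle_factor(set(range(n)))
```

Each triangle contains at most one vertex of $A$, so two triangles cannot cover all three vertices of $A$, and the search indeed fails.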
\textbf{Case 2.} Assume $k$ is divisible by $4$. Let $n=i\cdot k$ for some odd integer $i\in\mathbb{N}$. We construct a tournament with $n$ participants by dividing the vertices of the feasibility graph into two disjoint sets $A$ and $B$ such that $|A| = |B|= \frac{n}{2} = i \cdot \frac{k}{2}$. \citet{liu2003equipartite} showed that there exist $n/4$ disjoint $C_k$-factors in a complete bipartite graph with $n/2$ vertices on each side of the bipartition, if $n/2$ is even. That is, every edge of the complete bipartite graph is in exactly one $C_k$-factor.
Since $n/2$ is even by case distinction, there is a tournament with $n/4 = \lfloor \frac{n+2}{4} \rfloor$ rounds such that in the feasibility graph there are only edges within $A$ and within $B$ left. Since $i$ is odd, $|A| = i \cdot \frac{k}{2}$ is not divisible by $k$. Thus, it is not possible to schedule another round by choosing only cycles within sets $A$ and $B$.
\textbf{Case 3.} Assume $k$ is even, but not divisible by 4. Let $n=i\cdot k$ for some odd integer $i\in\mathbb{N}_{\geq 9}$.
We construct a tournament with $n$ participants that is not extendable after $\frac{n+2}{4} + 1$ rounds in two phases. First, we partition the vertices into two disjoint sets $A$ and $B$, each of size $\frac{n}{2}$, and we construct a base tournament with $\frac{n-2}{4}$ rounds such that in the feasibility graph only edges between sets $A$ and $B$ are deleted. Second, we extend the tournament by two additional carefully chosen rounds.
After the base tournament, the feasibility graph consists of two complete graphs on $A$ and $B$, connected by a perfect matching between the vertices of $A$ and the vertices of $B$. We use the two additional rounds to delete all of the matching edges except for one, and then show that the resulting tournament cannot be extended.
In order to construct the base tournament, we first use a result of \citet{alspach1989oberwolfach}. It states that there always exists a solution for the Oberwolfach problem with $n'$ participants and cycle length $k'$ if $n'$ and $k'$ are odd and $n' \equiv 0 \mod{k'}$.
We choose $n'=n/2$ and $k' = k/2$ (observe that by assumption $k\geq 6$ and thus $k'\geq 3$) and then apply the result by \citet{alspach1989oberwolfach} to obtain a solution for the Oberwolfach problem with $n'$ and $k'$. Next we use a construction relying on an idea by \citet{archdeacon2004cycle} to connect two copies of the Oberwolfach solution. Fix the solution for the Oberwolfach problem with $n/2$ participants and cycle length $\frac{k}{2}$, and apply this solution to $A$ and $B$ separately.
Consider one round of the tournament and denote the $C_{\frac{k}{2}}$-factor in $A$ by $(a_{1+j}, a_{2+j}, \dots, a_{\frac{k}{2}+j})$ for $j=0, \frac{k}{2}, k, \dots, \frac{n}{2}-\frac{k}{2}$. By symmetry, the $C_{\frac{k}{2}}$-factor in $B$ can be denoted by $(b_{1+j}, b_{2+j}, \dots, b_{\frac{k}{2}+j})$ for $j=0, \frac{k}{2}, k, \dots, \frac{n}{2}-\frac{k}{2}$. We design a $C_k$-factor in the feasibility graph of the original tournament.
For each $j \in \{0, \frac{k}{2}, k, \dots, \frac{n}{2}-\frac{k}{2}\}$, we construct a cycle $(a_{1+j},b_{2+j},a_{3+j}, \dots, a_{\frac{k}{2}+j},b_{1+j}, a_{2+j},b_{3+j},\dots,b_{\frac{k}{2}+j})$ of length $k$ in $G$; the fact that $\frac{k}{2}$ is odd ensures that this sequence closes into a single cycle. By construction, these edges are not used in any other round. We refer to \Cref{fig:basetournament} for an example of one cycle for $k=10$. Since each vertex lies in exactly one cycle in each round, the construction yields a feasible round of a tournament. Applying this procedure to all rounds yields the base tournament with $\frac{n-2}{4}$ rounds.
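The interleaving pattern (for one block, $j=0$) can be written out explicitly. In the sketch below (our own rendering; vertices are encoded as letter-index pairs), the test checks that the sequence visits all $k$ players exactly once and alternates between the two halves, matching \Cref{fig:basetournament}:

```python
def interleaved_cycle(k):
    """Join the two cycles (a_1,...,a_{k/2}) and (b_1,...,b_{k/2}) into one
    cycle of length k following the pattern (a_1, b_2, a_3, ..., a_{k/2},
    b_1, a_2, b_3, ..., b_{k/2}); requires k/2 odd, as in the construction."""
    h = k // 2
    assert k % 2 == 0 and h % 2 == 1
    # first half starts with a and alternates; since h is odd it ends on a
    first = [('a' if i % 2 == 1 else 'b', i) for i in range(1, h + 1)]
    # second half starts with b and alternates; it ends on b
    second = [('b' if i % 2 == 1 else 'a', i) for i in range(1, h + 1)]
    return first + second

cycle = interleaved_cycle(10)
```

Because every consecutive pair (cyclically) mixes an $a$-vertex with a $b$-vertex, the cycle only uses edges between $A$ and $B$, which is exactly what the base tournament is allowed to delete.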
\begin{figure}[t]
\begin{minipage}{0.48\textwidth}
\centering
\begin{tikzpicture}[scale=0.7]
\draw (0,0) ellipse (4cm and 1.1cm);
\draw (0,-2.5) ellipse (4cm and 1.1cm);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,0){};
\node[left] at (-2,0) {$a_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-0.5){};
\node[left] at (-1,-0.5) {$a_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-0.5){};
\node[right] at (1,-0.5) {$a_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,0){};
\node[right] at (2,0) {$a_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,0.5){};
\node[above] at (0,0.5) {$a_5$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-2.5){};
\node[left] at (-2,-2.5) {$b_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-3){};
\node[left] at (-1,-3) {$b_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-3){};
\node[right] at (1,-3) {$b_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-2.5){};
\node[right] at (2,-2.5) {$b_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[above] at (0,-2) {$b_5$};
\draw (-2,0) -- (-1,-0.5);
\draw (-1,-0.5) -- (1,-0.5);
\draw (1,-0.5) -- (2,0);
\draw (2,0) -- (0,0.5);
\draw (0,0.5) -- (-2,0);
\draw (-2,-2.5) -- (-1,-3);
\draw (-1,-3) -- (1,-3);
\draw (1,-3) -- (2,-2.5);
\draw (2,-2.5) -- (0,-2);
\draw (0,-2) -- (-2,-2.5);
\node at (0,-4){};
\end{tikzpicture}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\centering
\begin{tikzpicture}[scale=0.7]
\draw (0,0) ellipse (4cm and 1.1cm);
\draw (0,-2.5) ellipse (4cm and 1.1cm);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,0){};
\node[left] at (-2,0) {$a_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-0.5){};
\node[above] at (-1,-0.5) {$a_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-0.5){};
\node[above] at (1,-0.5) {$a_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,0){};
\node[right] at (2,0) {$a_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,0.5){};
\node[left] at (0,0.5) {$a_5$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-2.5){};
\node[left] at (-2,-2.5) {$b_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-3){};
\node[left] at (-1,-3) {$b_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-3){};
\node[left] at (1,-3) {$b_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-2.5){};
\node[right] at (2,-2.5) {$b_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[below] at (0,-2) {$b_5$};
\draw (-2,0) -- (-1,-3);
\draw (-1,-3) -- (1,-0.5);
\draw (1,-0.5) -- (2,-2.5);
\draw (2,-2.5) -- (0,0.5);
\draw (0,0.5) -- (-2,-2.5);
\draw (-2,-2.5) -- (-1,-0.5);
\draw (-1,-0.5) -- (1,-3);
\draw (1,-3) -- (2,0);
\draw (2,0) -- (0,-2);
\draw (0,-2) -- (-2,0);
\node at (0,-4){};
\end{tikzpicture}
\end{minipage}
\caption{Construction of the base tournament. We transform two cycles of length $5$ into one cycle of length $10$.}\label{fig:basetournament}
\end{figure}
For each edge $e=\{a_{\bar{j}},a_j\}$ with $j\neq \bar{j}$ which is deleted in the feasibility graph of the tournament within $A$, we delete the edges $\{a_{\bar{j}},b_j\}$ and $\{a_j,b_{\bar{j}}\}$ in the feasibility graph. After the base tournament, all edges between $A$ and $B$ except for the edges $(a_1,b_1), (a_2,b_2), \dots , (a_{\frac{n}{2}},b_\frac{n}{2})$ are deleted in the feasibility graph.
In the rest of the proof, we extend the base tournament by two additional rounds. These two rounds are designed in such a way that after them there is exactly one edge connecting a vertex from $A$ with one from $B$. To extend the base tournament by one round, we construct the cycles of the $C_k$-factor in the following way. For $j\in\{0, \frac{k}{2}, k, \dots, \frac{n}{2}-\frac{k}{2}\}$, we construct the cycle $(a_{1+j},b_{1+j},b_{2+j},a_{2+j}, \ldots,b_{\frac{k}{2}-2 +j}, b_{\frac{k}{2}-1 +j}, b_{\frac{k}{2} +j}, a_{\frac{k}{2} +j},a_{\frac{k}{2}-1 +j})$, see \Cref{fig:extendround1}. Since all edges within $A$ and $B$ are part of the feasibility graph, as are all edges $(a_{j'},b_{j'})$ for $j' \in \{ 1, \ldots , \frac{n}{2}\}$, this is a feasible construction of a $C_k$-factor and thus an extension of the base tournament.
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=0.9]
\draw (0,0) ellipse (4cm and 1cm);
\draw (0,-2.5) ellipse (4cm and 1cm);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,0){};
\node[left] at (-2,0) {$a_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-0.5){};
\node[left] at (-1,-0.5) {$a_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-0.5){};
\node[right] at (1,-0.5) {$a_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,0){};
\node[right] at (2,0) {$a_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,0.5){};
\node[left] at (0,0.5) {$a_5$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-2.5){};
\node[left] at (-2,-2.5) {$b_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-3){};
\node[left] at (-1,-3) {$b_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-3){};
\node[left] at (1,-3) {$b_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-2.5){};
\node[right] at (2,-2.5) {$b_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[left] at (0,-2) {$b_5$};
\draw (-2,0) -- (-2,-2.5);
\draw (-2,-2.5) -- (-1,-3);
\draw (-1,-3) -- (-1,-0.5);
\draw (-1,-0.5) -- (1,-0.5);
\draw (1,-0.5) -- (1,-3);
\draw (1,-3) -- (2,-2.5);
\draw (2,-2.5) -- (0,-2);
\draw (0,-2) -- (0,0.5);
\draw (0,0.5) -- (2,0);
\draw (2,0) -- (-2,0);
\end{tikzpicture}
\caption{An example of one cycle in the construction that is used for the extension of the base tournament.}\label{fig:extendround1}
\end{figure}
After the extension of the base tournament by one round, the feasibility graph has the following structure. Every vertex has degree $\frac{n}{2}-2$, and the only edges between vertices from $A$ and $B$ are
\[\left\{(a_{\frac{k}{2}-1+j}, b_{\frac{k}{2}-1+j}) \mid j \in \left\{0, \frac{k}{2}, k, \dots, \frac{n}{2} - \frac{k}{2}\right\}\right\}\;.\]
We will construct one more round such that
after this round, there is only one of the matching edges remaining in the feasibility graph.
In order to do so, we will construct the $C_k$-factor with cycles $(C_1,\ldots, C_\frac{n}{k})$ by a greedy procedure as follows. Cycles $C_1, \dots, C_{\frac{n}{2k}-\frac{1}{2}}$ will all contain two matching edges and the other cycles none. In order to simplify notation we set
\[A_M = \left\{a_{\frac{k}{2}-1+j} \mid j \in \left\{0, \frac{k}{2}, k, \dots, \frac{n}{2} - \frac{k}{2}\right\}\right\}\;,\]
and $A_{-M} = A \setminus A_M$. We have $|A_{-M}| = \frac{n}{2}-\frac{n}{k}$. We define $B_M$ and $B_{-M}$ analogously. For each cycle $C_{z}$, $z\leq\frac{n}{2k}-\frac{1}{2}$, we greedily pick two of the matching edges. Let $(a_{\ell},b_{\ell})$ and $(a_r,b_r)$ be these two matching edges. To complete the cycle, we show that we can always construct a path from $a_{\ell}$ to $a_r$ by picking vertices from $A_{-M}$ and a path from $b_{\ell}$ to $b_r$ by picking vertices from $B_{-M}$. Assuming that we have already constructed cycles $C_1,\ldots, C_{z-1}$, there are still
\begin{align*}
\frac{n}{2} - \frac{n}{k} - (z-1) \left(\frac{k}{2}-2\right)
\end{align*}
unused vertices in the set $A_{-M}$. Even after choosing some vertices for cycle $C_z$, the number of unused vertices in $A_{-M}$ is at least
\[\frac{n}{2} - \frac{n}{k} - z \left(\frac{k}{2}-2\right) \geq \frac{n}{2} - \frac{n}{k} - z \frac{k}{2} \geq \frac{n}{2} - \frac{n}{k} - \frac{n}{2k} \frac{k}{2} = \frac{n}{4} - \frac{n}{k} \geq \frac{n}{12}\;.\]
Let $N(v)$ denote the neighborhood of vertex $v$. The greedy procedure that constructs a path from $a_{\ell}$ to $a_r$ works as follows. We set vertex $a_{\ell}$ active. For each active vertex $v$, we pick one of the vertices $a \in N(v) \cap A_{-M}$, delete $a$ from $A_{-M}$ and set $a$ active. We repeat this until we have chosen $\frac{k}{2}-3$ vertices. Next, we pick a vertex in $N(v) \cap A_{-M} \cap N(a_r)$ in order to ensure that the path ends at $a_r$. Since $|A_{-M}| \geq \frac{n}{12}$, we observe
\[|N(v) \cap A_{-M} \cap N(a_r)| \geq \frac{n}{12} - 1-2\;,\]
so there is always a suitable vertex, since $n\geq 9k\geq 54$.
The construction for the path from $b_{\ell}$ to $b_r$ is analogous.
For cycles $C_{\frac{n}{2k}+\frac{1}{2}}, \dots, C_\frac{n}{k}$, there are still $\frac{n}{4}+\frac{k}{4}$ leftover vertices within $A$ and within $B$.
The degree of each vertex within the set of remaining vertices is at least $\frac{n}{4}+\frac{k}{4}-3$. This is large enough to apply the Aigner--Brandt theorem, since $i\geq 9$ and $k\geq 6$.
In this way, we construct a $C_k$-factor in the feasibility graph. This means we can extend the tournament by one more round. In total, we have constructed a tournament with $\frac{n+2}{4}+1$ rounds, which equals $\lfloor \frac{n+2}{4} \rfloor +1$ since $n \equiv 2 \pmod 4$.
To see that this tournament cannot be extended further, consider the feasibility graph. Most of the edges within $A$ and $B$ are still present, while between $A$ and $B$ there is only one edge left. This means a $C_k$-factor can only consist of cycles that are entirely in $A$ or in $B$. Since $\abs{A}=\abs{B}$ and the number of cycles $\frac{n}{k}=i$ is odd, there is no $C_k$-factor in the feasibility graph and thus the constructed tournament is not extendable.
\end{proof}
\section{Conclusion and Outlook}
In this work, we studied the social golfer problem and the Oberwolfach problem from an optimization perspective. We presented bounds on the number of rounds that can be guaranteed by a greedy algorithm.
For the social golfer problem, the provided bounds are tight. Assuming El-Zahar's conjecture \citep{el1984circuits} holds, a gap of one remains for the Oberwolfach problem. This gives a performance guarantee for the optimization variant of both problems. Since both a clique-factor and a cycle-factor can be found in polynomial time in graphs of sufficiently high degree, the greedy algorithm is a $\frac{k-1}{2k^2-3k-1}$-approximation algorithm for the social golfer problem and a $\frac{1}{3+\epsilon}$-approximation algorithm for any fixed $\epsilon>0$ for the Oberwolfach problem.
Given some tournament, it would be interesting to analyze the complexity of deciding whether the tournament can be extended by an additional round. Proving \ensuremath{\mathsf{NP}}\xspace-hardness seems particularly complicated, since one cannot use arbitrary regular graphs in the reduction, but only graphs that are feasibility graphs of a tournament.
Lastly, the general idea of greedily deleting particular subgraphs $H$ from base graphs $G$ can also be applied to different choices of $G$ and $H$.
\section*{Acknowledgement}
This research started after supervising the Master's thesis of David Kuntz. We thank David for valuable discussions.
\bibliographystyle{apalike}
\section{Introduction}
Let $\mathbb{N}$ be the set of all nonnegative integers. For any sequence of positive integers $A=\{a_1<a_2<\cdots\}$, let $P(A)$ be the subset sum set of $A$, that is,
$$P(A)=\left\{\sum_{i}\varepsilon_i a_i:\sum_{i}\varepsilon_i<\infty, \varepsilon_i\in\{0,1\}\right\}.$$
Here we note that $0\in P(A)$.
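For a finite truncation of $A$, the set $P(A)$ can be enumerated directly. The following Python helper (purely illustrative) computes the subset sum set of a finite list:

```python
def subset_sums(A):
    """Return P(A) for a finite list A: the sums of all (possibly empty) subsets."""
    sums = {0}
    for a in A:
        sums |= {s + a for s in sums}
    return sums

# e.g. A = {1, 2, 4} represents every integer in [0, 7]
assert subset_sums([1, 2, 4]) == set(range(8))
```

Note that the empty subset contributes $0$, matching the convention $0\in P(A)$.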
In 1970, Burr \cite{Burr} posed the following problem: which sets $S$ of integers are equal to $P(A)$ for some $A$? Concerning the existence of such sets $S$, he mentioned that if the complement of $S$ grows sufficiently rapidly, for example if $b_1>x_0$ and $b_{n+1}\ge b_n^2$, then there exists a set $A$ such that $P(A)=\mathbb{N}\setminus\{b_1,b_2,\cdots\}$. However, this result was never published. In 1996, Hegyv\'{a}ri \cite{Hegyvari} proved the following result.
\begin{theorem}\cite[Theorem 1]{Hegyvari} If $B=\{b_1<b_2<\cdots\}$ is a sequence of integers with $b_1\ge x_0$ and
$b_{n+1}\ge5b_n$ for all $n\ge1$, then there exists a sequence $A$ of positive integers for which $P(A)=\mathbb{N}\setminus B$.
\end{theorem}
In 2012, Chen and Fang \cite{ChenFang} obtained the following results.
\begin{theorem}\cite[Theorem 1]{ChenFang} Let $B=\{b_1<b_2<\cdots\}$ be a sequence of integers with $b_1\in\{4,7,8\}\cup\{b:b\ge11\}$ and
$b_{n+1}\ge3b_n+5$ for all $n\ge1$. Then there exists a sequence $A$ of positive integers for which $P(A)=\mathbb{N}\setminus B$.
\end{theorem}
\begin{theorem}\cite[Theorem 2]{ChenFang} \label{ChenFang2} Let $B=\{b_1<b_2<\cdots\}$ be a sequence of positive integers with $b_1\in\{3,5,6,9,10\}$ or $b_2=3b_1+4$ or $b_1=1$ and $b_2=9$ or $b_1=2$ and $b_2=15$. Then there is no sequence $A$ of positive integers for which $P(A)=\mathbb{N}\setminus B$.
\end{theorem}
Later, Chen and Wu \cite{ChenWu} further improved this result. From Chen and Fang's results we know that the critical value of $b_2$ is $3b_1+5$. In this paper, we study the problem of the critical value of $b_k$; we call this the problem of critical values in Burr's problem.
In 2019, Fang and Fang \cite{FangFang2019} considered the critical value of $b_3$ and proved the following result.
\begin{theorem}\cite[Theorem 1.1]{FangFang2019} If $A$ and $B=\{1<b_1<b_2<\cdots\}$ are two infinite sequences of positive integers with $b_2=3b_1+5$ such that $P(A)=\mathbb{N}\setminus B$, then $b_3\ge4b_1+6$. Furthermore, there exist two infinite sequences of positive integers $A$ and $B=\{1<b_1<b_2<\cdots\}$ with $b_2=3b_1+5$ and $b_3=4b_1+6$ such that $P(A)=\mathbb{N}\setminus B$.
\end{theorem}
Recently, Fang and Fang \cite{FangFang2020} introduced the following definition. For given positive integers $b$ and $k\ge3$, define $c_{k}(b)$ successively as follows:
(i) let $c_k=c_{k}(b)$ be the least integer $r$ for which, there exist two infinite sets of positive integers $A$ and $B=\{b_1<b_2<\cdots<b_{k-1}<b_{k}<\cdots\}$ with $b_1=b$, $b_2=3b+5$ and $b_i=c_i(3\le i<k)$ and $b_k=r$ such that $P(A)=\mathbb{N}\setminus B$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>b+1$;
(ii) if such $A,B$ do not exist, define $c_k=+\infty$.
In \cite{FangFang2020}, Fang and Fang proved the following result.
\begin{theorem}\cite[Theorem 1.1]{FangFang2020} For given positive integer $b\in\{1,2,4,7,8\}\cup\{b':b'\ge 11,b'\in\mathbb{N}\}$, we have
$$c_{2k-1}=(3b+6)(k-1)+b,~~c_{2k}=(3b+6)(k-1)+3b+5,~~k=1,2,\dots.$$
\end{theorem}
Naturally, one may ask whether, for any integer $b_1$ and any $b_2\ge 3b_1+5$ (instead of only $b_2=3b_1+5$), the critical value of $b_3$ can be determined. This problem was posed by Fang and Fang in \cite{FangFang2019}. Recently, the authors answered this problem in \cite{WuYan}.
\begin{theorem}\cite{WuYan} \label{thm:1.6}If $A$ and $B=\{b_1<b_2<\cdots\}$ are two infinite sequences of positive integers with $b_2\ge 3b_1+5$ such that $P(A)=\mathbb{N}\setminus B$, then $b_3\ge b_2+b_1+1$.
\end{theorem}
\begin{theorem}\cite{WuYan} \label{thm:1.7}For any positive integers $b_1\in\{4,7,8\}\cup[11,+\infty)$ and $b_2\ge 3b_1+5$, there exist two infinite sequences of positive integers $A$ and $B=\{b_1<b_2<\cdots\}$ with $b_3=b_2+b_1+1$ such that $P(A)=\mathbb{N}\setminus B$.
\end{theorem}
In this paper, we continue by considering the critical value of $b_k$ for any integers $b_1$ and $b_2\ge 3b_1+5$. Motivated by the definition of Fang and Fang, we introduce the following definition. For given positive integers $u$, $v\ge 3u+5$ and $k\ge3$, let $e_1=u$, $e_2=v$, and let $e_k=e_k(u,v)$ be the least integer $r$ for which there exist two infinite sets of positive integers $A$ and $B=\{b_1<b_2<\cdots<b_{k-1}<b_k<\cdots\}$ with $b_i=e_i~(1\le i<k)$ and $b_k=r$ such that $P(A)=\mathbb{N}\setminus B$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$. If such sets $A,B$ do not exist, define $e_k=+\infty$.
In this paper, we obtain the following results.
\begin{theorem}\label{thm:1.1} For given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$, we have
\begin{equation}\label{eq:c}
e_{2k+1}=(v+1)k+u,~~e_{2k+2}=(v+1)k+v,~~k=0,1,\dots.
\end{equation}
\end{theorem}
\begin{corollary}\label{thm:1.2} For given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$ and $k\ge3$, we have
$$e_{k}=e_{k-1}+e_{k-2}-e_{k-3},$$
where $e_0=-1$, $e_1=u$, $e_2=v$.
\end{corollary}
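As an independent sanity check (not part of the proofs), the closed form of Theorem \ref{thm:1.1} and the recurrence of Corollary \ref{thm:1.2} can be cross-checked numerically with a short Python script; the instance $u=4$, $v=17$ below is an arbitrary choice satisfying $v\ge 3u+5$:

```python
def e(k, u, v):
    """Closed form e_k from Theorem 1.1, with the convention e_0 = -1."""
    if k == 0:
        return -1
    m, parity = divmod(k - 1, 2)   # k = 2m+1 gives the u-type term, k = 2m+2 the v-type
    return (v + 1) * m + (u if parity == 0 else v)

u, v = 4, 3 * 4 + 5                # u = 4, v = 17
assert (e(1, u, v), e(2, u, v), e(3, u, v)) == (4, 17, 22)
# recurrence e_k = e_{k-1} + e_{k-2} - e_{k-3} from Corollary 1.2
assert all(e(k, u, v) == e(k-1, u, v) + e(k-2, u, v) - e(k-3, u, v)
           for k in range(3, 30))
```

The recurrence simply says that consecutive differences of $(e_k)$ alternate between $u+1$ and $v-u$.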
If $u\in\{3,5,6,9,10\}$ or $u=1,v=9\ge 3u+5$ or $u=2, v=15\ge 3u+5$, by Theorem \ref{ChenFang2} we know that such a sequence $A$ does not exist. Hence we only consider the case $u\in\{4,7,8\}\cup\{u:u\ge11\}$. In fact, we found Corollary \ref{thm:1.2} first, but in the proof of Theorem \ref{thm:1.1} we follow Fang and Fang's method. Some of the techniques are similar to those in \cite{WuYan}. For the convenience of the reader, we provide all details of the proof.
\section{Proof of Theorem \ref{thm:1.1}}
For given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$ and $v\ge 3u+5$, we define
$$d_{2k+1}=(v+1)k+u,~~d_{2k+2}=(v+1)k+v,~~k=0,1,\dots.$$
\begin{lemma}\label{lem:2.1}
Given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$ and $k\ge3$. Then there exists
an infinite set $A$ of positive integers such that $P(A)=\mathbb{N}\setminus\{d_1,d_2,\dots\}$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$.
\end{lemma}
\begin{proof} Let $s$ and $r$ be nonnegative integers with
$$v+1=(u+1)+(u+2)+\cdots+(u+s)+r,~~0\le r\le u+s.$$
Since $v\ge 3u+5$, it follows that $s\ge3$. Note that $u\ge4$. Then there exist integers $r_2,\dots,r_s$ such that
\begin{equation}\label{eq:2.1}
r=r_2+\cdots+r_s+\varepsilon(r),~~0\le r_2\le\cdots\le r_s\le u-1,
\end{equation}
where $\varepsilon(r)=0$ if $r=0$, otherwise $\varepsilon(r)=1$. If there is an index $3\le j\le s$ such that $r_j-r_{j-1}=u-1$, we replace $r_j$ and $r_{j-1}$ by $r_j-1$ and $r_{j-1}+1$. Then \eqref{eq:2.1} still holds and $r_j-r_{j-1}\le u-2$ for any index $3\le j\le s$.
We cite a result in \cite{ChenFang} that there exists a set of positive integers
$A_1$ with $A_1\subseteq[0,u-1]$ such that
$$P(A_1)=[0,u-1].$$
Let
$$a_1=u+1,~~a_s=u+s+r_s+\varepsilon(r),~~a_{t}=u+t+r_t,~~2\le t\le s-1.$$
Then
$$a_{t-1}<a_{t}\le a_{t-1}+u,~~2\le t\le s$$
and so
$$P(A_1\cup\{a_1,\dots,a_s\})=[0,a_{2}+\cdots+a_{s}+2u]\setminus\{u,a_{2}+\cdots+a_{s}+u\}.$$
Since
$$a_{2}+\cdots+a_{s}+u=(u+2+r_2)+\cdots+(u+s+r_s+\varepsilon(r))+u=v,$$
it follows that
\begin{equation}\label{eq:2.2}
P(A_1\cup\{a_1,\dots,a_s\})=[0,u+v]\setminus\{u,v\}.
\end{equation}
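As an illustrative brute-force check of \eqref{eq:2.2} (not part of the proof), take $u=4$ and $v=17$; then $v+1=18=(u+1)+(u+2)+(u+3)$, so $s=3$, $r=0$, $a_1=5$, $a_2=6$, $a_3=7$, and one may take $A_1=\{1,2\}$:

```python
def subset_sums(A):
    sums = {0}
    for a in A:
        sums |= {s + a for s in sums}
    return sums

u, v = 4, 17                     # v = 3u + 5; here s = 3 and r = 0
A = [1, 2, u + 1, u + 2, u + 3]  # A_1 = {1, 2} gives P(A_1) = [0, u - 1]
assert subset_sums(A) == set(range(u + v + 1)) - {u, v}  # = [0, u+v] \ {u, v}
```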
Let $a_{s+n}=(v+1)n$ for $n=1,2,\dots$. We prove by induction on $k\ge1$ that
\begin{equation}\label{eq:2.3}
P(A_1\cup\{a_1,\dots,a_{s+k}\})=[0,\sum_{i=1}^k a_{s+i} +u+v]\setminus\{d_1,d_2,\dots,d_{2m-1},d_{2m}\},
\end{equation}
where $m=k(k+1)/2+1$.
By \eqref{eq:2.2}, it is clear that
$$P(A_1\cup\{a_1,\dots,a_{s+1}\})=[0,a_{s+1}+u+v]\setminus\{d_1,d_2,d_3,d_4\},$$
which implies that \eqref{eq:2.3} holds for $k=1$.
Assume that \eqref{eq:2.3} holds for some $k-1\ge1$, that is,
\begin{equation}\label{eq:2.4}
P(A_1\cup\{a_1,\dots,a_{s+k-1}\})=[0,\sum_{i=1}^{k-1} a_{s+i} +u+v]\setminus\{d_1,d_2,\dots,d_{2m'-1},d_{2m'}\},
\end{equation}
where $m'=k(k-1)/2+1$. Then
\begin{eqnarray*}
a_{s+k}+P(A_1\cup\{a_1,\dots,a_{s+k-1}\})
=[(v+1)k,\sum_{i=1}^{k} a_{s+i} +u+v]\setminus D,
\end{eqnarray*}
where
$$D=\left\{d_{2k+1},d_{2k+2},\dots,d_{k(k+1)+1},d_{k(k+1)+2}\right\}.$$
Since $d_{2k+1}\le d_{k(k-1)+3}=d_{2m'+1}$, it follows that
\begin{equation*}
P(A_1\cup\{a_1,\dots,a_{s+k}\})=[0,\sum_{i=1}^k a_{s+i} +u+v]\setminus\{d_1,d_2,\dots,d_{2m-1},d_{2m}\},
\end{equation*}
where $m=k(k+1)/2+1$, which implies that \eqref{eq:2.3} holds.
Let $A=A_1\cup\{a_1,a_2,\dots\}$. We know that such $A$ satisfies Lemma \ref{lem:2.1}. This completes the proof of Lemma \ref{lem:2.1}.
\end{proof}
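Continuing the same illustrative instance ($u=4$, $v=17$, $A_1=\{1,2\}$ and $a_1,a_2,a_3=5,6,7$), appending the terms $a_{s+n}=(v+1)n$ and enumerating all subset sums reproduces exactly the excluded values $d_1,d_2,\dots$ predicted by \eqref{eq:2.3}; this brute-force check is independent of the induction:

```python
def subset_sums(A):
    sums = {0}
    for a in A:
        sums |= {s + a for s in sums}
    return sums

u, v = 4, 17
A = [1, 2, 5, 6, 7] + [(v + 1) * n for n in (1, 2, 3)]    # three extra terms (k = 3)
d = [(v + 1) * m + w for m in range(7) for w in (u, v)]   # d_1, ..., d_14
assert subset_sums(A) == set(range(sum(A) + 1)) - set(d)  # eq. (2.3) with m = 7
```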
\begin{lemma}\label{lem:2.2}\cite[Lemma 1]{ChenFang}.
Let $A=\{a_1<a_2 <\cdots\}$ and $B=\{b_1<b_2 <\cdots\}$ be two sequences of positive integers with $b_1>1$ such that $P(A)=\mathbb{N}\backslash B$. Let $a_k<b_1<a_{k+1}$. Then
$$P(\{a_1,\cdots,a_i\})=[0,c_i], ~~i=1,2,\cdots,k,$$
where $c_1=1$, $c_2=3$, $c_{i+1}=c_i+a_i+1~(1\leq i\leq k-1)$, $c_k=b_1-1$ and $c_i+1\geq a_i+1~(1 \leq i \leq k-1)$.
\end{lemma}
\begin{lemma}\label{lem:2.3}
Given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$ and $k\ge3$. If $A$ is an infinite set of positive integers such that
$$P(A)=\mathbb{N}\setminus \{d_1<d_2<\cdots<d_{k-1}<b_{k}<\cdots\}$$
and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$, then there exists a subset $A_1\subseteq A$ such that
$$P(A_1)=[0,d_1+d_2]\setminus\{d_1,d_2\}$$
and $\min \{A\setminus A_1\} >u+1$.
\end{lemma}
\begin{proof} Let $A=\{a_1<a_2<\cdots\}$ be an infinite set of positive integers such that
$$P(A)=\mathbb{N}\setminus \{d_1<d_2<\cdots<d_{k-1}<b_{k}<\cdots\}$$
and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$. It follows from Lemma \ref{lem:2.2} that
$$P(\{a_1,\cdots,a_k\})=[0,u-1],$$
where $k$ is the index such that $a_k<u<a_{k+1}$. Since $v\ge 3u+5>u+1$, it follows that $u+1\in P(A)$. Hence, $a_{k+1}=u+1$. Then
$$P(\{a_1,\cdots,a_{k+1}\})=[0,2u]\setminus\{u\}.$$
Noting that $a_{k+t}>a_{k+1}=u+1$ for any $t\ge 2$, we have
$$a_{k+t}\le a_1+\cdots+a_{k+t-1}+1=a_{k+2}+\cdots+a_{k+t-1}+2u+1.$$
Then
$$P(\{a_1,\cdots,a_{k+2}\})=[0,a_{k+2}+2u]\setminus\{u,a_{k+2}+u\}.$$
If $a_{k+2}+\cdots+a_{k+t-1}+u\ge a_{k+t}$ and $a_{k+2}+\cdots+a_{k+t-1}\neq a_{k+t}$ for all integers $t\ge3$, then
$$P(\{a_1,\cdots,a_{k+t}\})=[0,a_{k+2}+\cdots+a_{k+t}+2u]\setminus\{u,a_{k+2}+\cdots+a_{k+t}+u\}.$$
Then $d_2\ge a_{k+2}+\cdots+a_{k+t}+u$ for any integer $t\ge3$, which is impossible since $d_2$ is a given integer. So there are some integers $3\le t_1<t_2<\cdots$ such that $a_{k+2}+\cdots+a_{k+t_i-1}+u< a_{k+t_i}$ or $a_{k+2}+\cdots+a_{k+t_i-1}= a_{k+t_i}$, and
$$P(\{a_1,\cdots,a_{k+t_1-1}\})=[0,a_{k+2}+\cdots+a_{k+t_1-1}+2u]\setminus\{u,a_{k+2}+\cdots+a_{k+t_1-1}+u\}.$$
If $a_{k+2}+\cdots+a_{k+t_1-1}+u< a_{k+t_1}$, then $d_2=a_{k+2}+\cdots+a_{k+t_1-1}+u$ and
$$P(\{a_1,\cdots,a_{k+t_1-1}\})=[0,d_1+d_2]\setminus\{d_1,d_2\},~~a_{k+t_1}>u+1.$$
So the proof is finished.
If $a_{k+2}+\cdots+a_{k+t_1-1}= a_{k+t_1}$, then
$$P(\{a_1,\cdots,a_{k+t_1}\})=[0,a_{k+2}+\cdots+a_{k+t_1}+2u]\setminus\{u,a_{k+t_1}+u,a_{k+2}+\cdots+a_{k+t_1}+u\}.$$
If $a_{k+t_1+1}>a_{k+t_1}+u$, then
$$d_2=a_{k+t_1}+u=a_{k+2}+\cdots+a_{k+t_1-1}+u$$
and
$$a_{k+2}+\cdots+a_{k+t_1-1}+2u=d_1+d_2.$$
Therefore,
$$P(\{a_1,\cdots,a_{k+t_1-1}\})=[0,d_1+d_2]\setminus\{d_1,d_2\},~~a_{k+t_1}>u+1.$$
So the proof is finished. If $a_{k+t_1+1}\le a_{k+t_1}+u$, then
$$P(\{a_1,\cdots,a_{k+t_1+1}\})=[0,a_{k+2}+\cdots+a_{k+t_1+1}+2u]\setminus\{u,a_{k+2}+\cdots+a_{k+t_1+1}+u\}.$$
By the definition of $t_2$ and $a_{k+t_1+1}\le a_{k+t_1}+u$ we know that $t_2\neq t_1+1$. Noting that $a_{k+2}+\cdots+a_{k+t-1}+u\ge a_{k+t}$ and $a_{k+2}+\cdots+a_{k+t-1}\neq a_{k+t}$ for any integer $t_1<t<t_2$, we have
$$P(\{a_1,\cdots,a_{k+t_2-1}\})=[0,a_{k+2}+\cdots+a_{k+t_2-1}+2u]\setminus\{u,a_{k+2}+\cdots+a_{k+t_2-1}+u\}.$$
Similar to the way to deal with $t_1$, we know that there is always a subset $A_1\subseteq A$ such that
$P(A_1)=[0,d_1+d_2]\setminus\{d_1,d_2\}$ and $\min\{A\setminus A_1\}>u+1$, or there exists an infinite sequence of positive integers $l_i\ge3$ such that
$$P(\{a_1,\cdots,a_{k+l_i}\})=[0,a_{k+2}+\cdots+a_{k+l_i}+2u]\setminus\{u,a_{k+2}+\cdots+a_{k+l_i}+u\}.$$
Since $d_2$ is a given integer, it follows that the second case is impossible. This completes the proof of Lemma \ref{lem:2.3}.
\end{proof}
\begin{lemma}\label{lem:2.4}
Given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$ and $k\ge3$. Let $A$ be an infinite set of positive integers such that
$$P(A)=\mathbb{N}\setminus \{d_1<d_2<\cdots<d_{k}<b_{k+1}<\cdots\}$$
and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$ and let $A_1$ be a subset of $A$ such that
$$P(A_1)=[0,u+v]\setminus\{d_1,d_2\}$$
and $\min\{A\setminus A_1\}>u+1$. Write $A\setminus A_1=\{a_1<a_2<\cdots\}$. Then $v+1\mid a_i$ for $i=1,2,\dots, m$, and
$$P(A_1\cup\{a_1,\dots,a_m\})=[0,\sum_{i=1}^{m}a_i+u+v]\setminus\{d_1,d_2,\dots,d_{n}\},$$
where $m$ is the index such that
$$\sum_{i=1}^{m-1}a_i+v<d_k\le \sum_{i=1}^{m}a_i+v$$
and
$$d_{n}=\sum_{i=1}^m a_i+v.$$
\end{lemma}
\begin{proof} We proceed by induction on $k\ge3$. For $k=3$, by $a_1>u+1$ we know that
$$v<d_3=u+v+1\le a_1+v,$$
that is, $m=1$. It is enough to prove that $v+1\mid a_1$ and
$$P(A_1\cup\{a_1\})=[0,a_1+u+v]\setminus\{d_1,d_2,\dots,d_{n}\},$$
where $d_{n}=a_1+v$.
Since $d_3\notin P(A)$ and $[0,v-1]\setminus\{u\}\subseteq P(A_1)$ and
$$a_1\le \sum_{\substack{a'<a_1\\a'\in A}}a'+1=\sum_{a'\in A_1}a'+1=u+v+1=d_3< a_1+v,$$
it follows that $d_3=a_1+u$, that is, $a_1=v+1$. Since
$$P(A_1)=[0,u+v]\setminus\{d_1,d_2\},$$
it follows that
$$a_1+P(A_1)=[a_1,a_1+u+v]\setminus\{a_1+d_1,a_1+d_2\}.$$
Then
$$P(A_1\cup\{a_1\})=[0,a_1+u+v]\setminus\{d_1,d_2,d_3,d_4\},$$
where $d_4=a_1+v$.
Suppose that $ v+1\mid a_i$ for $i=1,2,\dots,m$ and
\begin{equation}\label{eq0}
P(A_1\cup\{a_1,\dots,a_{m}\})=[0,\sum_{i=1}^{m}a_i+u+v]\setminus\{d_1,d_2,\dots,d_{n}\},
\end{equation}
where $m$ is the index such that
$$\sum_{i=1}^{m-1}a_i+v<d_{k-1}\le \sum_{i=1}^{m}a_i+v$$
and
$$d_{n}=\sum_{i=1}^{m} a_i+v.$$
If $d_{k-1}< \sum_{i=1}^{m}a_i+v$, then $d_{k}\le \sum_{i=1}^{m}a_i+v$. Then the proof is finished. If $d_{k-1}=\sum_{i=1}^{m}a_i+v$, then $d_{k}=\sum_{i=1}^{m}a_i+v+u+1$. It follows that
$$\sum_{i=1}^{m}a_i+v<d_{k}\le \sum_{i=1}^{m+1}a_i+v.$$
It is enough to prove that $ v+1\mid a_{m+1}$ and
$$P(A_1\cup\{a_1,\dots,a_{m+1}\})=[0,\sum_{i=1}^{m+1}a_i+u+v]\setminus\{d_1,d_2,\dots,d_{n'}\},$$
where
$$d_{n'}=\sum_{i=1}^{m+1} a_i+v.$$
Since $a_{m+1}\neq d_k$ and
$$a_{m}<a_{m+1}\le \sum_{\substack{a<a_{m+1}\\a\in A}} a+1=\sum_{i=1}^{m}a_i+u+v+1=d_{k},$$
it follows that there exists a positive integer $T$ such that
$$a_{m+1}< (v+1)T+u\le a_{m+1}+v+1$$
and
$$(v+1)T+u\le d_k.$$
Note that $d_i=(v+1)T+u\notin P(A)$ for some $i\le k$ and $[1,v+1]\setminus\{u,v\}\subseteq P(A_1)$. Hence, $(v+1)T+u=a_{m+1}+u$ or $(v+1)T+u=a_{m+1}+v$.
If $(v+1)T+u=a_{m+1}+u$, then $a_{m+1}=(v+1)T$. If $(v+1)T+u=a_{m+1}+v$, then $a_{m+1}=(v+1)(T-1)+u+1$.
Since $v \ge 3u+5$, it follows that
$$a_{m+1}+u<(v+1)(T-1)+v<a_{m+1}+v.$$
Note that $[u+1,v-1]\subseteq P(A_1)$. Then $(v+1)(T-1)+v\in P(A)$, which is impossible. Hence, $v+1\mid a_{m+1}$. Moreover, $a_{m+1}=(v+1)T$. Since
$$a_{m+1}+P(A_1\cup\{a_1,\dots,a_{m}\})=[a_{m+1},\sum_{i=1}^{m+1}a_i+u+v]\setminus\{a_{m+1}+d_1,\dots,a_{m+1}+d_{n}\}$$
and
$$a_{m+1}+d_1=(v+1)T+u\le d_k=\sum_{i=1}^{m}a_i+v+u+1=d_{n+1},$$
it follows from \eqref{eq0} that
$$P(A_1\cup\{a_1,\dots,a_{m+1}\})=[0,\sum_{i=1}^{m+1}a_i+u+v]\setminus\{d_1,d_2,\dots,d_{n'}\},$$
where
$$d_{n'}=a_{m+1}+d_n=\sum_{i=1}^{m+1} a_i+v.$$
This completes the proof of Lemma \ref{lem:2.4}.
\end{proof}
\begin{lemma}\label{lem:2.5}
Given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$ and $k\ge3$. If $A$ is an infinite set of positive integers such that
$$P(A)=\mathbb{N}\setminus \{d_1<d_2<\cdots<d_{k}<b_{k+1}<\cdots\}$$
and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$, then $b_{k+1}\ge d_{k+1}$.
\end{lemma}
\begin{proof} By Lemma \ref{lem:2.3} we know that there exists $A_1\subseteq A$ such that
$$P(A_1)=[0,u+v]\setminus\{d_1,d_2\}$$
and $\min\{A\setminus A_1\}>u+1$. Write $A\setminus A_1=\{a_1<a_2<\cdots\}$. By Lemma \ref{lem:2.4} we know that $v+1\mid a_i$ for $i=1,2,\dots, m$ and
$$P(A_1\cup\{a_1,\dots,a_m\})=[0,\sum_{i=1}^{m}a_i+u+v]\setminus\{d_1,d_2,\dots,d_{n}\},$$
where $m$ is the index such that
$$\sum_{i=1}^{m-1}a_i+v<d_k\le \sum_{i=1}^{m}a_i+v$$
and
$$d_{n}=\sum_{i=1}^m a_i+v.$$
If $d_k<\sum_{i=1}^m a_i+v$, then $d_{k+1}\le\sum_{i=1}^m a_i+v=d_{n}$. Hence, $k+1\le n$. Thus, $b_{k+1}\ge d_{k+1}$.
If $d_k=\sum_{i=1}^m a_i+v$, then $d_{k+1}=\sum_{i=1}^m a_i+u+v+1$ and
\begin{equation}\label{eq:2.5}
P(A_1\cup\{a_1,\dots,a_m\})=[0,\sum_{i=1}^{m}a_i+u+v]\setminus\{d_1,\dots,d_{k}\}
\end{equation}
and
\begin{equation}\label{eq:2.6}
a_{m+1}+P(A_1\cup\{a_1,\dots,a_m\})=[a_{m+1},\sum_{i=1}^{m+1}a_i+u+v]\setminus\{a_{m+1}+d_1,\dots,a_{m+1}+d_{k}\}.
\end{equation}
Note that
$$a_{m}<a_{m+1}\le \sum_{\substack{a<a_{m+1}\\a\in A}} a+1=\sum_{i=1}^{m}a_i+u+v+1=d_{k+1}.$$
By $a_{m+1}\neq d_{k}$ we divide into two cases according to the value of $a_{m+1}$.
{\bf Case 1}: $d_{k}<a_{m+1}\le d_{k+1}$. It follows from \eqref{eq:2.5} and \eqref{eq:2.6} that
$$b_{k+1}\ge a_{m+1}+d_1\ge d_{k}+d_{1}+1=\sum_{i=1}^{m}a_i+v+u+1=d_{k+1}.$$
{\bf Case 2}: $a_{m}<a_{m+1}< d_{k}$. Similar to the proof of Lemma \ref{lem:2.4}, we know that there exists a positive integer $T$ such that
$$a_{m+1}=(v+1)T,~~a_{m+1}+d_1=(v+1)T+u\le d_k.$$
It follows from \eqref{eq:2.5} and \eqref{eq:2.6} that
\begin{equation}\label{eq:2.7}
P(A_1\cup\{a_1,\dots,a_{m+1}\})=[0,\sum_{i=1}^{m+1}a_i+u+v]\setminus\{d_1,\dots,d_{n'}\},
\end{equation}
where
$$d_{n'}=a_{m+1}+d_k.$$
Then $n'\ge k+1$. Thus $b_{k+1}\ge d_{k+1}$.
\end{proof}
\emph{Proof of Theorem \ref{thm:1.1}:} It follows from Theorem \ref{thm:1.6} and Theorem \ref{thm:1.7} that $e_3=(v+1)+u$. For $k\ge3$, suppose that $A$ is an infinite set of positive integers such that
$$P(A)=\mathbb{N}\setminus \{e_1<e_2<\cdots<e_{k}<b_{k+1}<\cdots\}$$
and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$, where $e_i(1\le i\le k)$ is defined in \eqref{eq:c}. By Lemma \ref{lem:2.5} we have $b_{k+1}\ge d_{k+1}$. By Lemma \ref{lem:2.1} we know that $d_{k+1}$ is the critical value, that is, $e_{k+1}=d_{k+1}$. This completes the proof of Theorem \ref{thm:1.1}.
\noindent\textbf{Acknowledgments.} This work was supported by the National Natural Science Foundation of China, Grant No.~11771211, and NUPTSF, Grant No.~NY220092.
\renewcommand{\refname}{References}
\section*{Nomenclature}
{\renewcommand\arraystretch{1.0}
\noindent\begin{longtable*}{@{}l @{\quad=\quad} l@{}}
$(\cdot)^*$ & symbol for optimal solution \\
$t$ & time \\
$\tau $ & time interval transformation \\
$J$ & cost function \\
$\C{M}$ & Mayer cost \\
$\C{L}$ & Lagrangian \\
$\m{y}$ & state \\
$\m{Y}$ & state approximation\\
$\m{u}$ & control \\
$\m{U}$ & control approximation\\
$\m{c}$ & path constraint \\
$\m{b}$ & boundary constraint \\
$\m{g}$ & chance constraint \\
$K$ & number of mesh intervals \\
$N_k$ & number of LGR collocation points in mesh interval $k$ \\
$\ell_i$ & $i^{th}$ Lagrange polynomial \\
$t$ & time \\
$\m{D}$ & Differentiation matrix \\
$P_N$ & $N^{th}$--degree Legendre Polynomial \\
$\g{\lambda}$ & costate \\
$\g{\Lambda}$ & defect constraint Lagrange multiplier \\
$P$ & probability \\
$\Omega$ & sample space for a random event \\
$\bb{E}$ & expectation (mean) function \\
$1_{(\cdot)}$ & indicator function \\
$\boldsymbol{\xi}$ & random vector \\
$\psi$ & random variable \\
$ \mathbf{q} $ & event boundary \\
$\epsilon_{()}$ & risk violation parameter \\
$k_{()}$ & kernel \\
$K_{()}$ & integrated kernel function \\
$h$ & bandwidth \\
$\phi$ & user defined parameter \\
$B$ & bias of kernel \\
$T$ & execution time for $\mathbb{GPOPS-II}$ \\
$C$ & number of convergent runs \\
$H$ & number of runs converging to higher cost solution \\
\end{longtable*}}
\section{Introduction}
Optimal control problems arise frequently in a variety of engineering and non-engineering disciplines. The goal of an optimal control problem is to determine the state and control of a controlled dynamical system that optimize a given performance index while satisfying path constraints, event constraints, and boundary conditions~\cite{Betts3}. Optimal control problems are either deterministic or stochastic. A deterministic optimal control problem contains no uncertainty, while a stochastic optimal control problem contains uncertainty. Forms of uncertainty include measurement error, process noise, model error, and uncertainty in the constraints. Examples where the constraints contain uncertainty include fuzzy-boundary keep-out zone path constraints~\cite{Keil1}, variable control limitation path constraints~\cite{Kumar1}, and event constraints with variations in the state~\cite{Kumar1,Caillau}. Constraints with uncertainty are often modeled as chance constraints, and optimal control problems subject to chance constraints are called chance-constrained optimal control problems (CCOCPs).
Due to the probabilistic form of the chance constraints, most CCOCPs must be solved numerically. Numerical methods for solving optimal control problems have been, however, developed primarily for solving deterministic optimal control problems. As a result, methods for transforming the CCOCP to a deterministic optimal control problem have been developed. Many of these methods focus on transforming the chance constraints to deterministic constraints in a manner that retains the key stochastic properties of the original chance constraint. Such methods include the methods of Refs.~\cite{blackmore10,Blackmore1,ono10,okamoto19,hokayem13} that are applicable to linear chance constraints. The methods of Refs.~\cite{Pinter,muhlpfordt18,Nemirovski} are applicable to chance constraints when certain information about the chance constraint is available. When information about the chance constraints is not available, the methods of Refs.~\cite{Kumar1,Caillau,Pagnoncelli1,ono15,Calafiore1,Calafiore2,Campi1,Chai,Ahmed,Calfa,Keil2} are applicable.
Recently, Ref.~\cite{Keil2} developed a new method for transforming chance constraints to deterministic constraints using biased kernel density estimators (KDEs). An advantage of the method developed in Ref.~\cite{Keil2} is that the deterministic constraint is neither overly conservative relative to the chance constraint nor in violation of the boundary of the chance constraint. In addition, the method developed in Ref.~\cite{Keil2} has the key feature that it is formulated using an adaptive Gaussian quadrature orthogonal collocation method \cite{Benson2,Rao8,Garg1,Garg2,Patterson2015} known as Legendre-Gauss-Radau (LGR) collocation. By combining biased KDEs with LGR collocation as performed in Ref.~\cite{Keil2}, it is possible to take advantage of several properties of Gaussian quadrature collocation. First, using Gaussian quadrature collocation, the constraints are evaluated independently at each collocation point. Second, Gaussian quadrature collocation provides high-accuracy solutions along with exponential convergence for smooth optimal control problems \cite{Garg1,Garg2,Patterson2015,HagerHouRao15a,HagerHouRao16a,HagerLiuMohapatraWangRao19,DuChenHager2019}.
While Ref.~\cite{Keil2} provides a method for transforming CCOCPs to deterministic optimal control problems using biased KDEs, it does not provide a computationally reliable and efficient method for solving the resulting optimization problem. In fact, it is shown in this paper that, using a naive approach, solving the nonlinear programming problem (NLP) that arises from the approach of Ref.~\cite{Keil2} produces different results depending upon the manner in which the problem is initialized. Moreover, even when a solution can be obtained, it is shown that unnecessary computational effort is required. As a result, it is important in practical applications to develop a computational approach that can be used with the method of Ref.~\cite{Keil2} that simultaneously leads to a tractable optimization problem and enables the optimization problem to be solved efficiently. Situations that could benefit from the approach developed in this paper include rapid (that is, on short notice) or real-time trajectory optimization.
This paper describes a new computational approach for solving CCOCPs using biased KDEs together with LGR collocation. The approach developed in this paper is called a {\em warm start method} because it improves the initial guess provided to the NLP solver for the transcribed CCOCP. Moreover, because the transcribed CCOCP is solved using collocation together with mesh refinement, this warm start approach is used to generate an initial guess for the NLP solver on each mesh. The key benefit of this approach is that it improves the reliability and efficiency of solving CCOCPs using biased KDEs together with LGR collocation.
The warm start method developed in this paper has three major components. The first component tunes a parameter of the biased KDE in order to improve the starting point for the NLP solver. The second component is a kernel switching procedure which ensures that the starting kernel leads to a tractable optimization problem while retaining the ability to obtain results for other kernels. The third component is a procedure that incrementally increases the number of samples used with the biased KDEs.
The key contribution of this research is a novel method for reliably and efficiently solving the NLP that arises from transforming a CCOCP using biased KDEs together with LGR collocation. The goal is to provide researchers with an approach that remains tractable as CCOCPs become increasingly complex. When the method is applied systematically to two example CCOCPs, significant improvements are found using the approach developed in this paper.
This paper is organized as follows. Section~\ref{sect:review} provides a brief review of biased KDEs and LGR collocation. In Section~\ref{sect:Jorris2Dnaive}, a complex CCOCP is solved using biased KDEs and LGR collocation without a warm start. In Section~\ref{sect:tech}, the warm start method is developed. Section~\ref{sect:discusstech} provides the results of solving the complex CCOCP with the warm start method. In Section~\ref{sect:examples}, a second complex CCOCP is solved using the warm start method. Finally, Sections~\ref{sect:conclude} and~\ref{sect:discussion} provide, respectively, conclusions and a discussion.
\section{Chance Constrained Optimal Control\label{sect:review}}
In this section, biased kernel density estimators (KDEs) are combined with Legendre-Gauss-Radau (LGR) collocation to transform a chance constrained optimal control problem (CCOCP) to a nonlinear programming problem (NLP). First, Section~\ref{sect:discCCOCP} describes a general continuous CCOCP. Section~\ref{sect:LGRcolloc} then describes LGR collocation. Finally, in Section~\ref{sect:biasKDE}, biased KDEs are applied to transform the chance constraints to deterministic constraints, where the deterministic constraints retain the main probability properties of the chance constraint.
\subsection{General Chance Constrained Optimal Control Problem}\label{sect:discCCOCP}
Consider the following general continuous CCOCP. Determine the state
$\m{y}(\tau)\in\bb{R}^{n_y}$ and the control $\m{u}(\tau)\in\bb{R}^{n_u}$ on the domain $\tau \in [-1, +1]$, along with the initial time $t_0$ and the terminal time $t_f$, that minimize the cost functional
\begin{subequations}
\begin{equation}\label{bolza-cost-s}
\C{J} =\C{M}(\m{y}(-1),t_0,\m{y}(+1),t_f) + \frac{t_f-t_0}{2}\int_{-1}^{+1} \C{L}(\m{y}(\tau),\m{u}(\tau), t(\tau, t_0, t_f))\, d\tau,
\end{equation}
subject to the dynamic constraints
\begin{equation}\label{bolza-dyn-s}
\frac{d\m{y}}{d\tau} -
\frac{t_f-t_0}{2}\m{a}(\m{y}(\tau),\m{u}(\tau), t(\tau, t_0, t_f) )=\m{0},
\end{equation}
the inequality path constraints
\begin{equation}\label{bolza-path-s}
\m{c}_{\min} \leq \m{c}(\m{y}(\tau),\m{u}(\tau), t(\tau, t_0, t_f) )\leq \m{c}_{\max},
\end{equation}
the boundary conditions
\begin{equation}\label{bolza-bc-s}
\m{b}_{\min} \leq \m{b}(\m{y}(-1),t_0,\m{y}(+1),t_f) \leq \m{b}_{\max},
\end{equation}
and the chance constraint
\begin{equation}\label{bolza-pathcc-s}
P( \m{F} (\m{y}(\tau),\m{u}(\tau),t(\tau, t_0, t_f);\g{\xi}) \geq \m{q}) \geq 1- \epsilon.
\end{equation}
\end{subequations}
The random vector $\g{\xi}$ is supported on the set $\Omega \subseteq \bb{R}^d$. The inequality $ \m{F}(\cdot) \geq \m{q}$ defines an event in the probability space with probability measure $P(\cdot)$, where $\m{q} \in \bb{R}^{n_g}$ is the boundary of the event and $\epsilon$ is the risk violation parameter. It is noted that the path, event, and dynamic constraints can all be in the form of a chance constraint. Additionally, it is noted that the time interval $\tau\in[-1,+1]$ can be transformed
to the time interval $t\in[t_0,t_f]$ via the affine transformation
\begin{equation}\label{tau-to-t}
t \equiv t(\tau,t_0,t_f) = \frac{t_f-t_0}{2}\tau + \frac{t_f+t_0}{2}.
\end{equation}
The continuous CCOCP of Eqs.~\eqref{bolza-cost-s}--\eqref{bolza-pathcc-s} must be transformed to a form that is solvable using numerical methods. For application with numerical methods, the CCOCP is discretized on the domain $\tau\in[-1,+1]$, which is partitioned into a {\em mesh} consisting of $K$ {\em mesh intervals} $\C{S}_k=[T_{k-1},T_k],\; k=1,\ldots,K$, where $-1 = T_0 < T_1 < \ldots < T_K = +1$. The mesh intervals have the property that $\displaystyle \cup_{k=1}^{K} \C{S}_k=[-1,+1]$. Let $\m{y}^{(k)}(\tau)$ and $\m{u}^{(k)}(\tau)$ be the state and control in $\C{S}_k$. Using the transformation given in Eq.~\eqref{tau-to-t}, the CCOCP of Eqs.~\eqref{bolza-cost-s}--\eqref{bolza-pathcc-s} can then be rewritten as follows. Minimize the cost functional
\begin{subequations}
\begin{equation}\label{bolza-cost-segmented}
\C{J} = \C{M}(\m{y}^{(1)}(-1),t_0,\m{y}^{(K)}(+1),t_f) + \frac{t_f-t_0}{2}\sum_{k=1}^K \int_{T_{k-1}}^{T_k} \C{L}(\m{y}^{(k)}(\tau),\m{u}^{(k)}(\tau),t)\, d\tau,
\end{equation}
subject to the dynamic constraints
\begin{equation}\label{bolza-dyn-segmented}
\displaystyle\frac{d\m{y}^{(k)}(\tau)}{d\tau} - \frac{t_f-t_0}{2}\m{a}(\m{y}^{(k)}(\tau),\m{u}^{(k)}(\tau), t)=\m{0}, \quad (k=1,\ldots,K),
\end{equation}
the path constraints
\begin{equation}\label{bolza-path-segmented}
\m{c}_{\min} \leq \m{c}(\m{y}^{(k)}(\tau),\m{u}^{(k)}(\tau), t) \leq \m{c}_{\max},\quad (k=1,\ldots,K),
\end{equation}
the boundary conditions
\begin{equation}\label{bolza-bc-segmented}
\m{b}_{\min} \leq \m{b}(\m{y}^{(1)}(-1),t_0,\m{y}^{(K)}(+1),t_f) \leq \m{b}_{\max},
\end{equation}
and the chance constraint
\begin{equation}\label{bolza-pathcc-segmented}
P(\m{F} (\m{y}^{(k)}(\tau),\m{u}^{(k)}(\tau),t;\g{\xi}) \geq \m{q}) \geq 1- \epsilon.
\end{equation}
Because the state must be continuous, the following condition is enforced at each interior mesh point:
\begin{equation}\label{eq:contin}
\m{y}^{(k)}(T_k)=\m{y}^{(k+1)}(T_k),\;(k=1,\ldots,K-1).
\end{equation}
\end{subequations}
\subsection{Legendre-Gauss-Radau Collocation}\label{sect:LGRcolloc}
The form of discretization that will be applied to the CCOCP in Section~\ref{sect:discCCOCP} is collocation at
LGR points~\cite{Garg1,Garg2,Patterson2015}. In the LGR
collocation method, the state of the continuous CCOCP is approximated in $\C{S}_k,\;k\in[1,\ldots,K]$, as
\begin{equation}\label{state-approximation-LGR}
\begin{split}
\m{y}^{(k)}(\tau) \approx \m{Y}^{(k)}(\tau) & = \sum_{j=1}^{N_k+1}
\m{Y}_{j}^{(k)} \ell_{j}^{(k)}(\tau),\\ \ell_{j}^{(k)}(\tau) & = \prod_{\stackrel{l=1}{l\neq j}}^{N_k+1}\frac{\tau-\tau_{l}^{(k)}}{\tau_{j}^{(k)}-\tau_{l}^{(k)}},
\end{split}
\end{equation}
where $\tau\in[-1,+1]$, $\ell_{j}^{(k)}(\tau),$ $j=1,\ldots,N_k+1$, is a
basis of Lagrange polynomials,
$\left(\tau_1^{(k)},\ldots,\tau_{N_k}^{(k)}\right)$ are the
LGR collocation points in $\C{S}_k =$ $[T_{k-1},T_k)$, and
$\tau_{N_k+1}^{(k)}=T_k$ is a noncollocated point. Differentiating
$\m{Y}^{(k)}(\tau)$ in Eq.~(\ref{state-approximation-LGR}) with
respect to $\tau$ gives
\begin{equation}\label{diff-state-approximation-LGR}
\frac{d\m{Y}^{(k)}(\tau)}{d\tau} = \sum_{j=1}^{N_k+1}\m{Y}_{j}^{(k)}\frac{d\ell_j^{(k)}(\tau)}{d\tau}.
\end{equation}
Defining $t_i^{(k)}=t(\tau_i^{(k)},t_0,t_f)$ using
Eq.~\eqref{tau-to-t}, the dynamics are then approximated at the $N_k$
LGR points in mesh interval $k\in[1,\ldots,K]$ as
\begin{equation}\label{collocation-LGR}
\sum_{j=1}^{N_k+1}D_{ij}^{(k)} \m{Y}_j^{(k)} - \frac{t_f-t_0}{2}\m{a}(\m{Y}_i^{(k)},\m{U}_i^{(k)},t_i^{(k)})=\m{0}, \ (i=1,\ldots,N_k),
\end{equation}
where $D_{ij}^{(k)} = d\ell_j^{(k)}(\tau_i^{(k)})/d\tau,\;(i=1,\ldots,N_k),\;(j=1,\ldots,N_k+1)$ are the elements of the $N_k\times (N_k+1)$ {\em Legendre-Gauss-Radau differentiation matrix}~\cite{Garg1} in mesh interval
$\C{S}_k,\;k\in[1,\ldots,K]$. The LGR discretization then leads to
the following resulting form of the discretized CCOCP. Minimize
\begin{equation}\label{cost-LGR}
\C{J} \approx \C{M}(\m{Y}_{1}^{(1)},t_0,\m{Y}_{N_K+1}^{(K)},t_f) + \sum_{k=1}^{K} \sum_{j=1}^{N_k} \frac{t_f-t_0}{2}
w_{j}^{(k)} \C{L}(\m{Y}_{j}^{(k)},\m{U}_{j}^{(k)},t_j^{(k)}),
\end{equation}
subject to the collocation constraints of Eq.~\eqref{collocation-LGR}
and the constraints
\begin{gather}\label{eq:differential-collocation-conditions-LGR}
\m{c}_{\min} \leq \m{c}(\m{Y}_{i}^{(k)},\m{U}_{i}^{(k)},t_i^{(k)}) \leq \m{c}_{\max},\; (i=1,\ldots,N_k),\\
\m{b}_{\min} \leq \m{b}(\m{Y}_{1}^{(1)},t_0,\m{Y}_{N_K+1}^{(K)},t_f) \leq \m{b}_{\max}, \\
P( \m{F} (\m{Y}_{i}^{(k)},\m{U}_{i}^{(k)},t_i^{(k)};\g{\xi}) \geq \m{q}) \geq 1- \epsilon ,\; (i=1,\ldots,N_k) , \label{cc-constraint}\\
\m{Y}_{N_k+1}^{(k)} = \m{Y}_1^{(k+1)} , \quad (k=1,\ldots,K-1), \label{continuity-constraint}
\end{gather}
where $N = \sum_{k=1}^{K} N_k$ is the total number of LGR points and
Eq.~\eqref{continuity-constraint} is the continuity condition on
the state that is enforced at the interior mesh points
$(T_1,\ldots,T_{K-1})$ by treating $\m{Y}_{N_k+1}^{(k)}$ and
$\m{Y}_1^{(k+1)}$ as the same variable. In order for Eqs.~\eqref{collocation-LGR}--\eqref{continuity-constraint} to be an NLP, the chance constraint is transformed to a deterministic constraint in the next section.
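The entries of the differentiation matrix $D_{ij}^{(k)}$ can be assembled from the barycentric weights of the Lagrange basis. The following sketch is illustrative only: it accepts arbitrary distinct support points and omits the computation of the LGR points themselves. For an LGR mesh interval, one would pass the $N_k$ collocation points together with the noncollocated endpoint $\tau_{N_k+1}^{(k)}=T_k$ and keep the first $N_k$ rows to obtain the $N_k\times(N_k+1)$ matrix.

```python
import numpy as np

def lagrange_diff_matrix(nodes):
    """Differentiation matrix for the Lagrange basis on distinct `nodes`:
    D[i, j] = d ell_j / d tau evaluated at nodes[i]."""
    t = np.asarray(nodes, dtype=float)
    n = t.size
    # Barycentric weights: w_j = 1 / prod_{l != j} (t_j - t_l).
    w = np.array([1.0 / np.prod([t[j] - t[l] for l in range(n) if l != j])
                  for j in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (t[i] - t[j])
        D[i, i] = -np.sum(D[i])  # each row of D annihilates constants
    return D
```

Because Lagrange interpolation on $n$ points is exact for polynomials of degree at most $n-1$, the matrix differentiates such polynomials exactly.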
\subsection{Biased Kernel Density Estimators}\label{sect:biasKDE}
In this section, the chance constraint of Eq.~\eqref{cc-constraint} is transformed to a deterministic constraint using biased KDEs~\cite{Keil2}. In order to transform the chance constraint, first, it is noted that the function $\m{F}(\cdot)$ of Eq.~\eqref{cc-constraint} is itself a random vector $\g{\psi}$ whose associated probabilistic properties are unknown. Consequently, the constraint of Eq.~\eqref{cc-constraint} can be redefined as:
\begin{equation}\label{eq:CCexample}
P(\g{\psi} \geq \m{q}) \geq 1-\epsilon.
\end{equation}
Because $\g{\psi}$ is a random vector, Eq.~\eqref{eq:CCexample} is a joint chance constraint. Using Boole's inequality together with the approach of Refs.~\cite{Nemirovski,Blackmore1,Kumar1}, the chance constraint given in Eq.~\eqref{eq:CCexample} can be redefined in terms of the following set of conservative constraints (see Refs.~\cite{Blackmore1} and~\cite{Kumar1} for the proof):
\begin{equation}\label{eq:CC_scalar}
\begin{array}{c}
P( \psi \geq q_m ) \geq 1-\epsilon_m, \\
\sum\limits_{m =1}^{n_g} \epsilon_m \leq \epsilon,
\end{array}
\end{equation}
or, equivalently
\begin{equation}\label{eq:CC_scalar_comp}
\begin{array}{c}
P( \psi < q_m ) \leq \epsilon_m, \\
\sum\limits_{m =1}^{n_g} \epsilon_m \leq \epsilon,
\end{array}
\end{equation}
where $m\in [1,\dots,n_g]$ is the index corresponding to the $m$th component of the event and $\psi$ denotes the $m$th component of the random vector $\g{\psi}$ (the subscript $m$ on $\psi$ is omitted for compactness).
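To see why Eq.~\eqref{eq:CC_scalar} is conservative, apply Boole's inequality to the complement of the joint event, where $\psi_m$ denotes the $m$th component of $\g{\psi}$:

```latex
P\bigl(\exists\, m \in [1,\dots,n_g] : \psi_m < q_m\bigr)
  \;\leq\; \sum_{m=1}^{n_g} P(\psi_m < q_m)
  \;\leq\; \sum_{m=1}^{n_g} \epsilon_m
  \;\leq\; \epsilon ,
```

so that the complement event satisfies $P(\g{\psi} \geq \m{q}) \geq 1-\epsilon$, as required by Eq.~\eqref{eq:CCexample}.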
As a result of the chance constraint now being in the scalar form of Eq.~\eqref{eq:CC_scalar}, this chance constraint can be transformed to a deterministic constraint using biased KDEs. First, for this transformation, the kernel $k(\cdot) $ of the KDE is integrated to obtain the following integrated kernel function $K(\cdot)$:
\begin{equation}\label{eq:kernCDF}
\begin{array}{c}
K(\eta_j) = \int_{-\infty}^{\eta_j}k(v_j) d v_j, \\
\eta_j = \frac{q_m-\psi_j}{h},
\end{array}
\end{equation}
where $h$ is the bandwidth, $j = 1,\dots,N$ indexes the samples of the random variable $\psi$, and $N$ is the number of samples. It is noted that the samples of $\psi$ are obtained by sampling $\g{\xi}$. Next, the following relation between the chance constraint of Eq.~\eqref{eq:CC_scalar} and the integrated kernel function $K(\cdot)$ is defined:
\begin{equation}\label{eq:KDEbias}
P(\psi < q_m) \leq \frac{1}{N} \sum_{j = 1}^N K_B \left( \eta_j \right),
\end{equation}
where the subscript $B$ on the integrated kernel function indicates that the kernel function has been biased by amount $B(h)$ and the right-hand side of Eq.~\eqref{eq:KDEbias} is the biased KDE. Requiring the biased KDE to be no greater than $\epsilon_m$ then enforces the chance constraint of Eq.~\eqref{eq:CC_scalar_comp}.
The bias $B(h)$ is chosen such that the biased integrated kernel function $K_B(\cdot)$ satisfies the following inequality:
\begin{align}\label{eq:geninequality}
1_{[0,+\infty)} \left( \nu \right) \leq K_B(\nu), \ \forall \nu,\\
\nu = \frac{\psi-q_m}{h}, \ \textrm{with}~h >0.
\end{align}
The relation of Eq.~\eqref{eq:geninequality} is the first requirement from Ref.~\cite{Keil2} for the relation between the biased KDE and the chance constraint from Eq.~\eqref{eq:KDEbias} to hold. The second requirement is that the number of MCMC samples $N$ reaches a value $N_c$ that is sufficiently large~\cite{Kumar1,Keil2,Kumar3,Kumar2} in order to accurately approximate the characteristics of the distribution of the random vector $\boldsymbol{\xi}$ \cite{MCMCMethods}. If these samples are available, the following expression
\begin{equation}\label{eq:inequaltosat}
\lim_{N \to N_c} \frac{1}{N} \sum\limits_{j = 1}^N K_B (\nu_j) \ = \bb{E} [ K_B (\nu)],
\end{equation}
converges to the expectation $\bb{E} [ K_B (\nu)]$, which exists for any $h>0$ and satisfies
\begin{equation}\label{eq:setsforproof}
1-\epsilon_m \leq \bb{E} [ K_B (\nu)].
\end{equation}
If the first and second requirements from, respectively, Eq.~\eqref{eq:geninequality} and Eq.~\eqref{eq:inequaltosat} are satisfied, the chance constraint of Eq.~\eqref{cc-constraint} can be transformed to the following set of deterministic constraints:
\begin{equation}\label{eq:KDEfinalform}
\begin{array}{c}
\frac{1}{N} \sum\limits_{j = 1}^N K_B \left( \eta_j \right) \leq \epsilon_m, \\
\sum\limits_{m =1}^{n_g} \epsilon_m \leq \epsilon.
\end{array}
\end{equation}
By replacing the chance constraint of Eq.~\eqref{cc-constraint} with the deterministic constraints of Eq.~\eqref{eq:KDEfinalform}, the system of equations of Eqs.~\eqref{collocation-LGR}--\eqref{continuity-constraint} is now an NLP that can be solved using available software such as SNOPT~\cite{Gill1,Gill2}, IPOPT~\cite{Biegler2}, and KNITRO~\cite{Byrd1}.
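As an illustration of Eqs.~\eqref{eq:kernCDF}--\eqref{eq:KDEfinalform}, the following sketch estimates $P(\psi < q_m)$ using an Epanechnikov kernel biased by one bandwidth (one choice that satisfies the requirement of Eq.~\eqref{eq:geninequality}, since the integrated Epanechnikov kernel reaches unity one bandwidth past its center). This is a simplified stand-in for the construction of Ref.~\cite{Keil2}, not its exact implementation.

```python
import numpy as np

def epan_cdf(v):
    """Integrated Epanechnikov kernel: K(v) = int_{-inf}^{v} 0.75*(1 - u^2) du."""
    v = np.clip(v, -1.0, 1.0)
    return 0.5 + 0.75 * v - 0.25 * v ** 3

def biased_kde_prob(psi, q, h):
    """Biased KDE estimate of P(psi < q).  Shifting the argument by +1
    (a bias of one bandwidth) makes the integrated kernel dominate the
    step indicator, so the estimate never under-reports the empirical
    violation fraction of the samples."""
    eta = (q - np.asarray(psi, dtype=float)) / h  # eta_j = (q - psi_j) / h
    return float(np.mean(epan_cdf(eta + 1.0)))
```

Every sample with $\psi_j < q$ contributes exactly one to the sum, and the remaining samples contribute nonnegatively, so the estimate upper-bounds the empirical violation fraction by construction.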
\section{Motivation for Warm Start Method\label{sect:Jorris2Dnaive}}
This section provides motivation for the warm start method developed in Section~\ref{sect:tech}. This motivation is furnished via a complex example CCOCP that is solved using biased KDEs and LGR collocation without a warm start method. In Section~\ref{sect:Jorris2D}, the example is presented. Section~\ref{sect:guesseschoic} provides the initialization. Next, Section~\ref{sect:setup} describes the setup for the optimal control software $\mathbb{GPOPS-II}$. Finally, Section~\ref{sect:Jorris2Ddiscuss} provides the results of solving the example.
\subsection{Example 1\label{sect:Jorris2D}}
Consider the following chance constrained variation of a deterministic optimal control problem from Ref.~\cite{Jorris2}. Minimize the cost functional
\begin{equation}\label{eq:costvers2}
J = t_f,
\end{equation}
subject to the dynamic constraints
\begin{equation} \label{eq:dynvers2}
\begin{array}{ccc}
\dot x (t) & = & V(t) \cos \theta (t), \\
\dot y (t) & = & V(t) \sin \theta (t), \\
\dot \theta (t) & = & \frac{\tan(\sigma_{\max})}{V(t)}\,u(t), \\
\dot V (t) & = & a,
\end{array}
\end{equation}
the boundary conditions
\begin{equation}\label{eq:boundvers2}
\begin{array}{cccccc}
\ x(0) & = & -1.385, & x(t_f) & = & 0.516, \\
\ y(0) & = & 0.499, & y(t_f) & = & 0.589, \\
\theta (0) & = & 0, & \theta(t_f) & = & \textrm{free}, \\
V (0) & = & 0.293, & V(t_f) & = & \textrm{free}, \\
\end{array}
\end{equation}
the control bounds
\begin{equation}\label{eq:contvers2}
-1 \leq u \leq 1,
\end{equation}
the event constraints
\begin{equation}\label{eq:eventvers2}
\big( x(t_i)-x_i,y(t_i)-y_i \big)=\big( 0,0 \big),\quad (i=1,2),
\end{equation}
and the chance path inequality constraint (keep-out zone constraint)
\begin{equation}\label{eq:CC1}
P \left( R^2 - \Delta x_{\xi_1}^2-\Delta y_{\xi_2}^2 > \delta \right) \leq \epsilon_d,
\end{equation}
where $(\Delta x_{\xi_1}, \Delta y_{\xi_2})$ are defined as
\begin{equation}\label{eq:deltas}
\big( \Delta x_{\xi_1},\Delta y_{\xi_2} \big) = \big( x+\xi_1 -x_c, y+\xi_2-y_c \big).
\end{equation}
The random variables $\xi_1$ and $\xi_2$ have normal distributions of $N(\mu_1,\sigma_1^2)$ and $N(\mu_2,\sigma_2^2)$, respectively. The parameters for the example problem are provided in Table~\ref{table3}.
\begin{table}[ht]
\caption{Parameters for Example 1.\label{table3}}
\renewcommand{\baselinestretch}{1}\small\normalfont
\centering
\begin{tabular}{| c | c |}
\hline
Parameter & Value \\ \hline\hline
$(x_c,y_c)$ & $(0.193, 0.395)$\\ \hline
$R$ & $0.243$ \\ \hline
$\epsilon_d$ & $0.010$ \\ \hline
$\delta$ & $0.020$ \\ \hline
$a $ & $-0.010$ \\ \hline
$(x_1, y_1)$ & $(-0.737,0.911)$ \\ \hline
$(x_2, y_2)$ & $(-0.340,0.297)$ \\ \hline
$\sigma_{\textrm{max}}$ & $0.349$ \\ \hline
$ (\mu_1,\mu_2)$ & $(0,0)$ \\ \hline
$(\sigma_1,\sigma_2)$ & $(0.001,0.0005)$ \\ \hline
\end{tabular}
\end{table}
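For illustration, the dynamics of Eq.~\eqref{eq:dynvers2} with the parameter values of Table~\ref{table3} can be transcribed directly. The function below is a sketch in the problem's scaled units; the function name and state ordering are assumptions.

```python
import numpy as np

def dynamics(state, u, a=-0.010, sigma_max=0.349):
    """Right-hand side of the Example 1 dynamics for state (x, y, theta, V)
    and scalar control u in [-1, 1]."""
    x, y, theta, V = state
    return np.array([V * np.cos(theta),          # x-dot
                     V * np.sin(theta),          # y-dot
                     np.tan(sigma_max) / V * u,  # theta-dot
                     a])                         # V-dot
```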
\subsection{Initialization for Example 1}\label{sect:guesseschoic}
For the initialization of Example 1, the problem is divided into three phases. The first phase begins at $ ( x(0),y(0)) $ and ends at $(x_1,y_1)$. The second phase ends at $(x_2,y_2)$, while the third phase ends at $(x(t_f),y(t_f))$. The constraints of Eqs.~\eqref{eq:dynvers2}--\eqref{eq:contvers2} and Eq.~\eqref{eq:CC1} are included in every phase.
Because Example 1 is divided into phases, an initial guess of the state and control must be provided for each phase. Because the deterministic path constraint is active in the third phase and this constraint depends only on $(x,y)$, the initial guess of $(x,y)$ for the third phase affects whether or not the NLP solver converges to a solution. Consequently, as shown in Fig.~\ref{fig:InitialG}, each initial guess of $(x,y)$ for the third phase consists of two line segments joined at one of the points $(0.175,0.611)$, $(0.175,0.785)$, $(0.175,0.960)$, and $(0.349,0.960)$. These initial guesses are referred to, respectively, as initial guesses I, II, III, and IV as shown in Table~\ref{tableInitialGuessesPhase3}. For all other states, a straight-line approximation between the known initial and terminal conditions of each phase was applied. If endpoint conditions were not available, a constant initial guess that did not violate the constraint bounds was used in each phase. The control was set to a constant value of zero in all three phases.
\begin{figure}[ht!]
\centering
\vspace*{0.25in}
{\includegraphics[height = 2.1in]{initialguess.eps}}
\caption{State initial guesses for Example 1.\label{fig:InitialG}}
\end{figure}
\begin{table}[ht!]
\centering
\caption{Initial guesses used for phase 3 of Example 1 and corresponding to Fig.~\ref{fig:InitialG}.\label{tableInitialGuessesPhase3}}
\renewcommand{\baselinestretch}{1}\small\normalfont
\begin{tabular}{|c|c|}\hline
Initial Guess & Label \\ \hline
$(0.175,0.611)$ & I \\ \hline
$(0.175,0.785)$ & II \\ \hline
$(0.175,0.960)$ & III \\ \hline
$(0.349,0.960)$ & IV \\ \hline
\end{tabular}
\end{table}
The chance path constraint is transformed to a deterministic path constraint using the approach of Section~\ref{sect:review}, and evaluating this deterministic path constraint at each collocation point can be computationally intractable when the number of samples is sufficiently large. In order to ensure computational tractability, the chance path constraint of Eq.~\eqref{eq:CC1} is reformulated as follows~\cite{Keil2}:
\begin{equation}\label{eq:CC2}
\epsilon_d \geq
\begin{cases}
0, \ & \textrm{if} \ \Delta x^2 + \Delta y^2 \geq (R+b)^2, \\
P \left( R^2 - \Delta x_{\xi_1}^2-\Delta y_{\xi_2}^2 - \delta > 0 \right), & \textrm{if} \ \Delta x^2 + \Delta y^2 < (R+b)^2,
\end{cases}
\end{equation}
where $(\Delta x, \Delta y)$ is defined as
\begin{equation}\label{eq:Deltas_noxi}
\big( \Delta x,\Delta y \big) = \big( x-x_c, y-y_c \big),
\end{equation}
and $b$ is a user-defined parameter that determines when the chance path constraint is evaluated with or without samples. Due to the size of $R$, $b$ is set equal to $0.05$. The chance path constraint of Eq.~\eqref{eq:CC1} evaluates to a small number if the distance between $(x,y)$ and $(x_c,y_c)$ is sufficiently large, and this small number indicates that the chance path constraint is inactive. Consequently, if the distance between $(x,y)$ and $(x_c,y_c)$ is larger than $R+b$, the chance path constraint is taken to be inactive and is set equal to an arbitrary constant less than $\epsilon_d$ (in this case, zero). Conversely, if the distance between $(x,y)$ and $(x_c,y_c)$ is smaller than $R+b$, the chance path constraint is evaluated using samples. Therefore, when the chance path constraint is transformed to a deterministic path constraint, the deterministic path constraint is evaluated using samples at only a subset of the collocation points, thus improving computational tractability.
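The two-branch evaluation of Eq.~\eqref{eq:CC2} can be sketched as follows. The function name is illustrative, and the naive indicator average in the near-zone branch is a stand-in for the biased KDE estimate used in the actual method.

```python
import numpy as np

def keepout_violation_prob(x, y, xc, yc, R, b, delta, xi1, xi2):
    """Two-branch evaluation of the keep-out chance constraint:
    the samples are touched only when (x, y) lies within R + b of the
    zone center (xc, yc)."""
    if (x - xc) ** 2 + (y - yc) ** 2 >= (R + b) ** 2:
        return 0.0  # inactive: any constant below eps_d suffices
    dx = x + xi1 - xc
    dy = y + xi2 - yc
    g = R ** 2 - dx ** 2 - dy ** 2 - delta
    # Naive indicator average; the method replaces this with a biased KDE.
    return float(np.mean(g > 0))
```

Far from the keep-out zone the function short-circuits without evaluating the samples, which is exactly the source of the computational savings described above.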
\subsection{Setup for Optimal Control Software $\mathbb{GPOPS-II}$ \label{sect:setup}}
Example 1 was solved using the $hp$-adaptive Gaussian quadrature collocation \cite{Garg1,Garg2,Garg3,Patterson2015,Liu2015,Liu2018,Darby2,Darby3,Francolin2014a} optimal control software $\mathbb{GPOPS-II}$~\cite{Patterson2014} together with the NLP solver SNOPT~\cite{Gill1,Gill2} (using a maximum of $500$ NLP solver iterations) and the mesh refinement method of Ref.~\cite{Liu2018}. All derivatives required by the NLP solver were obtained using sparse central finite-differencing~\cite{Patterson2012}. The initial mesh consisted of $10$ mesh intervals with four collocation points each. Next, constant bandwidths for each kernel were determined using the MATLAB$^{\textrm{\textregistered}}$ function \textsf{ksdensity}~\cite{bowman1,silverman1}. Furthermore, the method of Neal~\cite{MCMCMethods,Neal2} was used to obtain $50,000$ MCMC samples per run. Finally, twenty runs for each kernel were performed using a 2.9 GHz Intel$^{\textrm{\textregistered}}$ Core i9 MacBook Pro running Mac OS-X version 10.13.6 (High Sierra) with 32 GB 2400 MHz DDR4 RAM using MATLAB$^{\textrm{\textregistered}}$ version R2018a (build 9.4.0.813654).
Because a new set of samples is generated for each run, it is not guaranteed that the NLP solver will converge to a solution on every run. Therefore, consecutive runs are performed to determine the reproducibility of the results, and the same number of mesh refinement iterations must be applied on each run. In order to ensure consistent results, the number of mesh refinement iterations is limited to two. It is noted that, from trial and error, twenty runs was found to be sufficient to determine whether there were issues of reproducibility (such as a run not converging), because such issues would surface in at least one of the twenty runs.
\subsection{Results and Discussion for Example 1: Without a Warm Start}\label{sect:Jorris2Ddiscuss}
In this section, results are provided for solving Example 1 using the approach of Section~\ref{sect:review} without a warm start, and with the following three kernels: the Split-Bernstein~\cite{Keil2} kernel, the Epanechnikov kernel~\cite{Epanech1} with a bias equal to the bandwidth, and the Gaussian kernel with a bias equal to three times the bandwidth. The Gaussian kernel was chosen despite not satisfying the requirements of a biased kernel, so that solutions could be obtained using a smooth kernel~\cite{Keil2}.
\begin{figure}[ht]
\centering
\vspace*{0.25in}
\subfloat[States.]{\includegraphics[height = 2.1in]{localSB_pos.eps}}
~~~~\subfloat[Control.]{\includegraphics[height = 2.1in]{localSB_control.eps}}
\caption{Higher cost solution for Example 1.}
\label{fig:LocalResult}
\end{figure}
\begin{figure}[ht]
\centering
\vspace*{0.25in}
\subfloat[States.]{\includegraphics[height = 2.1in]{optSB_pos.eps}}
~~~~\subfloat[Control.]{\includegraphics[height = 2.1in]{optSB_control.eps}}
\caption{Lower cost solution for Example 1.}
\label{fig:OptResult}
\end{figure}
For Example 1, the NLP solver could either converge to a higher cost or lower cost solution, or not converge. Figures~\ref{fig:LocalResult} and~\ref{fig:OptResult} show the two different possible solutions obtained using the Split-Bernstein kernel. It is noted that, for the Gaussian kernel, an infeasible solution was obtained for one run with initial guess II. This was the only infeasible solution obtained for all of the runs. Additionally, for Example 1, Tables~\ref{tableSB_naive}--\ref{tableG_naive} contain the results of twenty runs using, respectively, the Split-Bernstein, Epanechnikov, and Gaussian kernels. For Tables~\ref{tableSB_naive}--\ref{tableG_naive}, $C$ is the number of times the NLP solver converged and $H$ is the number of times the NLP solver converged to the higher cost solution. Additionally, $\mu_T$, $T_{\min}$, and $T_{\max}$ are, respectively, the average, minimum, and maximum of the execution times for $\mathbb{GPOPS-II}$ obtained for all the runs. Comparing the results shown in Tables~\ref{tableSB_naive}--\ref{tableG_naive}, it is seen that a large percentage of the runs either converge to the higher cost solution or do not converge. The best convergence results were for the Epanechnikov kernel with the initial guess $(0.349,0.960)$. Furthermore, the run times for all three kernels are high regardless of the initial guess, where the best run times were for the Gaussian kernel with the initial guess $(0.175,0.960)$. Thus, both the kernel and initial guess have an impact on convergence of the NLP solver and run time.
\begin{table}[ht]
\centering
\caption{Results for Example 1 Using the Split-Bernstein, Epanechnikov, and Gaussian Kernels.\label{tableExample1Results}}
\renewcommand{\baselinestretch}{1}\small\normalfont
\subfloat[Results for Split-Bernstein Kernel.\label{tableSB_naive}]{
\begin{tabular}{| c || c | c | c | c |}
\hline
\backslashbox{Quantity}{Initial Guess} & I & II & III & IV \\ \hline
$C$ & $17$ & $14$ & $18$ & $18$ \\ \hline
$H$ & $7$ & $7$ & $19$ & $3$ \\ \hline
$\mu_T$ (s) & $314.6$ & $165.1$ & $129.1$ & $ 192.4$ \\ \hline
$T_{\min}$ (s) & $52.85$ & $43.05$ & $47.69$ & $59.61$ \\ \hline
$T_{\max}$ (s) & $2465.8$ & $398.3$ & $1090.4$ & $1916.8$ \\ \hline
\end{tabular}
}
\subfloat[Results for Epanechnikov Kernel.\label{tableE_naive}]{
\begin{tabular}{| c || c | c | c | c |}
\hline
\backslashbox{Quantity}{Initial Guess} & I & II & III & IV \\ \hline
$C$ & $18$ & $20$ & $20$ & $19$ \\ \hline
$H$ & $3$ & $16$ & $19$ & $4$ \\ \hline
$\mu_T$ (s) & $346.0$ & $363.5$ & $159.2$ & $261.1$ \\ \hline
$T_{\min}$ (s) & $84.64$ & $80.88$ & $51.94$ & $64.43$ \\ \hline
$T_{\max}$ (s) & $2656.2$ & $1375.0$ & $517.6$ & $1954.8$ \\ \hline
\end{tabular}
}
\subfloat[Results for Gaussian Kernel.\label{tableG_naive}]{
\begin{tabular}{| c || c | c | c | c |}
\hline
\backslashbox{Quantity}{Initial Guess} & I & II & III & IV \\ \hline
$C$ & $17$ & $19$ & $20$ & $19$ \\ \hline
$H$ & $3$ & $3$ & $20$ & $9$ \\ \hline
$\mu_T$(s) & $197.4$ & $287.8$ & $45.98$ & $98.23$ \\ \hline
$T_{\min}$ (s) & $55.38$ & $41.38$ & $34.61$ & $44.27$ \\ \hline
$T_{\max}$ (s) & $980.7$ & $3060.8$ & $85.07$ & $541.7$ \\ \hline
\end{tabular}
}
\end{table}
This example demonstrates that, without a warm start, solving a complex CCOCP using the approach of Section~\ref{sect:review} can be computationally challenging. In particular, these computational challenges include the inability to solve the NLP and large computation times. Moreover, it is noted that these computational issues are affected by the choice of the kernel and initial guess. In order to overcome these computational issues, a warm start method for solving CCOCPs is developed in Section~\ref{sect:tech}.
\section{Warm Start Method\label{sect:tech}}
In this section, a warm start method is developed for efficiently solving CCOCPs using the approach of Section~\ref{sect:review}. The warm start method consists of three components that are designed to aid the NLP solver in converging to a solution. Once the NLP solver has converged, the components are applied to efficiently cycle through mesh refinement iterations. The three components are: (1) bandwidth tuning (Section \ref{sect:bandwidth-tuning}); (2) kernel switching (Section \ref{sect:kernel-switching}); and (3) sample size increase (Section \ref{sect:sample-size-increasing}). Section~\ref{sect:sumpre} summarizes the warm start method.
\subsection{Component 1: Tuning the Bandwidth\label{sect:bandwidth-tuning}}
The first component of the warm start method is tuning the bandwidth. The need for tuning the bandwidth arises because the deterministic constraint obtained by transforming the chance constraint using the approach of Section~\ref{sect:review} is difficult for the NLP solver to evaluate. Increasing the size of the bandwidth improves the starting point for the NLP solver, and the NLP solver is more likely to converge from this better starting point. The solution obtained using the larger bandwidth will, however, have a higher cost than the solution obtained using the original bandwidth. It is noted that using a larger starting bandwidth that is later reduced to the original bandwidth increases the likelihood that the NLP solver will converge, while the solution ultimately obtained corresponds to the original non-smooth constraint.
In this paper, the following approach is used to tune the bandwidth. First, the starting bandwidth is the original bandwidth multiplied by a constant $w \geq 1$, where the original bandwidth is obtained using the MATLAB function \textsf{ksdensity}. When the mesh refinement error is less than a user-chosen parameter $\phi$, $w$ is set to unity. For tuning the bandwidth, $w$ is started at unity and increased over a series of trial runs of $\mathbb{GPOPS-II}$ until either the NLP solver converges on every run, or the NLP solver can no longer converge to the same solution as when the original bandwidth was used. It is noted that this approach for tuning the bandwidth differs from that of Ref.~\cite{Keil2} in that Ref.~\cite{Keil2} does not choose the starting bandwidth relative to the original bandwidth. Finally, it is noted that this first component of the warm start method requires the starting bandwidth to be tuned separately for each kernel.
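As an illustration, the bandwidth schedule can be sketched as follows; Silverman's rule of thumb stands in for the default bandwidth of MATLAB's \textsf{ksdensity}, and all function names are hypothetical:

```python
import numpy as np

def silverman_bandwidth(samples):
    """Rule-of-thumb bandwidth, standing in for ksdensity's default."""
    n = samples.size
    iqr = np.subtract(*np.percentile(samples, [75, 25]))  # p75 - p25
    sigma = min(np.std(samples, ddof=1), iqr / 1.349)
    return 0.9 * sigma * n ** (-0.2)

def tuned_bandwidth(samples, w, mesh_error, phi):
    """Scale the base bandwidth by w >= 1; once the mesh refinement error
    drops below phi, revert to the original (w = 1) bandwidth."""
    base = silverman_bandwidth(samples)
    return base if mesh_error < phi else w * base

rng = np.random.default_rng(0)
xi = rng.normal(0.0, 0.001, size=2000)

h0 = silverman_bandwidth(xi)
h_start = tuned_bandwidth(xi, w=100.0, mesh_error=1e-3, phi=5e-5)  # early iteration
h_final = tuned_bandwidth(xi, w=100.0, mesh_error=1e-6, phi=5e-5)  # converged mesh
```

The trial-run search over $w_i$ would simply repeat the solve with increasing multipliers until every run converges, as described above.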
\subsection{Component 2: Kernel Switching\label{sect:kernel-switching}}
The second component of the warm start method is switching the kernel. The motivation for the kernel switch is that, even with bandwidth tuning, the NLP solver is less likely to converge when certain kernels are chosen. Conversely, with an appropriate kernel, tuning the bandwidth improves the chances that the NLP solver will converge. Thus, starting with an appropriate combination of bandwidth and kernel and later switching to the desired kernel improves the chances that the NLP solver will converge, even if the second kernel would have resulted in divergence of the NLP solver had it been used at the outset. It is noted that an additional benefit of this switch is that kernels, such as the Gaussian kernel, that do not satisfy the criteria for a biased KDE from Section~\ref{sect:review} can still be applied as the starting kernel; only the desired kernel must satisfy these criteria.
In the method of this paper, the kernel switch is performed as follows. First, a starting bandwidth and kernel are chosen by trial runs. Next, the kernel is switched when the mesh refinement error is below $\phi$. Thus, the bandwidth and kernel are updated simultaneously.
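To make the switch concrete, the sketch below (an illustrative assumption, not the implementation used in this paper) approximates the indicator function appearing in the chance constraint by the CDF of the active kernel, and switches from the smooth Gaussian starting kernel to the desired Epanechnikov kernel once the mesh refinement error falls below $\phi$; all function names are hypothetical:

```python
import math

def gaussian_cdf(u):
    """CDF of the (smooth) Gaussian kernel."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def epanechnikov_cdf(u):
    """CDF of the Epanechnikov kernel K(u) = 0.75 (1 - u^2) on [-1, 1]."""
    if u <= -1.0:
        return 0.0
    if u >= 1.0:
        return 1.0
    return 0.75 * (u - u ** 3 / 3.0) + 0.5

def smoothed_step(z, h, kernel_cdf):
    """Kernel-smoothed surrogate for the indicator 1[z > 0], bandwidth h."""
    return kernel_cdf(z / h)

def active_kernel(mesh_error, phi):
    """Start with the Gaussian kernel; switch to the desired Epanechnikov
    kernel once the mesh refinement error falls below phi."""
    return gaussian_cdf if mesh_error >= phi else epanechnikov_cdf
```

Because the bandwidth and kernel are updated on the same mesh refinement iteration, both surrogates are evaluated at the same trigger condition.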
\subsection{Component 3: Incrementally Increasing Sample Size\label{sect:sample-size-increasing}}
The third component of the warm start method is an approach for incrementally increasing the sample set size. The purpose of incrementally increasing the sample set size is that it is computationally expensive to evaluate the deterministic constraint obtained by transforming the chance constraint using the approach of Section~\ref{sect:review} when the sample size is large. If, on the other hand, the number of samples is reduced, the computational effort required by the NLP solver is also reduced~\cite{Roycet1}. Note, however, that by reducing the sample size, it is no longer possible to satisfy the bound on the deterministic constraint as described in Section~\ref{sect:review}. Conversely, if the number of samples is increased incrementally from a small amount to the total number of samples through a series of mesh refinement iterations, the computational expense is reduced while ultimately satisfying the bound on the deterministic constraint.
In this paper, the following approach is used to incrementally increase the sample set size. First, a small number of samples is selected as the starting sample set. Next, when the mesh refinement error drops below the user-specified value $\phi$, the number of samples is increased by a user-specified amount. Thus, the first increase in sample size occurs when the bandwidth and kernel are updated. After the first increase in the number of samples, the sample size is incrementally increased on every subsequent mesh refinement iteration until the full sample size is reached. It is noted that, because the increments for increasing the samples are tied to the mesh refinement iterations, using an excessive number of increments may lead to a need for extra mesh refinement iterations. Moreover, for every set of samples, a different bandwidth must be generated using $\textsf{ksdensity}$. Consequently, the starting bandwidth will be the bandwidth for the smallest sample set multiplied by $w$ as described in Section \ref{sect:bandwidth-tuning}.
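A minimal sketch of such a schedule is shown below, assuming three sample-set sizes ($2000$, $10{,}000$, and a full set of $100{,}000$; the last value is a hypothetical placeholder):

```python
def sample_schedule(mesh_errors, phi, sizes=(2000, 10000, 100000)):
    """Hold the smallest sample set until the mesh refinement error first
    drops below phi; afterwards, advance one size level per iteration."""
    level, out, triggered = 0, [], False
    for err in mesh_errors:
        out.append(sizes[level])            # size used on this iteration
        if not triggered and err < phi:
            triggered = True                # first increase is triggered here
            level = min(level + 1, len(sizes) - 1)
        elif triggered:
            level = min(level + 1, len(sizes) - 1)
    return out

# The first increase occurs on the iteration after the error drops below phi,
# and each subsequent iteration advances toward the full sample set.
schedule = sample_schedule([1e-3, 1e-6, 1e-6, 1e-7], phi=5e-5)
```

Note that, as stated above, each new sample-set size carries its own \textsf{ksdensity} bandwidth, scaled by $w$ until the trigger iteration.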
\subsection{Summary of Warm Start Method}\label{sect:sumpre}
The three components of the warm start method are changing the bandwidth, switching the kernel, and an approach for incrementally increasing the sample set size. These components increase the chances of the NLP solver converging while reducing run time, regardless of the choice of kernel. Also, the sensitivity to the initial guess will be reduced by applying an appropriate starting bandwidth, kernel, and subset of samples. The three components are combined into the warm start method that is presented below.
\begin{shadedframe}
\vspace{-10pt}
\begin{center}
\shadowbox{\bf Warm Start Method for Solving CCOCPs}
\end{center}
\begin{enumerate}[{\bf Step 1:}]
\item Determine bandwidths for subsets of $2000$ and $10,000$ samples drawn from the full sample set, as well as for the full sample set.
\item Choose a constant $w$ and kernel pair.\label{step:multiplier}
\begin{enumerate}[{\bf (a):}]
\item Choose a trial constant $w_i$ and kernel.
\item Perform 10--20 runs for up to two mesh refinement iterations, using $2000$ samples from the full sample set.\label{step:refine}
\item If the NLP solver converges on all runs, set $w = w_i$. Otherwise, choose $w_{i+1} > w_{i}$, possibly change the kernel, and return to {\bf (\ref{step:refine})}.
\end{enumerate}
\item Run the problem through the optimal control software for a series of mesh refinement iterations with $2000$ samples from the full sample set. \label{step:runfull}
\item On the first mesh refinement iteration for which the mesh error decreases below $\phi$: set $w = 1$, change to $10,000$ samples from the full sample set, update the bandwidth, and switch the kernel.
\item On the following mesh refinement iterations, change the sample set to the full set of samples and update the bandwidth.
\end{enumerate}
\end{shadedframe}
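For illustration, the steps above can be sketched as a driver loop; the solver stub, the tenfold error decrease per iteration, and the full sample size of $100{,}000$ are assumptions made only for this example:

```python
def warm_start_solve(solve, full_n, phi=5e-5, w=100.0, max_iters=10):
    """Hypothetical driver mirroring Steps 1--5 of the warm start method:
    begin with 2000 samples, a scaled bandwidth (w), and the starting kernel;
    once the mesh error first falls below phi, reset w to 1, switch to the
    desired kernel, and move to 10,000 samples; on the following iteration,
    move to the full sample set."""
    state = dict(n=2000, w=w, kernel="split-bernstein", switched=False)
    history = []
    for _ in range(max_iters):
        err = solve(state)                      # one mesh refinement iteration
        history.append(dict(state, err=err))    # record the state actually solved
        if err < phi and not state["switched"]:
            state.update(w=1.0, kernel="epanechnikov", n=10000, switched=True)
        elif state["switched"]:
            state["n"] = full_n                 # grow to the full sample set
        if err < phi and history[-1]["n"] == full_n:
            break                               # converged on the full sample set
    return history

# Stub solver: the mesh error is assumed to shrink tenfold per iteration.
errors = iter([1e-3, 1e-4, 1e-5, 1e-6, 1e-7])
hist = warm_start_solve(lambda s: next(errors), full_n=100000)
```

In an actual implementation, the solve call would wrap $\mathbb{GPOPS-II}$ and the recorded bandwidth would be regenerated for each sample-set size.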
\section{Solution to Example 1 Using Warm Start Method \label{sect:discusstech}}
Example 1 is now re-solved using biased KDEs and LGR collocation together with the warm start method of Section \ref{sect:tech}. For Example 1, the values $\phi = 5 \times 10^{-5}$ and $w = 100$ are used, and the Split-Bernstein kernel is the starting kernel. In Section~\ref{sect:lowmesh}, Example 1 is solved using three mesh refinement iterations such that the final mesh refinement is performed using the full sample set, in order to provide a fair comparison with the results obtained without a warm start as given in Section \ref{sect:Jorris2Ddiscuss} (where it is noted that two mesh refinement iterations were used to obtain the results shown in Section \ref{sect:Jorris2Ddiscuss}). In Section~\ref{sect:highmesh}, Example 1 is solved with enough mesh refinement iterations to reach mesh convergence, along with a deterministic version of Example 1.
\subsection{Limited Number of Mesh Refinement Iterations\label{sect:lowmesh}}
Recall from Section \ref{sect:Jorris2Dnaive} that a maximum of two mesh refinement iterations was allowed when solving Example 1 without a warm start. To demonstrate the improvement of using a warm start while providing a fair comparison with the results obtained in Section \ref{sect:Jorris2Dnaive}, in this section, Example 1 is solved using the approach of Section~\ref{sect:review}, with the warm start method of Section~\ref{sect:tech} and a maximum of three mesh refinement iterations. Tables~\ref{tableSB_threeiter}--\ref{tableG_threeiter} show the results obtained using the warm start method for, respectively, the Split-Bernstein, Epanechnikov, and Gaussian kernels. The results show that, with the warm start method, the NLP solver converges to the lower cost solution for all runs, as compared to converging to the higher cost solution or not converging without a warm start, as shown previously in Tables~\ref{tableSB_naive}--\ref{tableG_naive}. Also, the results obtained using the warm start method indicate that convergence of the NLP solver was not affected by the kernel or initial guesses.
Furthermore, the run times using all three kernels are much lower when a warm start is included, when compared with the results of Section~\ref{sect:Jorris2Dnaive}. In addition, the run times are similar regardless of the choice of the kernel or the initial guess. This last observation indicates that the computation time is not affected significantly by the choice of the kernel or the initial guess. To see the differences between the results with and without the warm start method, Table~\ref{perform} provides the percentage increase in computational performance when solving Example 1 with a warm start relative to not including a warm start. The results in Table~\ref{perform} show that the most significant difference between including and not including the warm start method occurs in the maximum and average computation times. Additionally, the difference in the performance increase between the three kernels is insignificant.
\begin{table}[ht]
\centering
\caption{Results for Example 1 with a warm start. \label{tableExample1ResultsWarmStart}}
\renewcommand{\baselinestretch}{1}\small\normalfont
\subfloat[Results for Split Bernstein kernel.\label{tableSB_threeiter}]{
\begin{tabular}{| c || c | c | c | c |}
\hline
\backslashbox{Quantity}{Initial Guess} & I & II & III & IV \\ \hline
$C$ & $20$ & $20$ & $20$ & $20$ \\ \hline
$H$ & $0$ & $0$ & $0$ & $0$ \\ \hline
$\mu_T$ (s) & $29.79$ & $32.15$ & $37.23$ & $30.54$ \\ \hline
$T_{\min}$ (s) & $23.30$ & $23.62$ & $22.82$ & $18.68$ \\ \hline
$T_{\max}$ (s) & $49.92$ & $68.59$ & $83.15$ & $85.10$ \\ \hline
\end{tabular}
}
\subfloat[Results for Epanechnikov kernel.\label{tableE_threeiter}]{
\begin{tabular}{| c || c | c | c | c |}
\hline
\backslashbox{Quantity}{Initial Guess} & I & II & III & IV \\ \hline
$C$ & $20$ & $20$ & $20$ & $20$ \\ \hline
$H$ & $0$ & $0$ & $0$ & $0$ \\ \hline
$\mu_T$ (s) & $31.71$ & $38.34$ & $33.27$ & $33.14$ \\ \hline
$T_{\min}$ (s) & $18.70$ & $28.41$ & $27.36$ & $27.59$ \\ \hline
$T_{\max}$ (s) & $51.80$ & $71.13$ & $53.85$ & $49.42$ \\ \hline
\end{tabular}
}
\subfloat[Results for Gaussian kernel.\label{tableG_threeiter}]{
\begin{tabular}{| c || c | c | c | c |}
\hline
\backslashbox{Quantity}{Initial Guess} & I & II & III & IV \\ \hline
$C$ & $20$ & $20$ & $20$ & $20$ \\ \hline
$H$ & $0$ & $0$ & $0$ & $0$ \\ \hline
$\mu_T$ (s) & $25.67$ & $21.43$ & $21.32$ & $23.29$ \\ \hline
$T_{\min}$ (s) & $18.67$ & $15.68$ & $15.43$ & $17.72$ \\ \hline
$T_{\max}$ (s) & $75.86$ & $27.94$ & $26.22$ & $30.96$ \\ \hline
\end{tabular}
}
\end{table}
\begin{table}[h!]
\small
\caption{Increase in performance using the warm start method.\label{perform}}
\renewcommand{\baselinestretch}{1}\small\normalfont
\centering
\begin{tabular}{| c || c | c | c | c |}
\hline
Initial Guess & I & II & III & IV \\ \hline
Split-Bernstein $\mu_T$ & $90.53 \% $ &$80.53 \%$ & $ 71.15 \%$ & $84.12 \%$ \\ \hline
Split-Bernstein $T_{\min}$ & $ 55.91 \% $ &$45.13 \%$ & $52.14 \%$ & $68.66 \%$ \\ \hline
Split-Bernstein $T_{\max}$ & $97.96 \% $ &$82.78 \%$ & $92.37 \%$ & $95.56 \%$ \\ \hline
Epanechnikov $\mu_T$ & $90.84 \% $ &$89.45 \%$ & $79.11 \%$ & $87.31 \%$ \\ \hline
Epanechnikov $T_{\min}$ &$77.91 \%$ &$64.87 \%$ &$47.32 \%$ & $57.18 \%$ \\ \hline
Epanechnikov $T_{\max}$ & $98.05 \%$ & $94.83 \%$ & $89.60 \%$ & $97.47 \%$ \\ \hline
Gaussian $\mu_T$ & $87.00 \% $ &$92.55 \%$ & $53.63 \%$ & $76.29 \%$ \\ \hline
Gaussian $T_{\min}$ & $66.28 \% $ &$62.10 \%$ & $55.41 \%$ & $55.97 \%$ \\ \hline
Gaussian $T_{\max}$ & $92.26 \% $ &$99.09 \%$ & $69.17 \%$ & $94.28 \%$ \\ \hline
\end{tabular}
\end{table}
\clearpage
\subsection{Unlimited Number of Mesh Refinement Iterations\label{sect:highmesh}}
To further demonstrate the effectiveness of the warm start method of Section~\ref{sect:tech}, Example 1 is now solved using biased KDEs with the warm start method, but with no limit on the number of mesh refinement iterations to reach a user-specified mesh refinement error tolerance. The limit on the number of mesh refinement iterations is removed because it was found in Section \ref{sect:lowmesh} that the NLP solver converged on every run when the number of mesh refinement iterations was limited. For the analysis of solving Example 1 with the warm start method and unlimited mesh refinement iterations, a deterministic formulation of Example 1 is also presented, in order to compare the chance constrained solutions to deterministic solutions from the literature. The deterministic and chance constrained formulations of Example 1 are the same with the exception that the chance constraint of Eq.~\eqref{eq:CC2} is replaced by the following deterministic path constraint
\begin{equation}\label{eq:ex2path}
R^2 - \Delta x^2-\Delta y^2 \leq 0.
\end{equation}
The solution to the chance constrained version of Example 1 using the Epanechnikov kernel is shown in Fig.~\ref{fig:Jorris2DEpanech} alongside the solution to the deterministic formulation of Example 1, where it is seen that the chance constrained and deterministic solutions are similar. Next, Table~\ref{table_ex1_final} compares the following results: (1) results obtained using the approach of Section~\ref{sect:review} for twenty runs of the chance constrained version of the example [Eqs.~\eqref{eq:costvers2}--\eqref{eq:eventvers2} and~\eqref{eq:CC2}] applying two different kernels, and (2) results obtained for twenty runs of the deterministic formulation of the example [Eqs.~\eqref{eq:costvers2}--\eqref{eq:eventvers2} and \eqref{eq:ex2path}]. For Table~\ref{table_ex1_final}, the quantities $\mu_{J^*}$ and $\sigma_{J^*}$ are, respectively, the average and standard deviation of the optimal cost obtained over all of the runs. It is noted that only the Split-Bernstein and Epanechnikov kernels were used to obtain results, because the Gaussian kernel was applied only when the number of mesh refinement iterations was restricted, in order to determine if the computational challenges were mitigated by using a smooth kernel. Now that the kernel effects have been reduced, there is no longer a need for a smooth kernel, particularly when the kernel does not satisfy the criteria for a biased KDE from Section~\ref{sect:review}.
The results indicate that the average optimal cost obtained using the chance constrained formulation was lower than the deterministic optimal cost. The reason for this difference in cost is that the deterministic keep out zone path constraint is designed so that the path of $(x,y)$ can be outside, or on the boundary of keep out zone of radius $R$. For the chance constrained formulation, a one percent chance of risk violation ($\epsilon_d = 0.01)$ is allowed so that the path of $(x,y)$ can now be a $\delta$ distance radially inside the keep out zone. As a result, the $(x,y)$ path shown in Fig.~\ref{fig:Jorris2DEpanech} is shorter for the chance constrained formulation than for the deterministic formulation, and subsequently it will take less time to travel this shorter path. Thus, because the optimal cost is final time, and it takes less time to travel a shorter path, the average optimal cost for the chance constrained formulation will be lower than for the deterministic formulation.
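The effect of the risk tolerance can be checked numerically. The sketch below uses illustrative values ($R$, $\delta$, and the noise level are assumptions, since Example 1's parameter values are not repeated here) to estimate the violation probability $P(R^2 - \Delta x^2 - \Delta y^2 > \delta)$ by Monte Carlo for a point on the $\delta$-shifted boundary and for a point slightly farther out:

```python
import numpy as np

def violation_prob(x, y, xc, yc, R, delta, sigma, n=200000, seed=1):
    """Monte Carlo estimate of P(R^2 - (x + xi1 - xc)^2 - (y + xi2 - yc)^2 > delta)
    under independent zero-mean Gaussian perturbations xi1, xi2 ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    dx = x + rng.normal(0.0, sigma, n) - xc
    dy = y + rng.normal(0.0, sigma, n) - yc
    return float(np.mean(R**2 - dx**2 - dy**2 > delta))

# Illustrative values (assumptions, not Example 1's actual parameters):
R, delta, sigma, eps_d = 0.434, 0.02, 0.001, 0.01
r_edge = np.sqrt(R**2 - delta)  # radius of the delta-shifted keep-out boundary
p_edge = violation_prob(r_edge, 0.0, 0.0, 0.0, R, delta, sigma)
p_out = violation_prob(r_edge + 0.005, 0.0, 0.0, 0.0, R, delta, sigma)
```

A point exactly on the shifted boundary violates the constraint roughly half the time, while moving only a few noise standard deviations outward drives the estimate below $\epsilon_d$; this is consistent with the chance constrained path passing a distance $\delta$ radially inside the deterministic keep-out zone.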
Additionally, the run times were higher for the chance constrained formulation than for the deterministic formulation. The deterministic formulation uses a deterministic keep out zone constraint that is not dependent on samples, as opposed to the chance constrained formulation. Thus, less computational effort is required to solve the deterministic formulation.
\begin{figure}[ht!]
\centering
\vspace*{0.25in}
\subfloat[Chance Constrained.]{\includegraphics[height = 2.1in]{fullE_pos.eps}}
~~~~\subfloat[Deterministic.]{\includegraphics[height = 2.1in]{positionJorris_Det.eps}} \\
\subfloat[Chance Constrained.]{\includegraphics[height = 2.1in]{fullE_control.eps}}
~~~~\subfloat[Deterministic.]{\includegraphics[height = 2.1in]{controlJorris_Det.eps}} \\
\caption{Solution for Example 1 and deterministic variation of Example 1.}
\label{fig:Jorris2DEpanech}
\end{figure}
\begin{table}[ht]
\caption{Results for Example 1 with warm start method and unlimited mesh refinement iterations. \label{table_ex1_final}}
\renewcommand{\baselinestretch}{1}\small\normalfont
\centering
\begin{tabular}{| c || c | c | c |} \hline
& Split-Bernstein & Epanechnikov & Deterministic \\ \hline
$\mu_{J^*}$ (s) & $8545.052$ & $8545.021$ & $8704.199$ \\ \hline
$ \sigma_{J^*}$ (s) & $0.0606$ & $0.0517$ & $0$ \\ \hline
$\mu_T$ (s) & $39.640$ & $43.445$ & $1.478$ \\ \hline
$T_{\min}$ (s) & $21.904$ & $20.813$ & $ 1.430$ \\ \hline
$T_{\max}$ (s) & $87.333$ & $73.209$ & $ 1.591$ \\ \hline
\end{tabular}
\end{table}
\section{Example 2 Using Warm Start Method\label{sect:examples}}
To further demonstrate the applicability of the warm start method, in this section the warm start method of Section \ref{sect:tech} is applied to a more complex version of Example 1. Section~\ref{sect:Jorris3Dpresent} provides both a chance constrained and deterministic formulation of this second example. Section~\ref{sect:Jorris3Dset} describes the initialization. Finally, Section~\ref{sect:discusdiff} provides the results obtained when solving the example using the warm start method of Section \ref{sect:tech}.
\subsection{Example 2}\label{sect:Jorris3Dpresent}
Consider the following chance constrained variation of the deterministic optimal control problem from Ref~\cite{Jorris3}. Minimize the cost functional
\begin{equation}\label{eq:costvers3}
J = t_f,
\end{equation}
subject to the dynamic constraints
\begin{equation} \label{eq:dynvers3}
\begin{array}{ccc}
\dot x (t) & = & V \cos \theta (t), \\
\dot y (t) & = & V \sin \theta (t), \\
\dot h (t) & = & V \gamma (t), \\
\dot V(t) & = & - \frac{B V^2 \exp \big(- \beta r_0 h (1+ c_l^2) \big) }{2 E^*}, \\
\dot \gamma (t) & = & BV \exp(- \beta r_0 h) c_l \cos \sigma - \frac{1}{V} + V, \\
\dot \theta (t) & = & BV \exp(- \beta r_0 h) c_l \sin \sigma, \\
\end{array}
\end{equation}
the boundary conditions
\begin{equation}\label{eq:boundvers3}
\begin{array}{cccccc}
\ x(0) & = & -1.385, & x(t_f) & = & 1.147, \\
\ y(0) & = & 0.499, & y(t_f) & = & 0.534, \\
\ h(0) & = & 0.0190, & h(t_f) & = & 0.0038, \\
\gamma (0) & = & -0.0262, & \gamma (t_f) & = & \textrm{free}, \\
V (t_0) & = & 0.927, & V(t_f) & = & \textrm{free}, \\
\theta (0) & = & 0.0698, & \theta(t_f) & = & \textrm{free}, \\
\end{array}
\end{equation}
the control bounds
\begin{equation}\label{eq:contvers3}
\begin{array}{ccccc}
- \frac{\pi}{3} & \leq & \sigma & \leq & \frac{\pi}{3}, \\
0 & \leq & c_l & \leq & 2,
\end{array}
\end{equation}
the event constraints
\begin{equation}\label{eq:eventvers3}
(x(t_i)-x_i,y(t_i)-y_i)=(0,0),\quad (i=1,2),
\end{equation}
the path inequality constraints
\begin{equation}\label{eq:ex3path}
\begin{array}{c}
R_1^2 - (x-x_{c,1})^2 - (y-y_{c,1})^2 \leq 0, \\
K\exp \bigg( \beta r_0 \frac{h}{2} \bigg) V^3 -1 \leq 0,
\end{array}
\end{equation}
and the chance path inequality constraint (keep-out zone constraint)
\begin{equation}\label{eq:CCnofly2}
P \left( R_2^2 - \Delta x_{\xi_1,2}^2-\Delta y_{\xi_2,2}^2 > \delta \right) \leq \epsilon_d,
\end{equation}
where $(\Delta x_{\xi_1,2},\Delta y_{\xi_2,2})$ are defined as
\begin{equation}\label{eq:finDels}
\big( \Delta x_{\xi_1,2},\Delta y_{\xi_2,2} \big) = \big( x+\xi_1 -x_{c,2}, y+\xi_2-y_{c,2} \big).
\end{equation}
The random variables $\xi_1$ and $\xi_2$ have normal distributions $N(\mu_1,\sigma_1^2)$ and $N(\mu_2,\sigma_2^2)$, respectively. Furthermore, a deterministic version of Example 2 is also solved in order to compare the solutions for the chance constrained and deterministic formulations of Example 2. The deterministic version of Example 2 is identical to that given in Eqs.~\eqref{eq:dynvers3}--\eqref{eq:ex3path}, with the exception that the chance constraint of Eq.~\eqref{eq:CCnofly2} is replaced with the following deterministic inequality path constraint
\begin{equation}\label{eq:ex3path_nofly}
R_2^2 - \Delta x_2^2-\Delta y_2^2 \leq 0.
\end{equation}
Finally, the parameters for Example 2 are given in Table~\ref{table_example 2}.
\begin{table}[ht]
\caption{Parameters for Example 2.\label{table_example 2}}
\renewcommand{\baselinestretch}{1}\small\normalfont
\centering
\begin{tabular}{| c | c |} \hline
Parameter & Value \\ \hline\hline
$(x_{c,1},y_{c,1})$ & $(0.008,0.389)$ \\ \hline
$R_1$ & $0.277$\\ \hline
$(x_{c,2},y_{c,2})$ & $(1.022, 0.943)$ \\ \hline
$R_2$ & $0.434$ \\ \hline
$\epsilon_d$ & $0.010$ \\ \hline
$\delta$ & $0.020$ \\ \hline
$K$ & $0.759$ \\ \hline
$(x_1,y_1)$ & $(-0.466, 0.594)$ \\ \hline
$(x_2,y_2)$ & $(0.728, 0.580)$ \\ \hline
$B$ & $942.120$ \\ \hline
$\beta $ & $1.400 \times 10^{-4}$ \\ \hline
$r_0$ & $6.408 \times 10^6$ \\ \hline
$E^*$ & $3.240$ \\ \hline
$ (\mu_1,\mu_2)$ & $(0,0)$ \\ \hline
$(\sigma_1,\sigma_2)$ & $(0.0007,0.001)$ \\ \hline
\end{tabular}
\end{table}
\subsection{Initialization for Example 2}\label{sect:Jorris3Dset}
Example 2 is implemented as a four-phase problem. Phase 1 starts at $(x(0),y(0))$ and terminates when the second path constraint of Eq.~\eqref{eq:ex3path} reaches its boundary. Next, phases 2, 3, and 4 terminate, respectively, at $(x_1,y_1)$, $(x_2,y_2)$, and $(x(t_f),y(t_f))$. Furthermore, the constraints of Eqs.~\eqref{eq:dynvers3}--\eqref{eq:ex3path} and Eq.~\eqref{eq:CC2_ex2} are included in every phase. The initial guess for each phase is a straight line approximation between the known initial and terminal conditions for all states. For any phase where an endpoint was not available, a constant initial guess that did not violate the constraint bounds was used. The controls were set as straight line approximations between values within the control bounds.
In order to maintain computational tractability (see Section~\ref{sect:guesseschoic}), the chance constraint of Eq.~\eqref{eq:CCnofly2} is reformulated as follows~\cite{Keil2}:
\begin{equation}\label{eq:CC2_ex2}
\epsilon_d \geq
\begin{cases}
0, \ \textrm{if} \ \Delta x_2^2 + \Delta y_2^2 \geq (R_2+b)^2, \\
\begin{aligned}
P \left( R_2^2 - \Delta x_{\xi_1,2}^2-\Delta y_{\xi_2,2}^2 - \delta > 0 \right), \\ \ \textrm{if} \ \Delta x_2^2 + \Delta y_2^2 < (R_2+b)^2,
\end{aligned}
\end{aligned}
\end{cases}
\end{equation}
where $(\Delta x_2,\Delta y_2)$ are defined as
\begin{equation}
(\Delta x_2,\Delta y_2) = (x-x_{c,2},y-y_{c,2}),
\end{equation}
and $b$ is set equal to $0.07$, due to the size of $R_2$.
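A minimal numerical sketch of this piecewise evaluation is given below, using the Example 2 parameters from Table~\ref{table_example 2} ($R_2 = 0.434$, $b = 0.07$, $\delta = 0.02$, $\sigma_1 = 0.0007$, $\sigma_2 = 0.001$) and a plain Monte Carlo average in place of the biased KDE; the function name is hypothetical:

```python
import numpy as np

def keepout_violation(x, y, xc, yc, R2, b, delta, xi1, xi2):
    """Piecewise evaluation mirroring the reformulated chance constraint:
    the sample-based estimate is skipped entirely when (x, y) lies outside
    the buffered keep-out zone of radius R2 + b."""
    dx, dy = x - xc, y - yc
    if dx**2 + dy**2 >= (R2 + b)**2:
        return 0.0  # trivially feasible: no sample evaluation needed
    dxs, dys = dx + xi1, dy + xi2
    return float(np.mean(R2**2 - dxs**2 - dys**2 - delta > 0.0))

rng = np.random.default_rng(3)
xi1 = rng.normal(0.0, 0.0007, 5000)
xi2 = rng.normal(0.0, 0.001, 5000)

# Far from the zone: returns 0 without touching the samples.
far = keepout_violation(2.0, 2.0, 1.022, 0.943, 0.434, 0.07, 0.02, xi1, xi2)
# At the zone center: essentially every sample violates the constraint.
center = keepout_violation(1.022, 0.943, 1.022, 0.943, 0.434, 0.07, 0.02, xi1, xi2)
```

The branch on the buffered radius is what maintains computational tractability: away from the keep-out zone, no kernel evaluations over the sample set are required.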
\subsection{Results and Discussion for Example 2}\label{sect:discusdiff}
This section provides results for solving Example 2 using the approach of Section~\ref{sect:review}, along with the warm start method from Section~\ref{sect:tech}. The values $\phi = 5\times 10^{-5}$ and $w = 1$ were used, and the Gaussian kernel was the starting kernel. Additionally, it is noted that both the chance constrained and deterministic formulations of Example 2 were solved with the $\mathbb{GPOPS-II}$ setup discussed in Section~\ref{sect:setup} with no limit on the number of mesh refinement iterations required to reach a solution with a user-specified accuracy tolerance. The solutions shown in Figs.~\ref{fig:Jorris3DStates} and~\ref{fig:Jorris3DControls} correspond to a single run using the Epanechnikov kernel and a single run when solving the deterministic version of this example. The figures indicate that the solutions for the chance constrained and deterministic formulations are similar, and so the results obtained for the chance constrained formulation are reasonable. It is further noted that the solutions shown in Figs.~\ref{fig:Jorris3DStates} and~\ref{fig:Jorris3DControls} indicate that, even though the formulation of Example 2 is similar to the formulation of Example 1, the solutions are quite different. In particular, as shown in Fig.~\ref{fig:Jorris2DEpanech} for Example 1 and Fig.~\ref{fig:Jorris3DControls} for Example 2, the control for Example 1 has a bang-bang structure, while the controls for Example 2 are smooth. This difference in the behavior of the control is why tuning of the starting bandwidth was required to obtain solutions for Example 1 ($w = 100$), but not for Example 2 ($w = 1$).
Next, Table~\ref{table_ex2_final} compares the following results: (1) results obtained using the approach of Section~\ref{sect:review} for twenty runs of the chance constrained version of the example [Eqs.~\eqref{eq:costvers3}--\eqref{eq:ex3path} and~\eqref{eq:CC2_ex2}] applying two different kernels, and (2) results obtained for twenty runs of the deterministic formulation of the example [Eqs.~\eqref{eq:costvers3}--\eqref{eq:ex3path} and \eqref{eq:ex3path_nofly}]. The results in Table~\ref{table_ex2_final} indicate that the average optimal costs were lower for the chance constrained formulation than for the deterministic formulation of Example 2, for reasons discussed in Section~\ref{sect:highmesh}. Moreover, as discussed in Section~\ref{sect:highmesh}, because of the use of sampling in the chance constrained formulation of Example 2, the run times are higher than the run times for the deterministic formulation.
\begin{figure}[ht!]
\centering
\vspace*{0.25in}
\subfloat[Chance Constrained]{\includegraphics[height = 2.1in]{fullE3D_pos.eps}}
~~~~\subfloat[Deterministic]{\includegraphics[height = 2.1in]{position3DJorris_Det.eps}} \\
\caption{States for Example 2 and deterministic variation of Example 2.}
\label{fig:Jorris3DStates}
\end{figure}
\begin{figure}[ht!]
\centering
\vspace*{0.25in}
\subfloat[Chance Constrained]{\includegraphics[height = 2.1in]{fullE3D_control1.eps}}
~~~~\subfloat[Deterministic]{\includegraphics[height = 2.1in]{sigma3DJorris_Det.eps}} \\
\subfloat[Chance Constrained]{\includegraphics[height = 2.1in]{fullE3D_control2.eps}}
~~~~\subfloat[Deterministic]{\includegraphics[height = 2.1in]{cl3DJorris_Det.eps}} \\
\caption{Controls for Example 2 and deterministic variation of Example 2.}
\label{fig:Jorris3DControls}
\end{figure}
\begin{table}[ht]
\caption{Results for Example 2. \label{table_ex2_final}}
\renewcommand{\baselinestretch}{1}\small\normalfont
\centering
\begin{tabular}{| c || c | c | c |} \hline
 & Split-Bernstein & Epanechnikov & Deterministic \\ \hline
$\mu_{J^*}$ (s) & $3052.414$ & $3052.407$ & $3086.738$ \\ \hline
$\sigma_{J^*}$ (s) & $0.0252$ & $0.0248$ & $0$ \\ \hline
$\mu_T$ (s) & $55.733$ & $63.066$ & $9.424$ \\ \hline
$T_{\min}$ (s) & $39.771$ & $45.075$ & $9.210$ \\ \hline
$T_{\max}$ (s) & $80.866$ & $81.973$ & $9.584$ \\ \hline
\end{tabular}
\end{table}
Now comparing the results for Example 1 from Table~\ref{table_ex1_final} to the results for Example 2 from Table~\ref{table_ex2_final}, the run times are slightly higher for Example 2. This difference is due to Example 2 being a more complex problem than Example 1. The results indicate that two complex CCOCPs were efficiently solved using the approach of Section~\ref{sect:review} along with the warm start method developed in Section~\ref{sect:tech}.
\section{Discussion}\label{sect:discussion}
The results of Sections \ref{sect:discusstech} and \ref{sect:examples} demonstrate the capabilities of the warm start method developed in Section \ref{sect:tech}. In particular, the warm start method developed in Section~\ref{sect:tech} was applied effectively to solve two complex CCOCPs given in Sections~\ref{sect:discusstech} and~\ref{sect:examples} using biased KDEs and LGR collocation. Moreover, it was found in Section \ref{sect:discusstech} that solving Example 1 using the warm start method was far more reliable and computationally efficient than solving Example 1 without a warm start (Section~\ref{sect:Jorris2Dnaive}).
Now, while the warm start method developed in this paper is found to improve reliability and computational efficiency when solving CCOCPs, it is important to note several aspects of the method that must be implemented carefully. First, for the two components of the method that were described in Sections \ref{sect:bandwidth-tuning} and \ref{sect:kernel-switching}, it is important to choose an appropriate starting bandwidth and kernel. In particular, choosing an inappropriate starting bandwidth and kernel can result in the NLP solver not converging. Moreover, with an inappropriate choice of starting bandwidth, the NLP solver may converge to a solution different from the optimal solution. Also, tuning the bandwidth and kernel can be time-consuming. It is noted, however, that if the trial runs for determining an appropriate starting bandwidth and kernel are performed using a small sample set and with limited mesh refinement iterations, results can be obtained rather quickly. Additionally, when tuning the bandwidth and kernel using trial runs, the process can be terminated as soon as the NLP solver is found to not converge to a solution for one of the runs. Thus, the maximum number of trial runs is not used until after an appropriate bandwidth and kernel combination has been found. It is also noted that convergence to an infeasible solution is always possible because a slightly different solution to the CCOCP is obtained with each run due to the use of a different sample set for each run. As a result, obtaining an infeasible solution in the trial runs is probable. The choice of starting kernel can, however, affect how often the NLP solver converges to an infeasible solution. It is further noted that the NLP solver will sometimes shift from this infeasible solution to a feasible solution when the bandwidth and kernel are switched.
Next, examining the third component of the method as described in Section \ref{sect:sample-size-increasing}, the size of the starting sample set can affect whether or not a solution is obtained. When the initial sample size is too small, important features of the samples such as modes, mean, and range can be lost. Thus, when the starting sample size is too small, the key features for the smaller sample set will be different from those of the larger sample sets. As a result, the solution of the NLP obtained using larger sample sizes may have different properties from the solution of the NLP obtained using the smaller sample size. This difference can, in turn, lead to the NLP solver not converging to a solution when the number of samples is increased. Additionally, it was found that, even if the NLP solver converges, more computation effort may be required on the mesh refinement iteration when the sample size is first increased. Finally, there was large variation in the amount of time required for the mesh refinement iteration where the sample set is switched to the full sample. Increasing the sample size more gradually (that is, by using more increments with smaller changes between increments) can potentially decrease the time for the mesh refinement iteration where the full sample size is used. Conversely, by using a greater number of increments, the total computation time may start to increase, even if the time for that one mesh refinement iteration is reduced. Additionally, this computation time may be increased even further if the greater number of increments results in extra mesh refinement iterations (as discussed in Section~\ref{sect:sample-size-increasing}).
\section{Conclusions}\label{sect:conclude}
A warm start method has been developed to increase the efficiency of solving chance constrained optimal control problems using biased kernel density estimators and Legendre-Gauss-Radau collocation. First, through a motivating example, it was shown that solving a chance constrained optimal control problem without a warm start can be unreliable and computationally inefficient. Using the computational issues of solving this example as a starting point, the warm start method has been developed. The warm start method consists of three components that are designed to aid convergence of the NLP solver, while simultaneously decreasing the required computation time and reducing sensitivity to the kernel and the initial guess. These three components of the warm start method are: bandwidth tuning, kernel switching, and incremental sample size increasing. The warm start method has then been applied to solve the motivating chance constrained optimal control problem using biased kernel density estimators and Legendre-Gauss-Radau collocation. Finally, a second and more complex variation of this chance constrained optimal control problem has also been solved with the warm start method, and the results analyzed. The results show that the warm start method developed in this paper has the potential to significantly improve reliability and computational efficiency when solving complex chance constrained optimal control problems.
\section*{Acknowledgments}
The authors gratefully acknowledge support for this research from the U.S.~National Science Foundation under grants CMMI-1563225, DMS-1522629, and DMS-1819002.
\renewcommand{\baselinestretch}{1.0}
\normalsize\normalfont
\bibliographystyle{aiaa}
\section{Introduction}
The prospect of achieving non-reciprocity in elastic systems is becoming increasingly appealing to the physics and engineering communities~\cite{nassar2020}. This interest is motivated by the potential exploitation of this effect to realize mechanical diodes and other uni-directional devices~\cite{Boechler2011, Maznev2013, Sklan2015, Devaux2015, Zhou2019, Brandenbourger2019}. In non-reciprocal systems, wave-like excitations propagate with markedly different amplitudes in one direction and the opposite. One way to achieve this effect is by modulating the properties of the system in space and time~\cite{Lurie97}. The dynamic behavior of mechanical systems with time-varying parameters has attracted the attention of the scientific community for more than a century~\cite{Rayleigh87,Raman}. However, the simultaneous variation of the elastic or inertial properties of a medium in both time and space has received little attention in the mechanics community, partly due to the difficulty of realizing such modulations experimentally.
Only recent advances in smart structures~\cite{Airoldi2011, Hatanaka2014, Bilal2017}, together with fundamental studies on spatio-temporally modulated periodic media~\cite{Lurie97,Swinteck2015, Trainiti2016,Nassar2017jmps, Nassar2017prsa}, have allowed the realization of such systems in the context of periodic materials.
The phenomenon of time modulation-based non-reciprocity can be effectively explained with a one-dimensional example. Consider a 1D phononic crystal generated by periodically arranging an array of unit cells. Assume that the properties of each cell (stiffness and/or mass) can be independently varied in time. If we coordinate this variation in neighboring units to generate a wave-like pattern of properties that varies in space and time, we create a pump or modulating wave. Under specific frequency and wavelength constraints, mechanical waves that propagate in this system can interact with the modulating wave. In turn, this can lead to the appearance of asymmetric Bragg scattering bandgaps located at different frequency ranges for waves propagating from left to right and from right to left, and to non-reciprocal propagation~\cite{Swinteck2015, Trainiti2016, Deymier2017, Yi2018}. In physical terms, this spatio-temporal modulation breaks time-reversal symmetry. Similar considerations apply to locally-resonant metamaterials featuring an elastic wave-carrying medium equipped with a set of auxiliary resonators~\cite{Liu2000}. In this case, a wave-like modulation of the properties of the resonators causes the appearance of additional asymmetric features within the dispersion relation, such as bandgaps and veering points~\cite{Nassar2017prsa, Nassar2017eml, Attarzadeh2018, Chen2019, Huang2019}. Exciting a modulated metamaterial at specific frequencies leads to phenomena such as non-reciprocal wave filtering and frequency conversion of transmitted/reflected waves~\cite{Nassar2017eml}.
So far, investigations on elastic wave non-reciprocity via time-modulated resonators have been limited to axial and flexural waves in either discrete phononic systems~\cite{Wang2018} or beam-like metamaterials~\cite{Chen2019, Attarzadeh2020, Marconi2020}. However, it is of interest to extend this concept to elastic waves propagating near the surface of a semi-infinite medium, also known as surface acoustic waves (SAW). In this context, metamaterials can be realized by arrays of resonators located on the free surface, and are therefore known as \emph{elastic metasurfaces}~\cite{Colquitt2017}. To the best of our knowledge, surface wave non-reciprocity has been so far demonstrated only in semi-infinite structured media with a gyroscopic architecture~\cite{Zhao2020}. Achieving surface wave non-reciprocity on elastic half-spaces via metasurfaces could lead to the realization of novel SAW devices for high-frequency applications where phononic systems have already shown their promise, from acoustifluidics and particle manipulation~\cite{Guo2015, Collins2016} to mechanical signal processing~\cite{Hatanaka2014, Cha2018Nano}.
In this work, we study how surface waves of the Rayleigh type interact with spatio-temporally modulated metasurfaces, as illustrated in the schematic in Fig.~\ref{f:met}.
We use a combination of analytical tools and numerical simulations to investigate the effects of temporal stiffness modulations on an isolated resonator, and to identify ranges of modulation parameters where a small-modulation approximation is valid. We leverage this understanding to derive analytical solutions for the dispersion relation of Rayleigh surface waves interacting with a spatio-temporally modulated metasurface. In particular, we describe the interaction between the incident and scattered fields generated by the modulated resonators and predict the appearance of directional wave responses.
Additionally, by means of a first-order asymptotic analysis, we estimate how the modulation parameters affect the extent of the non-reciprocal wave features.
We confirm our analytical findings via numerical simulations, and demonstrate non-reciprocal wave effects such as one-way filtering and frequency conversion for transmitted and reflected signals. While our work is entirely theoretical, we envision that our analysis could guide the experimental realization of modulated metasurfaces, featuring, for example, electromechanical~\cite{Alan2019, Marconi2020} or tunable contact resonators~\cite{Palermo2019}.
\begin{figure}[!htb]
\centering
\includegraphics[scale=1.0]{Fig_Metasurface.pdf}
\caption{Schematic of a time-modulated metasurface, depicting the non-reciprocal propagation of surface waves. A sinusoidal space-time evolution of the stiffness function of the resonators, $K(x,t)$, is illustrated. The inset is a close-up on one of the $N$ identical resonators placed on the free surface of the semi-infinite elastic medium.}
\label{f:met}
\end{figure}
The rest of the article is organized as follows. In Section~\ref{s:sdof}, we analyze the free response, stability and response to base excitation of a single time-modulated resonator. In Section~\ref{s:saw}, we study the interaction of Rayleigh waves with arrays of modulated surface resonators and obtain the dispersion curves. In Section~\ref{s:nr}, we use numerical analyses to further study the effects of spatio-temporal modulations on non-reciprocal propagation of surface waves. The conclusions and outlook of our work are reported in Section~\ref{s:concl}.
\section{Dynamics of a modulated resonator}
\label{s:sdof}
We begin by focusing on the dynamics of a single resonator. Two scenarios are captured by this analysis: a fixed, rigid substrate (Section~\ref{s:sfree}) and an oscillating, rigid substrate (Section~\ref{s:base}). These analyses allow us to better understand the interaction between the surface waves and an array of modulated resonators. By comparing analytical predictions and numerical simulations on a single resonator, we gain an understanding of the effects of stiffness modulations, evaluate the quality of our analytical predictions, and explore the stability of these modulated systems. This information allows us to set bounds on the choice of modulation parameters to be used for the surface wave-metasurface analysis.
\subsection{Free vibrations}
\label{s:sfree}
We first consider a single, clamped resonator with mass $m$, damping coefficient $c$ and time-varying stiffness $K(t)$ (see the inset in Fig.~\ref{f:met}). We assume $K(t)$ to be:
\begin{equation}
K(t)=K_0+2dK \cos{\left( \omega_m t \right)},
\label{e:kdef}
\end{equation}
where $K_0$ is the average stiffness, $2dK$ is the modulation amplitude and $\omega_m$ is the modulation frequency. Note that the modulation can have the form of any periodic function~\cite{Trainiti2016,Nassar2017eml}; we choose a sinusoidal one for simplicity. For future reference, we define $\omega_r=\sqrt{K_0/m}$ and choose a small damping ratio $\xi=c/(2 m\omega_r)=0.001$. Ignoring the motion of the substrate, the equation governing the displacement $V(t)$ reads:
\begin{equation}
m\frac{d^2V}{dt^2}+c\frac{dV}{dt}+K(t)V=0.
\label{e:eom}
\end{equation}
This is equivalent to assuming that the substrate is fixed and rigid. {As commonly done in the literature~\cite{Vila2017, Nassar2017eml}, we assume that the restoring force exerted by the time modulated spring is obtained by multiplying stiffness and displacement at the same time instant}. Since the stiffness in Eq.~\ref{e:eom} is time-periodic, we re-write it in complex Fourier series form:
\begin{equation}
K(t)=\sum_{p=-\infty}^{\infty}\hat{K}_p\,e^{i p \omega_m t},
\label{e:k}
\end{equation}
with Fourier coefficients defined as:
\begin{equation}
\hat{K}_p=\frac{\omega_m}{2\pi}\int_{-\frac{\pi}{\omega_m}}^{\frac{\pi}{\omega_m}} K(t)\,e^{-ip\omega_mt} dt.
\label{e:kh}
\end{equation}
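As a sanity check, the integral in Eq.~\ref{e:kh} can be evaluated numerically. The short Python sketch below (with illustrative, normalized parameter values that are our own choice, not taken from the paper) confirms that the sinusoidal modulation only excites the $p=0,\pm1$ harmonics.

```python
import numpy as np

# Numerical check of the Fourier coefficients of Eq. (kh) for the
# sinusoidal modulation K(t) = K0 + 2*dK*cos(w_m*t).
# Parameter values are illustrative, not taken from the paper.
K0, dK, w_m = 1.0, 0.1, 0.25

def K_hat(p, n_pts=4096):
    """K_hat_p = (w_m/2pi) * integral of K(t)*exp(-i*p*w_m*t) over one
    modulation period, i.e. the mean of the integrand on a uniform grid
    (the rectangle rule is spectrally accurate for periodic integrands)."""
    t = np.linspace(-np.pi / w_m, np.pi / w_m, n_pts, endpoint=False)
    K = K0 + 2.0 * dK * np.cos(w_m * t)
    return (K * np.exp(-1j * p * w_m * t)).mean()

# expect K_hat_0 = K0, K_hat_{+1} = K_hat_{-1} = dK, all others ~ 0
print(abs(K_hat(0)), abs(K_hat(1)), abs(K_hat(-1)), abs(K_hat(2)))
```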
For the specific choice of $K(t)$ in Eq.~\ref{e:kdef}, we are effectively truncating the sum such that $|p| \le P=1$ and the only Fourier coefficients we obtain are $\hat{K}_0=K_0$, $\hat{K}_{+1}=\hat{K}_{-1}=dK$. From now on, we adopt the truncated notation for $p$. We also assume a harmonic solution with time-modulated amplitude and expand it in Fourier series, obtaining:
\begin{equation}
V(t)=\left(\sum_{n=-\infty}^{\infty}\hat{V}_n\,e^{i n \omega_m t}\right)e^{i \omega t},
\label{e:V}
\end{equation}
with $\omega$ being an unknown frequency at this stage, and with $\hat{V}_n$ being the Fourier coefficients of the wave amplitude.
Differentiating $V(t)$, plugging it into Eq.~\ref{e:eom} together with $K(t)$ and simplifying $e^{i\omega t}$, yields:
\begin{equation}
\sum_{n=-\infty}^{\infty} \left[-m\left( \omega +n\omega_m \right)^2+ic\left( \omega +n\omega_m \right)\right]\hat{V}_n\,e^{in\omega_mt}+
\sum_{n=-\infty}^{\infty}\sum_{p=-P}^{P}\hat{K}_p\hat{V}_n\,e^{i (n+p) \omega_m t}=0.
\end{equation}
To simplify this expression, we pre-multiply it by $e^{ih\omega_m t}\omega_m/(2\pi)$, where $h$ is an arbitrary integer, and we integrate the result over the modulation period, from $-\pi/\omega_m$ to $\pi/\omega_m$. This averaging procedure is a standard method to study the dynamics of systems with time-varying properties, and has been adopted by others in the context of modulated media~\cite{Trainiti2016, Vila2017, Attarzadeh2018}.
Leveraging the orthogonality of harmonic functions, we drop the summation in $n$ and obtain the following equation, valid for all values of $h$:
\begin{equation}
\left[-m\left( \omega +h\omega_m \right)^2+ic\left( \omega +h\omega_m \right)\right]\hat{V}_h+\!\!\sum_{p=-P}^{P}\hat{K}_p\hat{V}_{h-p}=0.
\label{e:eig}
\end{equation}
This system of equations needs to be solved for all integer values of $h$ to obtain an exact solution. Here, we intend to verify the validity of a truncated expansion of the solution by setting $|h| \le H=1$. Under this assumption, and recalling that $P=1$ for our choice of stiffness modulation function, Eq.~\ref{e:eig} reduces to the system of three equations:
\begin{equation}
\left(\begin{bmatrix}
\hat{K}_0 & \hat{K}_{-1} & 0\\
\hat{K}_{+1} & \hat{K}_0 & \hat{K}_{-1}\\
0 & \hat{K}_{+1} & \hat{K}_0
\end{bmatrix}-m\begin{bmatrix}
\left( \omega -\omega_m \right)^2 & 0 & 0\\
0 & \omega^2 & 0\\
0 & 0 & \left( \omega +\omega_m \right)^2
\end{bmatrix}\right.
+
\left.ic\begin{bmatrix}
\omega -\omega_m & 0 & 0\\
0 & \omega & 0\\
0 & 0 & \omega +\omega_m
\end{bmatrix}\right)
\left[\begin{matrix}
\hat{V}_{-1}\\
\hat{V}_{0}\\
\hat{V}_{+1}
\end{matrix} \right]
=
\left[ \begin{matrix}
0\\
0\\
0
\end{matrix} \right],
\label{e:eig3}
\end{equation}
which can be written in compact form as $\mathbf{D}(\omega)\,\mathbf{\hat{V}}=\mathbf{0}$. The approximated resonance frequencies of damped vibrations are the local minima of the determinant $|\mathbf{D}(\omega)|$, as shown in Fig.~\ref{f:free}(a) for parameters $dK/K_0=0.1$, $\omega_m/\omega_r=0.25$ and $\xi=0.001$.
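The determinant scan of Fig.~\ref{f:free}(a) is straightforward to reproduce. The sketch below assembles the truncated matrix of Eq.~\ref{e:eig3} and locates the minima of $|\mathbf{D}(\omega)|$ on a frequency grid; the normalized units $m=\omega_r=1$ and the grid settings are our own choices, while $dK/K_0$, $\omega_m/\omega_r$ and $\xi$ match the values quoted above.

```python
import numpy as np

# Determinant scan for the truncated (|h| <= 1) eigenproblem, Eq. (eig3).
# Normalized units m = w_r = 1 are our own choice; dK/K0, w_m/w_r and xi
# match the parameters of Fig. 2(a).
m, w_r, xi = 1.0, 1.0, 0.001
K0 = m * w_r**2
dK, w_m = 0.1 * K0, 0.25 * w_r
c = 2.0 * xi * m * w_r

def D(w):
    """Truncated dynamic matrix D(w) of Eq. (eig3)."""
    Kmat = np.array([[K0, dK, 0.0],
                     [dK, K0, dK],
                     [0.0, dK, K0]], dtype=complex)
    s = np.array([w - w_m, w, w + w_m])      # shifted frequencies
    return Kmat - m * np.diag(s**2) + 1j * c * np.diag(s)

w_grid = np.linspace(0.5, 1.5, 2001)
dets = np.array([abs(np.linalg.det(D(w))) for w in w_grid])
# local minima of |det D(w)|: approximate damped resonances, expected
# near w_r and near w_r +/- w_m
is_min = (dets[1:-1] < dets[:-2]) & (dets[1:-1] < dets[2:])
res_freqs = w_grid[1:-1][is_min]
print(res_freqs)
```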
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1.0]{Fig_SDOF_FreeNew}
\caption{Dynamics of a resonator with time-modulated stiffness, Eq.~\ref{e:eom}. (a) Analytical evaluation of the determinant of the dynamic matrix for $dK/K_0=0.1$, $\omega_m/\omega_r=0.25$ and $\xi=0.001$. The markers indicate the minima. (b) Fourier Transform of the response to an initial velocity for the same parameters used in (a). The markers indicate the resonance peak and its side-bands. (c) Stability diagram, as a function of the modulation parameters. The stability contours are given for three values of damping ratio $\xi$. The unstable (U) regions for $\xi=0.001$ are shaded in gray. The star marker indicates that parameters $dK/K_0=0.1$ and $\omega_m/\omega_r=0.25$ yield stable (S) results.}
\label{f:free}
\end{figure*}
The choice of a harmonically-modulated stiffness and a truncated solution at $|h|\le H=1$ yields three resonance frequencies for damped vibrations; these are a central frequency $\omega_r$ and two shifted ones near $\omega_r+\omega_m$ and $\omega_r-\omega_m$.
To verify the validity of the analytical approach, we solve Eq.~\ref{e:eom} numerically using a central difference scheme, in the $0 \leq t \leq 600\,T_r$ time range, with $T_r=2\pi/\omega_r$ and time increment $dt=T_r/(10\pi)$. We choose initial conditions $[V,dV/dt]_{t=0}=[0,1]$. The normalized spectrum of the steady-state portion of the displacement signal is shown in Fig.~\ref{f:free}(b). It features a central resonance peak and multiple side-bands, as expected for modulated oscillators~\cite{Minkov2017}. One can see two main differences between the analytical and numerical results. First, the numerical results exhibit more peaks than predicted by the analytical approximation in Eq.~\ref{e:eig3}: in addition to the side-bands near $\omega_r+\omega_m$ and $\omega_r-\omega_m$, there are others near $\omega_r+2\omega_m$ and $\omega_r-2\omega_m$. Second, the numerical side-bands are slightly shifted in frequency when compared to their respective eigenvalues (although this is not easy to appreciate from the figure). These inconsistencies are attributed to the truncation of the analytical results and are discussed in more detail in Section~\ref{s:base}.
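The time-marching solution can be sketched as follows. The scheme, time step, duration, and initial conditions mirror the ones described in the text; the normalized units $m=\omega_r=1$ and the start-up step are our own choices.

```python
import numpy as np

# Central-difference integration of Eq. (eom): dt = T_r/(10*pi),
# 600 periods, initial conditions [V, dV/dt] = [0, 1], as in the text.
# Normalized units m = w_r = 1 and the start-up step are our own choices.
m, w_r, xi = 1.0, 1.0, 0.001
K0 = m * w_r**2
dK, w_m = 0.1 * K0, 0.25 * w_r
c = 2.0 * xi * m * w_r
Tr = 2.0 * np.pi / w_r
dt = Tr / (10.0 * np.pi)
t = np.arange(0.0, 600.0 * Tr, dt)

V = np.zeros(t.size)
V[1] = dt                      # first step: V(dt) ~ V(0) + dt*dV/dt(0)
for n in range(1, t.size - 1):
    K = K0 + 2.0 * dK * np.cos(w_m * t[n])
    V[n + 1] = (2.0 * m * V[n] - (m - 0.5 * c * dt) * V[n - 1]
                - dt**2 * K * V[n]) / (m + 0.5 * c * dt)

# spectrum of the later portion of the response: a dominant peak near
# w_r, flanked by weaker side-bands at multiples of w_m
tail = V[t.size // 2:]
freqs = 2.0 * np.pi * np.fft.rfftfreq(tail.size, dt)
w_peak = freqs[np.argmax(np.abs(np.fft.rfft(tail)))]
print(w_peak)
```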
\subsection{Stability}
\label{s:stab}
When the modulations have a cosine profile in time, Eq.~\ref{e:eom} is known as Mathieu's equation. It is well known that some combinations of parameters can lead to instabilities in systems governed by this equation~\cite{Kovacic2018}. Here, we determine the regions of the modulation parameter space for which the motion of the resonator remains stable. First, we select a range of variables of interest: $0.01 \leq \omega_m/\omega_r \leq 1$ and $0 \leq dK/K_0 \leq 1$. For each $\omega_m/\omega_r$ and $dK/K_0$ couple, we solve Mathieu's equation, obtained from Eq.~\ref{e:eom} via a change of variables:
\begin{equation}
\frac{d^2V}{d\tau^2}+\bar{c}\frac{dV}{d\tau}+\left( \delta+\epsilon \cos{\tau} \right)\,V=0,
\label{e:Mat}
\end{equation}
where, for our specific problem:
\begin{equation}
\tau=\omega_mt,\,\,\,\bar{c}=2\xi\frac{\omega_r}{\omega_m},\,\,\,\delta=\frac{\omega_r^2}{\omega_m^2},\,\,\,\epsilon=2\frac{dK}{K_0}\frac{\omega_r^2}{\omega_m^2}.
\end{equation}
Eq.~\ref{e:Mat} is solved numerically for $\tau \in [0,2\pi]$, for two sets of initial conditions: (i) $[V,dV/d\tau]_{\tau=0}=[1,0]$, which yields displacement $V_1(\tau)$; (ii) $[V,dV/d\tau]_{\tau=0}=[0,1]$, which yields displacement $V_2(\tau)$. For each pair of $\omega_m/\omega_r$ and $dK/K_0$, according to Ref.~\cite{Kovacic2018}, the system is stable if:
\begin{equation}
\left|
\mathrm{Tr}
\begin{bmatrix}
V_1(\tau) & V_2(\tau)\\
dV_1(\tau)/d\tau & dV_2(\tau)/d\tau
\end{bmatrix}_{\tau=2\pi}
\right| < 2,
\end{equation}
where $\mathrm{Tr}$ is the trace operator.
The stability diagram as a function of the modulation frequency ratio $\omega_m/\omega_r$ and the modulation amplitude ratio $dK/K_0$ is illustrated in Fig.~\ref{f:free}(c). The shaded regions between the tongues represent the unstable regions for the damping of choice, $\xi=0.001$. One can see that the parameters used in Fig.~\ref{f:free}(a,b), corresponding to the red star-like marker in Fig.~\ref{f:free}(c), yield a stable response. The contours of the unstable regions are strongly dependent on damping. Increasing damping shrinks the unstable regions, while decreasing damping expands them. When damping is 0, the unstable tongues can extend to $dK/K_0=0$; however, one can appreciate that even an extremely small damping can guarantee stability for a wide range of parameters. This stability diagram represents an important tool to properly choose the modulation parameters.
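The trace criterion above can be sketched in a few lines. In the code below, the RK4 integrator and step count are our own choices; the first test point is the starred (stable) parameter pair of Fig.~\ref{f:free}(c), while the second is a hypothetical strongly-modulated point at the principal parametric resonance $\omega_m=2\omega_r$, outside the range of the figure, which the criterion flags as unstable.

```python
import numpy as np

# Floquet stability check for Mathieu's equation, Eq. (Mat), via the
# |Tr M| < 2 criterion on the monodromy matrix. The RK4 integrator and
# step count are our own choices.
def is_stable(w_m_ratio, dK_ratio, xi=0.001, n_steps=4000):
    cbar = 2.0 * xi / w_m_ratio
    delta = 1.0 / w_m_ratio**2
    eps = 2.0 * dK_ratio / w_m_ratio**2

    def rhs(tau, y):
        V, W = y
        return np.array([W, -cbar * W - (delta + eps * np.cos(tau)) * V])

    def integrate(y0):
        tau, h, y = 0.0, 2.0 * np.pi / n_steps, np.array(y0, float)
        for _ in range(n_steps):            # classical RK4 over one period
            k1 = rhs(tau, y)
            k2 = rhs(tau + h / 2, y + h / 2 * k1)
            k3 = rhs(tau + h / 2, y + h / 2 * k2)
            k4 = rhs(tau + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            tau += h
        return y

    y1 = integrate([1.0, 0.0])   # V1: [V, dV/dtau] = [1, 0]
    y2 = integrate([0.0, 1.0])   # V2: [V, dV/dtau] = [0, 1]
    return abs(y1[0] + y2[1]) < 2.0   # |Tr M| < 2  =>  stable

# starred point of Fig. 2(c) vs. a point at the principal resonance
print(is_stable(0.25, 0.1), is_stable(2.0, 0.5))
```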
\subsection{Base excitation}
\label{s:base}
To bridge the gap between single resonator dynamics and surface wave-metasurface interactions, we incorporate the harmonic motion of the substrate into our model. In fact, a resonator on a semi-infinite medium subject to Rayleigh waves exchanges stresses with the substrate, and these stresses are a function of the relative displacement between the base and resonator~\cite{Garova1999,Boechler2013}. At this stage, we ignore the interaction between the resonators through the substrate, and focus on the response of a single modulated oscillator to a base excitation. This is equivalent to assuming the substrate as rigid; we will consider the full problem in Section~\ref{s:saw}.
The base excitation problem can be analyzed similarly to the free vibrations case. {Here, the forced equation of motion is a non-homogeneous version of Eq.~\ref{e:eom} and it} reads:
\begin{equation}
m\ddot{V}+c\dot{V}+K(t)V=c\dot{v}+K(t)v,
\label{e:eomb}
\end{equation}
where $v(t)=v_0\,e^{i\Omega t}$ is the harmonic base displacement, $\Omega$ is the corresponding excitation frequency, and the overdot indicates a time derivative. Following the same steps detailed in Section~\ref{s:sfree}
leads to the following system of equations:
\begin{equation}
\left(\begin{bmatrix}
\hat{K}_0 & \hat{K}_{-1} & 0\\
\hat{K}_{+1} & \hat{K}_0 & \hat{K}_{-1}\\
0 & \hat{K}_{+1} & \hat{K}_0
\end{bmatrix}-m\begin{bmatrix}
\left( \Omega -\omega_m \right)^2 & 0 & 0\\
0 & \Omega^2 & 0\\
0 & 0 & \left( \Omega +\omega_m \right)^2
\end{bmatrix}\right.
+
\left.ic\begin{bmatrix}
\Omega -\omega_m & 0 & 0\\
0 & \Omega & 0\\
0 & 0 & \Omega +\omega_m
\end{bmatrix}\right)
\left[ \begin{matrix}
\hat{V}_{-1}\\
\hat{V}_{0}\\
\hat{V}_{+1}
\end{matrix} \right]
=
\left[ \begin{matrix}
\hat{K}_{-1}v_0\\
(\hat{K}_{0}+ic\,\Omega)v_0\\
\hat{K}_{+1}v_0
\end{matrix} \right],
\label{e:eig3b}
\end{equation}
which can be written in a compact form as $\mathbf{D}(\Omega)\,\mathbf{\hat{V}}=\mathbf{F}_b$. This expression can be solved to find three Fourier coefficients $\hat{V}_j$ for any excitation frequency $\Omega$. Coefficient $\hat{V}_0$ corresponds to frequency $\Omega$, $\hat{V}_{-1}$ to $\Omega-\omega_m$, and $\hat{V}_{+1}$ to $\Omega+\omega_m$. To quantify the accuracy of this analytical solution, we solve Eq.~\ref{e:eomb} using the same numerical procedure used in Sec.~\ref{s:sfree}. This process is illustrated in Fig.~\ref{f:base} and explained in the following.
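The linear system of Eq.~\ref{e:eig3b} is easily solved numerically. The sketch below uses normalized units $m=\omega_r=v_0=1$ (our own choice), with $dK/K_0=0.1$ and $\omega_m/\omega_r=0.45$ as in Fig.~\ref{f:base}(a), and excites the resonator at $\Omega=\omega_r$.

```python
import numpy as np

# Solution of the truncated base-excitation system, Eq. (eig3b):
# D(Omega) * Vhat = F_b. Normalized units m = w_r = v0 = 1 are our own
# choice; dK/K0 = 0.1 and w_m/w_r = 0.45 match Fig. 3(a).
m, w_r, xi = 1.0, 1.0, 0.001
K0 = m * w_r**2
dK, w_m = 0.1 * K0, 0.45 * w_r
c = 2.0 * xi * m * w_r
v0 = 1.0

def fourier_coeffs(Om):
    """Return [V_-1, V_0, V_+1] for base-excitation frequency Om."""
    Kmat = np.array([[K0, dK, 0.0],
                     [dK, K0, dK],
                     [0.0, dK, K0]], dtype=complex)
    s = np.array([Om - w_m, Om, Om + w_m])
    D = Kmat - m * np.diag(s**2) + 1j * c * np.diag(s)
    Fb = np.array([dK * v0, (K0 + 1j * c * Om) * v0, dK * v0])
    return np.linalg.solve(D, Fb)

Vm1, V0, Vp1 = fourier_coeffs(w_r)   # excite at resonance, as in Fig. 3(a)
# the carrier V_0 resonates; the side-bands at Om -/+ w_m are weaker
print(abs(V0), abs(Vm1 / V0), abs(Vp1 / V0))
```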
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1.0]{Fig_SDOF_BaseNew}
\caption{Base excitation response of a single resonator with time-modulated stiffness. (a) Normalized Fourier transform of the numerical response of a system with $dK/K_0=0.1$, $\omega_m/\omega_r=0.45$ and $\xi=0.001$ to a harmonic base excitation of frequency $\Omega=\omega_r$. The cross markers indicate the peaks of the response, and the circular markers indicate the relative amplitudes of the Fourier coefficients. (b) Response to various base frequencies $\Omega$, where we track the numerical maxima $\bar{V}_j/\bar{V}_0$ and the relative Fourier coefficients $\hat{V}_j/\hat{V}_0$. Note that (a) is a slice of (b), and that the same legend applies to (a) and (b). (c) Evolution of the maxima of the numerical responses, and of the relative Fourier coefficients, as a function of $\Omega$. From (c), we extract the discrepancy between analytical and numerical results in predicting the frequency location of the side peaks. (d) Frequency discrepancy map.
The star markers indicate modulation parameters of interest.
}
\label{f:base}
\end{figure*}
First, we compute the numerical response to a base excitation of frequency $\Omega$. The Fourier transform of the steady-state part of the response to an excitation at $\Omega/\omega_r=1$ is shown as a continuous line in Fig.~\ref{f:base}(a), for parameters $dK/K_0=0.1$, $\omega_m/\omega_r=0.45$ and $\xi=0.001$. According to Fig.~\ref{f:free}(c), the free response of the resonator is stable for this choice of parameters. This frequency response shows several peaks: one at $\Omega/\omega_r$, and side-bands at $(\Omega+\omega_m)/\omega_r$ and $(\Omega-\omega_m)/\omega_r$. Other side-bands are also present in the numerical solution, but they are not captured by the analytical solution in Eq.~\ref{e:eig3b}. The response is normalized by the amplitude of the peak at $\Omega/\omega_r$. The peaks of interest are highlighted with cross markers in Fig.~\ref{f:base}(a). Then, we plot the analytically-derived Fourier coefficients $\hat{V}_0$, $\hat{V}_{-1}$, $\hat{V}_{+1}$, normalized by $\hat{V}_0$, at their corresponding frequencies. These are indicated as circular markers.
We compute the response of the resonator for other values of $\Omega/\omega_r$, as presented in the waterfall plot in Fig.~\ref{f:base}(b). To quantify the discrepancy between the numerical and analytical evaluation of the frequency location of the side-bands, we track the maxima of the numerical response (cross markers) and the Fourier coefficients (circles), as a function of $\Omega/\omega_r$. This is shown in Fig.~\ref{f:base}(c), from which we calculate the discrepancy in frequency as $\max(\Delta\omega_{-1}, \Delta\omega_{+1})$, where $\Delta\omega_{-1}$ and $\Delta\omega_{+1}$ are the discrepancies between the two sets of peaks.
This procedure is repeated for all modulation parameters of interest. We restrict our analysis to $0 \leq dK/K_0 \leq 0.3$ and $0.1 \leq \omega_m/\omega_r \leq 0.5$, all within the stable region for $\xi=0.001$. As a result, we obtain the discrepancy map of Fig.~\ref{f:base}(d). This map can be used to evaluate the error introduced by the truncated expansion of the analytical solution. It shows that there are wide parameter regions where the truncated expansion is accurate, with frequency discrepancies below 5\%.
In light of these results, we choose the following parameters to perform the surface wave-metasurface interaction analysis: (i) $dK/K_0=0.1$ and $\omega_m/\omega_r=0.25$, which yield a discrepancy in frequency of 5\%; (ii) $dK/K_0=0.1$ and $\omega_m/\omega_r=0.45$, which yield a discrepancy in frequency of 2\%. Both sets of parameters correspond to resonators with stable responses.
\section{Surface wave dispersion in modulated metasurfaces}
\label{s:saw}
Now that we have studied the dynamics of a single resonator and learned about the acceptable ranges for the modulation parameters, we tackle the problem of a spatio-temporally modulated metasurface. Here, we couple the motion of an elastic substrate with the array of modulated resonators using an effective medium approach~\cite{Garova1999} and a truncated plane-wave expansion of the solution~\cite{Vila2017}. To quantify the dispersion characteristics of the modulated metasurface, we use a first-order asymptotic analysis~\cite{Nassar2017prsa, Nassar2017eml}.
\subsection{Analytical dispersion relation of non-modulated metasurfaces}
\label{s:metasurf}
We begin our investigation by first recalling the dynamics of vertically polarized surface waves (of the Rayleigh type) propagating in an isotropic, elastic, homogeneous medium of infinite depth, decorated with an array of vertically-vibrating resonators. We restrict our analysis to plane waves propagating in the $x,z$ plane (see Fig.~\ref{f:met}), and we assume plane-strain conditions. The displacements along $x$ and $z$ are called $u$ and $v$, respectively. In the absence of body forces, pressure and shear waves propagating in the substrate are described by the wave equations~\cite{Graff1991}:
\begin{subequations}
\begin{equation} \label{e:bulk 1}
\nabla^{2} \Phi=\frac{1}{c_{L}^{2}} \frac{\partial^{2} \Phi}{\partial t^{2}},
\end{equation}
\begin{equation} \label{e:bulk 2}
\nabla^{2} \Psi_{y}=\frac{1}{c_{S}^{2}} \frac{\partial^{2} \Psi_{y}}{\partial t^{2}},
\end{equation}
\end{subequations}
where the dilational $\Phi$ and the transverse $\Psi_{y}$ potentials are introduced via Helmholtz decomposition of the substrate displacement field, $u=\frac{\partial\Phi}{\partial x}-\frac{\partial\Psi_y}{\partial z}$ and $v=\frac{\partial\Phi}{\partial z}+\frac{\partial\Psi_y}{\partial x}$. The pressure ($c_{L}$) and shear ($c_{S}$) wave velocities are given as:
\begin{equation}
c_{L}=\sqrt{\frac{\lambda+2\mu}{\rho}}, \quad c_{S}=\sqrt{\frac{\mu}{\rho}},
\end{equation}
where $\lambda$ and $\mu$ are the elastic Lam\'e constants and $\rho$ is the mass density of the substrate. Following a standard approach for the derivation of Rayleigh waves dispersion, we assume the following form of the potentials:
\begin{subequations}
\begin{equation} \label{e:pot 1}
\Phi=A_{0}\,e^{\sqrt{k^2-{\omega^{2}}/{c_{L}^{2}}}\,z}\,e^{i(\omega t-kx)},
\end{equation}
\begin{equation} \label{e:pot 2}
\Psi_{y}=B_{0}\,e^{\sqrt{k^2-{\omega^{2}}/{c_{S}^{2}}}\,z}\,e^{i(\omega t-kx)},
\end{equation}
\label{e:pot}
\end{subequations}
with $k$ being the wavenumber along $x$.
In parallel, we account for the presence of the surface resonators. This is done by considering the equation of motion of an undamped resonator placed on the free surface (corresponding to $z=0$) and excited by the substrate motion $v(x,0,t)=v_{0}$:
\begin{equation}
m\ddot{V}+K_0(V-v_{0})=0.
\label{e:eom2}
\end{equation}
Following the procedure adopted in Ref.~\cite{Garova1999}, we assume a harmonic motion $V=V_0\,e^{i(\omega t-kx)}$ for the resonator and consider the normal stress exerted by the resonator at the surface as its inertial force divided by the footprint area $A=s^2$, where $s$ is the distance between resonators, i.e., the unit cell size of the array. This stress is defined as:
\begin{equation} \label{e:average stress}
\sigma_{zz,r}=-\frac{m}{A}\ddot{V}=\frac{m}{A}\omega^2 V.
\end{equation}
By using this assumption, often referred to as effective medium approach~\cite{Boechler2013}, we restrict our analysis to wave propagation regimes where the surface wavelengths are much larger than the characteristic resonator spacing, $s$. The average stress in Eq.~\eqref{e:average stress} can be used as a boundary condition for the normal stress of the elastic half-space at $z=0$:
\begin{subequations}
\begin{equation} \label{e:normal stress bc at z=0}
\sigma_{zz}=\sigma_{zz,r},
\end{equation}
together with the free stress condition on the tangential component:
\begin{equation} \label{e:tang. stress bc at z=0}
\sigma_{zx}=0.
\end{equation}
\end{subequations}
For a linear elastic and isotropic material, the stresses can be related to the potentials $\Phi$ and $\Psi_y$ using the constitutive relations~\cite{Graff1991}:
\begin{subequations}
\begin{align}
\label{sigzx}
\sigma_{zx} &= \mu \left(2\frac{\partial^2\Phi}{\partial x \partial z} + \frac{\partial^2\Psi_y}{\partial x^2 } - \frac{\partial^2\Psi_y}{\partial z^2 }\right),
\\
\label{sigzz}
\sigma_{zz} &= (\lambda+2\mu) \left(\frac{\partial^2\Phi}{\partial z^2 }+ \frac{\partial^2\Psi_y}{\partial x \partial z}\right) + \lambda \left(\frac{\partial^2\Phi}{\partial x^2 } - \frac{\partial^2\Psi_y}{\partial x \partial z}\right).
\end{align}
\label{e:sig}
\end{subequations}
At this stage, using Eq.~\eqref{e:sig}, we express the boundary conditions in Eq.~\eqref{e:normal stress bc at z=0} and Eq.~\eqref{e:tang. stress bc at z=0} in terms of surface wave potentials in Eqs.~\eqref{e:pot}, and obtain the expressions:
\begin{subequations}
\begin{equation}
\left[-2i\mu \, \sqrt{k^{2}-\frac{\omega^{2}}{c_{L}^{2}}}\,k A_0 + \mu\left(\frac{\omega^2}{c_S^2} - 2k^2\right) B_0\right]\,e^{i(\omega t-kx)}=0,
\end{equation}
\begin{equation}
\left[\left(2\mu k^{2}-2\mu\frac{\omega^{2}}{c_{L}^{2}} - \lambda\frac{\omega^2}{c_{L}^2}\right)A_0 -2i\mu k \sqrt{k^{2}-\frac{\omega^{2}}{c_{S}^{2}}}\,B_0 -m\frac{\omega^2}{A}V_0\right]\,e^{i(\omega t-kx)}=0.
\end{equation}
\end{subequations}
Coupling these two equations with the equation of motion of the resonator, Eq.~\eqref{e:eom2}, and dropping the exponential $e^{i(\omega t-kx)}$, we obtain:
\begin{equation}
\label{e:metasurf}
\left[\begin{array}{ccc}
{-2i\mu k\sqrt{k^{2}-\frac{\omega^{2}}{c_{L}^{2}}}} & {\mu(\frac{\omega^2}{c_S^2} - 2k^2)} & {0} \\
{2\mu (k^{2}-\frac{\omega^{2}}{c_{L}^{2}}) - \lambda\frac{\omega^2}{c_{L}^2}} & {-2i\mu k \sqrt{k^{2}-\frac{\omega^{2}}{c_{S}^{2}}} } & {-m\frac{\omega^2}{A}} \\
{-K_{0}\sqrt{k^{2}-\frac{\omega^{2}}{c_{L}^{2}}}} & {i K_0 k} &{-m\omega^2 + K_0}
\end{array}\right]\left[\begin{array}{ccc}{A_0}\\{B_0}\\{V_0}\end{array}\right]=
\left[\begin{array}{ccc}{0}\\{0}\\{0}\end{array}\right].
\end{equation}
This system of three equations can be written in compact form as $\boldsymbol{\Pi}(k,\omega)\,\mathbf{q}_0=\mathbf{0}$. It represents the necessary condition for the plane-wave solutions to hold.
Non-trivial solutions of Eq.~\ref{e:metasurf} are found by setting $|\boldsymbol{\Pi}(k,\omega)|=0$, which yields the non-modulated metasurface dispersion relation. An example of this dispersion relation is given by the solid black lines in Fig.~\ref{f:disp}(a), for an elastic substrate with $c_L/c_S=1.5$ and a metasurface with mass ratio $m \omega_r/(A \rho c_S)=0.15$.
Note that the coupling between Rayleigh waves and surface resonators induces a subwavelength bandgap in the surface wave spectrum. This gap covers the frequency range $\omega_r < \omega < \omega_r(\beta+\sqrt{\beta^2+1})$, where $\beta=\frac{m\omega_r}{2\rho A c_S}\sqrt{1-c_S^2/c_L^2}$~\cite{Palermo2016}. Further details about the dispersive features of a non-modulated metasurface can be found in Refs.~\cite{Garova1999, Boechler2013, Palermo2016}.
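The closed-form gap edges are straightforward to evaluate. The sketch below uses the example values $c_L/c_S=1.5$ and $m\omega_r/(A\rho c_S)=0.15$ quoted above, with $\omega_r=1$ as our own normalization.

```python
import numpy as np

# Edges of the resonant bandgap of the non-modulated metasurface,
# w_r < w < w_r*(beta + sqrt(beta^2 + 1)), with
# beta = (m*w_r/(2*rho*A*c_S)) * sqrt(1 - c_S^2/c_L^2).
# Example values cL/cS = 1.5 and m*w_r/(A*rho*c_S) = 0.15 from the text;
# w_r = 1 is our normalization.
cL_over_cS = 1.5
mass_ratio = 0.15            # m*w_r / (A*rho*c_S)
w_r = 1.0

beta = 0.5 * mass_ratio * np.sqrt(1.0 - 1.0 / cL_over_cS**2)
w_low = w_r
w_high = w_r * (beta + np.sqrt(beta**2 + 1.0))
print(w_low, w_high)
```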
\subsection{Analytical dispersion relation of modulated metasurfaces}
\label{s:modmetasurf}
We consider a plane-wave spatio-temporal modulation of the stiffness of the resonators:
\begin{equation}
K(t)=K_0+2dK \cos{\left( \omega_m t -k_m x \right)},
\label{e:kdef meta}
\end{equation}
where $k_m$ is the modulation wavenumber. The key difference between Eqs.~\ref{e:kdef} and~\ref{e:kdef meta} is the presence of a spatially-varying phase term, $k_mx$. {Note that such a one-dimensional modulation restricts our investigation to those scenarios where the surface wave is collinear with the direction of the stiffness modulation.}
This spatial modulation of the stiffness parameter, on its own, results in the appearance of a symmetric frequency gap in the dispersion relation of the surface waves (symmetric with respect to $k$). When combined with the temporal modulations, these frequency gaps occur at different frequencies for forward- and backward-traveling waves, i.e., non-reciprocal propagation emerges~\cite{Swinteck2015,Trainiti2016,nassarPRB}.
Based on the results of Section~\ref{s:sdof}, we choose a modulation amplitude $dK/K_0$ and frequency $\omega_m/\omega_r$ such that the response of the resonators remains stable and the truncated approximation of the response is acceptable.
To ensure stability in the presence of spatio-temporal modulations, we need to additionally check that the modulation wave speed is smaller than the phase velocity of the medium~\cite{Cassedy1963}, i.e., $\omega_m/k_m<c\,(\omega)$. This condition might not be respected near $\omega_r$ if the resonant bandgap is at very low frequencies. Note, however, that our results on the stability of a single resonator in Fig.~\ref{f:free}(c) already warned us to stay away from values of the modulating frequency that are close to $\omega_r$.
The modulating wave generates a scattered wavefield, here described by the vector of amplitudes $\mathbf{q}_j=[\hat{A}_j, \hat{B}_j, \hat{V}_j]^T$, where $j$ is a non-zero integer. These amplitudes are associated to the substrate potentials:
\begin{subequations} \label{e:potential function j}
\begin{equation}
\Phi_j=\hat{A}_{j}\,e^{\sqrt{k_j^{2}-{\omega_j^{2}}/{c_{L}^{2}}}\,z}\,e^{i(\omega_j t-k_j x)},
\end{equation}
\begin{equation}
\Psi_{y,j}=\hat{B}_{j}\,e^{\sqrt{k_j^{2}-{\omega_j^{2}}/{c_{S}^{2}}}\,z}\,e^{i(\omega_j t-k_j x)},
\end{equation}
\end{subequations}
and to the resonator displacement:
\begin{equation}
V_{j}=\hat{V}_{j}\,e^{i(\omega_j t-k_j x)}.
\end{equation}
For convenience, we define the shifted frequency and wavenumber as:
\begin{equation}
\omega_j=\omega+j\omega_m, \quad k_j=k+j k_m.
\end{equation}
The scattered field has a non-negligible amplitude only when the phase matching condition $|\boldsymbol{\Pi}(k,\omega)|=|\boldsymbol{\Pi}(k_j,\omega_j)|=0$ is met~\cite{Nassar2017prsa}, namely at the crossing points between the original dispersion curves $|\boldsymbol{\Pi}(k,\omega)|=0$ and the shifted curves $|\boldsymbol{\Pi}(k+j k_m,\omega+j \omega_m)|=0$. A graphical representation of two shifted curves for $j=\pm 1$ is provided in Fig.~\ref{f:disp}(a) for a metasurface modulated with frequency $\omega_m/\omega_r=0.25$ and wavenumber $k_m/k_r=2.5$, where $k_r=\omega_r/c_S$.
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1]{Fig_SAW_Dispersion}
\caption{Dispersion properties of modulated and non-modulated metasurfaces. (a) Dispersion curves. The solid black curves represent the non-modulated dispersion relation, while the dashed red and blue lines are the shifted curves for $j=-1$ and $j=+1$, respectively, for modulation parameters $\omega_m/\omega_r=0.25$ and $k_m/k_r=2.5$. The crossing points are highlighted with circular markers. The thin gray lines connect phase-matched points of the original dispersion curves. (b), (c) Details of the crossing points that are highlighted by boxes in (a). The dark regions of the colormap follow the minima of the determinant of Eq.~\eqref{e:metasurf mod}, while the circular red markers indicate the asymptotic evaluation of the modulated dispersion. The thin dotted line represents the sound cone. All cases correspond to modulation amplitude $dK/K_0=0.05$. (b) A case of veering, where no frequency band gap is found. (c) A case of locking that features a frequency bandgap of normalized width $2\delta \omega/\omega_r$. (d) Evolution of the width of the bandgap in (c) as a function of the modulation amplitude.}
\label{f:disp}
\end{figure*}
The asymmetric positioning of the crossing points between regions with positive and negative wavenumbers suggests the occurrence of direction-dependent phenomena within the metasurface. We predict the dispersion properties of the modulated metasurface near these crossing points using a truncated plane-wave expansion. In particular, we assume that the surface wave potentials have the following form, comprising non-modulated and scattered amplitudes:
\begin{subequations}
\begin{equation} \label{e:pot 1 PW}
\Phi=\hat{A}_{0}\,e^{\sqrt{k^2-{\omega^{2}}/{c_{L}^{2}}}\,z} \,e^{i(\omega t-k x)} + \sum^{1}_{\substack{j=-1 \\ j \neq 0}}\hat{A}_{j}\,e^{\sqrt{k_j^{2}-{\omega_j^{2}}/{c_{L}^{2}}}\,z} \,e^{i(\omega_j t-k_j x)},
\end{equation}
\begin{equation} \label{e:pot 2 PW}
\Psi_y=\hat{B}_{0}\,e^{\sqrt{k^2-{\omega^{2}}/{c_{S}^{2}}}\,z} \,e^{i(\omega t-k x)} + \sum^{1}_{\substack{j=-1 \\ j \neq 0}}\hat{B}_{j}\,e^{\sqrt{k_j^{2}-{\omega_j^{2}}/{c_{S}^{2}}}\,z} \,e^{i(\omega_j t-k_j x)},
\end{equation}
\end{subequations}
and a resonator displacement:
\begin{equation} \label{e: res PW}
V=\hat{V}_{0}\,e^{i(\omega t-k x)} + \sum^{1}_{\substack{j=-1 \\ j \neq 0}}\hat{V}_{j}\,e^{i(\omega_j t-k_j x)}.
\end{equation}
The choice of $j=\pm1$ is a direct consequence of using a harmonic plane-wave modulation in Eq.~(\ref{e:kdef meta}); otherwise, higher-order terms need to be included.
Following the same procedure adopted for the
non-modulated case, we substitute the expanded potentials, Eq.~\eqref{e:pot 1 PW} and Eq.~\eqref{e:pot 2 PW}, into the constitutive equations, Eq.~\eqref{sigzx} and Eq.~\eqref{sigzz}. Similarly, we use the truncated resonator displacement, Eq.~\eqref{e: res PW}, in the governing equation of the resonator, Eq.~\eqref{e:eom2}, and boundary condition, Eq.~\eqref{e:average stress}. The result is finally substituted into the boundary conditions, Eq.~\eqref{e:normal stress bc at z=0} and Eq.~\eqref{e:tang. stress bc at z=0}. After collecting and simplifying the common exponentials in each equation, we obtain:
\begin{equation}
\label{e:metasurf mod}
\left[\begin{array}{ccc}
{\boldsymbol{\Pi}(k_{-1},\omega_{-1})}&{\boldsymbol{\Gamma}(k,\omega)} &\mathbf{0}\\
{\boldsymbol{\Gamma}(k_{-1},\omega_{-1})}&{\boldsymbol{\Pi}(k,\omega)}&{\boldsymbol{\Gamma}(k_{+1},\omega_{+1})}\\
\mathbf{0}&{\boldsymbol{\Gamma}(k,\omega)} &{\boldsymbol{\Pi}(k_{+1},\omega_{+1})}
\end{array}\right]\left[\begin{array}{ccc}{\mathbf{q}_{-1}}\\{\mathbf{q}_0}\\ \mathbf{q}_{+1}\end{array}\right]=\mathbf{0},
\end{equation}
where the submatrix $\boldsymbol{\Pi}$ is defined in Eq.~\eqref{e:metasurf}, and the submatrix $\boldsymbol{\Gamma}$ is defined as:
\begin{equation} \label{e:Gamma}
\boldsymbol{\Gamma}(k,\omega)=\left[\begin{array}{ccc}
{0} & {0} & {0} \\
{0} & {0} & {0} \\
{-dK\sqrt{k^{2}-\frac{\omega^{2}}{c_{L}^{2}}}} & {i\,dK\,k} & {dK}
\end{array}\right].
\end{equation}
{\noindent Note that the operator $\boldsymbol{\Gamma}(k_j,\omega_j)$ describes the coupling, introduced by the stiffness modulation of the resonators, between the $j$-th scattered and the fundamental ($j=0$) wave fields.}
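As a sanity check on the structure of $\boldsymbol{\Gamma}$, the sketch below assembles the matrix for assumed nondimensional values of $k$, $\omega$, $dK$ and $c_L$ (our own choices, for illustration only); only the last row is non-zero, consistent with the modulation entering the problem through the resonator equation and the associated stress boundary condition alone:

```python
import numpy as np

def Gamma(k, w, dK, c_L):
    """Coupling matrix of the stiffness modulation; only the last row is non-zero."""
    g = np.zeros((3, 3), dtype=complex)
    g[2, 0] = -dK * np.sqrt(complex(k**2 - w**2 / c_L**2))  # potential term
    g[2, 1] = 1j * dK * k                                   # shear-potential term
    g[2, 2] = dK                                            # resonator term
    return g

G = Gamma(k=2.0, w=1.0, dK=0.05, c_L=1.5)  # assumed nondimensional values
print(G)
```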
The expression in Eq.~\eqref{e:metasurf mod}, written in compact form as $\mathbf{\Lambda}(k,\omega)\,\mathbf{q}=\mathbf{0}$, describes the relation between the Rayleigh waves and the modulation-induced scattered waves. This relation is valid when the scattered field interacts strongly with the main field, i.e., near the crossings of non-modulated and translated dispersion curves, as indicated in Fig.~\ref{f:disp}(a).
Nontrivial solutions of Eq.~\eqref{e:metasurf mod} are obtained by setting the determinant of the $9\times9$ matrix equal to 0, $|\mathbf{\Lambda}(k,\omega)|=0$.
The resulting equation describes the dispersion relation of the modulated system in the vicinity of the crossing points between the fundamental and the shifted dispersion curves. We refrain from seeking a closed-form expression of its roots. Nevertheless, by evaluating the determinant $|\mathbf{\Lambda}(k,\omega)|$ in the neighborhood of the crossing points, and finding its local minima, we can identify the dispersion branches for the modulated system. Examples of modulated branches are provided in Fig.~\ref{f:disp}(b,c), where the magnitude of $|\mathbf{\Lambda}(k,\omega)|$ near the two crossing points is displayed as a colormap, with the minima being darker. In the neighborhood of the crossing points, the modulated branches are characterized by frequency ($\delta \omega$) and wavenumber ($\delta k$) shifts with respect to the intersection of the fundamental ($|\mathbf{\Pi}(k,\omega)|=0$) and translated ($|\boldsymbol{\Pi}(k+j k_m,\omega+j \omega_m)|=0$) dispersion curves. These shifts result from the repulsion between the two interacting modes.
The pair ($\delta k,\delta \omega$) can be calculated as the leading-order correction to ($k,\omega$) in an asymptotic analysis of the problem ~\cite{Nassar2017prsa,Hinch}.
For this purpose, we expand the surface wave potentials and the resonator displacement around the crossing point of interest, as shown in the following:
\begin{subequations}
\begin{equation} \label{e:pot 1 cor}
\tilde{\Phi}=\left(\tilde{A}_{0}\,e^{\sqrt{(k+\delta k)^2-{(\omega+\delta \omega)^{2}}/{c_{L}^{2}}}\,z} \,e^{i(\omega t-k x)}+\tilde{A}_{j} \,e^{\sqrt{(k_j+\delta k)^{2}-{(\omega_j+\delta \omega)^{2}}/{c_{L}^{2}}}\,z}\,e^{i(\omega_j t-k_j x)}\right)\,e^{i(\delta\omega t-\delta k x)},
\end{equation}
\begin{equation} \label{e:pot 2 cor}
\tilde{\Psi}_y=\left(\tilde{B}_{0}\,e^{\sqrt{(k+\delta k)^2-{(\omega+\delta \omega)^{2}}/{c_{S}^{2}}}\,z} \,e^{i(\omega t-k x)}+\tilde{B}_{j} \,e^{\sqrt{(k_j+\delta k)^{2}-{(\omega_j+\delta \omega)^{2}}/{c_{S}^{2}}}\,z}\,e^{i(\omega_j t-k_j x)}\right)\,e^{i(\delta\omega t-\delta k x)},
\end{equation}
\begin{equation} \label{e: res cor}
\tilde{V}=\left(\tilde{V}_{0}\,e^{i(\omega t-k x)}+\tilde{V}_{j}\,e^{i(\omega_j t-k_j x)}\right)\,e^{i(\delta\omega t-\delta k x)},
\end{equation}
\end{subequations}
where $j$ is either $+1$ or $-1$ depending on which shifted branch satisfies the phase matching condition with the fundamental dispersion curve.
With these ansatzes, and replicating the procedure we used to obtain the dispersion relation for the modulated metasurface, we obtain:
\begin{equation}
\label{e:metasurf correction}
\left[\begin{array}{ccc}
{\boldsymbol{\Pi}}(k+\delta k,\omega+\delta \omega) & {\boldsymbol{\Gamma}(k_j+\delta k,\omega_j+\delta \omega)} \\
{{\boldsymbol{\Gamma}}(k+\delta k,\omega+\delta \omega) } & {\boldsymbol{\Pi}(k_j+\delta k,\omega_j+\delta \omega)}
\end{array}\right]\left[\begin{array}{ccc}{\mathbf{q}_0}\\ \mathbf{q}_j\end{array}\right]=\mathbf{0}.
\end{equation}
We can then find the corrections $\delta k$ and $\delta \omega$ by setting the determinant of the $6\times6$ matrix in Eq.~\eqref{e:metasurf correction} to zero.
Further details on this computation are given in~\ref{a:analy}.
Examples of corrected portions of the dispersion relation are shown in Fig.~\ref{f:disp}(b,c) as red dotted curves. We can see that the corrections are non-zero only in the neighborhood of the crossing points, and that they show an excellent agreement with the minima of the determinant of the matrix in Eq.~\eqref{e:metasurf mod}.
\subsection{Physical insight on the modulated dispersion relation}
From Fig.~\ref{f:disp}(b,c), we observe that the presence of a spatio-temporal modulation causes the fundamental and shifted dispersion curves to repel each other. Two distinct phenomena are observed depending on whether the fundamental and shifted branches propagate along the same direction or not, i.e., whether the group velocities $c_g={\partial \omega}/{\partial k}$ and $c_{gj}={\partial \omega_j}/{\partial k_j}$ satisfy $c_{g}c_{gj}>0$ or $c_{g}c_{gj}<0$, respectively. For a pair of co-directional branches like those shown in Fig.~\ref{f:disp}(b), the interacting modes veer without crossing as a result of the repulsion between the fundamental and scattered modes. No significant frequency shift is found and consequently no directional band gaps are generated.
Conversely, for a couple of contra-directional branches, as shown in Fig.~\ref{f:disp}(c), the repulsion between the pair of coupled modes results in a branch locking phenomenon~\cite{mace2012} and, in some occasions, in the opening of a directional bandgap. We quantify the branch repulsion by evaluating the bandgap width at the locking point, $2\delta \omega$, as a function of the modulation amplitude, $dK$. As expected from the first-order nature of the correction terms in Section~\ref{s:modmetasurf}, the width of a directional bandgap is proportional to the modulation amplitude; see Fig.~\ref{f:disp}(d).
We remark that for any crossing point $(k^*,\,\omega^*)$ at the intersection of $|\boldsymbol{\Pi}(k,\omega)|=0$ and $|\boldsymbol{\Pi}(k+ k_m,\omega+\omega_m)|=0$, we can identify a crossing point $(k^*+k_m,\,\omega^*+\omega_m)$, e.g., at the intersection of $|\boldsymbol{\Pi}(k,\omega)|=0$ and $|\boldsymbol{\Pi}(k- k_m,\omega- \omega_m)|=0$, that is phase-matched to $(k^*,\,\omega^*)$ via the pumping wave~\cite{Nassar2017eml}. In Fig.~\ref{f:disp}(a), all crossing points connected by thin gray lines are phase-matched, being only separated by a $\pm(k_m,\,\omega_m)$ translation. According to Eq.~\eqref{e:pot 1 cor} and Eq.~\eqref{e:pot 2 cor}, we expect that a surface wave traveling within the modulated metasurface with frequency $\omega^*$ and wavenumber $k^*$ generates a scattered field at $(k^*+k_m,\,\omega^*+\omega_m)$.
Similarly, for a fundamental surface wave at $(k^*+k_m,\,\omega^*+\omega_m)$, a scattered field at $(k^*,\,\omega^*)$ is expected.
In other words, if we send a wave at a frequency near one of the crossings, the metasurface will generate waves at the frequency of the corresponding phase-matched point~\cite{Nassar2017prsa}. Numerical evidence of this intriguing dynamic behavior, which hints at the possibility of using modulated metasurfaces as frequency converters for surface waves, is provided in Section~\ref{s:nr}.
\section{Surface wave non-reciprocity and other modulation-induced effects}
\label{s:nr}
We now resort to finite element (FE) simulations to analyze the propagation of surface waves in a modulated metasurface and to validate the directional behavior predicted by our analytical model. Our 2D plane-strain FE model, implemented in COMSOL Multiphysics, consists of a portion of an elastic substrate of depth $H=4\lambda_0$, where $\lambda_0=2\pi c_R/\omega_r$ and $c_R$ is the Rayleigh wave velocity in the substrate. One of our models is sketched in Fig.~\ref{f:disp_num}(a).
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1]{Fig_SAW_Num1}
\caption{Numerical reconstruction of the modulated dispersion curves. (a) Schematic of the numerical models for right-going and left-going surface waves, with a right-going modulating wave. (b) Time history and (c) frequency content of the point force applied at the source. (d) Dispersion curves reconstructed via a 2D-DFT of the space-time evolution of the vertical displacement on the surface underneath the resonators, $v(x,0,t)$. The system has modulation parameters $dK/K_0=0.1$, $\omega_m/\omega_r=0.25$ and $k_m/k_r=2.5$. The colormap is scaled with respect to its maximum value. The analytical dispersion, shown as a thick red line, is obtained by tracing the local minima of $|\mathbf{\Lambda}(k,\omega)|$ in a range $\pm 0.1 k$ and $\pm 0.1 \omega$ around each crossing point. The dispersion curves of the non-modulated metasurface, $|\boldsymbol{\Pi}(k,\omega)|=0$, and its shifted twins, $|\boldsymbol{\Pi}(k+k_m,\omega+\omega_m)|=0$ and $|\boldsymbol{\Pi}(k-k_m,\omega-\omega_m)|=0$, are shown as black, red and blue dashed lines, respectively. (e) Same as (d), for modulation parameters $dK/K_0=0.1$, $\omega_m/\omega_r=0.45$ and $k_m/k_r=2.5$.}
\label{f:disp_num}
\end{figure*}
The substrate features an array of resonators mounted on its free surface with spacing $s=\lambda_0/23$. All edges of the domain, apart from the one decorated with resonators, are characterized by low-reflecting boundary conditions. A convergent mesh of quadratic Lagrangian elements is used to discretize the substrate and to ensure that the wave field is accurately captured in the frequency range of interest. The stiffness of each resonator varies in space and time according to Eq.~\eqref{e:kdef meta}. Based on the previous considerations on accuracy and stability in Section~\ref{s:sdof}, we choose modulation parameters $dK=0.1\,K_0$, $k_m=2.5\,k_r$ and either $\omega_m=0.25\,\omega_r$ or $\omega_m=0.45\,\omega_r$.
\subsection{Numerical dispersion reconstruction}
We perform transient simulations to numerically reconstruct the dispersion properties of the modulated metasurface, using the models shown in Fig.~\ref{f:disp_num}(a). We excite the medium with a vertical sine-sweep point force having frequency content $0.5\,\omega_r<\omega<2\,\omega_r$, as shown in Fig.~\ref{f:disp_num}(b,c). We record the vertical surface displacement $v(x,0,t)$ at 1000 equally-spaced locations along a length $L_a=15\lambda_0$ for a normalized time $0 < \bar{t} < 125$, where $\bar{t}=t/T_r$ and $T_r=2\pi/\omega_r$. To reconstruct the dispersion branches for $k>0$ and $k<0$, we simulate both a right-propagating (top panel of Fig.~\ref{f:disp_num}(a)) and a left-propagating wave (bottom panel), with a modulating wave that is always right-propagating. In both cases, the source is placed at a distance $d_s=5\lambda_0$ from the closest recording point. The recorded space-time traces are then transformed via 2D Discrete Fourier Transform (2D-DFT) to obtain the wavenumber-frequency spectrum $\bar{v}(k,0,\omega)$. By following the higher-amplitude regions of this two-dimensional spectrum, we can identify the numerical dispersion branches.
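The essence of the dispersion-reconstruction step can be sketched on a synthetic record containing a single plane wave. The grid sizes and wave parameters below are arbitrary assumptions, chosen so that the wavenumber and frequency fall on exact DFT bins:

```python
import numpy as np

Nx, Nt, dx, dt = 64, 64, 0.1, 0.01
x = np.arange(Nx) * dx
t = np.arange(Nt) * dt
X, T = np.meshgrid(x, t, indexing="ij")

# Synthetic right-propagating plane wave e^{i(w0 t - k0 x)} on exact DFT bins
k0 = 2 * np.pi * 5 / (Nx * dx)
w0 = 2 * np.pi * 7 / (Nt * dt)
v = np.exp(1j * (w0 * T - k0 * X))

# 2D-DFT to the wavenumber-frequency domain and peak picking
V = np.fft.fft2(v)
ix, it = np.unravel_index(np.argmax(np.abs(V)), V.shape)
k_rec = -2 * np.pi * np.fft.fftfreq(Nx, dx)[ix]  # minus sign: e^{-ikx} convention
w_rec = 2 * np.pi * np.fft.fftfreq(Nt, dt)[it]
print(k_rec / k0, w_rec / w0)  # both equal to 1
```

Following the higher-amplitude regions of $|\bar{v}(k,\omega)|$, rather than a single peak, generalizes this to the multi-branch records produced by the FE simulations.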
The reconstructed dispersion for modulation parameters $dK=0.1\,K_0$, $\omega_m=0.25\,\omega_r$ and $k_m=2.5\,k_r$ is shown as a colormap in Fig.~\ref{f:disp_num}(d). The analytical dispersion, shown as a thick red line, is obtained by tracing the minima of $|\mathbf{\Lambda}(k,\omega)|$ near the crossing points. For convenience, we also replicate on the same figure the original (non-modulated) dispersion curve and its shifted analogs (thin dashed lines). This plot unequivocally illustrates that the dispersive features observed in the numerical results are consistent with the analytical predictions. In particular, one can see that the numerical results clearly indicate the presence of several modulation-induced features: (i) two coupled directional bandgaps of narrow extent at $0.69\,\omega_r$ for left-propagating and $0.93\,\omega_r$ for right-propagating waves; (ii) two coupled veering points at $0.73\,\omega_r$ and $0.98\,\omega_r$, both for right-propagating waves; (iii) two coupled and relatively-wide directional gaps at $0.92\,\omega_r$ and $1.17\,\omega_r$ for left- and right-propagating waves, respectively.
We repeat this reconstruction procedure for different modulation parameters: $dK=0.1\,K_0$, $\omega_m=0.45\,\omega_r$ and $k_m=2.5\,k_r$. The results are shown in Fig.~\ref{f:disp_num}(e), and they display a similar consistency with the analytical predictions as for the previous configuration. In this case, the features of interest are two coupled directional gaps at the locking frequencies $0.86\,\omega_r$ and $1.31\,\omega_r$, for left- and right-propagating waves, respectively. These gaps are of interest because they are characterized by a significant reduction in spectral amplitude.
\subsection{Non-reciprocal transmission and conversion-by-reflection}
\label{s:nrtr}
To verify the characteristics of the scattered field responsible for directional wave propagation, we perform transient simulations with narrow-band waveforms centered at those frequencies. For these analyses, we use the models shown in Fig.~\ref{f:TR}(a,b), cf. Fig.~\ref{f:disp_num}(a). In both cases, we have two substrate-only regions separated by a region of length $L_a=12.5\,\lambda_0$ that features a large number of surface resonators (286) spaced at $s=\lambda_0/23$. The response is recorded at locations $x_l$ and $x_r$, which mark the left and right edges of the region with resonators, respectively. In both configurations, the point source is located on the free surface at a distance $d_s=3.5\,\lambda_0$ from the corresponding edge of the resonators region. In all cases, the modulating wave is right-propagating, with $dK=0.1\,K_0$, $\omega_m=0.45\,\omega_r$ and $k_m=2.5\,k_r$. This corresponds to the dispersion curve in Fig.~\ref{f:disp_num}(e).
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1]{Fig_SAW_Num2}
\caption{Transient FE simulations of the propagation of narrow-band signals centred at the directional gap frequencies ($0.86\,\omega_r$ and $1.31\,\omega_r$, Fig.~\ref{f:disp_num}(e)) through a modulated metasurface. Schematic of the numerical setup for (a) right-propagating and (b) left-propagating surface waves. Spectral content of the vertical surface wave field recorded at the left and right edges of the resonators array for: (c) right-propagating waves at $\Omega=1.31\,\omega_r$, (e) right-propagating waves at $\Omega=0.86\,\omega_r$, (g) left-propagating waves at $\Omega=1.31\,\omega_r$, (i) left-propagating waves at $\Omega=0.86\,\omega_r$. Radon transform of time-space surface wave records computed along the resonator array for: (d) right-propagating waves at $\Omega=1.31\,\omega_r$, (f) right-propagating waves at $\Omega=0.86\,\omega_r$, (h) left-propagating waves at $\Omega=1.31\,\omega_r$, (j) left-propagating waves at $\Omega=0.86\,\omega_r$.}
\label{f:TR}
\end{figure*}
We begin our investigation by considering a right-propagating surface wave (i.e., incident to the array at $x_l$) at frequency $\Omega=1.31\,\omega_r$. The spectra of the time signals recorded at $x_l$ and $x_r$ are shown in Fig.~\ref{f:TR}(c). The spectrum at $x_r$ (blue line), corresponding to a wave transmitted through the array of resonators, shows a significant amplitude reduction at $\Omega=1.31\,\omega_r$, in agreement with the directional gap predicted by our analysis. The amplitude gap is accompanied by the generation of a side peak at the twin locking frequency $0.86\,\omega_r$. This frequency content appears even more markedly in the spectrum of the signal recorded at the $x_l$ location (red line). This second peak corresponds to the reflected field caused by the modulated array of resonators. To support this claim, we compute the two-dimensional Radon transform (wave speed $c$ versus frequency $\omega$) of the time-space data matrix recorded within the array of resonators. By means of this transform, we determine if a signal with a certain frequency content is right-propagating (positive $c$) or left-propagating (negative $c$). The amplitude of this spectrum, shown as a colormap in Fig.~\ref{f:TR}(d), confirms that the signal content at $0.86\,\omega_r$ travels from right to left, opposite to the direction of the incident signal at $1.31\,\omega_r$. This indicates that the modulated metasurface can convert an incident wave into a reflected wave with a different frequency content---shifted from the original frequency by the modulating one~\cite{Nassar2017eml}. To verify non-reciprocity, we send a left-propagating wave with frequency centered at $1.31\,\omega_r$. In this case, the signal travels undisturbed through the metasurface, as confirmed by the spectra at $x_l$ and $x_r$, shown in Fig.~\ref{f:TR}(g). Moreover, no evidence of reflected waves is found in the Radon transform shown in Fig.~\ref{f:TR}(h).
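The direction-discrimination step can be reproduced with a simple slant-stack implementation of the linear Radon transform. In the sketch below (all sampling and wave parameters are our own assumptions), a synthetic right-going wave stacks coherently only for a trial speed of the correct sign:

```python
import numpy as np

dt, dx, nt, nx = 0.01, 0.04, 256, 32
t = np.arange(nt) * dt
x = np.arange(nx) * dx
w0, c0 = 2 * np.pi * 5.0, 1.0                      # carrier frequency, true speed
u = np.cos(w0 * (t[None, :] - x[:, None] / c0))    # right-going wave u(x, t)

def stack_energy(u, c, tau):
    """Slant stack: delay each trace by x/c and sum; coherent only if c matches."""
    s = np.zeros_like(tau)
    for i in range(nx):
        s += np.interp(tau + x[i] / c, t, u[i], left=0.0, right=0.0)
    return np.sum(s**2)

tau = t[:100]
E_pos = stack_energy(u, +c0, tau)   # trial speed with the true sign
E_neg = stack_energy(u, -c0, tau)   # opposite sign: incoherent stack
print(E_pos, E_neg)                 # E_pos >> E_neg for a right-going wave
```

Scanning over a range of trial speeds and Fourier-transforming the stacked traces over $\tau$ yields the speed-frequency maps of Fig.~\ref{f:TR}.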
We replicate these analyses for left- and right-propagating surface waves excited at the phase-matched locking frequency $\Omega=0.86\,\omega_r$. In this case, left-propagating waves travel almost undisturbed within the metasurface, as confirmed by the spectral contents in Fig.~\ref{f:TR}(e) that feature waves at the carrier frequency only, and by the Radon transform in Fig.~\ref{f:TR}(f). Conversely, the directional gap for right-propagating waves causes an attenuation of the transmitted signal at $0.86\,\omega_r$, as shown by the red line of Fig.~\ref{f:TR}(i). This phenomenon is accompanied by a back-scattering of the coupled frequency $1.31\,\omega_r$, as indicated by the blue line in Fig.~\ref{f:TR}(i) and by the Radon transform in Fig.~\ref{f:TR}(j).
While this section has been dedicated to the response to excitation frequencies within the directional bandgaps, the reader can find details on the response of a metasurface excited at a veering point in~\ref{a:transm}.
\subsection{Surface-bulk wave conversion}
It is known that surface waves can convert into bulk waves upon interaction with a metasurface~\cite{Colquitt2017}. To evaluate how this phenomenon can influence the directional response of a modulated metasurface, we analyze the full wavefield in the substrate at different time instants. We consider the case of a left-propagating narrow-band signal with carrier frequency $\Omega=0.86\,\omega_r$. This case corresponds to the results in Fig.~\ref{f:TR}(i,j). The time-space evolution of the displacement field along the surface is illustrated in Fig.~\ref{f:WF}(a).
\begin{figure*}[!htb]
\centering
\includegraphics[scale=1]{Fig_SAW_Num3}
\caption{(a) Time-space evolution of the surface displacement for a left-propagating wave at $0.86\,\omega_r$. The dashed line indicates the beginning of the region that features resonators. The thick horizontal lines indicate the time instants of interest. (b) The wavefield at $\bar{t}=5$, showing how waves propagate along and below the surface. (c) The wavefield at $\bar{t}=20$. The arrows and letters indicate wave features of interest.}
\label{f:WF}
\end{figure*}
The wavefields corresponding to time instants $\bar{t}=5$ and $\bar{t}=20$ are shown in Fig.~\ref{f:WF}(b,c), respectively. In particular, the wavefield at $\bar{t}=20$ presents several interesting features. First, it is clearly visible that the transmitted and reflected surface waves have different wavelength contents, as a result of the frequency conversion shown in Fig.~\ref{f:TR}(i). This is an example of conversion by reflection due to spatio-temporal modulations~\cite{Nassar2017eml}. The conversion does not take place exactly at the edge of the resonators region, but rather at a location within the resonator array.
If we focus our attention on the reflected waves, we can also see that not all waves are reflected along the surface. As indicated by the arrow pointing towards the bottom-right of Fig.~\ref{f:WF}(c), a part of the scattered field is converted into waves that propagate towards the bulk. It would be interesting to quantify the surface-to-bulk wave conversion mechanism and determine the penetration length of the fundamental wave into the metasurface. These aspects, which have practical implications for the design of surface wave converters and filters, deserve a separate treatment.
\section{Conclusions}
\label{s:concl}
We have provided a detailed analytical and numerical account of the non-reciprocal propagation of surface waves of the Rayleigh type in a dynamically modulated metasurface. We have first bridged the gap between the single-resonator dynamics and wave-resonator interactions, by providing a detailed description of the dynamics of a time-modulated resonator. We have then developed an analytical framework to describe the dispersion properties of spatio-temporally varying metasurfaces, and illustrated their asymmetric features.
By means of numerical simulations, we have demonstrated the occurrence of non-reciprocal surface wave attenuation, frequency conversion by reflection and by transmission. We have also shown that surface waves interacting with the modulated metasurface can leak as bulk waves into the substrate. Our findings and the tools we have provided can serve as guidelines for future experiments on the topic, and can play an important role in developing practical designs of SAW devices with unprecedented wave manipulation capacity.
\section*{Acknowledgments}
AP acknowledges the support of DICAM at the University of Bologna. PC acknowledges the support of the Research Foundation at Stony Brook University. CD acknowledges support from the National Science Foundation under EFRI Grant No.\ 1741565. The authors wish to thank Lorenz Affentranger and Yifan Wang for useful discussions.
\section{Missing proofs}
Here we provide a proof of \Cref{prop:matroid-update} for
updating the optimal solution of a weighted matroid:
\begin{proof}
Consider running the {\sc Greedy} algorithm in parallel on
both $\mathcal{M}_{\mid E - x}$ and $\mathcal{M}$ and call these
executions $\mathcal{E}^-, \mathcal{E}$ respectively.
In case $(I + x) \in \mathcal{I}$, the downward-closed property of
$\mathcal{M}$ guarantees that both executions will make identical
decisions on elements other than $x$ and element $x$ will be
included in the optimal solution of $\mathcal{M}$, hence $I^* = I + x$.
For the other case, suppose first that $x = y$, i.e. $x$ is
the min-weight element on $C$. At the time $x$ is inspected,
all other elements of $C$ have already been inspected and
added to the current solution, hence $x$ is not included
since it would violate independence. Therefore, both
executions proceed making identical decisions in every step
and arrive to the same solution $I^* = (I + x) - x = I$.
Now suppose that $x \neq y$. At the time element $x$ is
considered in $\mathcal{E}$, it can be safely included in the solution. The
reason is that if adding $x$ resulted in a circuit $C'$, then
$C' \subsetneq C$ violating the minimality of $C$. The next
step at which the two executions will diverge again is when
considering $y$ --- if they diverged at a previous step it
would again mean that $x$ is part of a circuit $C' \subsetneq
C$ --- at this point $\mathcal{E}$ ignores $y$.
Finally, suppose that the two executions diverge at a later
step on an element $e$ with $w(e) < w(y)$. Denote by $J$ the
current solution $\mathcal{E}^-$ is maintaining and thus $(J
+ x) - y$ is the current solution of $\mathcal{E}$.
There are two reasons the executions might diverge:
\begin{itemize}
\item $(J + e) \in \mathcal{I}$ but $( (J + x) - y ) + e \notin
\mathcal{I}$.
In this case, there must exist circuit
$C' \subseteq ( (J + x) - y ) + e$ such that $x, e
\in C'$ and $y \notin C'$. Therefore, by \Cref{prop:circuits}
there exists circuit $C''$ such that $e \in C'' \subseteq
(C' \cup C) - x$. This is a contradiction because $C''$ is a
circuit of $J + e$ which was assumed to be independent.
\item $(J + e) \notin \mathcal{I}$ but $( (J + x) - y ) + e \in
\mathcal{I}$.
This case is similar and the proof is omitted.
\end{itemize}
\end{proof}
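As a minimal computational sketch of the update rule proved above, the code below implements {\sc Greedy} with an independence oracle and applies the case analysis of the proof: either $x$ can be inserted directly, or it is swapped against the min-weight element $y$ of the unique circuit $C$ of $I + x$. The partition matroid and the weights are toy examples of our own choosing:

```python
def greedy(elems, w, indep):
    """Max-weight independent set: scan elements by decreasing weight."""
    sol = []
    for e in sorted(elems, key=w, reverse=True):
        if indep(sol + [e]):
            sol.append(e)
    return set(sol)

def update(I, x, w, indep):
    """Update the optimum of M restricted to E - x after x becomes available."""
    I = list(I)
    if indep(I + [x]):
        return set(I + [x])
    # Unique circuit of I + x: elements whose removal restores independence
    C = [e for e in I + [x] if indep([f for f in I + [x] if f != e])]
    y = min(C, key=w)
    return set(I) if y == x else set(I + [x]) - {y}

# Toy partition matroid: at most one element from each group (assumed example)
group = {"a": 0, "b": 0, "c": 1, "d": 1}
indep = lambda S: all(sum(1 for e in S if group[e] == g) <= 1 for g in (0, 1))
w = {"a": 3, "b": 5, "c": 2, "d": 1}.get

I = greedy(["a", "c", "d"], w, indep)          # optimum without x = "b"
assert update(I, "b", w, indep) == greedy(["a", "b", "c", "d"], w, indep)
```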
\section{A note on revenue monotonicity of VCG}
A revenue monotonicity result similar to ours is proven for
VCG in matroid markets in \cite{drs09}. We noticed that one
of the propositions used in the proof of that theorem is
incorrect. Here we provide a counter-example and offer an
alternative proof using our \Cref{lem:nonmatroid}.
Proposition 2.9 of \cite{drs09} claims that {\em ``A
downward-closed set system $(U, \mathcal{I})$ with $\mathcal{I} \neq
\emptyset$ is a matroid if and only if for every pair $A,B$
of maximal sets in $\mathcal{I}$ and $y \in B$, there is some $x
\in A$ such that $A\backslash\{x\} \cup \{y\} \in \mathcal{I}$''}.
Here we notice that the ``backward'' direction of this
proposition does not hold.
Consider the following counter-example.
Let $U = \{ a, b, c, d, e \}$ and define $\mathcal{I}_3$ to be the
independent sets of the uniform rank $3$ matroid on the
4-element subset $\{ a, b, c, d \}$. Now let $\mathcal{I}$ be the
downward closure of $\mathcal{I}_3 \cup \{ \{ a, c, e\},
\{b, d, e\} \}$. We claim $\mathcal{S} = (U, \mathcal{I})$ violates the above
proposition.
\begin{itemize}
\item $(U, \mathcal{I})$ is {\em not} a matroid: This is easy to
see as $I = \{ a, c, e \}, J = \{d, e\}$ violate the
exchange property.
\item Nevertheless, for every pair $A, B$ maximal sets in
$\mathcal{I}$ and $y \in B$, there is some $x \in A$ such that $A
\backslash \{x\} \cup \{y\} \in \mathcal{I}$.
First, notice that it suffices to show this for all $y \in B
\backslash A$. Otherwise, if $y \in A \cap B$ then set $x = y$
in which case $A \backslash \{x\} \cup \{y\} = A \in \mathcal{I}$.
A second observation is that if both $A, B$ are contained in
$\{a, b, c, d\}$ then the statements holds; after all the
forward direction of the proposition holds and the
restriction of $\mathcal{S}$ on that 4-element subset {\em
is} a matroid.
Finally, notice that $\mathcal{S}$ is symmetric under a
permutation that swaps the roles of $a \leftrightarrow b$
and $c \leftrightarrow d$.
The following table summarizes a case analysis on the choice
of $A, B, y$ and provides a choice of $x$ for each that satisfy
the aforementioned property. The cases that are missing are
equivalent to one of the cases in the table under symmetry.
\begin{equation*}
\begin{array}{|c|c|c||c|}
\hline
A & B & y & x\\
\hline
\{a, c, e\} & \{b, d, e\} & b & e\\
\hline
\{a, c, e\} & \{a, c, d\} & d & e\\
\hline
\{a, c, e\} & \{b, c, d\} & b & e\\
\hline
\{a, c, d\} & \{a, c, e\} & e & d\\
\hline
\{b, c, d\} & \{a, c, e\} & a & b\\
\hline
\{b, c, d\} & \{a, c, e\} & e & c\\
\hline
\end{array}
\end{equation*}
\end{itemize}
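The case analysis above lends itself to exhaustive verification. The
following is a minimal Python sketch (our illustration, not part of the
original argument) that encodes the 5-element system and checks both
bullet points by brute force:

```python
from itertools import combinations

# Brute-force check of the counterexample: U = {a,b,c,d,e} and I is the
# downward closure of (all <=3-subsets of {a,b,c,d}) plus {a,c,e}, {b,d,e}.
gens = [set(s) for s in combinations('abcd', 3)] + [set('ace'), set('bde')]
I = {frozenset(t) for g in gens for r in range(len(g) + 1)
     for t in combinations(sorted(g), r)}          # downward-closed closure

def is_matroid(fam):
    """Exchange axiom on a downward-closed family."""
    return all(any((B | {x}) in fam for x in A - B)
               for A in fam for B in fam if len(A) > len(B))

maximal = [A for A in I if not any(A < B for B in I)]
exchange_on_maximal = all(
    any(((A - {x}) | {y}) in I for x in A)
    for A in maximal for B in maximal for y in B)

print(is_matroid(I))         # False: (U, I) violates the exchange axiom
print(exchange_on_maximal)   # True: yet the maximal-set property holds
```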
We now turn to providing a proof of the ``only if''
direction of Theorem 4.1 in \cite{drs09}.
We use the notation ${\mathbbm{1}}[S]$, for any subset $S \subseteq
U$ of bidders, to mean the bidding profile where every bidder
in $S$ bids $1$ and every bidder in $U \backslash S$ bids
$0$.
\begin{theorem}[{``only if'' direction of \cite[Theorem
4.1]{drs09} }]
\label{thm:corrected}
Let $(U, \mathcal{I})$ be a downward-closed set system that is not
a matroid. Then there exist a set $V \subseteq U$ and an
element $x \in V$ such that the revenue of VCG on bid
profile ${\mathbbm{1}}[V-x]$ exceeds the revenue of VCG on bid
profile ${\mathbbm{1}}[V]$.
\end{theorem}
\begin{proof}
By \Cref{lem:nonmatroid}, there exist $I, J \in \mathcal{I}$ with
properties
(\ref{lem:nonmatroid:propK})-(\ref{lem:nonmatroid:propMaximum}).
Let $V = I \cup J$ and let $x$ be an arbitrary element of $I
\backslash J$. We will prove that the revenue of VCG on
${\mathbbm{1}}[V]$ is less than the revenue of VCG on ${\mathbbm{1}}[V -
x]$.
Consider the VCG payment of every bidder
in the bid profile ${\mathbbm{1}}[V]$. Since $I$ is a
maximum cardinality element of $\mathcal{I}|_V$
(\Cref{lem:nonmatroid}, property
(\ref{lem:nonmatroid:propMaximum})),
we may choose $I$ as the set of winners. Let
$W$ denote the intersection of all elements
of $\mathcal{I}|_V$ that have cardinality $|I|$.
By property (\ref{lem:nonmatroid:propK}) of
\Cref{lem:nonmatroid}, $I-J \subseteq W$.
Every element of $W$ pays zero, because
for $y \in W$ the maximum cardinality
elements of $\mathcal{I}|_{V-y}$ have size $|I|-1$,
hence $y$ could bid zero and still belong to
a winning set. On the other hand, every element
$y \in I-W$ pays 1, because by the definition
of $W$ there is a set $K \in \mathcal{I}|_V$ such
that $|K|=|I|$ but $y \not\in K$. If $y$
lowers its bid below 1, then $K$ rather
than $I$ would be selected as the set of
winners, hence $y$ must pay 1 in the VCG
mechanism. Finally, bidders not in $I$ pay
zero because they are not winners. The VCG
revenue on bid profile ${\mathbbm{1}}[V]$ is therefore
$|I\backslash W|$.
Now recall that $x$ denotes an arbitrary element
of $I\backslash J$, and consider the VCG payment of every
bidder in the bid profile ${\mathbbm{1}}[V-x]$. Since
$V-x$ is a proper subset of $V$,
$(V-x,\mathcal{I}|_{V-x})$ is a matroid.
The rank of this matroid is $|I|-1$,
since \Cref{lem:nonmatroid}, property
(\ref{lem:nonmatroid:propK}) implies
that $\mathcal{I}|_{V-x}$ contains no sets
of size $|I|$. We may assume that $I-x$
is chosen as the set of winners of VCG.
Let $J'$ denote a superset of $J$ that
is a basis of $(V-x,\mathcal{I}|_{V-x})$.
If $y$ is an element of $(I\backslash J')-x$, the set
$I-x-y$ has strictly fewer elements
than $J'$ so the exchange
axiom implies there is some $z \in J'$
such that $I-x-y+z \in \mathcal{I}$.
This set $I-x-y+z$ is a basis of
$(V-x,\mathcal{I}|_{V-x})$ that does not
contain $y$, hence the VCG payment of any
$y \in (I\backslash J')-x$ is $1$. Now consider
any $y \in I\backslash W$. By the definition
of $W$, there is a set
$K \in \mathcal{I}|_V$ such
that $|K|=|I|$ but $y \not\in K$.
Then $K-x$ is a basis of
$(V-x,\mathcal{I}|_{V-x})$ but
$y \not\in K$, implying that
$y$'s VCG payment is 1.
We have shown that in the bid profile
${\mathbbm{1}}[V]$, the bidders in $I\backslash W$ pay $1$
and all other bidders pay zero, whereas
in the bid profile ${\mathbbm{1}}[V-x]$, the
bidders in $I\backslash W$ still pay $1$ and, in
addition, the bidders in $(I\backslash J')-x$ pay $1$.
Furthermore the set $(I\backslash J')-x$ is non-empty.
To see this, observe that $|J'| = |I|-1 = |I-x|$,
but $J' \neq I-x$
because then $J$ would be a subset of $I$,
contrary to our assumption that $I,J$
satisfy \Cref{lem:nonmatroid}, property
(\ref{lem:nonmatroid:propNonEmpty}). Hence,
$I-x$ contains at least one element
that does not belong to $J'$, meaning
$(I \backslash J') -x$ is nonempty. We have thus
proven that the VCG revenue
of ${\mathbbm{1}}[V-x]$ exceeds the VCG revenue of
${\mathbbm{1}}[V]$ by at least $|I-J'-x|$, which is
at least $1$.
\end{proof}
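To make the failure of VCG revenue monotonicity concrete, here is a minimal
Python sketch, assuming the small non-matroid system with maximal feasible
sets $\{a,b\}$ and $\{c\}$ (so $V = \{a,b,c\}$ and $x = b$ witness the
theorem); the helper names are ours:

```python
from itertools import combinations

# Downward-closed family on {a,b,c} given by its maximal feasible sets.
maximal = [frozenset('ab'), frozenset('c')]

def rank(avail):
    """Size of the largest feasible set using only bidders in `avail`."""
    return max(len(m & avail) for m in maximal)

def vcg_revenue(avail):
    """VCG revenue on the bid profile 1[avail] (every bidder bids 1).

    With unit bids, a winner i pays rank(avail - {i}) - (|W| - 1): the
    welfare the others could obtain without i, minus the welfare they
    obtain alongside i.
    """
    k = rank(avail)
    winners = next(frozenset(w) for w in combinations(sorted(avail), k)
                   if any(frozenset(w) <= m for m in maximal))
    return sum(rank(avail - {i}) - (k - 1) for i in winners)

V = frozenset('abc')
print(vcg_revenue(V))          # 0: {a,b} win and each pays 0
print(vcg_revenue(V - {'b'}))  # 1: removing b *raises* the VCG revenue
```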
\section{Introduction}
In a seminal paper nearly forty years ago~\cite{myerson81},
Roger Myerson derived a beautifully precise characterization
of optimal (i.e., revenue maximizing) mechanisms for Bayesian
single-parameter environments. One way this result has been
critiqued over the years is by noting that auctioneers may
have incorrect beliefs about bidders' values; if so, the
mechanism recommended by the theory will actually be
suboptimal.
In this paper we evaluate this critique by examining revenue
guarantees for optimal mechanisms when a subset of bidders'
value distributions are misspecified, but the auctioneer
doesn't know which of the distributions are incorrect. Our
model is inspired by the literature on {\em semi-random
adversaries} in the theoretical computer science literature,
particularly the work of Bradac et al.~\cite{BGSZ19} on
robust algorithms for the secretary problem. In the model
we investigate here, the auctioneer is given (not
necessarily identical) distributions for each of $n$
bidders. An unknown subset of the bidders, called the {\em
green bidders}, draw their values independently at random
from these distributions. The other bidders, called the
{\em red bidders}, draw their values from distributions
other than the given ones.
The question we ask in this paper is, ``When can one
guarantee that the expected revenue of the optimal mechanism
for the given distributions is at least as great as the
expected revenue that would be obtained by excluding the red
bidders and running an optimal mechanism on the green subset
of bidders?'' In other words, can the presence of bidders
with misspecified distributions in a market be worse (for
the auctioneer's expected revenue) than if those bidders
were absent? Or does the increased competition from
incorporating the red bidders always offset the revenue loss
due to ascribing the wrong distribution to them?
We give a precise answer to this question, for
single-parameter feasibility environments. We show that the
answer depends on the structure of the feasibility
constraint that defines which sets of bidders may win the
auction. For matroid feasibility constraints, the revenue of
the optimal mechanism is always greater than or equal to the
revenue obtained by running the optimal mechanism on the set
of green bidders. For any feasibility constraint that is not
a matroid, the opposite holds true: there is a way of
setting the specified distribution and the true
distributions such that the revenue of the optimal mechanism
for the specified distributions, when bids are drawn from
the true distributions, is {\em strictly less} than the
revenue of the optimal mechanism on the green bidders only.
The economic intuition behind this result is fairly easy to
explain. The matroid property guarantees that the winning
red bidders in the auction can be put in one-to-one
correspondence with losing green bidders who would have won
in the absence of their red competitors, in such a way that
the revenue collected from each winning red bidder offsets
the lost revenue from the corresponding green bidder whom he
or she displaces. When the feasibility constraint is not a
matroid, this one-to-one correspondence does not always
exist; a single green bidder might be displaced by two or
more red bidders each of whom pays almost nothing. The
optimal mechanism allows this to happen at some bid
profiles, because the low revenue received on such bid
profiles is compensated by the high expected revenue that
would be received if the red bidders had sampled values from
elsewhere in their distributions. However, since the red
bidders' distributions are misspecified, the anticipated
revenue from these more favorable bid profiles may never
materialize.
Our result can be interpreted as a type of revenue
monotonicity statement for optimal mechanisms in
single-parameter matroid environments. However it does not
follow from other known results on revenue monotonicity, and
it is illuminating to draw some points of distinction
between our result and earlier ones. Let us begin by
distinguishing {\em pointwise} and {\em setwise} revenue
monotonicity results: the former concern how the revenue
earned on individual bid profiles varies as the bids are
increased, the latter concern how (expected) revenue varies
as the set of bidders is enlarged.
\begin{itemize}
\item VCG mechanisms are neither pointwise
nor setwise revenue monotone in general,
but in single-parameter matroid feasibility
environments, VCG revenue satisfies
both pointwise and setwise monotonicity. In fact,
Dughmi, Roughgarden, and Soundararajan~\cite{drs09}
observed that VCG revenue obeys setwise
monotonicity {\em if and only if} the
feasibility constraint is a matroid.
The proof of this result in~\cite{drs09}
rests on a slightly erroneous characterization
of matroids, and one (small) contribution of
our work is to correct this minor error by
substituting a valid characterization
of matroids, namely \Cref{lem:nonmatroid}
below.
\item Myerson's optimal mechanism is not
pointwise revenue monotone, even for single-item
auctions. For example, consider using Myerson's optimal
mechanism to sell a single item
to Alice whose value is uniformly distributed
in $[0,4]$ and Bob whose value is uniformly
distributed in $[0,8]$. When Alice
bids 0 and Bob bids 5, Bob wins and pays 4.
If Alice increases her bid to 4, she wins but pays
only 3.
\item However, Myerson's optimal mechanism is
{\em always} setwise revenue monotone in
single-parameter environments with downward-closed
feasibility constraints, regardless of
whether the feasibility constraint is a matroid.
This is because the mechanism's expected revenue
is equal to the expectation of the maximum, over
all feasible sets of winners, of the winners'
combined ironed virtual value. Enlarging the
set of bidders only enlarges the collection
of sets over which this maximization is performed,
hence it cannot decrease the expectation of the
maximum.
\end{itemize}
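The single-item example in the second bullet can be checked mechanically.
A minimal Python sketch (our illustration), using the standard virtual-value
form $\phi(v) = 2v - h$ for a value uniform on $[0,h]$:

```python
# Alice ~ U[0,4], Bob ~ U[0,8]. For v ~ U[0,h] the virtual value is
# phi(v) = v - (h - v) = 2v - h, with inverse phi^{-1}(t) = (t + h) / 2.
def revenue(bids, highs):
    """Myerson single-item revenue at one fixed bid profile."""
    phi = [2 * b - h for b, h in zip(bids, highs)]
    if max(phi) < 0:
        return 0.0                     # every virtual value negative: no sale
    w = phi.index(max(phi))            # winner: highest virtual value
    threshold = max([p for i, p in enumerate(phi) if i != w] + [0.0])
    return (threshold + highs[w]) / 2  # winner pays phi_w^{-1}(threshold)

print(revenue([0, 5], [4, 8]))   # 4.0: Bob wins and pays 4
print(revenue([4, 5], [4, 8]))   # 3.0: Alice raises her bid; revenue drops to 3
```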
Our main result is analogous to the setwise revenue
monotonicity of Myerson revenue, except that we are
considering monotonicity with respect to the operation of
enlarging the set of bidders {\em by adding bidders whose
value distributions are potentially misspecified.} We show
that the behavior of Myerson revenue with respect to this
stricter notion of setwise revenue monotonicity holds under
matroid feasibility constraints {\em but not under any other
feasibility constraints}, in contrast to the traditional
setwise revenue monotonicity that is satisfied by Myerson's
mechanism under arbitrary downward-closed constraints.
\section{Revenue Monotonicity on Matroid Markets}
\label{sec:matroid-monotonicity}
We extend the standard single-parameter environment to allow
for bidders with misspecified distributions. Formally, the
$n$ bidders are partitioned into sets
$G$ and $R$; the former are
called {\em green} and the latter {\em red}.
The color of
each bidder (green or red) is not revealed to the mechanism designer at
any point. Green bidders sample their values from their
respective distributions $F_i$, while red bidders sample
$v_i \sim F_i'$ for some $\{F_i'\}_{i \in R}$
which are completely unknown to the mechanism designer
and can be adversarially chosen.
In this section we are interested in studying the
behavior of Myerson's optimal mechanism when designed under
the (wrong) assumption that $v_i \sim F_i$ for all $i \in
[n]$. Specifically, we ask the question of whether the
existence of the red bidders could harm the expected revenue
of the seller compared to the case where the seller was able
to identify and exclude the red bidders, thus designing the
optimal mechanism for the green bidders alone. The following
definition makes this notion of revenue monotonicity more
precise.
\begin{definition}[RMMB]
\label{def:rmmb}
Consider a single-parameter, downward-closed market $\mathcal{M} =
(E, \mathcal{I})$ of $|E| = n$ bidders.
A mechanism $\mathcal{A}$ is {\em Revenue Monotone under
Misspecified Bidders (RMMB)} if for any
distributions $F_1, \ldots, F_n$, any number $1 \leq k \leq n$ of
green bidders and any fixed misspecified bids $\vec{b}'_R \in \mathbb{R}^R$
of the red bidders:
\begin{equation}
\label{ineq:rmmb}
\Exp{ \text{Rev}(
\mathcal{A}(b_G, b_R) ) } \geq
\Exp{ \text{Rev}( \mathcal{A}(b_G)) }
\end{equation}
\noindent
where both expectations are taken over $b_G \sim
\prod_{i \in G} F_i$.
\end{definition}
An alternative definition of the revenue monotonicity property
allows red bidders to have stochastic valuations drawn from
distributions $F_i' \neq F_i$ instead of fixed bids. We note
that the two definitions are equivalent: if $\mathcal{A}$ is
RMMB according to \Cref{def:rmmb} then inequality
(\ref{ineq:rmmb}) holds pointwise for any fixed misspecified
bids and thus would also hold in expectation. For the other
direction, if inequality (\ref{ineq:rmmb}) holds in
expectation over the red bids, regardless of the choice
of distributions $\{F_i' \mid i \in R\}$ then we may specialize
to the case when each $F_i'$ is a point-mass distribution with a
single support point $b_i$ for each $i \in R$, and then
\Cref{def:rmmb} follows.
In what follows we assume bidders always submit bids that
fall within the support of their respective distribution.
Green bidders obviously follow this rule, and red bidders
should as well; otherwise the mechanism could recognize
that they are red and simply ignore them.
Consider first the simpler case of selling a single item.
This corresponds to a uniform rank $1$ matroid market.
Intuitively when the item is allocated to a green bidder,
the existence of the red bidders is not problematic and in
fact could help increase the critical bid and thus the
payment of the winner. On the other hand, when a red bidder
wins, one has to prove that they are not charged too little,
which would risk bringing the expected revenue down.
Let $m = \max( \max_{i \in G} \phi_i(b_i), 0 )$ be the
random variable denoting the highest non-negative virtual
value among the green bidders. Let $X$ be the indicator
random variable which is $1$ if the winner belongs to $G$,
and let $Y$ be the indicator random variable which is $1$
if the winner belongs to $R$. For the mechanism MyerOPT we have:
\begin{align}
\label{sec23:eq1}
\Exp{ \text{revenue from green bidders} } &= \Exp{m \cdot X}\\
\label{sec23:eq2}
\Exp{ \text{revenue from red bidders} } &\geq \Exp{m \cdot Y}
\end{align}
\noindent
where (\ref{sec23:eq1}) follows from Myerson's lemma and
(\ref{sec23:eq2}) follows from the observation that the
winner of the optimal auction never pays less than the
second-highest virtual value. To see why the latter holds,
let $\phi_s$ be the second-highest virtual value, let $r$
be the red winner, and let $g$ be the green bidder with the
highest virtual value.
The critical bid of the red winner is at least
$\phi_r^{-1} (\phi_s)
\geq \phi_r^{-1} (\phi_g(b_g))
\geq \phi_g(b_g)$ where we applied the fact that $x
\geq \phi(x)$ to the virtual value function
$\phi = \phi_r$ and the value $x = \phi_r^{-1} (\phi_g(b_g))$.
Summing (\ref{sec23:eq1}) and (\ref{sec23:eq2}) and
using the fact that $X + Y = 1$ whenever $m > 0$, we find:
\begin{align*}
\Exp{\text{Revenue from all bidders } 1, \dots, n}
&\geq \Exp{m \cdot X} + \Exp{m \cdot Y}\\
&= \Exp{m}\\
&= \Exp{\text{revenue of MyerOPT on } G}
\end{align*}
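The single-item argument above can also be sanity-checked numerically. The
following is a minimal Monte Carlo sketch, assuming one green bidder with
value drawn from $U[0,1]$ and one red bidder whose specified distribution is
$F(x) = 1-(1+x)^{-2}$ but who submits an arbitrary fixed bid; these
distributions are our illustrative choices, not part of the proof:

```python
import random

random.seed(0)

# Green bidder: v ~ U[0,1], so phi_g(v) = 2v - 1. Red bidder's *specified*
# prior is F(x) = 1 - (1+x)^{-2}, so phi_r(x) = (x - 1) / 2, but the red
# bidder actually submits a fixed bid b_red.
def phi_g(v): return 2 * v - 1
def phi_r(x): return (x - 1) / 2

def rev_with_red(v, b_red):
    """Myerson revenue on {green, red} at green value v and fixed red bid."""
    if max(phi_g(v), phi_r(b_red)) < 0:
        return 0.0                          # no non-negative virtual value
    if phi_g(v) >= phi_r(b_red):            # green wins, pays phi_g^{-1}(...)
        return (max(phi_r(b_red), 0.0) + 1) / 2
    return 2 * max(phi_g(v), 0.0) + 1       # red wins, pays phi_r^{-1}(...)

def rev_green_only(v):
    return 0.5 if v >= 0.5 else 0.0         # posted price phi_g^{-1}(0) = 1/2

samples = [random.random() for _ in range(100_000)]
green_only = sum(map(rev_green_only, samples)) / len(samples)
ok = all(
    sum(rev_with_red(v, b) for v in samples) / len(samples) >= green_only
    for b in (0.0, 0.5, 2.0, 10.0))         # a few adversarial fixed red bids
print(ok)   # True: the red bidder's presence never lowers expected revenue
```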
We conclude that Myerson's optimal mechanism is RMMB in the
single-item case. We are now ready to generalize the above
idea to any matroid market.
\begin{comment}
\begin{definition}
Let $F_a(x) = 1 - (1+x)^{-a}$ be the distribution on $[0,
+\infty)$ with $\phi(x) = ( (a-1)x - 1 ) / a$.
\end{definition}
\begin{definition}
MyerOPT Mechanism on a set system $\mathcal{M}$: Collect bids
$b_i$, compute $\phi(b_i)$ and allocate to the bidders in an
optimal basis of $\mathcal{M}$ according to weights $\phi_i(b_i)$.
Charge Myerson payment, i.e.~$p_e = \max_{C: e \in C}
\min_{f \in C} b_f$.
\end{definition}
\end{comment}
\begin{theorem}
\label{thm:rmmb-matroid}
Let $\mathcal{M} = (E, \mathcal{I})$ be any matroid market. Then
$\text{MyerOPT}$ in $\mathcal{M}$ is RMMB.
\end{theorem}
\begin{proof}
Call $G$ the set of green bidders and $R$
the set of red bidders.
Let $(x,p)$ denote the allocation and payment
rules for the mechanism $\text{MyerOPT}$ that runs
Myerson's optimal mechanism on all $n$ bidders,
using the given distribution of each.
Let $(x',p')$ denote the allocation and payment
rules for the mechanism $\text{MyerOPT}_G$ that runs
Myerson's optimal mechanism in the bidder set $G$ only.
For a set $S
\subseteq [n]$, let $T_S$ be the random variable
denoting the independent subset of $S$ that maximizes
the sum of ironed virtual values. In other words,
$T_S$ is the set of winners chosen by Myerson's
optimal mechanism on bidder set $S$.
By Myerson's Lemma, the revenue of
$\text{MyerOPT}_G$ satisfies:
\begin{equation}
\label{eq0}
\Exp{\sum_{i \in G} p'_i(\vec b)} =
\Exp{ \sum_{i \in G} x'_i(\vec b) \cdot \phi_i(b_i) }
\end{equation}
By linearity
of expectation, we can
break up the expected revenue of
$\text{MyerOPT}$ into two terms as follows:
\begin{equation}
\label{eq1}
\Exp{ \sum_{i \in [n]} p_i(\vec b) } = \Exp{ \sum_{i \in G} p_i(\vec b) } +
\Exp{ \sum_{i \in R} p_i(\vec b) }
\end{equation}
The first term on the right side of~\eqref{eq1}
expresses the revenue originating from the green bidders.
Using Myerson's Lemma, we can equate this revenue
with the expectation of the
green winners' combined virtual value:
\begin{equation}
\label{eq2}
\Exp{ \sum_{i \in G} p_i(\vec b) } =
\Exp{ \sum_{i \in G} x_i(\vec b) \cdot \phi_i(b_i) } .
\end{equation}
To express the revenue coming from the red bidders in terms
of virtual valuations, we provide the argument that follows.
One way to derive $T_{G+R}$ from $T_G$ is to start with
$T_G$ and sequentially add the elements of $T_{G+R} \cap R$
in arbitrary order, removing at each step the least-weight
element of the circuit that may form (by repeated
application of \Cref{prop:matroid-update}). Let $e$ be the
red element added at some step. If no circuit forms after
the addition, then $e$ pays the smallest value in its
support, which is a non-negative quantity. Otherwise, let $C$ be the
unique circuit that forms after that addition. Let $f$ be the
minimum weight element in $C$ and let $b_f$ be the associated
bid made by player $f$. Notice that $f$ must be green;
by assumption, every red element we're adding is part of the
eventual optimal solution so it cannot be removed at any
stage of this process.
The price charged to $e$ is its critical bid, which we
claim is at least $\phi_e^{-1}(\phi_f(b_f))$. The reason is
that $e$ is part of circuit $C$ and $f$ is the min-weight
element of that circuit. The min-weight element of a circuit
is never in the max-weight independent set\footnote{This is a
consequence of the optimality of the {\sc Greedy} algorithm
since the min-weight element of a circuit is the last to be
considered among the elements of the circuit and its
inclusion will violate independence.} so if bidder $e$ bids
any value $v$ such that $\phi_e(v) < \phi_f(b_f)$ they will
certainly {\em not} be included in the set of winners, $T_{G+R}$.
By \Cref{prop:phi-inequality} it follows that $\phi_e^{-1}(
\phi_f(b_f) ) \geq \phi_f(b_f)$ thus $p_e(\vec b) \geq
\phi_f(b_f)$.
The above reasoning allows us to ``charge'' each red bidder's
payment to the virtual value of a green bidder in $T_G \setminus
T_{G+R}$:
\begin{align} \nonumber
\Exp{ \sum_{i \in R} p_i(\vec b) }
& \geq \Exp{ \sum_{i \in T_G \setminus T_{G+R}} \phi_i(b_i)} \\
& = \Exp{ \sum_{i \in G} (x'_i(\vec b) - x_i(\vec b)) \cdot \phi_i(b_i) }
\label{eq3}
\end{align}
The second line is justified by observing that for $i \in G$,
$x'_i(\vec b) = x_i(\vec b)$ unless $i \in T_G \setminus T_{G+R}$,
in which case $x'_i(\vec b) - x_i(\vec b) = 1$.
Combining Equations (\ref{eq0})-(\ref{eq3}) we get:
\begin{align*}
\Exp{ \sum_{i \in [n]} p_i(\vec b) } & \geq
\Exp{ \sum_{ i \in G } x_i(\vec b) \cdot \phi_i (b_i) }
+ \Exp{ \sum_{i \in G} (x'_i(\vec b) - x_i(\vec b)) \cdot \phi_i(b_i)} \\
& = \Exp{ \sum_{i \in G } x'_i(\vec b) \cdot \phi_i (b_i) }
= \Exp{ \sum_{i \in G} p'_i(\vec b) }
\end{align*}
In other words, the expected revenue of $\text{MyerOPT}$
is greater than or equal to that of $\text{MyerOPT}_G$.
\end{proof}
\section{General Downward-Closed Markets}
When the market is not a matroid, the existence of red
bidders can severely harm the revenue of the mechanism, as
the following simple example shows.
\begin{example}
\label{ex:nonmatroid}
Consider a 3-element downward-closed set system on
$E = \{a, b, c\}$ with maximal feasible sets: $\{a, b\}$ and $\{c\}$.
Let $c$ be a green bidder with a deterministic value of $1$
and $a, b$ be red bidders each with a specified value distribution
given by the following cumulative distribution function $F(x) = 1 - (1
+ x)^{1-N}$ for some parameter $N$.
Note that the associated virtual value function is:
\[
\phi(x) = x - \frac{1-F(x)}{f(x)}
= x - \frac{(1+x)^{1-N}}{(N-1) (1+x)^{-N}}
= x - \tfrac{1+x}{N-1} = \left( 1 - \tfrac{1}{N-1} \right) x
- \tfrac{1}{N-1} .
\]
For this virtual value function we have
$\phi^{-1}(0) = \frac{1}{N-2}$,
$\phi^{-1}(1) = \frac{N}{N-2}$.
Consider the revenue of Myerson's mechanism when
the red bidders, instead of following their specified
distributions, each bid $\phi^{-1}(1)$ --- and
the green bidder bids $1$, the only support point of their
distribution. The set $\{a, b\}$ wins over $\{c\}$, since the
former has total virtual value $2$ while the latter has
virtual value $1$, so bidders $a, b$ pay their critical bids.
To compute these, notice that each of the bidders $a, b$
could unilaterally decrease their bid to any $\varepsilon >
\tfrac{1}{N-2}$ and would still win the auction, since
the set $\{a, b\}$ would still have total virtual value greater
than $1$. Therefore, each of $a, b$ pays
$\tfrac{1}{N-2}$, for a total revenue of $\tfrac{2}{N-2}$.
On the other hand, the same mechanism when run on the set
$\{c\}$ of only the green bidder, always allocates an item
to $c$ and collects a total revenue of $1$.
Letting $N \rightarrow \infty$ we see that the former
revenue tends to zero while the latter remains $1$,
violating the revenue monotonicity property of
\Cref{def:rmmb} by an unbounded multiplicative factor.
\end{example}
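The computations in the example can be reproduced directly; the following
minimal Python sketch (our illustration) evaluates both revenues as $N$
grows:

```python
# Revenues in the example above for a given parameter N, using the
# virtual value phi(x) = (1 - 1/(N-1)) x - 1/(N-1) and its inverse.
def phi(x, N):     return (1 - 1 / (N - 1)) * x - 1 / (N - 1)
def phi_inv(y, N): return ((N - 1) * y + 1) / (N - 2)

def revenues(N):
    # Red bidders a, b each bid phi^{-1}(1); green bidder c has value 1.
    # {a,b} (total virtual value 2) beats {c} (virtual value 1), and either
    # red bidder could lower its bid while keeping the pair's total virtual
    # value above 1, so each pays the critical bid phi^{-1}(0) = 1/(N-2).
    misspecified = 2 * phi_inv(0, N)    # revenue with red bidders present
    green_only = 1.0                    # posted price 1 to bidder c
    return misspecified, green_only

for N in (4, 10, 100, 1000):
    m, g = revenues(N)
    print(N, round(m, 4), g)   # 2/(N-2) tends to 0; green-only stays 1
```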
To generalize the above idea to any non-matroid set system
we need the following lemma.
\begin{lemma}
\label{lem:nonmatroid}
A downward-closed set system $\mathcal{S} = (E, \mathcal{I})$ is {\em not}
a matroid if and only if there exist $I, J \in \mathcal{I}$ with the following
properties:
\begin{enumerate}
\item \label{lem:nonmatroid:propK}
For every $K \in \mathcal{I}|_{I \cup J}$, if $|K| \geq |I|$
then $K \supseteq I \backslash J$.
\item \label{lem:nonmatroid:propNonEmpty}
$|J \backslash I| \geq 1$.
\item \label{lem:nonmatroid:propMaximum}
$I$ is a maximum cardinality element of $\mathcal{I}|_{I \cup J}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For the forward direction, suppose $\mathcal{S}$ is {\em
not} a matroid and let $V$ be a minimum-cardinality subset
of $E$ such that $(V, \mathcal{I}|_V)$ is not a matroid. Since $\mathcal{I}|_V$ is
downward-closed and non-empty, it must violate the exchange
axiom. Hence, there exist sets $I,J \in \mathcal{I}|_V$ such that
$|I| > |J|$ but $J+x \not\in \mathcal{I}$ for all $x \in I
\backslash J$. Note
that $V = I \cup J$, since otherwise $I \cup J$ is a
strictly smaller subset of $E$ satisfying the property that
$(I \cup J, \mathcal{I}|_{I \cup J})$ is not a matroid.
Observe that $J$ is a maximal element of $\mathcal{I}|_V$. The
reason is that $V = I \cup J$, so every element of $V
\backslash J$ belongs to $I$. By our assumption on the pair
$I,J$, there is no element $y \in I$ such that $J + y \in
\mathcal{I}|_V$. Since $\mathcal{I}|_V$ is downward-closed, it follows that
no strict superset of $J$ belongs to $\mathcal{I}|_V$.
We now proceed to prove that $I, J$ satisfy the required
properties of the lemma:
\begin{enumerate}
\item[(\ref{lem:nonmatroid:propK})]
Let $K \in \mathcal{I}|_V$ with $|K| \geq |I|$. It follows that
$|K| > |J|$, but $J$ is maximal in $\mathcal{I}|_V$, so $K$ and $J$
must violate the exchange axiom. Thus, $\mathcal{I}|_{K \cup J}$ is
not a matroid. By the minimality of $V$, this implies $K
\cup J = V$ hence $K \supseteq I \backslash J$.
\item[(\ref{lem:nonmatroid:propNonEmpty})]
If $J \backslash I = \emptyset$ then $J \subseteq I$ which
contradicts the fact that $I, J$ violate the exchange axiom.
\item[(\ref{lem:nonmatroid:propMaximum})]
Suppose there exists $I' \in \mathcal{I}|_V$ with $|I'| > |I|$, then
by property (\ref{lem:nonmatroid:propK}) we have $I'
\supseteq I \backslash J$. Remove elements of $I \backslash
J$ from $I'$ one by one, in arbitrary order, until we reach
a set $K \in \mathcal{I}|_V$ such that $|K| = |I|$. This is possible
because after the entire set $I \backslash J$ is removed
from $I'$, what remains is a subset of $J$, hence has
strictly fewer elements than $I$. The set $K$ thus
constructed has $|K| = |I|$ but $K \not\supseteq I
\backslash J$, violating property
(\ref{lem:nonmatroid:propK}).
\end{enumerate}
For the ``if'' direction, suppose that $\mathcal{S}$
is a matroid; we must show that no $I,J \in \mathcal{I}$ satisfy
all three properties. To this end, suppose $I$ and $J$
satisfy (\ref{lem:nonmatroid:propNonEmpty}) and
(\ref{lem:nonmatroid:propMaximum}). Since $\mathcal{S}|_{I
\cup J}$ is a matroid, there exists $K \supseteq J$ such
that $K \in \mathcal{I}|_{I \cup J}$ and $|K| = |I|$. By property
(\ref{lem:nonmatroid:propNonEmpty}), we know that no
$|I|$-element superset of $J$ contains $I-J$ as a subset.
Therefore, the set $K$ violates property
(\ref{lem:nonmatroid:propK}).
\end{proof}
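On small systems the witnesses $I, J$ of the lemma can be found by brute
force. A minimal Python sketch (our illustration), assuming the toy
non-matroid family with maximal feasible sets $\{a,b\}$ and $\{c\}$:

```python
from itertools import combinations

# Brute-force search for a pair I, J satisfying the lemma's three
# properties in the family whose maximal feasible sets are {a,b} and {c}.
maximal = [frozenset('ab'), frozenset('c')]
fam = {frozenset(t) for m in maximal for r in range(len(m) + 1)
       for t in combinations(sorted(m), r)}        # downward-closed family

def witnesses():
    for I in fam:
        for J in fam:
            R = {S for S in fam if S <= I | J}     # restriction to I union J
            prop1 = all(K >= I - J for K in R if len(K) >= len(I))
            prop2 = len(J - I) >= 1
            prop3 = all(len(K) <= len(I) for K in R)
            if prop1 and prop2 and prop3:
                return sorted(I), sorted(J)

print(witnesses())   # (['a', 'b'], ['c']): I = {a,b}, J = {c}
```

For this family the pair $I = \{a,b\}$, $J = \{c\}$ is in fact the only
witness, matching the choice used in \Cref{ex:nonmatroid}.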
We are now ready to generalize \Cref{ex:nonmatroid} to every non-matroid set
system.
\begin{theorem}
\label{thm:rmmb-non-matroid}
For any $\mathcal{M} = (E, \mathcal{I})$ which is {\em not} a matroid,
MyerOPT is {\em not} RMMB.
\end{theorem}
\begin{proof}
Consider a downward-closed $\mathcal{M} = (E, \mathcal{I})$ which is {\em
not} a matroid. We are going to show there exists a
partition of players into green and red sets and a choice of
valuation distributions and misspecified red bids such that the
RMMB property is violated.
Let $I, J \subseteq E$ be the subsets whose existence is guaranteed
by \Cref{lem:nonmatroid}. Define $G = J$ to be the set of green bidders,
$R = I \backslash J$ to be the set of red bidders. All other bidders are irrelevant
and can be assumed to be bidding zero.
Set the value of each green bidder to be deterministically
equal to 1. For each red bidder $r$,
the specified value distribution has
the same cumulative distribution function
$F(x) = 1 - (1+x)^{1-N}$ defined in \Cref{ex:nonmatroid}.
Now consider the expected revenue of Myerson's
mechanism when every bidder in $R$ bids
$\phi^{-1}(1)$.\footnote{Members of $G$ bid as well, but it
hardly matters, because their bid is always 1 --- the only
support point of their value distribution --- so the
auctioneer knows their value without their having to submit
a bid.} Every bidder's virtual value is 1, so the mechanism
will choose any set of winners with maximum cardinality
which, according to \Cref{lem:nonmatroid}, property
(\ref{lem:nonmatroid:propMaximum}), is $|I|$.
For example, the set of winners could be $I$\footnote{In
general, the mechanism might choose any set $W$ of winners
such that $|W| = |I|$. The way to handle this case is
similar to the one used in the proof of \Cref{thm:corrected}
in the Appendix.}.
A consequence of \Cref{lem:nonmatroid}, property
(\ref{lem:nonmatroid:propK}), is that for every red bidder
$r$ there is no feasible set of bidders excluding $r$ with
combined virtual value greater than $|I|-1$. Hence $r$ could
lower its bid to any value above $\phi^{-1}(0)$ and still
win, so each red bidder pays $\phi^{-1}(0)$.
Elements of $I \cap J$ correspond to green bidders who win
the auction and pay $1$, since $1$ is the only point in the
support of a green bidder's distribution. There are $|I \cap J|$
such bidders. Thus, the Myerson revenue is
$|I \cap J| + \frac{1}{N-2} |I \backslash J|$. The optimal
auction on the green bidders alone charges each
of these bidders a price of 1, receiving revenue
$|J| = |I \cap J| + |J \backslash I|$.
This exceeds $|I \cap J| + \frac{1}{N-2} |I \backslash J|$
as long as
\begin{equation} \label{eq:N-2}
(N - 2) \cdot |J\backslash I| > |I \backslash J| .
\end{equation}
This inequality is satisfied, for example,
when $N = |I \backslash J| + 3$, because $J \backslash I$ has
at least one element (\Cref{lem:nonmatroid}, property
(\ref{lem:nonmatroid:propNonEmpty})).
\end{proof}
\begin{comment}
, it's an adaptation of your 3-bidder example with using the characterization of the matroid.
By Proposition 2.9 of [Dughmi, Roughgarden, Sundararajan], since M is not a matroid, there must exist bases A, B and y \in B s.t. for all x \in A: A \ {x} \cup {y} \notin I (1). Let all bidders in A be red with distributions F_a(x) where a= |A| + 2, and bids b_R = (a+1) / (a-1) . Bidder y is green with deterministic value 1 and everyone else is green with deterministic value 0.
First notice that MyerOPT on the green bidders allocates to y and charges a posted price of 1.
In the presence of the red bidders, φ(b_R) = 1. Notice that A is the only maximum weight basis because of (1) so MyerOPT allocates to A charges each of the red bidders φ^{-1}(0) = 1 / (a-1) so total revenue is (a-2) / (a-1) < 1 .
\end{comment}
\begin{comment}
\begin{theorem}
If $(U,\mathcal{I})$ is a downward-closed
set system that is not a matroid, then
there exists a set $V \subseteq U$ and
an element $x \in V$ such that the
revenue of VCG on bid profile
${\mathbbm{1}}[V-x]$ exceeds the revenue
of VCG on bid profile ${\mathbbm{1}}[V]$.
\end{theorem}
\end{comment}
\begin{comment}
\begin{theorem} \label{thm:myer}
If $(U,\mathcal{I})$ is a downward-closed
set system that is not a matroid, then
there exists a partition of $U$ into
two sets $R,G$, and a specification of
a distribution $F_x$ for each $x \in U$,
and a bid profile $b_R = (b_r)_{r \in R}$,
such that if $b_G$ is a random sample from
the product distribution $\prod_{g \in G} F_g$,
the expected revenue of Myerson's
optimal mechanism on the random bid profile
$(b_R,b_G)$ is strictly less than its
expected revenue on the random bid profile
$(0,b_G)$.
\end{theorem}
\begin{proof}
The proof is an adaptation of the preceding
proof about the failure of revenue monotonicity
for VCG mechanisms. Let $V,I,J,W$ be defined as
in that proof. We will define the set
of green bidders to be $G = J$, and the
set of red bidders is $R = I-J$.
\end{proof}
\end{comment}
\section{Open Questions}
The previous section concluded with a proof that for any
non-matroid system, the ratio $r =
\frac{\Exp{\text{Rev}(G)}}{\Exp{\text{Rev}(G \cup R)}}$ for
Myerson's optimal mechanism can be greater than $1$. An
interesting question is whether that ratio can be made
arbitrarily large as in \Cref{ex:nonmatroid}. If the sets
$I, J$ in the above proof are such that $I \cap J =
\emptyset$, then the ratio can be made unbounded with the
same construction. We do not know whether another choice
of red/green bidders and their distributions can give an
unbounded ratio for all non-matroid systems.
A broader question our work leaves unanswered is whether it's
possible to design other mechanisms (potentially
non-truthful) that, in the presence of
red and green bidders in a non-matroid downward-closed market, can always
guarantee a {\em constant} approximation to Myerson's
revenue on the green bidders alone. For instance, in
\Cref{ex:nonmatroid}, one could possibly consider randomized
mechanisms that ignore a random bidder in the set $\{a,
b\}$ before running Myerson's auction.
\section{Preliminaries}
\subsection{Matroids}
\label{sec.matroids}
Given a finite ground set $E$ and a collection $\mathcal{I}
\subseteq 2^E$ of subsets of $E$ such that $\emptyset \in
\mathcal{I}$, we call $\mathcal{M} = (E, \mathcal{I})$ a {\em set system}. $\mathcal{M}$
is a {\em downward-closed} set system if $\mathcal{I}$ satisfies
the following property:
\begin{enumerate}
\item[(I1)] {\bf (downward-closed axiom)} If $B \in \mathcal{I}$
and $A \subseteq B$ then $A \in \mathcal{I}$.
\end{enumerate}
\noindent
Furthermore, $\mathcal{M}$ is called a {\em matroid} if it satisfies
both (I1) and (I2):
\begin{enumerate}
\item[(I2)] {\bf (exchange axiom)} If $A, B \in \mathcal{I}$ and
$|A| > |B|$ then there exists $x \in A \backslash B$ such
that $B + x \in \mathcal{I}$\footnote{We use the shorthand $B + x$
(resp.~$B - x$) to mean $B \cup \{x\}$ (resp.~$B \backslash
\{x\}$) throughout the paper.}.
\end{enumerate}
In the context of matroids, sets in (resp.~not in) $\mathcal{I}$
are called {\em independent} (resp.~{\em dependent}). An
(inclusion-wise) maximal independent set is called a {\em
basis}. A fundamental consequence of axioms (I1), (I2) is
that all bases of a matroid have equal cardinality and this
common quantity is called the {\em rank} of the matroid. A
{\em circuit} is a minimal dependent set. The set of all
circuits of a matroid will be denoted by $\mathcal{C}$. The
following is a standard property of $\mathcal{C}$.
\begin{proposition}[{\cite[Proposition 1.4.11]{oxley}}]
\label{prop:circuits}
For any $\mathcal{C}$ which is the circuit set of a matroid
$\mathcal{M}$, let $C_1, C_2 \in \mathcal{C}, e \in C_1 \cap C_2$
and $f \in C_1 \backslash C_2$. Then there exists $C_3 \in
\mathcal{C}$ such that $f \in C_3 \subseteq (C_1 \cup C_2) -
e$.
\end{proposition}
For any set system $\mathcal{M} = (E, \mathcal{I})$ and any given $S
\subseteq E$, define $\mathcal{I}_{\mid S} = \mathcal{I} \cap 2^S$ and
call $\mathcal{M}_{\mid S} = (S, \mathcal{I}_{\mid S})$ the {\em
restriction} of $\mathcal{M}$ on $S$. Notice that restrictions
maintain properties (I1), (I2) if they were satisfied
already in $\mathcal{M}$.
In what follows, we provide some examples of common matroids.
The reader is invited to check that they indeed satisfy
(I1), (I2). For a more in-depth study of matroid theory, we
point the reader to the classic text of Oxley \cite{oxley}.
\paragraph{Uniform matroids}
When $\mathcal{I} = \{ S \subseteq E : |S| \leq k \}$ for some
positive integer $k \leq |E|$, then $(E, \mathcal{I})$ is called a
{\em uniform (rank $k$)} matroid.
\paragraph{Graphic matroids}
Given a graph $G = (V, E)$ (possibly containing parallel edges
and self-loops) let $\mathcal{I}$ include all subsets of edges
which do {\em not} form a cycle, i.e. the subgraph $G[S] =
(V, S)$ is a forest. Then $(E, \mathcal{I})$ forms a matroid called {\em
graphic} matroid. \macomment{Oxley calls those matroids
cyclic and then defines graphic matroids to be any matroid
isomorphic to a cyclic matroid. I think this is not an
important distinction for our purposes.}
Graphic matroids capture many of the properties of general
matroids and notions like bases and circuits have their
graphic counterparts of spanning trees and cycles
respectively.
\paragraph{Transversal matroids}
Let $G = (A \cup B, E)$ be a simple, bipartite graph and
define $\mathcal{I}$ to include all subsets $S \subseteq A$ of
vertices for which the induced subgraph on $S \cup B$
contains a matching covering all vertices in $S$. Then $(A,
\mathcal{I})$ is called a {\em transversal} matroid.
If $\mathcal{M} = (E, \mathcal{I})$ is equipped with a weight function $w : E
\rightarrow \mathbb{R}^+$ it is called a {\em weighted} matroid.
The problem of finding an independent set of maximum sum of
weights\footnote{An equivalent formulation asks for a basis
of maximum total weight.}
is central to the study of matroids. A very simple greedy
algorithm is guaranteed to find the optimal solution and in
fact matroids are exactly the downward-closed systems for
which that greedy algorithm is always guaranteed to find the
optimal solution.
\paragraph{{\sc Greedy}}
Sort the elements of $E$ in non-increasing order of weights
$w(e_1) \geq w(e_2) \geq \ldots \geq w(e_n)$. Loop through
the elements in that order adding each element to the current
solution as long as the current solution remains an
independent set.
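As a concrete, purely illustrative Python sketch of {\sc Greedy} (our own example, not from the text), assuming access to an independence oracle, here a uniform rank-$2$ matroid over a small ground set:

```python
def greedy_max_weight(elements, weight, is_independent):
    """Scan elements in non-increasing weight order, keeping each element
    whose addition preserves independence of the current solution."""
    solution = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(solution + [e]):
            solution.append(e)
    return solution

# Demo: uniform rank-2 matroid over {a, b, c, d} with distinct weights.
weights = {"a": 5, "b": 3, "c": 8, "d": 1}
is_indep = lambda S: len(S) <= 2   # independence oracle
best = greedy_max_weight(weights, weights.get, is_indep)  # picks c, then a
```

For this uniform matroid {\sc Greedy} simply returns the $k$ heaviest elements; swapping in an oracle for a non-matroid downward-closed system is exactly where the guarantee of the lemma below can break.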
\begin{lemma}[{\cite[Lemma 1.8.3.]{oxley}}]
Let $\mathcal{M} = (E, \mathcal{I})$ be a weighted downward-closed set
system. Then {\sc Greedy} is guaranteed to return an
independent set of maximum total weight for every weight
function $w : E \rightarrow \mathbb{R}^+$ if and only if $\mathcal{M}$ is a
matroid.
\end{lemma}
In what follows we're going to assume without loss of
generality that the function $w$ is one-to-one, meaning that
no two elements have the same weight. All proofs can be
adapted to work in the general case using any deterministic
tie breaking rule.
The following proposition provides a convenient way for
updating the solution to an optimization problem under
matroid constraints when new elements are added. A proof is
included in the Appendix.
\begin{proposition}
\label{prop:matroid-update}
Let $\mathcal{M} = (E, \mathcal{I})$ be a weighted matroid with weight
function $w : E \rightarrow
\mathbb{R}^+$. Consider the max-weight independent set $I$ of
the restricted matroid $\mathcal{M}_{\mid E - x}$. Then the
max-weight independent set $I^*$ of
$\mathcal{M}$ can be obtained from $I$ as follows: if $(I + x)
\in \mathcal{I}$ then $I^* = I + x$, otherwise, $I^* = (I + x) -
y$ where $y$ is the minimum-weight element in the unique
circuit $C$ of $I + x$.
\end{proposition}
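A brute-force Python sketch of this update rule (illustrative only; the unique circuit of $I + x$ is found by naive enumeration of smallest dependent subsets, which is exponential in general but adequate for small examples):

```python
from itertools import combinations

def update_max_weight(I, x, weight, is_independent):
    """Given the max-weight independent set I of the restriction to E - x,
    return the max-weight independent set once x joins the ground set."""
    Ix = I + [x]
    if is_independent(Ix):
        return Ix
    # Otherwise I + x contains a unique circuit; find it as a smallest
    # dependent subset and remove its minimum-weight element.
    for size in range(1, len(Ix) + 1):
        for C in combinations(Ix, size):
            if not is_independent(list(C)):
                y = min(C, key=weight)
                return [e for e in Ix if e != y]

# Demo: uniform rank-2 matroid; I = [a, c] is optimal without d, and adding
# a heavy element d swaps out the lightest element of the resulting circuit.
w = {"a": 5, "c": 8, "d": 9}
is_indep = lambda S: len(S) <= 2
new_I = update_max_weight(["a", "c"], "d", w.get, is_indep)  # -> [c, d]
```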
\subsection{Optimal Mechanism Design}
We study auctions modeled as a {\em Bayesian
single-parameter environment}, a standard mechanism design
setting in which a {\em seller} (or mechanism designer)
holds many identical copies of an item they want to sell. A
set of $n$ bidders (or players), numbered $1$ through $n$,
participate in the auction and each bidder $i$ has a
private, non-negative value $\varv_i \sim F_i$, sampled
(independently across bidders) from a distribution $F_i$
known to the seller.
Abusing notation, we'll use $F_i$ to also denote the
cumulative distribution function and $f_i$ to denote the
probability density function of the respective distribution.
The value of each bidder expresses
their valuation for receiving one item. Let $V_i$ be the
support of distribution $F_i$ and define $V = V_1 \times
\ldots \times V_n$. For a vector $\vec{v} \in V$, we use the
standard notation $\vec v_{-i} = (\varv_1, \ldots,
\varv_{i-1}, \varv_{i+1}, \ldots, \varv_n)$ to express the
vector of valuations of all bidders {\em except} bidder $i$.
When the index set $[n]$ is partitioned into two sets
$A,B$ and we have vectors $\vec{v}_A \in \mathbb{R}^A, \, \vec{w}_B \in \mathbb{R}^B$,
we will abuse notation and let $(\vec{v}_A,\vec{w}_B)$ denote the
vector obtained by interleaving
$\vec{v}_A$ and $\vec{w}_B$, i.e.~$( \vec{v}_A,\vec{w}_B)$
is the vector $\vec{u} \in \mathbb{R}^n$
specified by
\[ u_i = \begin{cases} v_i & \mbox{if } i \in A \\
w_i & \mbox{if } i \in B. \end{cases} \]
Similarly, when $\vec{v} \in V$, $i \in [n]$, and $z \in \mathbb{R}$,
$(z,\vec v_{-i})$ will denote the vector obtained by replacing the
$i^{\mathrm{th}}$ component of $\vec{v}$ with $z$.
A {\em feasibility constraint} $I \subseteq 2^{[n]}$ defines all
subsets of bidders that can be simultaneously declared
winners of the auction. We will interchangeably denote
elements of $I$ both as subsets of $[n]$ and as vectors
in $\{0, 1\}^n$. Of special interest are feasibility
constraints which define the independent sets of a matroid. We will
sometimes use the phrase {\em matroid market} to indicate this
fact. Matroid markets model many real world applications.
For example when selling $k$ identical copies of an item, the market is
a uniform rank $k$ matroid. Another example is kidney
exchange markets which can be modeled as transversal
matroids (\cite{Roth05}).
In a {\em sealed-bid auction}, each bidder $i$ submits a
{\em bid} $b_i \in V_i$ simultaneously to the mechanism.
Formally, a {\em mechanism} $\mathcal{A}$ is a pair $(x, p)$ of an
allocation rule $x : V \rightarrow I$ accepting the bids and
choosing a feasible outcome and a {\em payment rule} $p : V
\rightarrow \mathbb{R}^n$ assigning each bidder a monetary payment
they need to make to the mechanism. We denote by $x_i(\vec
b)$ (or just $x_i$ when clear from the context) the $i$-th
component of the 0-1 vector $x(\vec b)$ and similarly for
$p$. An allocation rule is called {\em monotone} if the
function $x_i(z, \vec b_{-i})$ is monotone non-decreasing in
$z$ for any vector $\vec b_{-i} \in V_{-i}$ and any bidder
$i$.
We assume bidders have {\em quasilinear utilities}
meaning that bidder $i$'s utility for winning the auction and
having to pay a price $p_i$ is $u_i = \varv_i - p_i$, and $0$ if
they do not win and pay nothing. Bidders are selfish agents aiming to
maximize their own utility.
A mechanism is called {\em truthful} if bidding $b_i =
\varv_i$ is a {\em dominant strategy} for each bidder, i.e.
no bidder can increase their utility by reporting $b_i \neq
\varv_i$ regardless of the values and bids of the other
bidders. An allocation rule $x$ is called {\em
implementable} if there exists a payment rule $p$ such that
$(x, p)$ is truthful. Such mechanisms are well understood
and easy to reason about since we can predict how the
bidders are going to behave. In what follows we focus our
attention only on truthful mechanisms and thus use the terms
value and bid interchangeably.
A well known result of Myerson (\cite{myerson81}) states that a given
allocation rule $x$ is implementable if and only if $x$ is
monotone. In case $x$ is monotone, Myerson gives an explicit
formula for the unique\footnote{Unique up to the normalizing
assumption that $p_i = 0$ whenever $b_i = 0$.} payment rule
such that $(x, p)$ is truthful.
In the single-parameter setting we're studying, the payment
rule can be informally described as follows: $p_i$ is equal
to the minimum $b_i$ that bidder $i$ has to report such that
they are included in the set of winners --- we'll refer to
such a $b_i$ as the {\em critical bid} of bidder $i$.
The mechanism designer, who is collecting all the payments,
commonly aims to maximize her {\em expected revenue} which
for a mechanism $\mathcal{A}$ is defined as
$\text{Rev}(\mathcal{A}) = \mathbb{E}_{b_i \sim F_i} \left[
\sum_{i \in [n]} p_i \right]$.
\begin{lemma}[{\cite{myerson81}}]
\label{lem:myerson}
For any truthful mechanism $(x, p)$ and any bidder $i \in [n]$:
\[ \Exp{ p_i } = \Exp{
\phi_i(b_i) \cdot x_i(b_i, \vec{b_{-i}}) } \]
\noindent
where the expectations are taken over $b_1, \ldots, b_n \sim
F_1, \ldots, F_n$, the function $\phi_i(\cdot)$ is defined
as
\[ \phi_i(z) = z - \frac{1 - F_i(z)}{f_i(z)} \]
\noindent
and $\phi_i(b_i)$ is called the {\em virtual value} of bidder $i$.
\end{lemma}
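As a worked instance (our own example, not from the lemma), for the uniform distribution on $[0,1]$ we have $F(z) = z$ and $f(z) = 1$, so $\phi(z) = z - (1 - z) = 2z - 1$; this is increasing, hence the distribution is regular, and $\phi^{-1}(0) = 1/2$ is the familiar monopoly reserve price:

```python
def phi_uniform(z):
    """Virtual value for F = Uniform[0, 1]: phi(z) = z - (1 - F(z)) / f(z)."""
    F_z, f_z = z, 1.0
    return z - (1.0 - F_z) / f_z

# phi is monotone increasing and crosses zero at z = 1/2, the price a
# monopolist would post to a single uniform bidder.
```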
The importance of this lemma is that it reduces the problem
of revenue maximization to that of virtual welfare
maximization. More specifically, consider a sequence of
distributions $F_1, \ldots, F_n$ which have the property that all $\phi_i$
are monotone non-decreasing (such distributions are called
{\em regular}). In this case, the allocation rule that
chooses a set of bidders with the maximum total virtual
value (subject to feasibility constraints) is monotone (a
consequence of the regularity condition) and thus
implementable. We'll frequently denote this
revenue-maximizing mechanism by MyerOPT.
More precisely, the MyerOPT mechanism works as follows:
\begin{itemize}
\item Collect bids $b_i$ from every bidder $i \in [n]$.
\item Compute $\phi_i(b_i)$ and discard all bidders whose
virtual valuation is negative.
\item Solve the optimization problem $S^* = \text{argmax}_{ S \in I }
\sum_{i \in S} \phi_i(b_i)$.
\item Allocate the items to $S^*$ and charge each bidder $i
\in S^*$ their critical bid.
\end{itemize}
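The steps above can be sketched in Python by brute force (illustrative only; the feasibility constraint is passed as an explicit list of sets, and the critical-bid payments are omitted; note that restricting a feasible set to the surviving bidders is again feasible by downward-closedness):

```python
def myeropt_allocation(bids, phi, feasible_sets):
    """Allocation step of MyerOPT: compute virtual values, discard bidders
    with negative virtual value, and pick the feasible set maximizing
    total virtual value.  Winners would then pay their critical bids."""
    virtual = {i: phi[i](b) for i, b in bids.items()}
    alive = {i for i, v in virtual.items() if v >= 0}
    best, best_val = frozenset(), 0.0
    for S in feasible_sets:
        T = set(S) & alive
        val = sum(virtual[i] for i in T)
        if val > best_val:
            best, best_val = frozenset(T), val
    return best

# Two bidders with Uniform[0,1] values (phi(z) = 2z - 1) competing for a
# single item, i.e. a uniform rank-1 matroid market.
phi_u = lambda z: 2 * z - 1
alloc = myeropt_allocation({1: 0.9, 2: 0.7}, {1: phi_u, 2: phi_u},
                           [set(), {1}, {2}])
```

Here the virtual values are $0.8$ and $0.4$, only one bidder can win, so the winner is bidder $1$.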
Handling non-regular distributions is possible using the
standard technique of {\em ironing}. Very briefly, it works
as follows. So far, we've been expressing $x, p$ and $\phi$
as a function of the random vector $\vec v$. It is
convenient to switch to the quantile space and express them
as a function of a vector $\vec q \in [0, 1]^n$ where for a
given sample $z$ from $F_i$ we let $q_i = \Pr_{b_i \sim F_i}
[ b_i \geq z ]$. Another way to think of this is, instead of
sampling values, we sample quantiles $q_i$ distributed
uniformly at random in the interval $[0, 1]$ which are then
transformed into values $v_i(q_i) =
F_i^{-1}(1-q_i)$\footnote{In general, $v_i(q_i) = \min \left
\{ v \mid F_i(v) \geq q_i \right\}$.}. Let $R_i(q_i) = q_i
\cdot v_i(q_i)$ and notice that $\phi_i(v_i(q_i)) = \left.
\tfrac{dR_i}{dq} \right|_{q = q_i}$. Now, since $v_i(\cdot)$
is a non-increasing function we have that $\phi_i(\cdot)$ is
monotone if and only if $R$ is concave.
Now, suppose that $F_i$ is such that $R_i$ is not concave.
One can consider the concave hull $\overline{R_i}$ of
$R_i$, which replaces $R_i$ with a straight line on every
interval where $R_i$ lies below that concave hull. The
corresponding function $\overline{\phi_i}( \cdot ) =
\tfrac{d\overline R_i}{dq}$ is called the {\em ironed virtual
value function}.
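Numerically, ironing amounts to taking the upper concave envelope of sampled $(q, R(q))$ pairs. The following illustrative sketch (our own, using a standard monotone-chain scan) checks the regular case: for the uniform distribution, $v(q) = 1 - q$ gives $R(q) = q(1 - q)$, which is already concave, so every sample point survives on its own envelope and ironing changes nothing:

```python
def upper_concave_hull(points):
    """Upper concave envelope of (q, R(q)) samples sorted by q.
    Monotone-chain scan: pop the middle point whenever it lies on or
    below the chord joining its two neighbours."""
    hull = []
    for q, r in points:
        while len(hull) >= 2:
            (q1, r1), (q2, r2) = hull[-2], hull[-1]
            # cross >= 0 means (q2, r2) is not strictly above the chord
            # from (q1, r1) to (q, r), so it is redundant for the envelope.
            if (q2 - q1) * (r - r1) - (r2 - r1) * (q - q1) >= 0:
                hull.pop()
            else:
                break
        hull.append((q, r))
    return hull

# R(q) = q * v(q) for Uniform[0,1]: v(q) = 1 - q, so R(q) = q(1 - q).
# R is concave, hence the envelope coincides with the sampled R itself.
qs = [i / 100 for i in range(101)]
pts = [(q, q * (1 - q)) for q in qs]
hull = upper_concave_hull(pts)
```

For a non-regular distribution, the envelope would drop interior sample points, and the ironed virtual value is the (constant) slope of each straight segment.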
\begin{lemma}[{\cite[Theorem 3.18]{HartlineBook}}]
For any monotone allocation rule $x$ and any virtual value
function $\phi_i$ of bidder $i$, the expected virtual
welfare of $i$ is
upper bounded by their expected ironed virtual value
welfare.
\[
\Exp{ \phi_i(v_i(q_i)) \cdot x_i( v_i(q_i),
\vec v_{-i}(\vec q) ) }
\leq
\Exp{ \overline{\phi_i}(v_i(q_i)) \cdot x_i( v_i(q_i),
\vec v_{-i}(\vec q) ) }
\]
Furthermore, the inequality holds with equality if the
allocation rule $x$ is such that for all bidders $i$,
$x_i'(q) = 0$ whenever $\overline{R_i}(q) > R_i(q)$.
\end{lemma}
As a consequence, consider the monotone allocation rule
which allocates to a feasible set of maximum total ironed virtual
value. On the intervals where $\overline{R_i}(q) > R_i(q)$,
$\overline{R_i}$ is linear as part of the concave hull so
the ironed virtual value function, being a derivative of a
linear function, is a constant. Therefore, the allocation
rule is not affected when $q$ ranges in such an interval.
A crucial property of any (ironed) virtual value function
$\phi$ corresponding to a distribution $F$ is that
$z \geq \phi(z)$ for all $z$ in the support of $F$.
This is obvious for $\phi$ as defined in \Cref{lem:myerson}.
We claim it also holds for ironed virtual value functions:
if $z$ lies in an interval where $\overline{\phi} = \phi$ it
holds trivially. Otherwise, if $z \in [a, b]$ for some
interval where $\phi$ needed ironing (i.e.~$\overline{R}(q)
> R(q)$ in the quantile space), we have: $z \geq a \geq
\phi(a) = \overline{\phi}(a) = \overline{\phi}(z)$. We've
thus proven:
\begin{proposition}
\label{prop:phi-inequality}
Any (possibly non-regular) distribution $F$ having an
ironed virtual value function $\overline{\phi}$ satisfies:
\[ z \geq \overline{\phi}(z) \]
for any $z$ in the support of $F$.
\end{proposition}
\begin{remark}
For simplicity, in the remainder of the paper we'll use
$\phi$ and $\overline{\phi}$ interchangeably and we will refer to
$\phi$ as virtual value function. The reader should keep in
mind that if the associated distribution is non-regular,
then {\em ironed} virtual value functions should be used
instead.
\end{remark}
\subsection{Related Work}
{\em Semi-random models} are a class of models
studied in the theoretical computer science literature
in which the input data is partly generated by random
sampling, and partly by a worst-case adversary.
Initially studied in the setting of
graph coloring~\cite{BlumSpencer} and
graph partitioning~\cite{FeigeKilian,mmv12},
the study of semi-random models has since been
broadened to
statistical estimation~\cite{diakonikolas2019robust,lai2016agnostic},
multi-armed bandits~\cite{lykouris2018stochastic},
and secretary problems~\cite{BGSZ19}.
Our work extends semi-random models into the
realm of Bayesian mechanism design. In particular,
our model of green and red bidders resembles that of
Bradac et al.~\cite{BGSZ19} for the secretary problem, which
served as inspiration for this work. In both settings,
green players/elements behave randomly and independently
while red players/elements behave adversarially. In the
secretary model of~\cite{BGSZ19},
red elements can choose arbitrary arrival times
while green elements' arrival
times are i.i.d.~uniform in $[0,1]$ and
independent of the red arrival times.
Similarly, in our setting red bidders
can set their bids arbitrarily whereas
green bidders sample their bids from known
distributions, independently
of the red bidders and one another.
Our work can be seen as part of a general framework of {\em
robust mechanism design}, a research direction inspired by
Wilson~\cite{wilson1987}, who famously wrote,
\begin{quotation} Game theory has a great advantage in
explicitly analyzing the consequences of trading rules
that presumably are really common knowledge; it is deficient
to the extent it assumes other features to be common
knowledge, such as one agent’s probability assessment about
another’s preferences or information. I foresee the
progress of game theory as depending on successive
reductions in the base of common knowledge required to
conduct useful analyses of practical problems. Only by
repeated weakening of common knowledge assumptions will the
theory approximate reality. \end{quotation} This {\em
Wilson doctrine} has been used to justify more robust
solution concepts such as dominant strategy and ex post
implementation. The question of when these stronger solution
concepts are required in order to ensure robustness was
explored in a research program initiated by Bergemann and
Morris~\cite{bergemann05} and surveyed
in~\cite{bm-survey-robust}. Robustness and the Wilson
doctrine have also been used to justify
prior-free~\cite{Goldberg01} and
prior-independent~\cite{hr09} mechanisms as well as
mechanisms that learn from
samples~\cite{bubeck2019multi,cole-roughgarden,singlesample,huang2018making,morgenstern-roughgarden}.
A different approach to robust mechanism design assumes
that, rather than being given the bid distributions, the
designer is given constraints on the set of potential bid
distributions and aims to optimize a minimax objective on
the expected revenue. For example Azar and
Micali~\cite{AzarMicali13} assume the seller knows only the
mean and variance of each bidder's distribution, Carrasco et
al.~\cite{carrasc18} generalize this to sellers that know
the first $N$ moments of each bidder's distribution, Azar et
al.~\cite{AMDW13} consider sellers that know the median or
other quantiles of the distributions, Bergemann and
Schlag~\cite{bergemann2011robust} assume the seller is given
distributions that are known to lie in a small neighborhood
of the true distributions, and
Carroll~\cite{carroll2017robustness} introduced a model in
which bids are correlated but the seller only knows each
bidder's marginal distribution
(see~\cite{gravin2018separation,bei2019correlation} for
further work in this correlation-robust model).
Another related subject is that of {\em revenue
monotonicity} of mechanisms --- regardless of the existence
of adversarial bidders. Dughmi et al.~\cite{drs09} prove a
result very close in spirit to ours. They consider the VCG
mechanism in a single-parameter downward-closed environment
and prove that it is revenue monotone if and only if the
environment is a matroid akin to our Theorems
\ref{thm:rmmb-matroid} and \ref{thm:rmmb-non-matroid}.
Rastegari et al.~\cite{RCLB07} study revenue monotonicity
properties of mechanisms (including VCG) for Combinatorial
Auctions. Under some reasonable assumptions, they prove that
no mechanism can be revenue monotone when bidders have
single-minded valuations.
\section{Introduction}\label{intr}
In the last few years, several studies have established the existence
of a statistical excess of line-of-sight companion galaxies around
high redshift quasars. Although it has been suggested
that these objects belong to clusters or groups which are physically
associated with the quasars (\cite{hin91}; \cite{tys86}), in order to be
detected at such high redshifts they should be undergoing strong
luminosity evolution. This seems unlikely in the light of the
recent data on galaxy evolution obtained through the study of
absorption-selected galaxy samples (\cite{ste95}), which shows that
the most plausible (and often the only) interpretation
for many of these observations is the existence of a magnification
bias caused by gravitational lensing (see the reviews
\cite{sch92}; \cite{sch95}; \cite{wu95}).
The density of a population of flux-limited background sources
(e.g. QSOs) behind a gravitational lens is affected by the lens
magnification $\mu$ in two opposite ways.
One of the effects is purely geometrical: as the angular
dimensions of a lensed patch of the sky are expanded by a
factor $\mu$, the physical size of a region observed through a
fixed angular aperture will be smaller than in the absence of the lens.
Because of this, the QSO surface density will
decrease by a factor $\mu$ with respect to the unlensed background density
\cite{Na89}).
On the other hand, the lens will magnify faint quasars (which would not
have been detected otherwise ) into the sample and increase the number
of detected QSOs (\cite{can81}; \cite{sch86}, etc.).
If the slope of the quasar number-counts cumulative distribution is
steep enough, this effect would dominate over the angular
area expansion and there would be a net excess of QSOs behind the lens.
Foreground galaxies trace the matter overdensities acting as lenses and
thus there will be a correlation between the position in the sky of these
galaxies (or other tracers of dark matter as clusters) and the background
quasars. This QSO-galaxy correlation is characterized by the overdensity
factor $q$ (\cite{sch89}), which is defined as the ratio of the
QSO density behind a lens with magnification $\mu$ to the unperturbed
density on the sky. Its dependence on the effective slope $\alpha$ of
the QSO number counts distribution (which has the form
$n(>S)\propto S^{-\alpha}$, or $n(<m)\propto 10^{0.4\alpha m}$)
and the magnification $\mu$ can be expressed as (\cite{Na89})
\begin{equation}
q \propto \mu^{\alpha-1}
\end{equation}
We see that the value of $q$ critically depends on the number counts slope
of the background sources. For instance, if the number counts are shallow
enough ($\alpha < 1$), there would be negative QSO-galaxy associations.
It is clear that in order to detect strong, positive QSO-galaxy
correlations due to the magnification bias, we have to use QSO
samples with very steep number counts slopes.
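As a quick numerical illustration (our own sketch, assuming exact power-law cumulative counts so that $q = \mu^{\alpha - 1}$ holds with unit proportionality constant):

```python
def overdensity(mu, alpha):
    """QSO overdensity q behind a lens of magnification mu, for cumulative
    number counts n(>S) proportional to S**(-alpha): q = mu**(alpha - 1)."""
    return mu ** (alpha - 1.0)

# Steep counts give an excess of background QSOs, shallow counts a
# deficit, even though the magnification is the same in both cases.
q_steep = overdensity(2.0, 2.5)    # 2**1.5, about 2.83: positive association
q_shallow = overdensity(2.0, 0.5)  # 2**-0.5, about 0.71: anticorrelation
```

For a sample flux-limited in two uncorrelated bands, one would call the same function with $\alpha_{eff}$, the sum of the two slopes, in place of $\alpha$.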
\cite{bo91} have shown that for a QSO sample which
is flux-limited in two bands (with uncorrelated fluxes),
$\alpha$ is substituted by $\alpha_{eff}$, the sum of the number
counts-flux slopes in those bands. This effect is called
'double magnification bias'. Since $\alpha_{eff}$ is
usually $>1$ for samples which are flux-limited in both the
optical and radio bands (e.g. radio-loud QSOs), a strong
positive QSO-galaxy correlation should be expected for them.
It is important to understand when QSO samples may be affected by
the double magnification bias. The usual identification procedure for
a X-ray or radio selected QSO sample involves setting up a flux
threshold in the corresponding band and obtaining follow-up optical
images and spectra of the QSO candidates. The observer is limited
in this step by several circumstances (e.g. the sensitivity of the detectors
or the telescope diameter), and even if the QSO sample was not intended to
be optically selected, in the practice there will be an
optical flux threshold for the QSOs to enter the sample.
Therefore the existence of an explicit and homogeneous flux-limit
in the optical band is not as essential for the presence of the
magnification bias as the value of the effective slope of the unperturbed
number counts.
If this slope is steep enough, the effect should be detectable even
in incomplete samples, and often more strongly than in complete
catalogues: within such samples, the optically
brightest QSOs (i.e., those more likely to be lensed) are usually
the first to be identified, as they are easier to study
spectroscopically or through direct imaging.
At small angular scales ($\theta\lesssim$ few $\arcsec$), the existence of
QSO-galaxy correlations is well documented for several QSO samples obtained
with different selection criteria (\cite{web88}; see also \cite{ha90} and
\cite{wu95} for reviews). As expected due to the double magnification
bias effect, the correlations are stronger for radio-loud quasars
(\cite{tho94}). In the cases where no correlation is
found, e.g. for optically-selected and relatively faint quasars,
the results are still consistent with the magnification bias effect and
seem to be due to the shallowness of the QSO number counts distribution
at its faint end (\cite{wu94}).
\cite{ha90} reviewed the studies on QSO-galaxy correlations
(on both small and large scales). After assuming that the galaxies
forming the excess are physical companions to the QSO, they showed that
while the amplitude of the radio-quiet QSO-galaxy correlation quickly
declines at $z\gtrsim 0.6$, the inferred radio-loud QSO-galaxy
correlation function steadily increases with redshift, independently
of the limiting magnitude of the study. It should be noted that
such an effect will be expected, if a considerable part of the galaxy
excess around radio-loud QSOs is at lower redshifts.
If a foreground group is erroneously considered to be physically
associated with a QSO, the higher the redshift of the QSO, the
stronger the 3-D clustering amplitude that will be inferred. This source
of contamination should be taken into account carefully when carrying
out these studies, as there is evidence that part of the detected excess
around high redshift radio-loud QSOs is foreground and related with the
magnification bias.
The association of quasars with foreground galaxies on scales
of several arcmin may arise as a consequence of lensing by the
large scale structure as proposed by \cite{ba91} (see also
\cite{sch95} and references therein). Several authors have also
investigated QSO-cluster correlations: \cite{roho94}, \cite{ses94}
and \cite{wuha95}. There are not many studies of large scale QSO-galaxy
correlation, mainly because of the difficulty in
obtaining appropriate, unbiased galaxy samples. Besides, the
results of these studies may seem contradictory, as they radically
differ depending on the QSO type and limiting magnitude.
\cite{boy88} found a slight anticorrelation between the positions of
optically selected QSOs and COSMOS galaxies. When they cross-correlated
the QSOs with the galaxies belonging to clusters, the anticorrelation
became highly significant ($4\sigma$) on $4\arcmin$ scales. Although
this defect was interpreted as due to the presence of dust in
the foreground clusters which obscured the quasars, the recent
results of \cite{mao95} and \cite{lrw95} have imposed limits on the
amounts of dust in clusters which
seem to contradict this explanation. It seems more natural,
taking into account the rather faint flux-limit of the QSOs of
\cite{boy88}, to explain this underdensity as a result of the
magnification bias effect.
\cite{smi95} do not find any excess of foreground, $B_J< 20.5$ APM
galaxies around a sample of $z>0.3$ X-ray selected QSOs. It should be
noted that although the objects in this sample are detected in two bands
(X-ray and optical), the expected excess should not be increased by
the double magnification bias effect, because the X-ray
and optical fluxes are strongly correlated (\cite{boy96}). For these QSOs,
the overdensity should be roughly the same as for the optically selected
ones.
On the other hand, there has been strong evidence of positive large
scale associations of foreground galaxies with high redshift
radio-loud QSOs, e.g.~between the radio-loud quasars from the 1Jy
catalogue (\cite{sti94}) and the galaxies from the Lick (\cite{fug90},
\cite{bar93}), IRAS (\cite{bbs96}, \cite{bar94}) and APM (\cite{ben95a})
catalogues. In the latter paper, we found that APM galaxies are
correlated with the 1Jy QSO positions, but did not have enough statistics
to reach the significance levels reported in the present paper.
Unlike the COSMOS
catalogue, the APM catalogue provides magnitudes in two filters, $O$(blue)
and $E$(red). When we considered only the reddest, $O-E>2$ galaxies,
the overdensity reached a significance level of $99.1\%$.
Afterwards, \cite{ode95} showed (using a similar, but
much more processed catalog, the APS scans of POSS-I) that the
galaxies which trace the high-density cluster and filamentary regions
have redder $O-E$ colors than the galaxies drawn from low-density
interfilamentary regions. If the fields containing Abell clusters were
removed from the sample of \cite{ben95a}, the results did not change
significantly, so it seems that the detected effect was
caused by the large scale structure as a whole.
\cite{fort} confirmed the existence of gravitational lensing
by the large scale structure by directly detecting the large
foreground invisible matter condensations responsible for the
magnification of the QSOs through the polarization produced by
weak lensing in several 1Jy fields.
These differences in the nature of the QSO-galaxy associations
for the several QSO types seem to arise quite naturally when we
take into account the effects of the double magnification bias.
There is no strong correlation between the radio and optical
fluxes for radio-loud quasars, so for these objects the overdensity
will be $\propto \mu^{\alpha_{opt}+\alpha_{rad}-1}$ (\cite{bo91}),
where $\alpha_{opt}$ and $\alpha_{rad}$ are the number-counts slope
in the optical and in radio respectively. If we assume that
$\alpha_{opt}$ is roughly the same for radio-loud and optically
selected QSOs (although this is far from being clear), the overdensity
of foreground galaxies should be higher for the radio-loud QSOs. For
optically and X-ray selected samples (because of the X-ray-optical
flux correlation), $\alpha_{eff}$, and therefore the overdensity, will
be smaller.
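The scaling just described is easy to tabulate. The following minimal sketch (illustrative numbers only, not taken from any catalogue; the slope values are assumptions for the example) evaluates the magnification-bias overdensity $q=\mu^{\alpha_{eff}-1}$, where $\alpha_{eff}$ sums the number-count slopes of every band in which the sample is flux limited:

```python
# Illustrative sketch of the (double) magnification bias: the overdensity
# of foreground galaxies is q = mu**(alpha_eff - 1), where alpha_eff sums
# the number-count slopes of every band used to select the QSOs.

def overdensity(mu, *slopes):
    """q = mu**(sum(slopes) - 1) for an average magnification mu > 0."""
    return mu ** (sum(slopes) - 1.0)

mu = 1.1                              # assumed average magnification
q_opt = overdensity(mu, 2.5)          # optically selected sample only
q_radio = overdensity(mu, 2.5, 1.6)   # radio-loud: optical + radio slopes

# The doubly selected (radio-loud) sample shows the larger excess:
assert q_radio > q_opt > 1.0
```

For slopes flatter than unity the exponent becomes negative and the same magnification produces an underdensity, which is the qualitative behaviour invoked for the X-ray and optically selected samples.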
In any case, it is difficult to compare conclusively the published
analyses of QSO-galaxy correlations.
They have been performed using galaxy samples obtained with
different filters and magnitude limits, and which therefore
may not trace equally the matter distribution because of
color and/or morphological segregation or varying depths.
Besides, the QSO samples differ widely in their redshift
distributions. As the magnification of a background source depends on
its redshift, QSO samples with different redshift distributions will
not be magnified equally by the same population of lenses.
It would thus be desirable, in order to disentangle these factors
from the effects due to the magnification bias caused by gravitational
lensing, to reproduce the mentioned studies using galaxy samples
obtained with the same selection criteria and QSO samples which
are similarly distributed in redshift. This is the purpose of the
present paper: we shall study and compare the distribution of
COSMOS/UKST galaxies around two QSO samples, one radio-loud and the
other radio-quiet with practically identical redshift distributions.
It is also interesting to mention in this context the results of
\cite{web95}. These authors observed that a sample of flat-spectrum
radio-loud quasars from the Parkes catalogue (\cite{pks}) displays a
wide range of $B_J-K$ colours, apparently arising from the presence
of varying amounts of dust along the line of sight. Optically selected
quasars do not present such a scatter in $B_J - K$ and lie at the lower
envelope of the $B_J - K$ colours, so Webster and collaborators
suggested that
the selection criteria used in optical surveys miss the reddest
quasars. Although there seem to be several other, more plausible
reasons for this phenomenon (see for instance \cite{boy96}), it is
not unreasonable to suppose that it may be partially related to the
differences between the foreground projected environments of
radio-loud and optically selected quasars. If a considerable part of
the absorbing dust belongs to foreground galaxies, a greater density
of these galaxies would imply more reddening.
The structure of the paper is the following: Section 2 describes the
selection procedures of both QSO samples and galaxy fields
and discusses the possible biases which could have been introduced in the data.
Section 3 is devoted to the discussion of the statistical methods
used in the literature for the study of this problem and applies them
to our data. Section 4 discusses the results obtained in Section 3.
Section 5 lists the main conclusions of our work.
\section{The Data}\label{data}
As we have explained above, the aim of our paper is the study of the
distribution of foreground galaxies
around two QSO samples, one radio-loud and the other radio-quiet,
which have similar redshift distributions. It would also be interesting
to find out if these differences in the foreground galaxy density
occur for the QSO samples of \cite{web95}. This, as
we have mentioned, could be related to the differential reddening of
the radio-loud quasars. Therefore, in order to form a radio-loud sample
suitable for our purposes but as similar as possible to the one
used by \cite{web95}, we collect all the quasars from the PKS catalogue
which are listed in the \cite{ver96} QSO catalogue and obey the
following constraints: a) flux $>$ 0.5Jy at 11cm; b) $-45 < \delta
< 10$ and
c) galactic latitude $|b| > 20^o$. So far, we do not constrain the radio
spectral index of the quasars. This yields a preliminary sample of 276
quasars.
The galaxy sample is taken from the ROE/NRL COSMOS/UKST Southern Sky
object catalogue, (see \cite{yen92} and references therein). It contains
the objects detected in the COSMOS scans of the glass copies of the
ESO/SERC blue survey plates. The UKST survey is organized in $6\times6$
square degree fields on a 5 degree grid and covers the zone
$-90 < \delta <0$ excluding a $\pm 10$ deg zone in the galactic plane.
COSMOS scans cover only $5.4\times 5.4$ square degrees of a UKST field.
The scan pixel size is about 1 arcsec. Several parameters are supplied
for each detected object, including the centroid in both sky and plate
coordinates, $B_J$ magnitude and the star-galaxy image
classification down to a limiting magnitude of $B_J\approx21$.
We are going to study the galaxy distribution in $15 \arcmin $ radius
fields centered on the quasars of our sample. Due to several factors
such as vignetting and optical distortions, the quality of the object
classification and photometry in Schmidt plate based surveys
degrades with increasing plate radius. Therefore,
we constrain our fields to be at a distance from the plate center of
$r=\sqrt{\Delta x^2 +\Delta y^2} < 2.5$ degrees. Besides, to avoid
the presence of fields which extend over two plates we further
restrict our sample to have $|\Delta x_k|, |\Delta y_k| < 2.25$ degrees,
where $\Delta x_k$ and $\Delta y_k$ are the distances, in the $\alpha$
and $\delta$ directions respectively, from the center of the plate
( because of the UKST survey declination limits, this also
constrains our quasar sample to have $\delta < 2.25$).
After visually inspecting all the fields, six of them
are excluded from the final sample
because they present meteorite trails. We end up with 147 circular
fields with a $15\arcmin$ radius
centered on an equal number of Parkes Quasars. This subsample
of radio-loud quasars is, as far as we know, not biased towards
the presence of an excess of foreground galaxies, which is the essential
point for our investigation.
In order to avoid contamination from galaxies physically
associated with the quasars, we also exclude three $z<0.3$
quasars from our radio-loud sample (\cite{smi95} point out that only
$5\%$ of $B_J < 20.5$ galaxies have $z>0.3$),
which is finally formed by 144 fields.
We have excluded a region of $12\arcsec$ around the
positions of the quasars (which may have an uncertainty up to
$\pm 5$ arcsec).
This is done to avoid the possibility of counting the quasar as a galaxy
(there are six cases in which the quasar is classified as an
extended object) and because of the great number of 'blended'
objects at the quasar position. Most of these pairs of objects are
classified as 'stars' when deblended, but taking into account
the pixel resolution, it would be desirable to examine the original
plates or, better yet, perform high resolution CCD imaging in order
to properly deblend and classify these objects as many of them could
be galaxies.
The optically selected sample is taken from the Large Bright Quasar
Survey as in \cite{web95}. This prism-selected catalogue contains 1055
quasars brighter than $B_J \approx 18.6$ on several equatorial and
southern fields (for details see \cite{hew90}).
In order to form an optically selected subsample of quasars
we have begun by choosing the 288 quasars from the LBQS catalogue
which were closest in redshift to our final sample of 144 PKS quasars.
We impose to them exactly the same constraints in sky and plate
position as to the radio-loud quasar fields. Finally we
visually examined the fields and excluded six of them because of
meteorite trails. The resulting number of fields is 167. As the LBQS extends over
relatively small areas of the sky, several of these fields overlap.
We have
checked that their exclusion from the statistical tests performed below
does not significantly affect the result, so we leave them in the
sample.
The resulting redshift distribution for both QSO samples is plotted
in Fig 1. A Kolmogorov-Smirnov test cannot distinguish between them
at a 94.5\% significance level. We merge all the fields
in each sample into two 'superfields' which contain all the objects
classified as galaxies with $B_J<20.5$. This is a reasonable limiting
magnitude, and has been already used by other investigators
(\cite{smi95}). The PKS merged field
contains 15235 galaxies in its 144 fields, whereas the LBQS field contains
14266 in 167 fields. This amounts to a $24\%$ difference in the average
background object density per field,
well over a possible Poissonian fluctuation, and seems to be caused
by the presence of misclassified stars in our galaxy sample at low
galactic latitudes. The Parkes fields extend over
a much wider range of galactic latitudes $(|b| > 20^o)$ than the LBQS
ones, which are limited to high galactic latitudes $(|b|>45^o)$ and
thus much less affected. In fact, we measure
the existence of a significant anticorrelation
between the absolute value of the galactic latitude $|b_k|$ of the fields
and the total number of objects in each field $N_{gal}$. The correlation
is still stronger between $N_{gal}$ and sec$(90^o-|b|)$, as shown in Fig. 2,
with a correlation coefficient $\rho=0.4$ ($p>99.99\%$).
This contamination should be randomly distributed over the field
and would lower the significance of any possible correlation and
make it more difficult to detect. In order to check this, we have
correlated the overdensity $n_{in}/n_{out}$ of objects within the inner
half of every
individual field, ($n_{in}$ is the number of objects within the
inner half of the field surface and $n_{out}$ is the number of
objects in the outer half) with sec$(90^o-|b|)$, as can be seen
in Fig. 3. If anything, there might be a slight anticorrelation
(the Spearman's rank correlation test only
gives a significance of $80\%$) in the sense that the fields
with more star contamination are the ones which show less excess of
galaxies in the inner half of the field. This is what could
be expected if there were a
genuine galaxy excess close to the QSO positions; this excess should
be diluted by the presence of stars randomly distributed with respect
to the QSOs. Excluding the fields with $|b|\leq 30^o$, as in
\cite{smi95}, does not change significantly the main results,
as we show in Fig 4.
Because of this contamination by stars, there is a slight bias in
the data which makes it harder to detect QSO-galaxy correlations for the PKS
QSOs than for the LBQS ones. We have also checked that there are no other
correlations between $N_k$ and $q_k$ and other possible relevant
parameters, such as the plate or sky position of the fields.
\section {Statistical Analysis}\label{stats}
The study of QSO-galaxy correlations due to the
magnification bias effect is complicated by several circumstances.
The amplitude of the correlation function $w_{qg}$ is expected to
be rather weak, and strongly dependent on the limiting magnitude
of the galaxies and the QSOs. Besides, the shape of $w_{qg}$ as
a function of $\theta$ is unknown (it seems that the interesting theoretical
estimation of \cite{ba94} has not been confirmed empirically
by \cite{bbs96}).
In the past, several methods have been used to detect and statistically
establish the existence of these correlations.
One of the simplest and most widespread approaches consists in counting
the number of galaxies $N_{in}$ in a somehow arbitrarily
defined region centered on the quasars and comparing the value found
with its statistical expectation, which is measured either from the outer
regions of the fields or from some other comparison fields which are
assumed to have the same density of galaxies on average. The significance
can be inferred empirically (\cite{ben95a})
or just by considering that $N$ has a Poissonian
distribution with $\sqrt{N}$ rms. This last assumption seems to hold well
in some cases, when the number of individual fields is very large,
but for other samples, usually smaller, the rms is found to be
$\alpha\sqrt{N}$, where $\alpha\approx 1.1-1.5$ (\cite{ben95b}).
A shortcoming of this method is that it does not extract
all the information contained in the fields as it only explores the
distribution of galaxies around the quasar in a small number of scales,
and often uses the rest of the field just to determine the average density.
Besides, if the correlation scale is comparable with the dimensions of
the fields, the average density measured on the fields would be increased
with respect to the unperturbed one, and thus an artificially lowered
significance would be obtained.
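A minimal sketch of this counting test (with illustrative numbers, not data from any of the cited samples; the inflation factor $\alpha$ follows the range quoted above):

```python
import math

# Simple aperture-counting test: compare the galaxy count n_in in a region
# centered on the QSOs with its expectation n_exp from comparison regions,
# taking the rms as alpha*sqrt(n_exp), with alpha ~ 1.1-1.5 allowing for
# super-Poissonian field-to-field scatter.

def counting_significance(n_in, n_exp, alpha=1.0):
    """Number of sigma by which n_in exceeds its expectation n_exp."""
    return (n_in - n_exp) / (alpha * math.sqrt(n_exp))

z_poisson = counting_significance(460, 400)             # pure Poisson rms
z_inflated = counting_significance(460, 400, alpha=1.3) # inflated rms
assert z_inflated < z_poisson  # a larger assumed rms lowers the significance
```

The example makes the shortcoming quantitative: the same excess drops from about $3\sigma$ to about $2.3\sigma$ once the rms is inflated, so the assumed noise model matters as much as the counts themselves.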
Another method, the rank-correlation test, was used in \cite{bar93},
\cite{bar94}. All the individual galaxy fields are merged into a
single 'superfield', which is subsequently divided into
$N_{bins}$ annular intervals of equal surface. A Spearman's rank
order test is applied to determine if the number of
galaxies in each bin $n_i,( i=1,N_{bins})$ is anticorrelated with the
bin radius $r_i$. This test does not require any assumption about the
underlying probability distribution of the galaxies and takes into account
all the information contained in the fields. However, it has several drawbacks.
The rings have all equal surface, so we end up with more
bins in the outer parts of the fields, which are less sensitive
from the point of view of detecting the correlation. Besides, the method
only 'senses' the relative ordering of $n_i$ and $r_i$ over the whole field.
For instance if
$w_{qg}(\theta)$ is very steep and goes quickly to zero, there will be only
a few bins clearly over the average in the central region, and the
correlation coefficient could then be dominated by the more numerous
outer bins with nearly no excess galaxies.
The value of the correlation coefficient and its significance thus depend
critically on the number of chosen bins and the dimension of the
fields. However, this test can still be useful if the correlation
scale is similar to the scale of the field.
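The mechanics of this test can be sketched in a few lines (a toy example with synthetic galaxies, not the catalogue data): merge the fields, bin the galaxies in annuli of equal surface, and test whether the counts $n_i$ fall with the bin radius $r_i$.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy merged 'superfield' of 15' radius: a uniform background plus a
# centrally concentrated excess around the QSO position.
theta = np.concatenate([
    15.0 * np.sqrt(rng.random(5000)),  # uniform surface density
    15.0 * rng.random(600) ** 2,       # excess concentrated near the center
])

# Annuli of equal surface: edges grow as the square root of the bin index.
n_bins = 20
edges = 15.0 * np.sqrt(np.linspace(0.0, 1.0, n_bins + 1))
counts, _ = np.histogram(theta, bins=edges)
r_mid = 0.5 * (edges[:-1] + edges[1:])

# A genuine excess near the QSO shows up as an anticorrelation (rho < 0).
rho, p = spearmanr(r_mid, counts)
```

Note how the equal-surface binning crowds the bins into the outer field, exactly the drawback discussed above: most of the twenty rings sit at radii where the excess is negligible.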
Recently, Bartsch et al. (1996) have introduced the weighted-average test.
They define the estimator $r_g$ as
\begin{equation}
r_g={1\over N}\sum^N_{j=1}g(\theta_j),
\end{equation}
where $N$ is the total number of galaxies in the merged field, and $\theta_j$
is the angular distance of each galaxy from the QSO. They show, under certain
general assumptions, that if the galaxies
in the merged field are distributed following a known QSO-galaxy
correlation function $w_{gq}(\theta)$, for $g(\theta) \propto w_{gq}(\theta)$
the quantity $r_g$ is optimized to distinguish such a distribution of
galaxies from a random one. They take
$w_{gq}(\theta)=(0.24+h\theta/deg)^{-2.4}$ from the theoretical results
of \cite{ba94} ($h=H_0/100$ km s$^{-1}$ Mpc$^{-1}$),
and show with simulated fields that this method
is slightly more sensitive than Spearman's
rank order test. However,
when they study the correlation between IRAS galaxies and the QSOs from
the 1Jy catalogue with the weighted-average test they obtain a much higher
significance for their result than using the rank order test.
They conclude that although the IRAS galaxies do not seem to be
clustered around the QSOs following Bartelmann's correlation function,
the weighted average method seems to be a much more efficient estimator than
the rank order test.
This is not surprising if we consider that, when calculating the
estimator $r_g$ (as long as we use a steep enough form for $g(\theta)$),
this test gives a much higher weight to the galaxies closer to the QSO,
that is, to the regions where the excess signal-to-noise is higher. An
extreme case would be to use a top hat function with a given width
$\theta_o$ as $g(\theta)$ (which would be equivalent to counting
galaxies in a central circle of dimension $\theta_o$). This
arbitrariness in the choice of $g(\theta)$ when we do not know
the real shape of the QSO-galaxy correlation is a drawback
of this method. Another problem is that the probability distribution of
$r_g$ is unknown a priori. Because of that, the significance has to be
determined using simulations, and as we have seen before, the real galaxy
distribution is not always easy to know and model.
Nevertheless, when we know theoretically the correlation, this test
should be optimal, and it may also be useful in many other cases.
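A sketch of the weighted-average estimator and its simulation-based significance (toy fields with assumed numbers; the value $h=0.5$ is an arbitrary choice for the example):

```python
import numpy as np

# Weighted-average estimator r_g = (1/N) * sum_j g(theta_j), with
# Bartelmann's form of the expected QSO-galaxy correlation as the weight.

def g_bartelmann(theta_deg, h=0.5):
    return (0.24 + h * theta_deg) ** -2.4

def r_g(theta_deg, g=g_bartelmann):
    return np.mean(g(theta_deg))

# The distribution of r_g is unknown a priori, so the significance must be
# estimated empirically: compare the observed value with r_g over many
# fields of randomly placed galaxies.
rng = np.random.default_rng(1)
theta_obs = 0.25 * np.sqrt(rng.random(2000))   # toy 15' (0.25 deg) field
sims = np.array(
    [r_g(0.25 * np.sqrt(rng.random(2000))) for _ in range(500)]
)
p_value = np.mean(sims >= r_g(theta_obs))
```

Replacing `g_bartelmann` by a top-hat or a power law reproduces the arbitrariness discussed above: each choice of $g(\theta)$ defines a different estimator with a different sensitivity.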
We have applied a variant of the rank order test to study the
distribution of galaxies around our PKS and LBQS samples. We also use
the Spearman's rank order test as statistical estimator (in the
implementation of \cite{nrec}), but instead of fixing a number of
bins and dividing the field in rings of equal surface as in \cite{bar94},
the variables to be correlated will be $w(\theta_i)$ and $\theta_i$,
where $w(\theta_i)$ is the value of the empirically determined
angular correlation function in rings of equal width and
$\theta_i$ is the distance of the bins from the QSO. Now, in general,
each ring will have a different number of galaxies, but the values
of $\theta_i$ are going to be uniformly distributed in radius,
and thus we will not give more weight to the outer regions of the field.
As a statistical estimator we shall use $Z_d$, the number of
standard deviations by which $D$, the so-called sum squared difference of
ranks, deviates from its null-hypothesis expected value. $D$ has an approximately
normal distribution and is defined as
\begin{equation}
D=\sum^N_{i=1}(R_i-S_i)^2
\end{equation}
where $R_i$ is the rank of the radius of the $i$-th ring and $S_i$ is
the rank of the density in that same ring. To avoid the
dependence of the result on the particular number of bins, we have
followed this procedure: we have chosen a minimal ring width
($0.4\arcmin$) in order to have at least $\approx 20$ galaxies in the first
ring, and a maximal width ($0.75\arcmin$)
so that there are at least 20 rings within the field. Then we perform
8 different binnings changing the ring width in steps of $0.05\arcmin$,
estimate $Z_d$ for each of them and calculate its average $<Z_d>$.
This estimator should be very robust, as it does not depend so strongly
on the particular value obtained for a single binning, and the significance can be
estimated directly from the value of $Z_d$ without the need of simulations.
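The procedure can be sketched as follows (synthetic galaxies, not the COSMOS data; for brevity the sketch standardizes Spearman's $\rho$, which under the null is equivalent to standardizing $D$ up to sign):

```python
import numpy as np
from scipy.stats import spearmanr

def z_d(theta, width, r_max=15.0):
    """Anticorrelation of ring surface density with radius, in sigma,
    for rings of equal width (arcmin)."""
    edges = np.arange(0.0, r_max + width, width)
    counts, _ = np.histogram(theta, bins=edges)
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    density = counts / area
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    rho, _ = spearmanr(r_mid, density)
    # Under the null, rho ~ N(0, 1/(n-1)); positive Z_d = central excess.
    return -rho * np.sqrt(len(r_mid) - 1)

rng = np.random.default_rng(2)
theta = 15.0 * rng.random(15000) ** 0.75   # centrally concentrated toy field

# Average Z_d over 8 binnings with ring widths 0.40'-0.75' in 0.05' steps.
widths = np.arange(0.40, 0.76, 0.05)
mean_zd = np.mean([z_d(theta, w) for w in widths])
```

Averaging over the eight binnings is what makes the estimator insensitive to any single, accidentally favourable choice of ring width.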
The value for the PKS field is $<Z_d>=2.33\sigma$, $p=99.0\%$ and for the
LBQS field $<Z_d>=-0.68\sigma$, $p=75.2\%$.
We have also confirmed this by estimating $<Z_d>$ for $10^5$ simulations
with randomly distributed galaxies for each of both fields: the empirical
significance for the PKS field is $p=99.01\%$ whereas the LBQS field gives
$p=72.46\%$. This similarity in the values of the probabilities also
confirms that the distribution of the galaxies in the fields is practically
Poissonian. The QSO-galaxy correlation function for the PKS and LBQS
sample is shown in Fig. 4 and 5 respectively. Error bars are Poissonian
and the bin width is $0.75\arcmin$. In Fig. 4 we also show, without
error bars, the correlation function obtained for the PKS fields with
$|b|>30^o$.
In order to further test our results, we have also applied the Bartsch et al.
(1996) test to our data using Bartelmann's $g(\theta)$ and have
estimated the significance with $10^5$ simulated fields for each
sample with the same number of galaxies as the real fields randomly
distributed.
Strictly speaking this is an approximation, as the galaxies are clustered
among themselves, but we have studied the number of galaxies on rings
of equal surface (excluding a few central rings) and their distribution
is marginally consistent with being Gaussian with a rms $\approx \sqrt{N}$,
which is not surprising if we take into account the large number of fields
contributing to each bin.
The existence of a positive QSO-galaxy correlation for the PKS sample
is detected at a significance level of $98.85 \%$.
On the other hand, when we apply this test to the LBQS
merged field, a slight anti-correlation is found at a level of
$88.85\%$. These results are comparable to the ones obtained with the
previous test. We have also tried other arbitrarily chosen variants of
the function $g(\theta)$ to see the dependence of the significance of
the PKS excess on the precise shape of $g(\theta)$: a Gaussian
with a $2\arcmin$ width (analogous to a box of this size) yields $p=99.66\%$
and a $\propto \theta^{-0.8}$ law (the slope of the
galaxy-galaxy correlation function) gives $p=99.5\%$.
We see that for our galaxies, the shape of $g(\theta)$ proposed
by Bartelmann is not optimal, and the significance depends
sensitively on the shape of the function. However, tinkering with the form
of $g(\theta)$ may be dangerous, as it could lead to creating an
artificially high significance if we overfit the shape of the function
to the data.
Thus, it seems that there is a positive QSO-galaxy correlation
in the PKS fields, and what appears to be a slight anticorrelation in the
LBQS ones. In order to measure how much these two radial distributions
differ, we have performed a series of binnings as the one
described above for our test and defined $q_{i}$ in each ring as
$q_i\propto n^{PKS}_i/n^{LBQS}_i$, where $n^{PKS}_i$ is the number
of objects in each ring of the PKS field and $n^{LBQS}_i$ is the number of
objects in the same bin of the LBQS field, and normalize by the mean
density of each field.
We apply the rank order test to all the resulting sequences of
$q_i$ and bin radii $r_i$ as described
above and find that $<Z_d>=2.77$, $p=99.7\%$. $10^5$ simulations of field
pairs give a significance of $p=99.74\%$. This means that the
density of foreground galaxies around the radio-loud quasars is higher
than in front of the optically selected sample, and is anticorrelated with
the distance from the QSOs at a $99.7\%$ significance level.
\section{Discussion}\label{disc}
As shown above, we have confirmed the existence
of large scale positive correlations between high-redshift radio-loud
QSOs and foreground galaxies, whereas for optically selected QSOs with
the same redshift distribution the correlation is null or even negative.
Can these results be explained by the double magnification bias
mentioned in the introduction? In order to answer this question
the value of the number-counts distribution slopes in expression (1)
must be determined. These slopes can be estimated from the
empirical distribution functions of our QSO samples.
The cumulative number-radio flux distribution for the PKS QSOs is plotted
in Fig. 6. A linear least-squares fit gives an approximate slope
$\alpha^{PKS}_{rad} \approx 1.6$. The histogram of the distribution
of $B_J$ magnitudes for the LBQS and the PKS QSOs is plotted in Fig 7a.
For the PKS QSOs we do not use the magnitudes quoted in
\cite{ver96} as they have been obtained with different filters and
photometric systems. Instead, we have obtained $B_J$ magnitudes
from the ROE/NRL COSMOS/UKST catalog, which should be reasonably
homogeneous and accurate for $16<B_J<20.5$, apart from the intrinsic
variability of the QSOs. Fig. 7a shows that PKS QSOs extend over
a wider range of
magnitudes than the LBQS ones, which have $B_J \lesssim 18.6$.
In Fig 7b we show
the cumulative distributions of both QSO samples, $N(<B_J)$ as a
function of $B_J$. The LBQS distribution (crosses) can be
well approximated by a power law $\propto 10^{0.4\alpha^{LBQS}_{opt}B_J}$
with $\alpha^{LBQS}_{opt}\approx 2.5$.
The PKS distribution (filled squares) is more problematic
and cannot be approximated reasonably by a single power law. Although
at brighter magnitudes it seems to have a slope similar to the LBQS ones,
it quickly flattens and has a long tail towards faint magnitudes.
Due to the incompleteness of the PKS sample, this distribution
can be interpreted in two ways: either the flattening
is caused by the growing incompleteness at fainter optical magnitudes and
the slope of the underlying unperturbed distribution for the radio
loud QSOs is the same as for LBQS ones, or the distribution function
is intrinsically shallow, and we are approximately observing its true form.
Fortunately this is not a critical question; as will be shown below,
the difference between the slope values obtained in both cases is not
enough to change significantly our main conclusions about the causes
of the overdensity. Then, we roughly estimate the optical
slope of the PKS distribution function with a linear
least-squares fit in the range $16 < B_J < 17.75$, which yields
$\alpha^{PKS}_{opt}\approx 1.9$.
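The fits above amount to a straight line in $(B_J,\log_{10}N)$ space. A minimal sketch of how such a fit recovers the slope (synthetic counts with a known slope, not the actual catalogue data):

```python
import numpy as np

# With N(<B_J) proportional to 10**(0.4*alpha*B_J), the number-counts
# slope follows from the straight-line fit of log10 N against B_J:
# alpha = (d log10 N / d B_J) / 0.4.

alpha_true = 2.5
b_j = np.arange(16.0, 18.7, 0.25)
log_n = 0.4 * alpha_true * b_j - 14.0   # exact synthetic power law

slope, intercept = np.polyfit(b_j, log_n, 1)
alpha_fit = slope / 0.4
assert abs(alpha_fit - alpha_true) < 1e-6
```

In practice the choice of fitting range matters, as the PKS case shows: fitting only the bright end ($16<B_J<17.75$) deliberately avoids the faint tail where incompleteness flattens the counts.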
These slopes imply an overdensity of galaxies
around the LBQS and PKS QSOs
\begin{eqnarray}
&q^{LBQS} =\mu^{\alpha_{opt}^{LBQS}-1}\approx \mu^{1.5}\nonumber\\
&\\
&q^{PKS} =\mu^{\alpha_{opt}^{PKS}+\alpha_{rad}^{PKS}-1}\approx \mu^{2.5}\nonumber
\end{eqnarray}
That is, $q^{PKS}/q^{LBQS}\approx\mu$. At e.g. $\theta=2\arcmin$,
for the LBQS we found $q^{LBQS}=0.968\pm0.063$. This yields a value for
the magnification $\mu = 0.98 \pm 0.04$. Then for the PKS QSOs, the
overdensity should be $\approx 0.95 \pm 0.1$. However at $\theta=2\arcmin$,
we measure $q^{PKS}=1.164\pm 0.061$. If we assume that the intrinsic
PKS $B_J$ number-counts slope is the same as for the LBQS QSOs,
$\alpha_{opt}^{PKS}=2.5$, we still cannot make both overdensities
agree with a same magnification.
In order to obtain these results with 'pure' gravitational lensing, a
slope $\alpha_{opt}^{PKS}> 4$ would be needed. For
smaller scales, the situation does not change, since $q^{PKS}/q^{LBQS}$
is still higher. Therefore, we must conclude that it is unlikely
that the multiple magnification bias alone explains the results found.
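The arithmetic behind this inconsistency is easy to verify with the overdensities quoted above:

```python
# 'Pure lensing' check: infer mu from the measured LBQS overdensity at
# 2 arcmin and predict the PKS overdensity that the same mu would give.

q_lbqs = 0.968                 # measured LBQS overdensity
mu = q_lbqs ** (1.0 / 1.5)     # q_LBQS = mu**1.5  ->  mu ~ 0.98
q_pks_pred = mu ** 2.5         # q_PKS = mu**2.5   ->  ~ 0.95
q_pks_obs = 1.164              # measured PKS overdensity

# Lensing alone predicts a (slight) deficit where an excess is observed.
assert q_pks_pred < 1.0 < q_pks_obs
```

Since $\mu<1$ is implied by the LBQS measurement, any positive exponent predicts $q^{PKS}<1$, so no plausible counts slope reconciles the two measurements through magnification alone.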
As mentioned above, some authors have explained the optically
selected QSO-cluster anticorrelations as due to the existence of dust
in clusters (\cite{mao95} and references therein). What would be
the expected overdensity when we consider the combined effects of
magnification and dust absorption? Let us consider a patch of the sky
which has an average magnification of $\mu$ for background sources and an
average flux extinction of $\tau$ for a given optical band, i.e.
the observed flux $S$ from the background sources in that band would be
$S\approx(1-\tau)S_o$, where $S_o$ is the flux that we would measure in the
absence of absorption. If we consider that the radio emission suffers no
attenuation by the dust, the overdensity estimations for our samples would be
\begin{eqnarray}
&q^{PKS}=\mu^{\alpha_{opt}^{PKS}+\alpha_{rad}^{PKS}-1}(1-\tau)^
{\alpha_{opt}^{PKS}}
\approx \mu^{2.5}(1-\tau)^{1.9}\nonumber\\
&\\
&q^{LBQS}=\mu^{\alpha_{opt}^{LBQS}-1}(1-\tau)^{\alpha_{opt}^{LBQS}}
\approx \mu^{1.5}(1-\tau)^{2.5}\nonumber
\end{eqnarray}
Therefore, from our results on scales of $2\arcmin$ we find
$\mu\approx 1.139$, and $\tau\approx 0.089$. This extinction is
consistent with the results of \cite{mao95}.
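The quoted values follow directly from the two overdensity relations above, which become linear in $\ln\mu$ and $\ln(1-\tau)$ after taking logarithms. A sketch of the solution using the measured overdensities at $2\arcmin$:

```python
import numpy as np

# Solve the dust-plus-magnification system:
#   q_PKS  = mu**2.5 * (1 - tau)**1.9 = 1.164
#   q_LBQS = mu**1.5 * (1 - tau)**2.5 = 0.968
# Taking logs makes it linear in ln(mu) and ln(1 - tau).

A = np.array([[2.5, 1.9],
              [1.5, 2.5]])
b = np.log([1.164, 0.968])     # measured overdensities at 2 arcmin

ln_mu, ln_one_minus_tau = np.linalg.solve(A, b)
mu = np.exp(ln_mu)                       # ~ 1.14
tau = 1.0 - np.exp(ln_one_minus_tau)     # ~ 0.09
```

Because the two samples weight the magnification and extinction exponents differently, the pair of measurements is exactly what is needed to separate the two effects, fragile as the resulting numbers are.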
Although these values should be considered only as rough estimates,
they show that considering dust absorption together with the multiple
magnification bias produces new qualitative effects in the
behavior of the overdensities of the different QSO types.
The strength of the overdensity is attenuated in both samples
of QSOs, but the effect is stronger for the LBQS QSOs,
which have a steeper optical counts slope. If we consider that
the dust approximately traces the matter potential wells acting as lenses,
i.e. that there is a strong correlation between magnification and
extinction, the QSOs which are more magnified are also the ones which are
more absorbed. However, if the product $\mu(1-\tau)$ is greater than
unity, the net effect for each QSO will be a flux increment.
An alternative explanation is the existence of the bias
suggested by \cite{rm92} and \cite{mao95}. They interpret that
the avoidance of foreground clusters by optically selected QSOs is
probably a selection effect due to the difficulty in identifying quasars
in crowded fields. In that case, apart from the slight
QSO-galaxy anticorrelation generated by this effect,
the LBQS samples would avoid the zones where the lensing by the large
scale structure is stronger and thus their average magnification $\mu$
would be smaller than that of the PKS, which would not be affected by
this selection bias. Besides, if dust and magnification are correlated,
radio-loud QSOs would also be more reddened on average than optically
selected QSOs.
Regarding flat-spectrum QSOs, if we set an additional constraint for
our QSOs, $\gamma > -0.5$, where $\gamma$ is the slope of the
spectral energy distribution, the resulting sample of 107 $z>0.3$ QSOs
should be a fair representation of the radio-loud sample used by
\cite{web95} for the study of reddening in QSOs. We apply again our rank
correlation test to the field obtained by merging these 107 fields and
conclude that the COSMOS/UKST galaxies are correlated with flat-spectrum
QSOs at a $98.5\%$ level. The QSO-galaxy correlation function is plotted
in Fig. 8 with $0.75 \arcmin$ bins. The value of the overdensity
is similar, but as we have now fewer fields, the significance level is lower.
Nevertheless, if we take into account the small
amounts of dust allowed by the observations of \cite{mao95}, it seems very
unlikely that all the reddening measured by \cite{web95} for the PKS QSOs
is due to dust absorption by foreground galaxies, although in some cases
this effect could contribute considerably, as has been shown by
\cite{sti96}. This question could be clarified by cross-correlating
the reddening of the QSOs with the density of foreground galaxies on
small scales.
\section{Conclusions}
We have studied the clustering of galaxies from the ROE/NRL
COSMOS/UKST catalogue up to $15\arcmin$ scales around two QSO
samples with $z>0.3$. One of them contains 144 radio-loud QSOs from
the Parkes Catalogue, and the other is formed by 167 optically selected
QSOs obtained from the Large Bright Quasar Survey.
There is a $\approx 99.0\%$ significance level excess of COSMOS
$B_J<20.5$ galaxies around the PKS QSOs, whereas there is a slight
defect of galaxies around the LBQS QSOs. We have compared the distribution
of galaxies around both samples, and
found that there is an overdensity around the PKS sample with respect
to the LBQS sample anticorrelated with the distance from the QSO at a $99.7\%$
significance level. Whilst this result could be thought to agree
qualitatively with the theoretical predictions of the multiple
magnification bias effect, we show that it is difficult to
explain it through gravitational lensing effects alone, and dust in the
foreground galaxies and/or selection effects in the detection of LBQS
QSOs must be considered.
Finally, we have established that the lines of sight to
PKS flat-spectrum QSOs go through significantly higher foreground
galaxy densities than the directions to LBQS quasars. This may be
related, at least partially, to the reddening of the PKS
QSOs observed by \cite{web95}.
\acknowledgements
The authors acknowledge financial support from the Spanish DGICYT,
project PB92-0741. NB acknowledges a Spanish M.E.C. Ph.D. fellowship.
The authors are grateful to Tom Broadhurst, Jos\'e Luis Sanz
and Ignacio Ferreras for carefully reading the manuscript and
making valuable suggestions, and Sebastian Sanchez for useful
comments. They also thank D.J. Yentis for his help.
\section{Introduction.}
\smallskip
A characteristic feature of the classical equations of General Relativity is the
property of General Covariance; i.e that the equations are covariant under
differentiable re-definitions of the space-time co-ordinates. In the first of a
series of papers investigating a class of covariant equations which
Jan Govaerts and the first author, which we called `Universal Field Equations'
~{\cite{fai}--\cite{5}} we floated the idea that these equations could be employed
as a model for space time co-ordinates. It is one object of this paper to explore
this idea in somewhat greater depth. This is a purely classical discussion of
a way of describing a co-ordinate system which is sufficiently flexible to
admit the general class of functional redefinitions implied by covariance.
It has nothing to do with quantum effects
like the concept of a minimum compactification radius due to T duality which
rules out the notion of an infinitely precise point in space time. Here the
discussion will remain entirely classical and will explore the idea that the
space-time co-ordinates in $D$ dimensions may be represented by `flat' co-ordinates
in $D+1$ dimensions, which transform under the conformal group in
$D+1$ dimensions. There are, however, two ways to implement general covariance;
one by the use of covariant derivatives, and the other by exploiting properties of
determinants. In a second application the `Universal Field Equations'
may be regarded as describing membranes, by reversing the roles of fields and
base-co-ordinates. Then the covariance of fields becomes the reparametrisation invariance
of the new base space.
\section{Multifield UFE}
Suppose $X^a(x_i),\ a=1,\dots ,D ,\ i=1,\dots ,D+1 $ denotes a set of $D$ fields,
in $D+1$ dimensional space. They may be thought of as target space co-ordinates
which constitute a mapping from a $D+1$ dimensional base space co-ordinatized by the
independent variables $x_i$. Introduce the notation $\displaystyle{X^a_i=\frac{\partial X^a}
{\partial x_i},\ X^a_{ij}=\frac{\partial^2 X^a}{\partial x_i\partial x_j}}$. In addition, let
$J_k$ denote the Jacobian
$\displaystyle{\frac{\partial (X^1,X^2,\dots,X^D)}{\partial(x_1,\dots,\hat x_k\dots ,x_{D+1})}}$
where $x_k$ is the independent variable which is omitted in $J_k$.
Now suppose that the vector field $X^a$ satisfies the equations of motion
\begin{equation}
\sum_{i,k}J_iJ_kX^a _{ik}=0.
\label{eqmo}
\end{equation}
This is a direct generalisation of the Bateman equation to $D$ fields in $D+1$
dimensions, \cite{fai}, and may be written in terms of the determinant of a bordered
matrix where the diagonal blocks are of dimensions $D\times D$ and $D+1\times D+1$
respectively as
\begin{equation}
\det\left\|\begin{array}{cc} 0&\frac{\partial X^a}{\partial x_k}\\
\frac{\partial X^b}{\partial x_j}&\sum\lambda_c\frac{\partial^2 X^c}{\partial x_j\partial x_k}\end{array}\right\|=0.
\label{deqmo}
\end{equation}
The coefficients of the arbitrary constant parameters $\lambda_c$ set to
zero reproduce the $D$ equations (\ref{eqmo}). The solutions of these equations
can be verified to possess the property that any functional redefinition of
a specific solution is also a solution; i.e. the property of general covariance.
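As a quick symbolic sanity check (our illustration, not part of the original derivation), the simplest case $D=1$, $D+1=2$ of (\ref{eqmo}) is the original Bateman equation, and a weight-zero function such as $X=x_2/x_1$ can be verified to solve it; the signs chosen below for the $1\times 1$ minors $J_k$ are one consistent convention:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
X = x2 / x1                          # weight-zero homogeneous trial solution
# Signed 1x1 minors, omitting x1 and x2 respectively (assumed convention)
J1 = sp.diff(X, x2)
J2 = -sp.diff(X, x1)
# sum_{i,k} J_i J_k X_{ik} for D = 1: the Bateman operator
lhs = (J1**2 * sp.diff(X, x1, 2)
       + 2 * J1 * J2 * sp.diff(X, x1, x2)
       + J2**2 * sp.diff(X, x2, 2))
assert sp.simplify(lhs) == 0
```

Replacing $X$ by any differentiable function $f(x_2/x_1)$ leaves the check valid, which illustrates the covariance property just stated.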
A remarkable feature of (\ref{eqmo}) is that the equations admit infinitely many
inequivalent Lagrangian formulations. Suppose ${\cal L}$ depends upon
the fields $X^a$ and their first derivatives $X^a_j$ through the Jacobians subject
only to the constraint that ${\cal L}(X^a,\ J_j)$ is a homogeneous function of the
Jacobians, i.e.
\begin{equation}
\sum_{j=1}^{D+1}J_j\frac{\partial {\cal L}}{\partial J_j}={\cal L}.
\label{hom}
\end{equation}
Then the Euler variation of ${\cal L}$ with respect to the field $X^a$ gives
\begin{eqnarray} \label{euler}
& &\frac{\partial{\cal L}}{\partial X^a}-\frac{\partial}{\partial x_i}\frac{\partial{\cal L}}{\partial X^a_i}\nonumber\\
&=&
\frac{\partial{\cal L}}{\partial X^a}-\frac{\partial}{\partial x_i}\frac{\partial{\cal L}}{\partial J_j}\frac{\partial J_j}
{\partial X^a_i}\\
&=&\frac{\partial{\cal L}}{\partial X^a}-\frac{\partial^2{\cal L}}{\partial X^b\partial J_j}\frac{\partial J_j}
{\partial X^a_i}X^b_i-\frac{\partial{\cal L}}{\partial J_j}\frac{\partial^2 J_j}{\partial X^a_i\partial X^b_k}
X^b_{ik}-\frac{\partial^2{\cal L}}{\partial J_j\partial J_k}\frac{\partial J_j}{\partial X^a_i}\frac{\partial J_k}
{\partial X^b_r}X^b_{ir}.\nonumber
\end{eqnarray}
The usual convention of summing over repeated indices is adhered to here.
Now by the theorem of false cofactors
\begin{equation}
\sum_{j=1}^{D+1}\frac{\partial J_k}{\partial X^a_j}X^b_j = \delta_{ab}J_k.
\label{falseco}
\end{equation}
Then, exploiting the homogeneity of ${\cal L}$ as a function of $J_k$ (\ref{hom}),
the first two terms in the last line of (\ref{euler}) cancel, and the term
$\displaystyle{\frac{\partial{\cal L}}{\partial J_j}\frac{\partial^2 J_j}{\partial X^a_i\partial X^b_k}
X^b_{ik}}$ vanishes by symmetry considerations. The remaining term,
$\displaystyle{\frac{\partial^2{\cal L}}{\partial J_j\partial J_k}\frac{\partial J_j}{\partial X^a_i}
\frac{\partial J_k}{\partial X^b_r}X^b_{ir}}$, may be simplified as follows.
Differentiation of the homogeneity equation (\ref{hom}) gives
\begin{equation}\sum^{D+1}_{k=1} \frac{\partial^2{\cal L}}{\partial J_j\partial J_k}J_k = 0.
\label{hom1}
\end{equation}
But since $\sum_k J_kX^a_k=0,\ \forall a$, this, together with symmetry,
implies that the linear equations (\ref{hom1}) can be solved by
\begin{equation}
\frac{\partial^2{\cal L}}{\partial J_i\partial J_j}= \sum_{a,b}X^a_i d^{ab}X^b_j,
\label{hom2}
\end{equation}
for some functions $d^{ab}$. Inserting this representation into (\ref{euler}) and
using a result similar to (\ref{falseco}),
\begin{equation}
\sum_{j=1}^{D+1}\frac{\partial J_j}{\partial X^a_k}X^b_j = -\delta_{ab}J_k.
\label{false}
\end{equation}
Then, assuming $d^{ab}$ is invertible, as is the
generic case, the last term reduces to $\sum_{i,k}J_iJ_kX^a _{ik}$,
which, set to zero, is just the equation of motion (\ref{eqmo}).\footnote
{This calculation, without the $X^a$ dependence of the Lagrangian, can already be
found in \cite{fai}; the new aspect here is the extension to include the fields
themselves, following the single field example of \cite{jam}.}
\subsection{Iteration}
This procedure may be iterated. Given a transformation described by equation
(\ref{eqmo}) from a base space of $D+2$ dimensions with co-ordinates $x_i$ to a
target space of $D+1$ dimensions with co-ordinates $Y_j$, which in turn are used as a
base space for a similar transformation to co-ordinates $X_k,\ k=1\dots D$, the
mapping from $D+1$ dimensions to $D$ is given in terms of the determinant of a
bordered matrix of similar form to (\ref{deqmo}), where the diagonal blocks are
of dimensions $D\times D$ and $D+2\times D+2$ respectively;
\begin{equation}
\det\left\|\begin{array}{cc} 0&\frac{\partial X^a}{\partial x_k}\\
\frac{\partial X^b}{\partial x_j}&\sum\lambda_c\frac{\partial^2 X^c}{\partial x_j\partial x_k}
\end{array}\right\|=0.
\label{feqmo}
\end{equation}
The equations, which form an overdetermined set, are obtained by requiring that
the determinant vanishes for all choices of $\lambda_c$.
Further iterations yield the multifield UFE, discussed in \cite{4}, and the
Lagrangian description is given by an iterative procedure.
\subsection{\bf Solutions.}
There are various ways to approach the question of solutions. Consider
the multifield UFE;
\begin{equation}
\label{1}
{\rm det}\,\left\|
\begin{array}{cccccc}
0&\ldots&0&X^1_{x_1}&\ldots&X^1_{x_d}\\
\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\
0&\ldots&0&X^n_{x_1}&\ldots&X^n_{x_d}\\
X^1_{x_1}&\ldots&X^n_{x_1}&\sum_{i=1}^n\lambda_iX^i_{x_1x_1}&\ldots&\sum_{i=1}^n
\lambda_iX^i_{x_1x_d}\\
\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\
X^1_{x_d}&\ldots&X^n_{x_d}&\sum_{i=1}^n\lambda_iX^i_{x_1x_d}&\ldots&\sum_{i=1}^n
\lambda_iX^i_{x_dx_d}
\end{array}\right \|=0,
\end{equation}
where $\lambda_1,\dots,\lambda_n$ are arbitrary constants, and the functions
$X^1,\dots,X^n$ are independent of $\lambda_i$. The equations which result
from setting the coefficients of the monomials of degree $d-n$ in $\lambda_i$
in the expansion of the determinant to zero form an overdetermined set, but,
as we shall show, this set possesses many nontrivial solutions.
\noindent
The equation (\ref{1}) may be viewed as a special case of the Monge-Amp\`ere
equation in $d+n$ dimensions, namely
\begin{equation}
\label{2}
{\rm det}\, \left \|{\partial^2 u\over
\partial y_i\,\partial y_j}\right \|^{d+n}_{i,j=1}=0 .
\end{equation}
Equation (\ref{1}) results from the restriction of $u$ to have the form
\begin{equation}
u(y_k)=u(x_1,\dots,x_d,\lambda_1,\dots,\lambda_n)=\sum_{i=1}^n\lambda_iX^i,
\label{three}
\end{equation}
where we have set
\begin{equation}
y_i=x_i,\ i=1,\dots,d,\ \ y_{j+d}=\lambda_{j},\ j=1,\dots,n.
\label{monge}
\end{equation}
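This reduction can be checked symbolically in the smallest nontrivial case $d=2$, $n=1$ (our worked example): with $u=\lambda X(x_1,x_2)$ the $3\times 3$ Hessian determinant in $(x_1,x_2,\lambda)$ equals $-\lambda$ times the Bateman operator, so its vanishing for all $\lambda$ reproduces (\ref{1}):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
X = sp.Function('X')(x1, x2)
u = lam * X                                  # restriction (three) for d=2, n=1
ys = (x1, x2, lam)
H = sp.Matrix(3, 3, lambda i, j: sp.diff(u, ys[i], ys[j]))
# Bateman operator on X
bateman = (sp.diff(X, x2)**2 * sp.diff(X, x1, 2)
           - 2 * sp.diff(X, x1) * sp.diff(X, x2) * sp.diff(X, x1, x2)
           + sp.diff(X, x1)**2 * sp.diff(X, x2, 2))
assert sp.expand(H.det() + lam * bateman) == 0
```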
Now the Monge-Amp\`ere equation is equivalent to the statement that there
exists a functional dependence among the first derivatives $u_{y_i}$ of $u$ of the form
\begin{equation}
F(u_{y_1},\dots,u_{y_{d+n}})=0,
\label{monge2}
\end{equation}
where $F$ is an arbitrary differentiable function. Methods for the solution of
this equation are known~\cite{ren,dbf}.
Returning to the target space variables $X^j$, this relation becomes
\begin{equation}
\label{4}
F\left(\underbrace{\sum_{i=1}^n\lambda_iX^i_{x_1}}_{\omega_1},\dots,
\underbrace{\sum_{i=1}^n\lambda_iX^i_{x_d}}_{\omega_d},\ X^1,\ldots,X^n\right)=0.
\end{equation}
\section{Exact Solutions of the UFE}
\subsection{Implicit Solutions}
The general representation of a solution of this set of constraints which does
not depend upon the parameters $\lambda_i$ evades us; however, there are two
circumstances in which a solution may be found. In the first case a class of
solutions in implicit form may be obtained by taking $F$ to be linear in the
first $d$ arguments $\omega_i$.
Then
\begin{equation}
F=\sum^d_{i=1}f_i(X^1,\ldots,X^n)\omega_i=0.
\label{linear}
\end{equation}
It can be proved that this is the generic situation for the cases of two and
three fields. In general, provided there are terms linear in $\lambda_i$ in
$F$, as the $X^i$ do not depend
upon $\lambda_i$, one expects that as a minimal requirement the
terms in $F$ linear in $\lambda_i$ will vanish for a solution.
Equating each coefficient of $\lambda_i$ in (\ref{linear}) to zero, we obtain
the following system of partial differential equations
\begin{equation}
\sum^d_{i=1}f_i(X^1,\ldots,X^n)X^j_{x_i}=0,\ \ j=1,\dots,n.
\label{system}
\end{equation}
The general solution of these equations may be represented in terms of $n$ arbitrary
smooth functions $R^j$, where
\begin{equation}
R^j(f_dx_1-f_1x_d,\dots,f_dx_{d-1}-f_{d-1}x_d,\ X^1,\dots,X^n)=0.
\end{equation}
The solution of these equations for $X^i$ gives a wide class of solutions to
the UFE.
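In the simplified case of constant coefficients $f_i$ (an assumption for illustration; in (\ref{system}) the $f_i$ may depend on the fields, making the solution implicit), the method of characteristics gives the solution explicitly. For $d=2$, $n=1$, any function of $f_2x_1-f_1x_2$ solves the system:

```python
import sympy as sp

x1, x2, f1, f2 = sp.symbols('x1 x2 f1 f2', positive=True)
F = sp.Function('F')
X = F(f2 * x1 - f1 * x2)                 # characteristic-variable ansatz
# residual of f1 X_{x1} + f2 X_{x2} = 0
residual = f1 * sp.diff(X, x1) + f2 * sp.diff(X, x2)
assert sp.simplify(residual) == 0
```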
\subsection{Explicit Solution.}
There is a wide class of explicit solutions to the UFE. They are simply given by
choosing $X^j(x_1,\dots,x_d)$ to be a homogeneous function of $x_j$ of weight zero, i.e.
\begin{equation}
\sum_{k=1}^dx_k\frac{\partial X^j}{\partial x_k}=0,\ \ j=1,\dots,n.
\label{explicit}
\end{equation}
The proof of this result depends upon differentiation of (\ref{explicit}) with respect
to the $x_i$.
A particularly illuminating example is the case of spherical polars;
in $d=3,\ n=2$ take
\begin{equation}
X^1=\phi =\arctan\left(\frac{x_3}{\sqrt{x_1^2+x_2^2}}\right) ;\
X^2=\theta =\arctan\left(\frac{x_2}{x_1}\right).
\label{sphere}
\end{equation}
Then these co-ordinates satisfy (\ref{feqmo}).
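That $\phi$ and $\theta$ are indeed homogeneous of weight zero, i.e. satisfy (\ref{explicit}), can be verified symbolically (a check we add for illustration):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
phi = sp.atan(x3 / sp.sqrt(x1**2 + x2**2))
theta = sp.atan(x2 / x1)
for X in (phi, theta):
    # Euler operator: sum_k x_k dX/dx_k must vanish for weight zero
    euler = sum(x * sp.diff(X, x) for x in (x1, x2, x3))
    assert sp.simplify(euler) == 0
```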
\section{Conclusions}
A wide class of solutions to the set of UFE which are generally covariant
has been obtained. In order to adapt the theory to apply to possible
integrable membranes, it is necessary to interchange the roles of dependent
and independent variables, so that general covariance becomes reparametrisation
invariance of the base space~\cite{3}. In order to invert the dependent and
independent variables in this fashion, it is necessary first to augment the
dependent variables by some additional $d-n$ fields $Y_k(x_i)$, and then consider
the $x_i$ as functions of $X_j,\ j=1\dots n$. Although in principle $x_i$ could
also depend upon the artificial variables $Y_k,\ k=1\dots d-n$, we make the
restriction that this does not occur (see \cite{3} for further details).
In this case the
variables $x_j$ play the role of target space for an $n$-brane, dependent upon
$n$ co-ordinates $X^j$. Since it is fully reparametrisation invariant, it may
play some part in the further understanding of string theory, but
this is by no means clear.
\section{Acknowledgement}
Renat Zhdanov would like to thank the ``Alexander von Humboldt Stiftung''
for financial support.
\newpage
\section*{}
Radiation hydrodynamics \cite{Pomraning} is a subject of great interest in
astrophysics, cosmology and plasma physics. However, the numerical
methods proposed to solve the transfer equation for the specific
radiation intensity $\cal{I}$ are in many cases computationally too
expensive. Therefore, one usually considers the equations for the moments of
$\cal{I}$ up to a given order $m$ \cite{Lever,Anile}. Due to the
dependence of the equation for the moment $m$ on the moment $m+1$ one needs to
introduce a closure relation. If only the energy density $u$ ($m=0$) and
the energy density flux $\vec{J}$ ($m=1$) are considered, one must introduce a
closure relation for the pressure tensor ${\bf P}$ ($m=2$). The usual
procedure is to introduce the so-called Eddington factor $\chi$ defined by:
\begin{equation}
{\bf P} = u \left [\frac{1-\chi}{2} {\bf U} +
\frac{3\chi-1}{2} {\vec{n}~\vec{n}} \right ],
\label{Edfact}
\end{equation}
where ${\bf U}$ is the identity matrix, ${\vec{n}} := \frac{\vec{
f}}{f}$ and $\vec{f}$ the normalized energy flux defined as ${\vec f}
:= \frac{\vec{J}}{cu}$. In the limit of isotropic radiation
(Eddington limit), $\chi(f=0)=1/3$, while in the free streaming case
$\chi(f=1)=1$. A number of different expressions for the Eddington
factor have been introduced in the literature \cite{Lever} by
interpolating between these limiting
cases. Some of them have been obtained from maximum entropy
principles. For instance, in \cite{Anile,Kremer} radiation under an
energy flux is studied by exploiting the entropy inequality,
i.e. by maximizing a generalized flux-dependent entropy under a set
of constraints, and an Eddington factor given by
\begin{equation}
\label{Edinf}
\chi = \frac{5}{3} - \frac{2}{3} \sqrt{4-3f^2}
\end{equation}
is obtained. The same result is recovered in \cite{QK} from a
information theoretical formalism, whereas different versions of this
formalism have been used in \cite{Todos,Fu} to obtain other variable
Eddington factors.
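As a quick numerical check (ours, for illustration), the maximum-entropy Eddington factor (\ref{Edinf}) does interpolate between the two limits quoted above:

```python
from math import isclose, sqrt

def chi(f):
    """Eddington factor of eq. (Edinf): chi = 5/3 - (2/3) sqrt(4 - 3 f^2)."""
    return 5.0 / 3.0 - (2.0 / 3.0) * sqrt(4.0 - 3.0 * f**2)

assert isclose(chi(0.0), 1.0 / 3.0)   # isotropic (Eddington) limit
assert isclose(chi(1.0), 1.0)         # free-streaming limit
assert chi(0.5) > chi(0.0)            # monotonic interpolation in between
```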
Thus, we observe that a flux-dependent pressure tensor of the form
\begin{equation}
\label{presion}
{\bf P}=a(u,J^2) {\bf U} + b(u,J^2) \vec{J} \vec{J},
\end{equation}
has been widely employed in radiative transfer. In order to obtain such a
dependence, which apparently departs from local equilibrium, some
authors \cite{QK,Kremer,Fu} have considered a flux-dependent
generalized entropy and Gibbs relation. In addition,
(\ref{presion}) also appears in the study of an
ultrarelativistic ideal gas under an energy flux by means of
Information Theory \cite{QK}.
However, in spite of what is claimed by the authors in \cite{QK,Anile,Kremer},
(\ref{Edinf}) can be obtained for the two simple cases of radiation and
an ultrarelativistic gas without abandoning the local equilibrium
hypothesis. The equations of state and entropies appearing in
\cite{QK,Anile,Kremer} may be recovered as well.
First of all, let us notice that these systems are submitted to an {\it energy
flux}, and not to a {\it heat flux},
because the condition of null global velocity has not been imposed.
Therefore, it is not difficult to show that the considered situation
corresponds to equilibrium (i.e., a purely advective energy
flux), in contradiction to what is assumed in
\cite{QK,Anile,Kremer}. In fact, due to the symmetry of the energy
momentum tensor of a relativistic system (i.e. $T^{\mu \nu}=T^{\nu
\mu}$), the energy flux $\vec{J}$ verifies
\begin{eqnarray}
\label{Flux}
\vec{J}=c^2 \vec{P}.
\end{eqnarray}
This property can also be obtained directly for a system of ideal
relativistic particles, with energy $\epsilon_i=\sqrt{m^2c^4+p_i^2c^2}$ and
velocity $\vec{v_i}=c \frac{\vec{p_i}}{\sqrt{p_i^2 + m^2 c^2}}$. The
total energy flux is
\begin{equation}
\label{Micro1}
\vec{J}=\sum \epsilon_i \vec{v_i} =:\sum \vec{j_i},
\end{equation}
and introducing the expressions for $\epsilon_i$ and $\vec{v_i}$, it can be
readily verified that
\begin{equation}
\label{Micro}
\vec{j_i}= \epsilon_i \vec{v_i} =c^2 \vec{p_i}.
\end{equation}
Therefore, equation (\ref{Flux}) holds for this system.
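The single-particle relation (\ref{Micro}) follows from the two expressions quoted above; a one-line symbolic verification (added for illustration):

```python
import sympy as sp

m, c, p = sp.symbols('m c p', positive=True)
eps = sp.sqrt(m**2 * c**4 + p**2 * c**2)     # particle energy
v = c * p / sp.sqrt(p**2 + m**2 * c**2)      # particle speed
# eps * v = c^2 * p, eq. (Micro)
assert sp.simplify(eps * v - c**2 * p) == 0
```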
Following (\ref{Flux}), the equations of state for a system submitted
to an energy flux $\vec{J}$ (without any additional restriction on
the particle flow) must be the same as the equilibrium equations of
state of a moving system with momentum $\vec{P}$, which can be obtained
by simply performing a Lorentz boost to an equilibrium system at rest.
Therefore, the systems considered in \cite{QK,Anile,Kremer}
are nothing else but moving equilibrium systems.
The distribution functions of both cases can be obtained by the use of
Lorentz transformations as follows. The rest frame ($K_0$)
equilibrium distribution function can be written as:
\begin{equation}
\label{f0}
f=\frac{g}{e^{\alpha_0+\beta_0 \epsilon_0}+a},
\end{equation}
where $g$ is the degeneracy, $\alpha_0=-\beta_0\mu_0$ and
$a=-1$ for bosons, $a=1$ for fermions, $a=0$ for particles
obeying Boltzmann's statistics and $a=-1, \mu_0=0$ for photons and
phonons. We consider the cases of radiation
and a classical ideal ultrarelativistic gas, so $\epsilon_0=p_0c$.
An observer at rest in a frame $K$ moving with momentum $-\vec{P}$ and
velocity $-\vec{V}$ with respect to the $K_0$ frame measures an
energy $\epsilon=pc$ for a particle with momentum $p$ (and velocity
$\vec{c}$) that verifies:
\begin{equation}
\label{Lorentz}
\epsilon_0=\gamma\left(\epsilon-\vec{V}\vec{p}\right).
\end{equation}
Substitution of (\ref{Lorentz}) in (\ref{f0}) gives
\begin{equation}
\label{f1}
f=\frac{g}{e^{\alpha+\beta \epsilon+\vec{I}\vec{p}c^2}+a},
\end{equation}
where we have defined $\beta:=\gamma \beta_0$ and
$\vec{I}:=-\beta\vec{V}/c^2$. Note that
$(\beta,\vec{I}c)$ is the so-called coldness 4-vector.
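The substitution leading from (\ref{f0}) to (\ref{f1}) amounts to the identity $\beta_0\epsilon_0=\beta\epsilon+\vec{I}\cdot\vec{p}\,c^2$ with $\beta=\gamma\beta_0$ and $\vec{I}=-\beta\vec{V}/c^2$; restricted to one spatial dimension (our simplification) this can be checked symbolically:

```python
import sympy as sp

eps, gamma, beta0, V, p, c = sp.symbols('epsilon gamma beta_0 V p c',
                                        positive=True)
eps0 = gamma * (eps - V * p)                 # Lorentz-transformed energy
beta = gamma * beta0
I = -beta * V / c**2
# beta_0 eps_0 = beta eps + I p c^2
assert sp.simplify(beta0 * eps0 - (beta * eps + I * p * c**2)) == 0
```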
If we now use (\ref{Micro}) we obtain
\begin{equation}
\label{f}
f=\frac{g}{e^{\alpha+\beta \epsilon+\vec{I}\vec{j}}+a}.
\end{equation}
Now, we can recover the distribution function used in
\cite{QK} for radiation by simply setting $a=-1$,
$\alpha=0$, $\epsilon=pc$, $\vec{j}=\epsilon \vec{c}$, $g=2$:
\begin{equation}
\label{frad}
f=\frac{2}{e^{\beta pc+\vec{I}\vec{c}pc}-1},
\end{equation}
whereas for the classical ultrarelativistic gas we obtain the
distribution function proposed in \cite{QK} setting $a=0$.
Once this distribution function is fully justified, the whole
procedure in \cite{QK} holds.
Thus, the results obtained in \cite{QK,Anile,Kremer} are recovered and, in
particular, defining the pressure tensor as the mean value of the operator
\begin{equation}
\hat{P}_{\alpha \beta} := V^{-1} \sum_{i=1}^{N} p_i^{\alpha} v_i^{\beta},
\end{equation}
and using (\ref{Edfact}), (\ref{Edinf}) is obtained.
However, the physical interpretation given by this derivation is
completely different from that given in \cite{QK,Anile,Kremer}.
Clearly, the distribution of particles is anisotropic in the frame $K$
due to the additional vectorial constraint (the global momentum
$\vec{P}$). The distribution function (\ref{frad}) allows the study of
anisotropic equilibrium radiation (being the anisotropy due to the
relative motion) but not the study of nonequilibrium situations. We
propose the following heuristic argument to understand the physical
situation. Although in \cite{QK,Anile,Kremer} different methods were used
to arrive at the equations of state of nonequilibrium radiation
submitted to an energy flux, the authors never imposed the constraint
of no global motion of the system. Therefore, when they made use of the
condition of maximum entropy, they found an equilibrium moving system
because equilibrium situations have the maximum entropy and the
moving system verifies the imposed constraint of nonzero energy
flux.
Let us remark on another interesting feature of (\ref{frad}), related to
the physical meaning of temperature in this moving system.
The distribution function (\ref{frad}) can also be viewed as a Planck
distribution with an effective $\beta_{ef}$ given by
\begin{equation}
\beta_{ef} := \beta + I c \cos \theta = \beta (1 - \frac{V}{c} \cos \theta).
\end{equation}
This expression is used, for instance, in cosmology in the study of
the Cosmic Microwave Background Radiation (CMBR) in order to take
into account the relative movement between the Earth and the
reference frame defined by the CMBR.
By averaging over the angular dependence with the distribution
function (\ref{frad}), one obtains
\begin{equation}
< \beta_{ef} > = \beta \left (1 - \frac{I^2c^2}{\beta^2} \right) =
\frac{\beta}{\gamma^2} = \frac{\beta_0}{\gamma},
\end{equation}
so it is possible to define an effective mean temperature given by
\begin{equation}
T_{ef} := \frac{1}{k_B \beta_{ef}} = \gamma T_0,
\end{equation}
where $T_0:=\frac{1}{k_B \beta_0}$.
Therefore, $T_{ef}$ is found to simply be the Lorentz transformation
of $T_0$, according to Ott's transformation law \cite{Neuge}. This
gives a simple interpretation of Ott's temperature, whose physical
basis was the subject of controversy during the sixties \cite{Yuen}.
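The chain of equalities $\langle\beta_{ef}\rangle=\beta(1-I^2c^2/\beta^2)=\beta/\gamma^2=\beta_0/\gamma$ (note that the dimensionless combination entering $\beta_{ef}$ is $Ic/\beta=-V/c$) can be verified symbolically; this algebraic check is ours, added for illustration:

```python
import sympy as sp

V, c, beta0 = sp.symbols('V c beta_0', positive=True)
gamma = 1 / sp.sqrt(1 - V**2 / c**2)
beta = gamma * beta0
I = -beta * V / c**2
avg = beta * (1 - (I * c)**2 / beta**2)      # <beta_ef> = beta (1 - V^2/c^2)
assert sp.simplify(avg - beta / gamma**2) == 0
assert sp.simplify(avg - beta0 / gamma) == 0  # Ott's transformation law
```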
In \cite{Essex}, it has been argued that in some situations it is not
possible to apply the methods of nonequilibrium thermodynamics for radiation.
This is the case, for example, for two planar
surfaces fixed at different temperatures $T_1$ and $T_2$ which exchange energy
through a radiation field. Photons travelling in one direction are
characterized by $T_1$, and the ones travelling in the opposite
direction by $T_2$, so it is not possible to assign a single
temperature to radiation. Based on these arguments, \cite{Critica} has
recently criticized the thermodynamical methods used in
\cite{Anile,Kremer} in their analysis of anisotropic radiation.
However, we have proved that the system considered in
\cite{Anile,Kremer} is, in fact, an equilibrium moving system and
therefore the criticism does not hold in this case. Let
us remark that in the system considered by Essex \cite{Essex} the
distribution function is characterized by a double peak, whose
relative heights depend on direction, while ours
has a single peak whose position varies with direction. Thus, it is
possible to define an angle-dependent effective temperature.
In addition, in \cite{Critica} the validity of expressions like
(\ref{presion}) for the pressure tensor has been questioned, both for
gases and for radiation. We have seen that in the cases of equilibrium
moving radiation or an ultrarelativistic gas, the pressure tensor adopts
an anisotropic form due to the presence of an additional vectorial
constraint (i.e. $\vec{J}$). We think that these simple problems can
serve as a guide to more complicated nonequilibrium situations.
Therefore, it seems plausible that for a
nonequilibrium system submitted to an energy flux and zero mass flow,
the pressure tensor also depends on the energy flux, as in
equilibrium. If that is the case, the dependence
must have the form in (\ref{Edfact}) because, from a purely algebraic
point of view, the most general tensor that may be built up in the
presence of a vector $\vec{J}$ must have the form $a(J^2){\bf
U}+b(J^2)\vec{J}\vec{J}$ and according to the definition for the
pressure tensor $tr {\bf P} = u$. However, this question remains
open, and such a form for the pressure tensor
is not free of difficulties, as pointed out in \cite{Critica}.
Taking into account these criticisms, and the fact that expressions
of the form (\ref{presion}) are widely used in radiation
hydrodynamics, the need to find a consistent thermodynamic
scenario for these systems becomes apparent.
We should also note that some variable Eddington factors $\chi$ have been
proposed \cite{Todos} using maximum entropy principles without
a careful interpretation of the generalized flux-dependent
entropies that naturally appear in the formalism.
A plausible framework to understand these nonequilibrium
flux-dependent entropies appearing in radiation transfer may be
Extended Irreversible Thermodynamics (EIT) \cite{Ext}.
According to EIT, both temperature and thermodynamic
pressure should be modified in nonequilibrium situations, if a
generalized flux-dependent entropy function is considered. Up to
second order in the fluxes, one has
\begin{equation}
s(u,v,\vec{J})=s_{eq} (u,v) + \alpha(u,v) \vec{J}\cdot \vec{J}
\end{equation}
and if pressure and temperature are defined, as usual, by the
derivatives of the entropy function, one can easily obtain
flux-dependent equations of state:
\begin{equation}
\frac{1}{\theta}= \frac{1}{T} + \frac{\partial \alpha}{\partial u}
\vec{J}\cdot \vec{J},
\end{equation}
\begin{equation}
\frac{\pi}{\theta}= \frac{p}{T} + \frac{\partial \alpha}{\partial v}
\vec{J}\cdot \vec{J},
\end{equation}
where $T$ is the kinetic or local-equilibrium temperature and $p$ the
local-equilibrium pressure and $\theta$ and $\pi$ their generalized
flux-dependent counterparts. In addition, the resulting pressure
tensor was supposed in \cite{QK} to adopt the form:
\begin{equation}
\bf{P}= \pi {\bf U} + \psi \vec{J} \vec{J},
\end{equation}
where $\psi$ is determined by the requirement that $tr {\bf P}= u$.
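The flux-dependent equations of state quoted above follow by straightforward differentiation of the generalized entropy; treating $\vec{J}\cdot\vec{J}$ as the scalar $J^2$ and identifying $\partial s_{eq}/\partial u=1/T$ and $\partial s_{eq}/\partial v=p/T$ (our schematic restatement), one has:

```python
import sympy as sp

u, v, J = sp.symbols('u v J', positive=True)
s_eq = sp.Function('s_eq')(u, v)
alpha = sp.Function('alpha')(u, v)
s = s_eq + alpha * J**2                      # generalized entropy, J.J -> J^2
inv_theta = sp.diff(s, u)                    # 1/theta
pi_over_theta = sp.diff(s, v)                # pi/theta
assert inv_theta == sp.diff(s_eq, u) + sp.diff(alpha, u) * J**2
assert pi_over_theta == sp.diff(s_eq, v) + sp.diff(alpha, v) * J**2
```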
\section*{Acknowledgements}
We thank Prof. D. Jou and Prof. Casas-V\'azquez from the
Autonomous University of Barcelona for their suggestions.\par
The authors are supported by doctoral scholarships from the
Programa de formaci\'o d'investigadors of
the Generalitat de Catalunya under grants FI/94-2.009 (R.D.) and
FI/96-2.683 (J.F.). We also
acknowledge partial financial support from the Direcci\'on General
de Investigaci\'on of the Spanish Ministry of Education and Science
(grant PB94-0718) and the European Union under grant ERBCHRXCT 920007.
\section{Introduction}
The motion of a charged particle in a strong inhomogeneous
magnetic field is a nontrivial problem displaying a variety
of very interesting phenomena ranging from chaos to phase
anholonomy.
Being of utmost importance in plasma physics, especially in the
study of magnetic confinement, the subject has been worked out
in great detail in classical mechanics with special attention
to phenomenological implications as well as to formal
aspects. The canonical structure of the problem, in particular,
has been deeply investigated only in a relatively recent
time by R.G.\ Littlejohn \cite{Lj} revealing the appearance
of geometry induced gauge structures in the adiabatic motion of
classical charged particles.
Very little, on the other hand, is known about the behaviour of
quantum particles in strong inhomogeneous magnetic fields,
the reason being essentially that the techniques developed for classical
mechanics do not easily generalize to the quantum context.
Some work has been done for neutral spinning particles by M.V.\ Berry
\cite{Be86}, Y.\ Aharonov \& A.\ Stern \cite{A&S92} and R.G.\ Littlejohn
\& S.\ Weigert \cite{L&W93} in connection with geometrical phases, whereas
a quantum treatment for charged spinning particles is still missing.
It is the purpose of this paper to
present what may perhaps be called a
{\sl quantal guiding center theory} in which the coupling
between the spin and spatial degrees of freedom of a quantum charged
spinning particle moving in a strong inhomogeneous magnetic field is
systematically taken into account. This allows us to extend the previous
classical results to the quantum domain. Our treatment, essentially
algebraic in nature, is a re-elaboration and---we believe---a simplification
of the technique originally proposed by R.G.\ Littlejohn in classical
mechanics. It is based on a
different choice of non-canonical variables adapted to classical as well as
quantum mechanics. Depending essentially on the canonical structure,
the method applies equally to the classical and the quantum theory.
We nevertheless focus on the quantum problem.
In order to better understand what is going on in the
strong-field regime of a quantum particle moving in an external magnetic
field, it is useful first to have in mind the main features of the
corresponding classical problem \cite{No63}. Let us therefore briefly consider
a classical particle of mass $m$ and charge $e$ moving in a homogeneous
magnetic field of intensity $B$. As is well known, the trajectory of the
particle is represented by a spiral wrapping around a field line, as
sketched in Fig.\ref{fig1}a: the particle performs a uniform circular motion
of frequency $\omega_B=eB/mc$ and radius $r_B=mc|v_\perp|/eB$ ($|v_\perp|$
is the norm of the normal component of the velocity) in the directions normal
to the field, while the center of
the orbit, called the {\sl guiding center}, moves freely along a field line.
Keeping fixed the initial condition, the stronger the magnetic field
the faster the rotation of the particle when compared
with the drift along the field direction, and the smaller the portion of space
explored by the particle around the field line. This indicates, on the one
hand,
the presence of different time scales in the dynamics of the system and
gives, on the other hand, the reason why the motion in a very strong magnetic
field may be studied along the same lines as that in a weakly inhomogeneous one.
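To get a feel for the scales involved (a numerical example of ours, not from the text), the SI counterparts of the formulas above, $\omega_B=eB/m$ and $r_B=mv_\perp/eB$, give for an electron in a 1 T field:

```python
e = 1.602176634e-19         # elementary charge (C)
m = 9.1093837015e-31        # electron mass (kg)
B = 1.0                     # field strength (T)
v_perp = 1.0e6              # speed normal to the field (m/s), assumed value

omega_B = e * B / m         # cyclotron angular frequency (rad/s)
r_B = m * v_perp / (e * B)  # cyclotron radius (m)

assert 1.7e11 < omega_B < 1.8e11   # ~ 1.76e11 rad/s: very fast rotation
assert 5.0e-6 < r_B < 6.0e-6       # ~ 5.7 micrometres: tight orbit
```

The micron-scale orbit against a macroscopic drift distance is what justifies the separation of time scales described above.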
\begin{figure}[t]
\centerline{\mbox{\epsfig{file=10fig1.ps}}}
\caption{Different time scale behaviours of a charged particle in
a strong magnetic field: {\sl a)} {\sl fast} rotation of the particle and
{\sl slow} guiding center drift in a homogeneous field; {\sl b)} in
the inhomogeneous case the guiding center drifts
away from the field line {\sl very-slowly}.}
\label{fig1}
\end{figure}
Let us introduce now a small inhomogeneity in the field.
The picture of the motion should not change substantially. The particle,
in fact, keeps on rotating around its guiding center, the frequency and the
radius weakly depending now on the position, and the guiding center still
drifts along a field line. In this case, however, the guiding center does
not remain exactly on a single field line: it starts drifting very-slowly
in the directions normal to the field.
Three different time scale behaviours of the system may therefore be
distinguished:
the {\sl fast rotation} of the particle around the guiding center,
the {\sl slow drift} of the guiding center along a magnetic field line and
the {\sl very-slow drift} of the guiding center in the direction normal to the
field.
The situation is sketched in Fig.\ref{fig1}b. The stronger the
magnetic field the cleaner the separation among the three degrees of
freedom.
A glance at the canonical structure of the homogeneous case
makes it immediately clear how the introduction of kinematical momenta
and guiding center operators allows to separate the three
degrees of freedom of the system. This is briefly reported in section 2
where the relevant notations of the paper are also set up.
After discussing the geometrical
complications involved in the adiabatic motion of a charged particle in
an inhomogeneous magnetic field, section 3, an appropriate set of
non-canonical operators generalizing the one used in the discussion of
the homogeneous problem is constructed in section 4. These are obtained as
formal power series in the magnetic length $l_B=\sqrt{\hbar c/eB}$
which appears naturally as the adiabatic parameter of the theory.
The Pauli Hamiltonian
describing the motion of the particle is then rewritten in terms of the new
adiabatic operators in sections 5 and 6, whereas the anholonomic effects
appearing in the adiabatic separation of the fast and slow/very-slow
degrees of freedom are discussed in section 7. Our results are summarized in
equations (\ref{Hend}), (\ref{A}) and (\ref{V}).
In the classical limit these reproduce correctly the classical theory.
Section 8 contains our conclusions.
\section{Canonical structure of the guiding center motion}
Since magnetic interactions appear essentially as modifications of
the canonical structure of a dynamical system, it is worthwhile
to start by briefly discussing this peculiarity in the elementary
case of a quantum charged spinning particle in a homogeneous magnetic
field. This allows us to focus immediately on the heart of the problem,
establishing at the same time terminology and notations. We consider
therefore a spin-1/2 particle of mass $m$, charge $e$ and gyromagnetic
factor $g$ moving in space under the influence of the {\em homogeneous} field
$\mbox{\boldmath$B$}({\vec x})=B\,\hat{\mbox{\boldmath$z$}}$. As in the inhomogeneous case, to be discussed later on,
the physical dimension of the magnetic field is reabsorbed
in the scale factor $B$, the inverse square root of which, suitably
rescaled, will play the role of the adiabatic parameter of our theory.
Introducing an arbitrary choice of the vector potential $\mbox{\boldmath$a$}$ for
the dimensionless field $\mbox{\boldmath$B$}({\vec x})/B$, $\mbox{rot}\,\mbox{\boldmath$a$}=\hat{\mbox{\boldmath$z$}}$, the motion
of the particle is described by the Pauli Hamiltonian
\begin{eqnarray}
{\cal H}={1\over2m}\Big(-i\hbar\mbox{\boldmath$\nabla$} -{eB\over c}\mbox{\boldmath$a$}\Big)^2
+g{\hbar e B\over mc}\hat{\mbox{\boldmath$z$}}\cdot\mbox{\boldmath$\sigma$}
\label{H2}
\end{eqnarray}
$\mbox{\boldmath$\nabla$}=(\partial_x,\partial_y,\partial_z)$ denoting the gradient operator and
$\mbox{\boldmath$\sigma$}=(\sigma_x,\sigma_y,\sigma_z)$ the matrix-valued vector
constructed by means of the three Pauli matrices. As is well known, the
solution of this simple problem was first obtained by Landau at the
beginning of the thirties and leads naturally to replacing the
standard set of canonical operators $p_i=-i\hbar\partial_i$, $x^i$,
$i=1,2,3$, by the gauge invariant {\sl kinematical momenta}
$\pi_i=p_i-(eB/c)a_i$ and the {\sl guiding center operators}
$X^i=x^i+(c/eB)\varepsilon^{ij}\pi_j$. A very rapid computation yields
the nonvanishing commutation relations among the new variables
\begin{equation}
\begin{array}{ccc}
[\pi_2,\pi_1]=-i\displaystyle{\hbar eB\over c},\hskip0.7truecm &
[\pi_3, X^3]=-i\hbar, \hskip0.7truecm &
[ X^1, X^2]=-i\displaystyle{\hbar c\over eB},
\end{array}
\label{cr2.1}
\end{equation}
indicating $\pi_2$-$\pi_1$, $\pi_3$-$X^3$ and $X^1$-$X^2$ as pairs
of conjugate variables. Moreover, the scale dependence of the
commutators (\ref{cr2.1}) allows us to identify the three pairs of
operators as describing respectively the {\sl fast}, the {\sl slow}
and the {\sl very-slow} degrees of freedom of the system (see e.g.\
\cite{Ma96}). In terms of the new gauge invariant operators
Hamiltonian (\ref{H2}) takes the very simple form ${\cal H}=
({\pi_1}^2+{\pi_2}^2+{\pi_3}^2)/2m+g\hbar eB\sigma_3/mc$.
The harmonic oscillator term $({\pi_1}^2+{\pi_2}^2)/2m$ takes
into account the rapid rotation of the particle around its
guiding center, while the free term ${\pi_3}^2/2m$ describes the slow drift
of the guiding center along the straight magnetic field lines.
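For orientation, we recall the Landau spectrum this implies (a standard result, stated here with the normalization of the spin term used in (\ref{H2})): the $\pi_1$-$\pi_2$ oscillator contributes $\hbar\omega_B(n+1/2)$, $\omega_B=eB/mc$, the free motion along the field contributes $\hbar^2k^2/2m$, and the spin term $\pm g\hbar\omega_B$,

```latex
\begin{eqnarray}
E_{n,k,s}=\hbar\omega_B\Big(n+{1\over2}\Big)+{\hbar^2k^2\over2m}
+g\,\hbar\omega_B\,s,
\qquad n=0,1,2,\dots,\quad s=\pm1,
\nonumber
\end{eqnarray}
```

each level being infinitely degenerate, the degeneracy being labelled by the conserved guiding center coordinates $X^1$, $X^2$.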
The very-slow variables $X^1$ and $X^2$ being constants
of motion, the guiding center does not move in
the directions normal to the field. Let us stress that
in the canonical formalism the spatial rotation of the particle around
its guiding center is taken into account by the phase space trajectory of a
pair of conjugate variables: the particle's velocity components
in the directions normal to the field, $\pi_1$ and $\pi_2$.
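As a concrete consistency check (ours, not part of the original analysis), the relations (\ref{cr2.1}) can be verified symbolically by letting the operators act on a test function; the sketch below uses Python with sympy, and the symmetric gauge $\mbox{\boldmath$a$}=(-y/2,x/2,0)$ is an illustrative assumption.

```python
# Symbolic check of the commutation relations (cr2.1) in the
# symmetric gauge a = (-y/2, x/2, 0), acting on a test function f.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
hbar, e, B, c = sp.symbols('hbar e B c', positive=True)
f = sp.Function('f')(x, y, z)
X3 = (x, y, z)

def p(i, g):                      # canonical momentum p_i = -i hbar d_i
    return -sp.I*hbar*sp.diff(g, X3[i])

a = (-y/2, x/2, 0)                # symmetric gauge, rot a = zhat
def pi(i, g):                     # kinematical momentum pi_i = p_i - (eB/c) a_i
    return p(i, g) - (e*B/c)*a[i]*g

def Xgc(i, g):                    # guiding centers: X^1 = x + (c/eB) pi_2,
    j, s = ((1, 1), (0, -1))[i]   #                  X^2 = y - (c/eB) pi_1
    return X3[i]*g + s*(c/(e*B))*pi(j, g)

comm_pi = sp.simplify(pi(1, pi(0, f)) - pi(0, pi(1, f)))      # [pi_2, pi_1] f
comm_X  = sp.simplify(Xgc(0, Xgc(1, f)) - Xgc(1, Xgc(0, f)))  # [X^1, X^2] f
assert sp.simplify(comm_pi + sp.I*hbar*e*B/c*f) == 0   # = -i hbar eB/c f
assert sp.simplify(comm_X + sp.I*hbar*c/(e*B)*f) == 0  # = -i hbar c/(eB) f
```

The same check works for any other gauge choice, the commutators being gauge invariant.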
The presence of an external
magnetic field produces therefore a rotation of the canonical structure,
mixing up spatial coordinates and canonical momenta in new canonical operators
adapted to the different time scale behaviours of the particle!
In section 4 we will construct such ``adapted operators''---as power series
in the adiabatic parameter---for the motion of a quantum charged spinning
particle in an arbitrary magnetic field. This allows us to extend to quantum
mechanics the Hamiltonian approach to the classical guiding center motion
developed by R.G.\ Littlejohn \cite{Lj}.
The case of a magnetic field with constant direction has been previously
considered in \cite{Ma96}.
Before proceeding, however, some preparatory material is necessary.
First of
all it is convenient to introduce dimensionless quantities by factorizing
the energy scale $\hbar \omega_B$, $\omega_B=eB/mc$, out of the Hamiltonian.
This leads us to redefine the kinematical momenta and guiding center
operators as
\begin{eqnarray}
& &\pi_i=-i{l_B}\partial_i-{l_B}^{-1}a_i({\vec x}) \label{km}\\
& &X^i=x^i+{l_B}\,\varepsilon^{ij}\pi_j. \label{gco}
\end{eqnarray}
${l_B}=\sqrt{\hbar c/eB}$ being the {\sl magnetic length}. The relevant
commutation relations may thus be recast in the compact and very convenient
form
\begin{equation}
\begin{array}{lcl}
\!\!
\begin{array}{l}
[\pi_i,\pi_j]=\,i\varepsilon_{ij}\cr
[\sigma_i,\sigma_j]=\,i\varepsilon_{ijk}\sigma_k
\end{array} \mbox{\Huge\}}& &\mbox{{\sl fast}}\cr
[\pi_i,X^j]=\,-i{l_B} \delta_i^3\delta_3^j & &\mbox{{\sl slow}}\cr
[X^i,X^j]=\,-i{l_B}^2\varepsilon^{ij} & &\mbox{{\sl very-slow}}
\end{array}
\label{cr2.2}
\end{equation}
where the spin variables have also been considered.
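As a quick check of the rescaling (not spelled out in the text): since $\hbar/{l_B}^2=eB/c$, the dimensionless momenta (\ref{km}) are related to their dimensionful counterparts by $\pi_i=({l_B}/\hbar)\,\pi_i^{\rm dim}$, while the guiding center operators (\ref{gco}) coincide with the dimensionful ones, so that (\ref{cr2.1}) gives

```latex
\begin{eqnarray}
[\pi_1,\pi_2]={{l_B}^2\over\hbar^2}\,\big[\pi^{\rm dim}_1,\pi^{\rm dim}_2\big]
={{l_B}^2\over\hbar^2}\;i\,{\hbar eB\over c}=i\,\varepsilon_{12},
\qquad
[X^1,X^2]=-i\,{\hbar c\over eB}=-i\,{l_B}^2\,\varepsilon^{12},
\nonumber
\end{eqnarray}
```

in agreement with (\ref{cr2.2}).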
As a second and
more serious task, the geometrical structure responsible for the anholonomic
effects appearing in the adiabatic motion in a strong magnetic field has
to be discussed.
\section{Magnetism and geometric-magnetism}
The beautiful analysis of the adiabatic separation of {\sl fast}
and {\sl slow} degrees of freedom in a quantum system proposed by
M.V.\ Berry \cite{Be84-89}, H.\ Kuratsuji \& S.\ Iida \cite{K&I85},
J.\ Moody, A.\ Shapere \& F.\ Wilczek \cite{M&S&W86}, R.\ Jackiw \cite{Ja88}
and others,
has pointed out that in lowest order the reaction of the fast to
the slow dynamics is through a geometry-induced gauge structure
resembling that of (electro-)magnetism. This phenomenon has been
identified and found to be important in a variety of physical contexts
\cite{S&W89} and has been recently referred to by M.V.\ Berry \& J.M.\
Robbins as {\sl geometric-magnetism} \cite{B&R93}.
A rather curious fact, first pointed out by R.G.\ Littlejohn in a series
of beautiful papers on the canonical structure of classical guiding center
motion \cite{Lj}, is that, in some circumstances, magnetism
itself may generate geometric-magnetism. The aim of the present section is
that of discussing the geometry involved in such ``magnetic-induced
geometric-magnetism''.
For the sake of clarity it is useful to begin by briefly recalling
the geometrical character of the kinematical quantities characterizing
the motion of a particle in space. This will lead to a rather intuitive
picture of the geometrical structure involved in the adiabatic motion of
a charged spinning particle in a strong magnetic field, allowing us, at
the same time, to frame it in a general and rigorous context.
As is well known, the state of a particle moving in space is completely
characterized by its position ${\vec x}$ and its velocity ${\vec v}$, i.e.\
by a point in the {\sl tangent bundle} $T\!R^3$ of the three-dimensional
Euclidean space $R^3$. The flat parallel transport of $R^3$ makes it
natural to parameterize every fiber $T_{{\vec x}}R^3$ of the bundle by means of a
fixed reference frame in $R^3$, that is, to identify the tangent space at
every point ${\vec x}$ with the physical space itself. Such an identification is
certainly very useful in most circumstances, but it is a convention after
all. In principle we are free to choose arbitrarily the frame of $T_{{\vec x}}R^3$
at every ${\vec x}$, the parallel transport---and not the way in which we describe
it---being all that matters. This freedom of arbitrarily rotating the
reference frame of the tangent space at every point ${\vec x}$, a local SO(3)
symmetry, plays a crucial role in what follows. To visualize the situation,
therefore, we shall picture the Euclidean space as filled up with orthonormal
reference frames. To start with, we can imagine all of them combed
parallel to a single fixed frame $\{\hat{\mbox{\boldmath$x$}},\hat{\mbox{\boldmath$y$}},\hat{\mbox{\boldmath$z$}}\}$ in $R^3$
(see Fig.\ref{fig2}a),
but even in a flat geometry this is not always the best choice.
\begin{figure}[t]
\centerline{\mbox{\epsfig{file=10fig2.ps}}}
\caption{Framing the tangent bundle $T\!R^3$ of the physical space: {\sl a)}
by means of a single fixed frame in $R^3$; {\sl b)} by using local reference
frames adapting to the magnetic field lines geometry.}
\label{fig2}
\end{figure}
\subsection*{The magnetic line bundle}
As qualitatively sketched above, the motion of
a charged spinning particle in a strong magnetic field is characterized
by the separation of time scales in the three degrees of freedom, making
the system amenable to a perturbative analysis. In the lowest order
approximation the particle performs a {\sl fast} rotation in the plane
normal to the field line at which its guiding center is located.
This is taken into account by the two components normal to the field
of the particle's velocity (to this order a couple of conjugate variables).
Disregarding the {\sl slow} drift of the guiding center along the field
line and the {\sl very-slow} motion, therefore, the velocity of a particle
whose guiding center is located at ${\vec x}$ is effectively constrained
to the plane $\mbox{\boldmath$\mu$}_{{\vec x}}$ generated by the vectors normal to the field
at ${\vec x}$. At every point of space the magnetic field $\mbox{\boldmath$b$}({\vec x})$
picks the complex line $\mbox{\boldmath$\mu$}_{{\vec x}}$ out of the tangent space $T_{{\vec x}}R^3$,
reducing the {\sl tangent bundle} $T\!R^3$ to a complex line bundle,
hereafter the {\sl magnetic line bundle} ${\cal M}$.\footnote{This subbundle
of $T\!R^3$ may be identified with the {\sl plane bundle} of B.\ Felsager
\& J.M.\ Leinaas \cite{F&L80}. See also the related paper of F.\ Gliozzi
\cite{Gl78}.}
It is then natural to use the local $SO(3)$ symmetry of the theory to adapt
the parameterization of $T\!R^3$ to the subbundle ${\cal M}$ by combing, say,
the $\hat{\mbox{\boldmath$z$}}$ direction of the frame of every $T_{{\vec x}}R^3$ according to the
direction of the field. We thus smoothly introduce point dependent adapted
reference frames $\{\mbox{\boldmath$e$}_1,\mbox{\boldmath$e$}_2,\mbox{\boldmath$e$}_3\}$ in such a way that at every point
$\mbox{\boldmath$e$}_1({\vec x})$, $\mbox{\boldmath$e$}_2({\vec x})$ parameterize $\mbox{\boldmath$\mu$}_{{\vec x}}$ while $\mbox{\boldmath$e$}_3({\vec x})$
is aligned with $\mbox{\boldmath$b$}({\vec x})$ (see Fig.\ref{fig2}b). Such reference frames are
commonly used in the discussion of geometrically non-trivial physical
problems, such as in general relativity, and are referred to as
{\sl anholonomic frames}.
It is worthwhile to note that fixing $\mbox{\boldmath$e$}_3$ along the
field direction reduces the local $SO(3)$ symmetry of $T\!R^3$
into the local $SO(2)\equiv U(1)$ symmetry of ${\cal M}$.
The vectors $\mbox{\boldmath$e$}_1({\vec x})$ and $\mbox{\boldmath$e$}_2({\vec x})$ are in fact determined up
to the rotation
\begin{equation}
\begin{array}{lcr}
\mbox{\boldmath$e$}_1({\vec x})&\rightarrow&\mbox{\boldmath$e$}_1({\vec x})\,\cos\chi({\vec x})-\mbox{\boldmath$e$}_2({\vec x})\,\sin\chi({\vec x})\\
\mbox{\boldmath$e$}_2({\vec x})&\rightarrow&\mbox{\boldmath$e$}_1({\vec x})\,\sin\chi({\vec x})+\mbox{\boldmath$e$}_2({\vec x})\,\cos\chi({\vec x})
\end{array}
\label{ggt}
\end{equation}
$\chi({\vec x})$ being a point dependent angle. This residual ambiguity
will result in the gauge freedom of our theory.
\subsection*{Magnetic line bundle geometry}
We may now wonder how the vectors lying in ${\cal M}$ are transported
from point to point, that is, whether the geometry of the magnetic
line bundle is trivial or not. To this end we proceed in two steps.
Considering a vector $\mbox{\boldmath$w$}({\vec x})=w^\nu\mbox{\boldmath$e$}_\nu({\vec x})$, $\nu=1,2$, in $\mbox{\boldmath$\mu$}_{{\vec x}}$,
we first transport it from the point ${\vec x}$ to the infinitesimally close
point ${\vec x}+d{\vec x}$ by means of the Euclidean parallel transport of $R^3$ and,
second, we project it onto the plane $\mbox{\boldmath$\mu$}_{{\vec x}+d{\vec x}}$. (i)~The Euclidean
parallel transport of $\mbox{\boldmath$w$}$ to ${\vec x}+d{\vec x}$ may be immediately evaluated as
\begin{eqnarray}
\mbox{\boldmath$w$}({\vec x}+d{\vec x})=\,\mbox{\boldmath$w$}({\vec x})-w^\nu\,(\mbox{\boldmath$e$}_\nu\cdot\partial_k\mbox{\boldmath$e$}_i)dx^k\, \mbox{\boldmath$e$}_i,
\nonumber
\end{eqnarray}
Latin indices running over $1,2,3$, Greek indices over $1,2$, and where
the sum over repeated indices is implied\footnote{This notation will be
employed throughout the rest of this paper.}. The three
quantities\footnote{The
vectors $\mbox{\boldmath$e$}_1$, $\mbox{\boldmath$e$}_2$ and $\mbox{\boldmath$e$}_3$ being orthonormal at every point ${\vec x}$,
$\mbox{\boldmath$e$}_i \cdot\mbox{\boldmath$e$}_j=\delta_{ij}$, these are the only independent quantities.}
$\mbox{\boldmath$e$}_1\cdot \partial_k\mbox{\boldmath$e$}_2$, $\mbox{\boldmath$e$}_1\cdot\partial_k\mbox{\boldmath$e$}_3$ and $\mbox{\boldmath$e$}_2\cdot\partial_k\mbox{\boldmath$e$}_3$
characterize the flat parallel transport of $R^3$ in the anholonomic frame
(it is in fact possible to make them vanish by rotating the adapted frames
$\{\mbox{\boldmath$e$}_1 ,\mbox{\boldmath$e$}_2,\mbox{\boldmath$e$}_3\}$ back to fixed directions at every point).
(ii)~The projection onto $\mbox{\boldmath$\mu$}_{{\vec x}+d{\vec x}}$ then yields
\begin{eqnarray}
\mbox{\boldmath$w$}({\vec x}+d{\vec x})|_{\mbox{\boldmath$\mu$}}=\mbox{\boldmath$w$}({\vec x})- w^\mu\,(\mbox{\boldmath$e$}_1\cdot\partial_k\mbox{\boldmath$e$}_2)dx^k
\varepsilon_\mu^{\;\;\nu}\,\mbox{\boldmath$e$}_\nu,
\nonumber
\end{eqnarray}
indicating that the parallel transport along the infinitesimal path
connecting ${\vec x}$ to ${\vec x}+d{\vec x}$ causes the vector $\mbox{\boldmath$w$}$ to be rotated by the
infinitesimal angle $d\alpha=(\mbox{\boldmath$e$}_1\cdot\partial_k\mbox{\boldmath$e$}_2)dx^k$. When parallel
transported along a finite closed path $\mbox{\boldmath$\gamma$}$ the vector will therefore
return to the starting point rotated by the angle \cite{F&L80}
\begin{eqnarray}
\alpha_{\mbox{\boldmath$\gamma$}}=\oint_{\mbox{\boldmath$\gamma$}} (\mbox{\boldmath$e$}_1\cdot\partial_k\mbox{\boldmath$e$}_2)dx^k;
\nonumber
\end{eqnarray}
this quantity being in general different from zero, the geometry of
the magnetic line bundle turns out not to be flat. The operation of locally
projecting onto the plane $\mbox{\boldmath$\mu$}$ reduces the trivial $SO(3)$ local
symmetry of the theory to a non-trivial $SO(2)\equiv U(1)$ local
symmetry! This local structure is described by a magnetic-like
$U(1)$ gauge theory. The parallel transport of the magnetic line bundle
${\cal M}$ is in fact completely characterized by the vector
\begin{eqnarray}
{\cal A}_k=\mbox{\boldmath$e$}_1\cdot\partial_k\mbox{\boldmath$e$}_2,
\label{ggp}
\end{eqnarray}
the {\sl connection one-form} of ${\cal M}$. $\mbox{\boldmath${\cal A}$}$ appears in
the theory as a geometry-induced vector potential (not to be
confused with the vector potential $\mbox{\boldmath$a$}$ representing the real
magnetic field $\mbox{\boldmath$b$}$). A point dependent redefinition of the
local basis $\{\mbox{\boldmath$e$}_1({\vec x}),\mbox{\boldmath$e$}_2({\vec x})\}$ plays in fact the same role
as a gauge transformation, the rotation (\ref{ggt}) causing the
vector (\ref{ggp}) to transform according to ${\cal A}_k\rightarrow
{\cal A}_k+\partial_k\chi$. The associated geometry-induced magnetic field
${\cal B}_k=\varepsilon_{kmn}{\cal B}_{mn}$, ${\cal B}_{mn}=\partial_m{\cal A}_n-
\partial_n{\cal A}_m$ the {\sl curvature two-form} of ${\cal M}$, may also
be considered. It is obviously a gauge invariant quantity and, being
the curl of a vector field, satisfies the Bianchi identity
$\mbox{div}\,\mbox{\boldmath${\cal B}$}=0$.
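The transformation law just quoted follows from a one-line computation, which we include for completeness: using $\mbox{\boldmath$e$}_a\cdot\partial_k\mbox{\boldmath$e$}_a=0$ and $\mbox{\boldmath$e$}_2\cdot\partial_k\mbox{\boldmath$e$}_1=-\mbox{\boldmath$e$}_1\cdot\partial_k\mbox{\boldmath$e$}_2$, the rotation (\ref{ggt}) yields

```latex
\begin{eqnarray}
{\cal A}'_k&=&\big(\mbox{\boldmath$e$}_1\cos\chi-\mbox{\boldmath$e$}_2\sin\chi\big)\cdot
\partial_k\big(\mbox{\boldmath$e$}_1\sin\chi+\mbox{\boldmath$e$}_2\cos\chi\big)\nonumber\\
&=&\big(\cos^2\!\chi+\sin^2\!\chi\big)
\big(\mbox{\boldmath$e$}_1\cdot\partial_k\mbox{\boldmath$e$}_2+\partial_k\chi\big)
={\cal A}_k+\partial_k\chi.\nonumber
\end{eqnarray}
```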
While the geometry-induced vector potential $\mbox{\boldmath${\cal A}$}$ completely characterizes
the intrinsic geometry of the magnetic line bundle ${\cal M}$, the other
two quantities
\begin{equation}
\begin{array}{l}
{l_1}_k=\mbox{\boldmath$e$}_1\cdot\partial_k\mbox{\boldmath$e$}_3,\cr
{l_2}_k=\mbox{\boldmath$e$}_2\cdot\partial_k\mbox{\boldmath$e$}_3,
\end{array}
\end{equation}
describing the flat parallel transport of $R^3$ in the anholonomic frame
$\{\mbox{\boldmath$e$}_1,\mbox{\boldmath$e$}_2,\mbox{\boldmath$e$}_3\}$ may be seen as taking into account its extrinsic
geometry. Since the curvature of the tangent bundle $T\!R^3$ is zero,
the three quantities $\mbox{\boldmath${\cal A}$}$, $\mbox{\boldmath$l_1$}$ and $\mbox{\boldmath$l_2$}$ are obviously not
independent, being related by the equivalent of the Gauss, Codazzi-Mainardi
and Ricci equations. The latter, as an example, allows us to re-express the
geometry-induced gauge field $\mbox{\boldmath${\cal B}$}$ entirely in terms of $\mbox{\boldmath$l_1$}$ and $\mbox{\boldmath$l_2$}$ as
\begin{eqnarray}
\mbox{\boldmath${\cal B}$}=\mbox{\boldmath$l_1$}\wedge\mbox{\boldmath$l_2$},
\label{Bext}
\end{eqnarray}
${\cal B}_{kl}=({l_1}_k{l_2}_l-{l_1}_l{l_2}_k)/2$,
$\wedge$ indicating the exterior product of $R^3$ \cite{F&L80}.
With respect to the point dependent rotation (\ref{ggt}) $\mbox{\boldmath$l_1$}$ and $\mbox{\boldmath$l_2$}$
transform as vectors ($\mbox{\boldmath$l_1$}\rightarrow\mbox{\boldmath$l_1$}\cos\chi-\mbox{\boldmath$l_2$}\sin\chi$,
$\mbox{\boldmath$l_2$}\rightarrow\mbox{\boldmath$l_1$}\sin\chi+\mbox{\boldmath$l_2$}\cos\chi$) making the gauge invariance
of $\mbox{\boldmath${\cal B}$}$ manifest.
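The Ricci-type relation just stated can be checked symbolically for a concrete adapted frame. The sketch below (our illustration, not from the paper) takes a hypothetical frame whose field direction is parameterized by spherical angles $\theta=x$, $\varphi=y$, and verifies $\partial_m{\cal A}_n-\partial_n{\cal A}_m={l_1}_m{l_2}_n-{l_1}_n{l_2}_m$, the identity underlying (\ref{Bext}) (normalization conventions for the wedge product aside).

```python
# Symbolic check: d_m A_n - d_n A_m = l1_m l2_n - l1_n l2_m for an
# orthonormal adapted frame with field-direction angles theta=x, phi=y.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
th, ph = x, y
e3 = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
e1 = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph), -sp.sin(th)])
e2 = sp.Matrix([-sp.sin(ph), sp.cos(ph), 0])

coords = (x, y, z)
A  = [(e1.T*sp.diff(e2, c))[0] for c in coords]   # A_k  = e1 . d_k e2
l1 = [(e1.T*sp.diff(e3, c))[0] for c in coords]   # l1_k = e1 . d_k e3
l2 = [(e2.T*sp.diff(e3, c))[0] for c in coords]   # l2_k = e2 . d_k e3

for m in range(3):
    for n in range(3):
        curv = sp.diff(A[n], coords[m]) - sp.diff(A[m], coords[n])
        ext  = l1[m]*l2[n] - l1[n]*l2[m]
        assert sp.simplify(curv - ext) == 0
print("curvature expressed through l1, l2: verified")
```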
\subsection*{Magnetic field lines geometry}
Though the geometry of a magnetic field is completely characterized by two
independent functions (e.g.\ the two independent components of the real
magnetic field $\mbox{\boldmath$b$}$, or of the geometry-induced magnetic field $\mbox{\boldmath${\cal B}$}$, etc.)
it may be useful to look at the problem from different points of view. We may
wonder, as an example, how the intrinsic/extrinsic geometry of the line bundle
${\cal M}$ is related to the geometry of the magnetic field lines. To this end
we start by observing that the projection along the field direction
of the two vectors $\mbox{\boldmath$l_1$}$, $\mbox{\boldmath$l_2$}$ may be identified with the two
{\em second fundamental forms} of the embedding of the magnetic field lines in
$R^3$ \cite{Spi}. At every point of space the curvature $k$ of the
magnetic field line going through that point may so be expressed as
\begin{eqnarray}
k=\sqrt{(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_1$})^2+(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_2$})^2}.
\end{eqnarray}
In a similar way the projection along the field direction of the
geometry-induced vector potential $\mbox{\boldmath${\cal A}$}$ has to be identified with
the {\em normal fundamental form} of the embedding of the field lines in $R^3$
(i.e.\ with the connection form induced by the Euclidean geometry
onto the normal bundle of every field line) \cite{Spi}. Up to the gradient
of an arbitrary function, representing again the freedom of arbitrarily
rotating the reference frame in the normal planes, at every point of
space the torsion $\tau$ of the magnetic field line going through that point
may be written as
\begin{eqnarray}
\tau=\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath${\cal A}$}.
\label{torsion}
\end{eqnarray}
Curvature and torsion completely characterize the geometry of every
single magnetic field line and contain, in principle, all the information
relevant to the geometry of our problem. On the other hand we may also
wonder about the global properties of the foliation of $R^3$ in terms
of field lines.
Of particular relevance for the adiabatic motion of a particle
in an external magnetic field is the possibility of foliating space by
means of surfaces everywhere orthogonal to the field lines. By virtue
of the Frobenius theorem this is controlled by the vanishing of the scalar
${\cal F}= \mbox{\boldmath$e$}_3\cdot\,\mbox{rot}\,\mbox{\boldmath$e$}_3$. In terms of the magnetic line bundle
geometry
\begin{eqnarray}
{\cal F}= \mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_2$}-\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_1$} .
\label{Frobenius}
\end{eqnarray}
The magnetic field line
torsion $\tau$ and the Frobenius invariant ${\cal F}$ play a crucial
role in the description of the anholonomic effects appearing in the
adiabatic motion of a charged particle in a strong magnetic field.
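The Frobenius invariant (\ref{Frobenius}) can likewise be checked symbolically; the sketch below (ours, not from the paper) uses the same hypothetical adapted frame with field-direction angles $\theta=x$, $\varphi=y$ and verifies ${\cal F}=\mbox{\boldmath$e$}_3\cdot\,\mbox{rot}\,\mbox{\boldmath$e$}_3=\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_2$}-\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_1$}$.

```python
# Symbolic check of F = e3 . rot e3 = e1 . l2 - e2 . l1 for a
# hypothetical adapted frame with angles theta = x, phi = y.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
th, ph = x, y
e3 = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
e1 = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph), -sp.sin(th)])
e2 = sp.Matrix([-sp.sin(ph), sp.cos(ph), 0])

coords = (x, y, z)
d = sp.diff
rot_e3 = sp.Matrix([                 # curl of e3, component by component
    d(e3[2], y) - d(e3[1], z),
    d(e3[0], z) - d(e3[2], x),
    d(e3[1], x) - d(e3[0], y)])
F = (e3.T*rot_e3)[0]                 # Frobenius invariant

l1 = sp.Matrix([(e1.T*d(e3, c))[0] for c in coords])  # l1_k = e1 . d_k e3
l2 = sp.Matrix([(e2.T*d(e3, c))[0] for c in coords])  # l2_k = e2 . d_k e3
rhs = (e1.T*l2)[0] - (e2.T*l1)[0]
assert sp.simplify(F - rhs) == 0
print("Frobenius identity verified")
```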
\section{Adiabatic quantum variables}
We are now ready for the construction of a set of adiabatic operators
adapted to the different time scale behaviours of a quantum particle in a
strong, but otherwise arbitrary, magnetic field. Let us consider therefore
a spin-1/2 particle of mass $m$, charge $e$ and gyromagnetic factor $g$
moving in space under the influence of the {\em inhomogeneous} magnetic
field $\mbox{\boldmath$B$}({\vec x})= B\,\mbox{\boldmath$b$}({\vec x})$, the physical dimension of the field being again
reabsorbed in the scale factor $B$. Denoting by $\mbox{\boldmath$a$}$ an arbitrary choice of
the vector potential, $\mbox{rot}\,\mbox{\boldmath$a$}=\mbox{\boldmath$b$}$, the dynamics of the system is
described by the Pauli Hamiltonian
\begin{eqnarray}
{\cal H}/\hbar\omega_B=
{1\over2}\,\pi_i\pi_i +
g\, b_i({\vec x})\sigma_i ,
\label{H_inhom}
\end{eqnarray}
where the kinematical momenta $\pi_i=-i{l_B}\partial_i-a_i({\vec x})/{l_B}$ have been
introduced. The inhomogeneity of the magnetic field makes
Hamiltonian (\ref{H_inhom}) depend on the position operators ${\vec x}$,
explicitly through the spin term $g\, b_i({\vec x})\sigma_i$ and implicitly
through the commutation relations of the $\pi_i$s. In spite of the
simple quadratic dependence of (\ref{H_inhom}) on the kinematical
momenta, $\pi_1$ and $\pi_2$ are in fact no longer conjugate
variables, nor do they commute with $\pi_3$: the set of operators
$\{\pi_i,x^i;i=1,2,3\}$ fulfils the commutation relations
\begin{equation}
\begin{array}{ccc}
[\pi_i,\pi_j] =\, i b_{ij}({\vec x}), \hskip0.7truecm &
[\pi_i,x^j] = -i{l_B}\delta_i^j, \hskip0.7truecm &
[x^i ,x^j] =\, 0,
\end{array}
\label{cr4.1}
\end{equation}
$b_{ij}({\vec x})=\varepsilon_{ijk}b_k({\vec x})$ denoting the skew-symmetric two-form associated
with the field. In the lowest approximation we nevertheless expect the relevant
degree of freedom of the system to be taken into account by the two components
of the particle's velocity normal to the field.
Considering the position operators $x^i$s as adiabatic parameters driving
the fast motion of the system, we expect therefore the rapid rotation of
the particle around its guiding center to be separated from the slow and
very-slow motion by simply referring the kinematical momenta to the adapted
frames introduced in the previous section. For the sake of concreteness
we shall indicate by ${R_i}^j({\vec x})$ the point dependent
rotation bringing the fixed frame $\{\hat{\mbox{\boldmath$x$}},\hat{\mbox{\boldmath$y$}},\hat{\mbox{\boldmath$z$}}\}$ into the adapted frame
$\{\mbox{\boldmath$e$}_1({\vec x}),\mbox{\boldmath$e$}_2({\vec x}),\mbox{\boldmath$e$}_3({\vec x})\}$. This allows us to decompose the field
$\mbox{\boldmath$b$}({\vec x})$ in terms of its norm $b=\sqrt{\mbox{\boldmath$b$}\cdot\mbox{\boldmath$b$}}$ and its direction
$\mbox{\boldmath$b$}/b={R_i}^j\hat{\mbox{\boldmath$z$}}_j$ as $b_i({\vec x})=b({\vec x}){R_i}^j({\vec x})\hat{\mbox{\boldmath$z$}}_j$.
Once the rotation has been performed the kinematical momentum along the
field direction decouples, up to higher order terms in the adiabatic parameter
${l_B}$, from the other two components. The commutator of these, on the other
hand, turns out to be proportional to $b({\vec x})$. Stated in a different way, in
the adapted frame the particle sees an effective magnetic field of constant
direction and intensity $b({\vec x})$.
To turn the velocity components normal to the field into a pair of conjugate
operators it is now sufficient to rescale them by the point dependent factor
$b^{-1/2}({\vec x})$ (see \cite{Ma96}). We shall indicate by ${D_i}^j({\vec x})$ the
point dependent dilatation ${D_i}^j=\mbox{diag}(b^{1/2},b^{1/2},1)$ rescaling
the first and second components of a vector by $b^{1/2}$ and leaving the third
one unchanged.
In order to construct operators adapted to the {\sl fast} time
scale behaviour of the system, two point
dependent operations have therefore to be performed: (i) a rotation
${R_i}^j({\vec x})$ to the local adapted frame and (ii) a dilatation ${D_i}^j({\vec x})$
rescaling the normal components of the kinematical momenta.
The particle coordinates being not external parameters but dynamical
variables of the problem, these operations will produce higher
order corrections in the various commutators. We shall therefore proceed
order by order in the adiabatic parameter ${l_B}$ by constructing sets of
adiabatic operators fulfilling the desired commutation relations up to
a given order in ${l_B}$: at the $n$-th order we shall look for a set
of operators $\{\Pi_i^{(n)}, X^i_{(n)};i=1,2,3\}$ fulfilling the conditions
\begin{itemize}
\item $\Pi_1^{(n)}$, $\Pi_2^{(n)}$ are a pair of conjugate
operators up to terms of order ${l_B}^n$,
\item $\Pi_3^{(n)}$, $X^1_{(n)}$, $X^2_{(n)}$, $X^3_{(n)}$
commute with $\Pi_1^{(n)}$, $\Pi_2^{(n)}$ up to terms of order ${l_B}^n$,
\item in the limit of a homogeneous field, $\mbox{\boldmath$b$}({\vec x})\rightarrow
\hat{\mbox{\boldmath$z$}}$, the adiabatic kinematical momenta $\Pi_i^{(n)}$s and guiding center
operators $X^i_{(n)}$s should reduce to the expressions (\ref{km}) and
(\ref{gco}) respectively.
\end{itemize}
Our present task being that of separating the fast degree of freedom
from the slow and very-slow motion, we do not insist for the moment that
$\Pi_3^{(n)}$-$X^3_{(n)}$ and $X^1_{(n)}$-$X^2_{(n)}$ be pairs of conjugate
operators as in the homogeneous case.
For computational purposes it is very convenient
to use a compact notation which does not distinguish between the physically
distinct directions along and normal to the field. This somewhat obscures
the physical content of the various expressions but greatly simplifies
formal manipulations. When necessary we will therefore expand
the notation in order to shed light on the physics involved. For the moment
we proceed in the opposite direction by introducing the point dependent
matrix
\begin{eqnarray}
{\beta_i}^j({\vec x})={D^{\mbox{\tiny-1}}_i}^k({\vec x})
{R^{\mbox{\tiny-1}}_k}^j({\vec x})
\label{beta}
\end{eqnarray}
representing the successive application of the two operations necessary
to construct the adapted kinematical momenta in the lowest order. This allows
us to rewrite the skew-symmetric two-form $b_{ij}({\vec x})$ in terms of $\varepsilon_{kl}=
\varepsilon_{kl3}$ (representing a homogeneous field directed along $\hat{\mbox{\boldmath$z$}}$)
\begin{eqnarray}
b_{ij}({\vec x})={\beta^{\mbox{\tiny-1}}_{\,i}}^k({\vec x})
{\beta^{\mbox{\tiny-1}}_{\,j}}^l({\vec x})\, \varepsilon_{kl}.
\end{eqnarray}
The matrix ${\beta_i}^j$ and this representation of the field turn out to be
very useful in the construction of the adiabatic quantum variables.
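The representation of the two-form just given is easy to check numerically: for any rotation $R$ and field norm $b$ one has $\beta^{\mbox{\tiny-1}}\varepsilon\,\beta^{\mbox{\tiny-1}T}=b\,R\,\varepsilon R^T$, whose components reproduce $\varepsilon_{ijk}b_k$. The sketch below (ours, not from the paper) uses a random rotation and the hypothetical value $b=1.7$.

```python
# Numerical check of b_ij = (beta^-1)_i^k (beta^-1)_j^l eps_kl,
# with beta = D^-1 R^-1, for a random rotation R and field norm b.
import numpy as np

rng = np.random.default_rng(0)
b = 1.7                                        # hypothetical field norm

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
R = Q if np.linalg.det(Q) > 0 else -Q          # force det R = +1

D = np.diag([np.sqrt(b), np.sqrt(b), 1.0])     # dilatation diag(b^1/2, b^1/2, 1)
beta_inv = R @ D                               # inverse of beta = D^-1 R^-1

# eps_kl = eps_kl3: two-form of a homogeneous field along zhat
eps3 = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])

# full Levi-Civita tensor eps_ijk
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

b_vec  = b * R @ np.array([0.0, 0.0, 1.0])     # b_i = b R_i^j zhat_j
b_form = np.einsum('ijk,k->ij', eps, b_vec)    # b_ij = eps_ijk b_k
assert np.allclose(beta_inv @ eps3 @ beta_inv.T, b_form)
```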
\subsection*{Zero-order operators}
In order to construct the zero-order operators fulfilling the desired
conditions up to terms of order ${l_B}$ it is sufficient to perform the
rotation and the dilatation discussed above
\begin{eqnarray}
\Pi^{(0)}_i={1\over2}\Big\{{\beta_i}^{k},\pi_k\Big\} \label{P0}
\end{eqnarray}
the matrix ${\beta_i}^k$ being evaluated at ${\vec X_{(0)}}\equiv{\vec x}$. The
anticommutator $\{\;,\;\}$ is obviously introduced in order to make
the $\Pi^{(0)}_i$s Hermitian. A rapid computation confirms our deductions,
yielding the commutation relations fulfilled by the zero-order adiabatic
operators as
\begin{eqnarray}
& &\Big[\Pi^{(0)}_i,\Pi^{(0)}_j\Big] =\, i\,\varepsilon_{ij}
-i\,{{l_B}\over2}\varepsilon_{ijh}\varepsilon^{hkl}
\left\{{\beta_k}^m\Gamma_{ml}^{\;\;\;n}, \Pi^{(0)}_n\right\},\nonumber\\
& &\Big[\Pi^{(0)}_i,X_{(0)}^j\Big] = -i\,{l_B}\,{\beta_i}^j, \label{cr4.2}\\
& &\Big[X_{(0)}^i,X_{(0)}^j\Big] =\, 0, \nonumber
\end{eqnarray}
where $\Gamma_{ki}^{\;\;\;j}=(\partial_k{\beta_i}^h)
{\beta^{\mbox{\tiny-1}}_{\,h}}^j$ and all the functions
are evaluated at ${\vec X_{(0)}}$. $\Pi^{(0)}_1$ and $\Pi^{(0)}_2$ are conjugate
operators up to ${\cal O}({l_B})$. The commutators depend on the derivatives
of the magnetic field through the vector-valued matrix
\begin{eqnarray}
(\Gamma_{k})_{i}^{\;\;j}=
\pmatrix{\displaystyle-{1\over2}{\partial_k b\over b} & -{\cal A}_k & - b^{-1/2}\,{l_1}_k \cr
{\cal A}_k &\displaystyle -{1\over2}{\partial_k b\over b} & - b^{-1/2}\,{l_2}_k \cr
b^{1/2}\,{l_1}_k & b^{1/2}\,{l_2}_k & 0 }
\label{Gamma}
\end{eqnarray}
allowing us to clearly distinguish the effects produced by a variation of the
norm of the magnetic field from those produced by a change of direction.
The latter are entirely geometrical in character, being taken into account
by the magnetic line bundle connection form $\mbox{\boldmath${\cal A}$}$ and by the two extrinsic
vectors $\mbox{\boldmath$l_1$}$ and $\mbox{\boldmath$l_2$}$.
\subsection*{First-order operators}
Whereas the construction of the zero-order operators is in some way suggested
by the physics of the problem, a more technical effort is required for higher
order terms. The form of the first-order guiding center operators is
nevertheless suggested by the corresponding homogeneous expression
(\ref{gco}),
\begin{eqnarray}
X_{(1)}^i=X_{(0)}^i+
{{l_B}\over2}\varepsilon^{kl}\left\{{\beta_k}^i,\Pi^{(0)}_l\right\}, \label{X1}
\end{eqnarray}
the matrix ${\beta_k}^i$ being again evaluated at ${\vec X_{(0)}}$. We
immediately obtain the new commutation relations
\begin{eqnarray}
& &\Big[\Pi^{(0)}_i,\Pi^{(0)}_j\Big] =\, i\,\varepsilon_{ij}
-i\,{{l_B}\over2}\varepsilon_{ijh}\varepsilon^{hkl}
\left\{{\beta_k}^m\Gamma_{ml}^{\;\;\;n},\Pi^{(0)}_n\right\} +{\cal O}({l_B}^2),\nonumber\\
& &\Big[\Pi^{(0)}_i,X_{(1)}^j\Big] = -i\,{l_B}\, \delta_i^3 \,{\beta_3}^j+{\cal O}({l_B}^2),
\label{cr4.3}\\
& &\Big[X_{(1)}^i,X_{(1)}^j\Big] = -i\,{l_B}^2\,\varepsilon^{kl}{\beta_k}^i{\beta_l}^j
+{\cal O}({l_B}^3), \nonumber
\end{eqnarray}
indicating the ${\cal O}({l_B}^2)$ decoupling of the adiabatic guiding center
operators from $\Pi^{(0)}_1$ and $\Pi^{(0)}_2$.
All the functions are now evaluated at ${\vec X_{(1)}}$. Though
our analysis will be carried out up to ${\cal O}({l_B}^2)$, we also wrote
the first nonvanishing contribution to the commutators
among the $X_{(1)}^i$s, which is of order ${l_B}^2$. Even if unimportant
in the present calculation, this allows us to visualize the very-slow
time scale of the system.
The construction of the first-order kinematical momenta is performed by
looking for order ${l_B}$ counterterms to be added to the $\Pi^{(0)}_i$'s.
These should be homogeneous second order polynomials in the $\Pi^{(0)}_i$'s with
coefficients depending on ${\vec X_{(1)}}$. A rather tedious computation
produces
\begin{eqnarray}
\Pi^{(1)}_i=\Pi^{(0)}_i+{l_B}\,c_{ij}^{klmn}\,
\Big\{{\beta_m}^h\Gamma_{hn}^{\;\;\;j},
\Big\{\Pi^{(0)}_k,\Pi^{(0)}_l\Big\}\Big\}, \label{P1}
\end{eqnarray}
where $c_{ij}^{klmn}={1\over24}\varepsilon_{ih}\varepsilon^{kh}
(2\delta_j^l+\delta_j^3\delta_3^l)\varepsilon^{mn}+
{1\over8}\delta_i^3\varepsilon^{kh}
(\delta_j^l+\delta_j^3\delta_3^l)\varepsilon_{hg}\varepsilon^{gmn}$
and all the functions are evaluated at ${\vec X_{(1)}}$.
When expanded, these expressions are not as complicated as they look at
first sight. We nevertheless insist on keeping this notation, which greatly
simplifies the following manipulations. The commutation relations among
the first-order adiabatic variables are obtained as
\begin{eqnarray}
& &\Big[\Pi^{(1)}_i,\Pi^{(1)}_j\Big] =\, i\,\varepsilon_{ij}
- i\,{{l_B}\over4}\varepsilon_{ijk}\varepsilon^{kl}
\left\{ {\mbox{div}\,\mbox{\boldmath$b$}\over b}, \Pi^{(1)}_l \right\} +{\cal O}({l_B}^2), \nonumber\\
& &\Big[\Pi^{(1)}_i,X_{(1)}^j\Big] = -i\,{l_B}\,{\delta_i}^3\,{\beta_3}^j+{\cal O}({l_B}^2),
\label{cr4.4}\\
& &\Big[X_{(1)}^i,X_{(1)}^j\Big] = -i\,{l_B}^2\,\varepsilon^{kl}{\beta_k}^i{\beta_l}^j
+{\cal O}({l_B}^3). \nonumber
\end{eqnarray}
It is very interesting to observe that a monopole singularity, that
is a point of nonvanishing divergence, represents an obstruction in
the construction of the adiabatic operators. Since we are concerned
with real magnetic fields we nevertheless assume $\mbox{div}\,\mbox{\boldmath$b$}=0$
and carry on with our adiabatic analysis.
$\Pi^{(1)}_1$ and $\Pi^{(1)}_2$ are then conjugate operators commuting with all the
remaining variables up to terms of order ${l_B}^2$ and
the fast degree of freedom decouples from the slow and
very-slow motion up to terms of this order.
\subsection*{A non-canonical set of operators}
At least in principle it is possible to repeat this construction an arbitrary
number of times getting, as power series in ${l_B}$, a set of adiabatic
non-canonical operators $\{\Pi_i,X^i;i=1,2,3\}$ fulfilling the commutation
relations
\begin{equation}
\begin{array}{ccc}
[\Pi_i,\Pi_j]=\,i\,\varepsilon_{ij}, \hskip0.4truecm &
[\Pi_i,X^j] = -i\,{l_B} \,\delta_i^3\,{R^{\mbox{\tiny-1}}_{\,3}}^j,
\hskip0.4truecm &
[X^i,X^j] = -i\,{l_B}^2\,\varepsilon^{kl}\,b^{-1}\,{R^{\mbox{\tiny-1}}_{\,k}}^i
{R^{\mbox{\tiny-1}}_{\,l}}^j,
\end{array}
\label{cr4.5}
\end{equation}
all the functions being now evaluated in ${\vec X}$. These formal series
are in general not convergent---indeed they must be \cite{Be87}---but they
nevertheless represent a very useful tool in the discussion of the adiabatic
behaviour of the system. The description of the problem to a given order $n$ in the
adiabatic parameter ${l_B}$ requires the knowledge of the first $n+1$ terms of
the adiabatic series, so that up to terms of order ${l_B}^2$ we may identify
the $\Pi_i$s and $X^i$s with the $\Pi^{(1)}_i$s and $X_{(1)}^i$s respectively.
A glance at the commutation relations (\ref{cr4.5}) allows us to clearly
identify the dependence of the canonical structure on the variation of norm
and direction of the magnetic field. Whereas a suitable redefinition
of reference frames in $T\!R^3$ allows the fast
degree of freedom to be separated from the others, an inhomogeneous
intensity turns the very-slow variables into a couple of non-conjugate
operators, while a variation of the field direction even produces a mixing of
very-slow and slow variables.
The description of these by means of couples of conjugate operators requires
the introduction of curvilinear coordinates in space \cite{Ga59}, the so
called {\sl Euler potentials} \cite{St70}. We do not insist further on
this point for the moment, observing only that under the action of $\Pi_1$, $\Pi_2$
and $\Pi_3$, $X^1$, $X^2$, $X^3$ the Hilbert space of the system separates into
the direct sum of two subspaces describing respectively the rapid rotation
of the particle and the guiding center motion.
\section{Expanding the Hamiltonian}
The adiabatic operators $\vec\Pi$ and $\vec X$ constructed in the previous
section have been introduced in such a way to embody the expected features
of the motion of a quantum charged particle in a weakly-inhomogeneous
magnetic field. Their main advantage lies, in fact, in the very suitable
form assumed by the Pauli Hamiltonian when rewritten in terms of them.
To this end we have first to invert the power series expressing $\Pi_i$
and $X^i$ in terms of the operators $\pi_i$s and $x^i$s and,
second, to replace these in (\ref{H_inhom}). This yields
the Hamiltonian as a power series in the magnetic length ${l_B}$,
\begin{eqnarray}
{\cal H}={\cal H}^{(0)}+{l_B}{\cal H}^{(1)}+{l_B}^2{\cal H}^{(2)}+...\,
\label{a-exp}
\end{eqnarray}
allowing the adiabatic separation of the fast degree of freedom from the
slow/very-slow motion and the evaluation of approximate expressions
of the spectrum and of the wave functions of the system.
In order to get the $\pi_i$s and $x^i$s in terms of the $\Pi_i$s and
$X^i$s we first recall that $X^i=X_{(1)}^i+{\cal O}({l_B}^2)$. By rewriting
$X_{(1)}^i$ in terms of the $\Pi^{(0)}_i$s and $X_{(0)}^i= x^i$s, (\ref{X1}),
$\Pi^{(0)}_i$ in terms of the $\pi_i$s and $x^i$s, (\ref{P0}), and by solving with
respect to $x^i$, we then obtain $x^i$ as a function of the $\pi_i$s
and the $X^i$s, $x^i=x^i(\vec\pi,\vec X)$. This allows us to rewrite $\Pi^{(0)}_i$
as a function of the $\pi_i$s and $X^i$s. Recalling finally that $\Pi_i
=\Pi^{(1)}_i+{\cal O}({l_B}^2)$ and using (\ref{P1}) we immediately get
$\Pi_i$ in terms of the $\pi_i$s and $X^i$s, $\Pi_i=\Pi_i(\vec\pi,
\vec X)$. The inversion of this relation, order by order in ${l_B}$,
yields $\pi_i$ and $x^i$ in terms of the adiabatic operators.
The computation gives
\begin{eqnarray}
\pi_i&=&{1\over2}\Big\{{\beta^{\mbox{\tiny-1}}}_i^{\,j},\Pi_j\Big\}
+{{l_B}\over2}\,
\mbox{c}_{jh}^{klmn} \Big\{
{\beta^{\mbox{\tiny-1}}}_i^{\,j}
\beta_m^{\,o}\Gamma_{on}^{\;\;\;h},
\Big\{\Pi_k,\Pi_l\Big\}\Big\}+{\cal O}({l_B}^2), \label{pi}\\
x^i &=& X^i -{l_B}\,\varepsilon^{kl}\beta_k^{\,i}\Pi_l+{\cal O}({l_B}^2), \label{x}
\end{eqnarray}
where $\mbox{c}_{ij}^{klmn}={1\over2}\delta_i^n\delta_j^k\varepsilon^{ml}-
2c_{ij}^{klmn}$. As a useful check the commutation relations (\ref{cr4.1})
may be reobtained by means of (\ref{cr4.5}).
The substitution of (\ref{pi}), (\ref{x}) in the Pauli Hamiltonian
(\ref{H_inhom}) yields immediately the first two terms of the
adiabatic expansion (\ref{a-exp}),
\begin{eqnarray}
{\cal H}^{(0)}/\hbar\omega_B&=&\,
{1\over2}{\beta^{\mbox{\tiny-1}}}_i^{\,k}{\beta^{\mbox{\tiny-1}}}_i^{\,l}
\;\Pi_k\Pi_l +g\,b_i\,\sigma_i \label{H0}\\
{\cal H}^{(1)}/\hbar\omega_B&=&
{\beta^{\mbox{\tiny-1}}}_i^{\,p}{\beta^{\mbox{\tiny-1}}}_i^{\,q}
{\tilde c}_{pj}^{klmn}
\beta_m^{\,o}\Gamma_{on}^{\;\;\;j}
\Big\{\Pi_k\Pi_q\Pi_l\Big\}
-g\,\varepsilon^{kl}\,\beta_k^{\,h}\,(\partial_hb_i)\,\sigma_i\, \Pi_l \label{H1}\\
... & & \nonumber
\end{eqnarray}
where the notation $\Big\{\Pi_k\Pi_q\Pi_l\Big\}= \Pi_k\Pi_q\Pi_l +
\Pi_l\Pi_q\Pi_k$ has been introduced.
In order to gain some more physical insight into these expressions we now
abandon our compact notation in favour of a more transparent one.
By recalling the definition (\ref{beta}) of ${\beta_i}^j({\vec x})$, the
definition (\ref{Gamma}) of $\Gamma_{ij}^{\;\;\;k}({\vec x})$ and the explicit expression of the
inhomogeneous dilatation ${D_i}^j({\vec x})= \mbox{diag}(b^{1/2}({\vec x}),b^{1/2}({\vec x}),1)$,
we rewrite everything in terms of the magnetic field and of other quantities
capable of a direct physical interpretation. The full expansion of the zero
order Hamiltonian (\ref{H0}) gives
\begin{eqnarray}
{\cal H}^{(0)}/\hbar\omega_B=\, {1\over2}\,{\Pi_3}^2+ b\, \left[J
+g\, (\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$\sigma$})\right],
\label{H0x}
\end{eqnarray}
where $J$ represents the harmonic oscillator Hamiltonian constructed by means
of the canonical variables $\Pi_1$ and $\Pi_2$, $J=({\Pi_1}^2+{\Pi_2}^2)/2$,
and the norm of the magnetic field $b(\vec X)$ is evaluated in the
adiabatic guiding center operators $\vec X$.
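At fixed norm $b$ the fast part of (\ref{H0x}) is an ordinary harmonic oscillator, so the zero order spectrum consists of Landau-like levels $b\,(n+1/2\pm g)$ on top of the parallel kinetic energy. The following sketch of ours verifies the spectrum of $J$ in a truncated Fock-space representation of $\Pi_1$, $\Pi_2$ with $[\Pi_1,\Pi_2]=i$.

```python
import numpy as np

N = 60  # Fock-space truncation for the fast degree of freedom
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # annihilation operator
ad = a.conj().T

# Pi_1, Pi_2 realize [Pi_1, Pi_2] = i (away from the truncation edge)
P1 = (a + ad) / np.sqrt(2)
P2 = -1j * (a - ad) / np.sqrt(2)

comm = P1 @ P2 - P2 @ P1
J = 0.5 * (P1 @ P1 + P2 @ P2)                  # harmonic oscillator J
levels = np.sort(np.linalg.eigvalsh(J))        # expect n + 1/2
```

The commutator deviates from $i$ only at the truncation edge, which is why the check should be restricted to the low-lying block.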
We observe that while the $\Pi_1$-$\Pi_2$ degree of freedom decouples to
this order from the slow and very-slow variables the spin does not.
The separation, up to higher order terms, of the fast motion
(rotation + spin) requires in fact a subsidiary zero order transformation
which we will perform in the next section.
For the moment let us observe that, up to the spin term, the zero order
Hamiltonian (\ref{H0x}) precisely embodies the expected behaviour of the
system: the canonical couple of operators $\Pi_1$-$\Pi_2$ takes into account
the {\sl fast} rotation of the particle around its guiding center, while
the non-canonical variables $\Pi_3$-$X^3$ describe the slow motion along
the magnetic field lines by means of an effective ``kinetic energy + potential
energy'' Hamiltonian. The norm of the magnetic field $b(\vec X)$ plays the role
of the effective potential.
As long as ${\cal O}({l_B}^2)$ terms are ignored the very-slow dynamical
variables $X^1$-$X^2$ appear only as adiabatic parameters driving the slow
motion, whereas a more accurate analysis indicates them as taking into
account the very-slow drift in the directions normal to the field \cite{Ma96}.
More complicated appears the full expression of the first order
Hamiltonian (\ref{H1}). The replacement of ${\beta_i}^j({\vec x})$ and
$\Gamma_{ij}^{\;\;\;k}({\vec x})$ by means of (\ref{beta}) and (\ref{Gamma})
yields in fact the expression
\begin{eqnarray}
& &\;\;\;\;\;{\cal H}^{(1)}/\hbar\omega_B\,=\,
-b^{-1/2}\varepsilon^{\mu\nu}
(\mbox{\boldmath$e$}_\mu\cdot\mbox{\boldmath$\nabla$} b) \;
\left[{2\over3}J_\nu
+ g\,(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$\sigma$})\Pi_\nu\right]
-{2\over3}\, b^{1/2}({\mbox{\boldmath$e$}_\mu}\cdot\mbox{\boldmath${\cal A}$}) \;J_\mu \nonumber\\
& &+\left[ {1\over2}\big(\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_2$}-\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_1$}\big)
-(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath${\cal A}$})\right]\; J\, \Pi_3
+{1\over4}\big(\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_2$}+\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_1$}\big)\;
({\Pi_1}^2-{\Pi_2}^2)\, \Pi_3 \nonumber \\
& &-{1\over4}\big(\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_1$}-\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_2$}\big)\;
\{\Pi_1,\Pi_2\}\, \Pi_3
+ \, b^{-1/2}\Big[(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_2$})\,\Pi_1
-(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_1$})\,\Pi_2\Big]\,{\Pi_3}^2 \nonumber\\
& &-gb^{1/2} \varepsilon^{\mu\nu}
\Big[(\mbox{\boldmath$e$}_\mu\cdot\mbox{\boldmath$l_1$})(\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$\sigma$})
+(\mbox{\boldmath$e$}_\mu\cdot\mbox{\boldmath$l_2$})(\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$\sigma$})\Big]\;
\Pi_\nu,
\label{H1x}
\end{eqnarray}
indicating the first order coupling among the various operators.
The notation $J_\mu={1\over2}\delta^{\alpha\beta}\Pi_\alpha\Pi_\mu\Pi_\beta$
has been introduced and all the functions are evaluated in $\vec X$.
As expected from dimensional considerations ${\cal H}^{(1)}$ depends only on the
first order derivatives of the field. It is nevertheless worthwhile to stress
that the gradient of the magnetic-field-norm, $\mbox{grad}\,b=
\mbox{\boldmath$\nabla$}b$, appears only in the first term of the right
hand side of this expression, all the remaining terms depending only on
the quantities $\mbox{\boldmath${\cal A}$}$, $\mbox{\boldmath$l_1$}$ and $\mbox{\boldmath$l_2$}$ completely characterizing
the intrinsic/extrinsic geometry of the magnetic line bundle ${\cal M}$.
To a large extent, therefore, the complication of this expression is produced
by the variation of direction of the magnetic field, that is, by the
nontrivial geometry of ${\cal M}$. It is not yet time to comment on
the structure of ${\cal H}^{(1)}$.
First of all, it is in fact necessary to operate a suitable unitary
transformation separating the zero order fast motion from the other
degrees of freedom, that is, diagonalizing the spin term
$\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$\sigma$}$. This will produce a
modification of the first order term of the adiabatic expansion.
Secondly, it is possible to drastically simplify the form of
${\cal H}^{(1)}$ by operating a suitable first order unitary transformation.
The strategy is nothing else than the quantum equivalent
of the so called {\sl averaging transformation} of classical
mechanics and is of great help in shedding light on the physical
content of (\ref{H1x}).
\section{Quantum averaging transformations}
A well known strategy in dealing with the adiabatic separation of
fast and slow variables in classical mechanics consists in performing
a series of successive canonical transformations (the {\sl averaging
transformations}) separating, order by order in some adiabatic
parameter, the rapid oscillation of the system from its slow
averaged motion. The analysis depending essentially on the canonical
structure of the problem generalizes immediately to quantum
mechanics, the canonical transformations being replaced by
suitable unitary transformations. The full adiabatic expansion describing
the motion of a spin degree of freedom adiabatically driven by external
parameters has been obtained along these lines by M.V.\ Berry \cite{Be87}
while R.G.\ Littlejohn and S.\ Weigert \cite{L&W93} employed the method
in discussing the first adiabatic corrections to the semiclassical motion
of a neutral spinning particle in an inhomogeneous magnetic field.
We shall consider therefore a set of unitary operators
\begin{eqnarray}
U^{(n)}=\exp\left\{i{l_B}^n{\cal L}^{(n)}\right\},
\end{eqnarray}
$n=0,1,...$ such that fast and slow/very-slow degrees of freedom separate
up to ${\cal O}({l_B}^{n+1})$ in the Hamiltonian obtained by the
successive application of $U^{(0)}$, $U^{(1)}$, ...,$U^{(n)}$.
Whereas in classical mechanics it is natural to consider the averaging
transformation as defining new canonical variables, in quantum mechanics
it appears more convenient to keep the canonical operators fixed and
transform the Hamiltonian.
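The order counting underlying the method is easily checked numerically. In the sketch below (our illustration; $H$ and $L$ are random Hermitian stand-ins for a Hamiltonian and a generator) conjugation by $U=\exp(i\epsilon L)$ agrees with $H+i\epsilon\,[L,H]$ up to a remainder of order $\epsilon^2$, which is exactly what allows the expansion to be corrected order by order.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

H = rand_herm(8)   # stand-in for the zero order Hamiltonian
L = rand_herm(8)   # stand-in for the generator of the averaging transformation

def unitary(eps):
    # U = exp(i eps L) via the spectral decomposition of the Hermitian L
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(1j * eps * w)) @ V.conj().T

def residual(eps):
    U = unitary(eps)
    exact = U @ H @ U.conj().T
    first_order = H + 1j * eps * (L @ H - H @ L)
    return np.linalg.norm(exact - first_order)

ratio = residual(2e-3) / residual(1e-3)   # ~4: the remainder is O(eps^2)
```

Doubling $\epsilon$ increases the residual by a factor of about four, confirming the quadratic remainder.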
\subsection*{Zero-order transformation}
The zero order separation of the fast and slow/very-slow motion
requires the diagonalization of the spin term $gb({\vec X})
(\mbox{\boldmath$e$}_3({\vec X})\cdot\mbox{\boldmath$\sigma$})$ of Hamiltonian (\ref{H0x}).
Denoting by ${\rho_i}^j({\vec x})$ the infinitesimal generator
of the rotation ${R_i}^j({\vec x})$ bringing the fixed frame
$\{\hat{\mbox{\boldmath$x$}},\hat{\mbox{\boldmath$y$}},\hat{\mbox{\boldmath$z$}}\}$ into the adapted frame $\{\mbox{\boldmath$e$}_1({\vec x}),\mbox{\boldmath$e$}_2({\vec x}),\mbox{\boldmath$e$}_3({\vec x})\}$,
${R_i}^j={(\mbox{e}^\rho)_i}^j\equiv\,\delta_i^j+{\rho_i}^j+
{1\over2}{\rho_i}^k{\rho_k}^j+...$, the aim is achieved by choosing
\begin{eqnarray}
{\cal L}^{(0)}=-{1\over2}\varepsilon^{ijk}\rho_{ij}(\vec X)\sigma_k,
\label{L0x}
\end{eqnarray}
the matrix $\rho_{ij}={\rho_i}^j$ being evaluated in the guiding center
operators $\vec X$. Because of the commutation relations (\ref{cr4.5}),
the operator $U^{(0)}$ commutes with $\Pi_1$, $\Pi_2$ and therefore with $J$;
it produces ${\cal O}({l_B})$ terms when commuting with $\Pi_3$
and ${\cal O}({l_B}^2)$ terms when commuting with functions of $\vec X$.
In evaluating the new Hamiltonian ${\cal H}'=U^{(0)}{\cal H} {U^{(0)}}^\dagger
={{\cal H}^{(0)}}'+{l_B}{{\cal H}^{(1)}}'+...\ $ up to terms of order ${l_B}^2$
we have therefore to worry only about the action of $U^{(0)}$
on $\mbox{\boldmath$\sigma$}$ and $\Pi_3$. A very rapid computation yields the
transformation rule
\begin{eqnarray}
U^{(0)}(\mbox{\boldmath$e$}_i\cdot\mbox{\boldmath$\sigma$}){U^{(0)}}^\dagger= \sigma_i +{\cal O}({l_B}^2)
\label{U0sigma}
\end{eqnarray}
while the action of $U^{(0)}$ on $\Pi_3$,
$U^{(0)}\Pi_3{U^{(0)}}^\dagger=\Pi_3+U^{(0)}[\Pi_3,{U^{(0)}}^\dagger]$,
may be easily evaluated by computing the commutator in the original
set of operators $\pi_i$s and $x^i$s and transforming back to adiabatic
variables
\begin{eqnarray}
U^{(0)}\Pi_3{U^{(0)}}^\dagger= \Pi_3 + {l_B}(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_2$})\sigma_1
- {l_B}(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_1$})\sigma_2
+ {l_B}(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath${\cal A}$}) \sigma_3
+ {\cal O}({l_B}^2).
\label{U0pi3}
\end{eqnarray}
Subjecting ${\cal H}^{(0)}$ and ${\cal H}^{(1)}$ to the zero order averaging
transformation $U^{(0)}$ and by using (\ref{U0sigma}) and (\ref{U0pi3})
we obtain the new adiabatic expansion
\begin{eqnarray}
& &\;\;\;\;\;{{\cal H}^{(0)}}'/\hbar\omega_B=\, {1\over2}\,{\Pi_3}^2+ b\,
\left(J +g\, \sigma_3\right), \label{H0'x} \\
& &\;\;\;\;\;{{\cal H}^{(1)}}'/\hbar\omega_B\,=\,
-b^{-1/2}\varepsilon^{\mu\nu}
(\mbox{\boldmath$e$}_\mu\cdot\mbox{\boldmath$\nabla$} b) \;
\left({2\over3}J_\nu + g\,\sigma_3\Pi_\nu\right)
-{2\over3}\, b^{1/2}({\mbox{\boldmath$e$}_\mu}\cdot\mbox{\boldmath${\cal A}$}) \;J_\mu \nonumber\\
& &+\left[{1\over2}\big(\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_2$}-\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_1$}\big)J
-(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath${\cal A}$})(J-\sigma_3)\right]\, \Pi_3
+{1\over4}\big(\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_2$}+\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_1$}\big)\;
({\Pi_1}^2-{\Pi_2}^2)\, \Pi_3 \nonumber \\
& &-{1\over4}\big(\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_1$}-\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_2$}\big)\;
\{\Pi_1,\Pi_2\}\, \Pi_3
+ \, b^{-1/2}\Big[(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_2$})\,\Pi_1
-(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_1$})\,\Pi_2\Big]\,{\Pi_3}^2 \nonumber\\
& &+ \Big[(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_2$})\sigma_1
-(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_1$})\sigma_2\Big]\; \Pi_3
- gb^{1/2} \varepsilon^{\mu\nu}
\Big[(\mbox{\boldmath$e$}_\mu\cdot\mbox{\boldmath$l_1$})\sigma_1
+(\mbox{\boldmath$e$}_\mu\cdot\mbox{\boldmath$l_2$})\sigma_2\Big]\; \Pi_\nu, \label{H1'x}\\
& & \;\;\;\;\;... \nonumber
\end{eqnarray}
All the functions are evaluated in $\vec X$.
The fast and slow/very-slow motion are separated in this way in the zero
order term of the adiabatic expansion but not in the first order term.
\subsection*{First-order transformation}
The application of the first order averaging transformation $U^{(1)}$ to
${\cal H}'$ produces the new Hamiltonian ${\cal H}''=U^{(1)}{\cal H}'{U^{(1)}}^\dagger={{\cal H}^{(0)}}'
+{l_B}({{\cal H}^{(1)}}'+i[{\cal L}^{(1)},{{\cal H}^{(0)}}'])+...\ $. It is then possible
to simplify the first order term of the adiabatic expansion
by choosing ${\cal L}^{(1)}$ in such a way that its commutator with
${{\cal H}^{(0)}}'$ cancels as many terms of ${{\cal H}^{(1)}}'$ as possible. The analysis
of the commutation relations involved and a little thought indicate that
it is possible to annihilate all but the third term of (\ref{H1'x})
by choosing
\begin{eqnarray}
& &\;\;\;\;{\cal L}^{(1)}\,=\,
-b^{-3/2}
(\mbox{\boldmath$e$}_\mu\cdot\mbox{\boldmath$\nabla$} b) \;
\left({2\over3}J_\mu + g\,\sigma_3\Pi_\mu\right)
+{2\over3}\, b^{-1/2}\varepsilon^{\mu\nu}({\mbox{\boldmath$e$}_\mu}\cdot\mbox{\boldmath${\cal A}$}) \;J_\nu\nonumber\\
& & -{1\over8}b^{-1}\big(\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_2$}+\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_1$}\big)\;
\{\Pi_1,\Pi_2\}\,\Pi_3
-{1\over8}b^{-1}\big(\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_1$}-\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_2$}\big)\;
({\Pi_1}^2-{\Pi_2}^2)\,\Pi_3 \nonumber\\
& & - \,b^{-3/2} \Big[(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_2$})\,\Pi_2
+(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_1$})\,\Pi_1\Big]\,{\Pi_3}^2
+ \,g^{-1}b^{-1} \Big[(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_2$})\sigma_2
+(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath$l_1$})\sigma_1\Big]\;\Pi_3\nonumber\\
& & +{g\over g^2-4}b^{-3/2}
\Big[(\mbox{\boldmath$e$}_\mu\cdot\mbox{\boldmath$l_1$})( 2\sigma_1\delta^{\mu\nu}
-g\sigma_2\varepsilon^{\mu\nu})
-(\mbox{\boldmath$e$}_\mu\cdot\mbox{\boldmath$l_2$})( 2\sigma_2\delta^{\mu\nu}
+g\sigma_1\varepsilon^{\mu\nu})\Big]\Pi_\nu
\label{L1x}
\end{eqnarray}
The commutators of the zero order Hamiltonian (\ref{H0'x}) with
the various terms of ${\cal L}^{(1)}$ yield the terms of
(\ref{H1'x}) times the imaginary factor $i$, in such a way that they
cancel in the new adiabatic expansion, all the terms but the third.
It is in fact immediate to convince oneself that no operator may be found
whose commutator with (\ref{H0'x}) produces terms
proportional to $J\Pi_3$ and $\sigma_3\Pi_3$. The third term of (\ref{H1'x})
may therefore not be removed from the adiabatic expansion: it represents
a real first order coupling between fast and slow/very-slow motion and not
a complication produced by a wrong choice of variables. Its
relevance in the context of the classical guiding center motion has
been first recognized by R.G.\ Littlejohn \cite{Lj}.
It is therefore not a surprise to re-find it in the discussion of the
quantum guiding center dynamics. The quantum averaging method
thus produces the adiabatic expansion
\begin{eqnarray}
&&{{\cal H}^{(0)}}''=\,{{\cal H}^{(0)}}'\\
&&{{\cal H}^{(1)}}''/\hbar\omega_B=
\,\left[{1\over2}\big(\mbox{\boldmath$e$}_1\cdot\mbox{\boldmath$l_2$}-\mbox{\boldmath$e$}_2\cdot\mbox{\boldmath$l_1$}\big)J
-(\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath${\cal A}$})(J-\sigma_3)\right]\, \Pi_3,\label{H1''x}\\
&& \;\;\; ...\ . \nonumber
\end{eqnarray}
We observe that whereas the zero order term (\ref{H0'x}) depends
only on the magnetic-field norm $b$ (besides the commutation
relations (\ref{cr4.5})), the first order term (\ref{H1''x}) is completely
characterized by the Frobenius invariant (\ref{Frobenius})
and by the magnetic field line torsion (\ref{torsion}).
\section[]{Quantum guiding center dynamics and \\
magnetic-induced geometric-magnetism}
The construction of a suitable set of non-canonical operators
embodying the classically expected features of the motion of a charged
particle in an inhomogeneous magnetic field and the quantum averaging
method allow us to rewrite the Pauli Hamiltonian (\ref{H_inhom})
in such a way that the fast degree
of freedom---corresponding to the classical rotation of the particle
around its guiding center---and the spin degree of freedom separate, up
to terms of order ${l_B}^2$, from the guiding
center dynamics. The transformation to the adiabatic operators $\Pi_i$s,
$X^i$s, (\ref{X1}) and (\ref{P1}), and application of the zero and
first order quantum averaging operators, (\ref{L0x}) and (\ref{L1x}),
produces in fact the Hamiltonian
\begin{eqnarray}
{\cal H}/\hbar\omega_B=\,{1\over2}\,{\Pi_3}^2+ b\,\left(J +g\, \sigma_3\right)
-{l_B}\left[\tau\,(J-\sigma_3)-{1\over2}{\cal F}\,J \right]\,\Pi_3
+{\cal O}({l_B}^2).
\label{Heffx}
\end{eqnarray}
Disregarding terms of order higher than ${l_B}$, the operators $J$,
representing the magnetic moment of gyration of the particle,
and $\sigma_3$ are constants of motion of the system.
With the particle frozen in one of its $J$ and $\sigma_3$ eigenstates,
Hamiltonian (\ref{Heffx}) therefore describes the corresponding
guiding center dynamics.
As long as ${\cal O}({l_B}^2)$ terms are ignored $X^1$ and $X^2$ appear
as non-dynamical external adiabatic parameters and only the
$\Pi_3$-$X^3$ degree of freedom, representing in the classical
limit the drift of the particle along the magnetic field lines,
is dynamically relevant.
To this order, therefore, the quantum guiding center dynamics is
described by a one degree of freedom Hamiltonian given by the sum of
the kinetic energy ${\Pi_3}^2/2$ and of an effective potential proportional
to $b(\vec X)$.
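In the classical limit this is the familiar magnetic mirror: the guiding center bounces between the turning points where the conserved energy equals $\mu\,b(X)$, $\mu$ being the frozen eigenvalue of $J+g\sigma_3$. A minimal numerical sketch of ours, with a hypothetical bottle profile $b(X)=1+X^2$:

```python
import numpy as np

mu = 0.5                        # eigenvalue of J + g*sigma_3 (frozen fast motion)
def b(X):                       # hypothetical bottle profile of the field norm
    return 1.0 + X**2
def accel(X):                   # mirror force -mu db/dX
    return -mu * 2.0 * X

dt, nsteps = 1e-3, 40000
X, P = 0.0, 0.4                 # start at the bottle center
Xs = np.empty(nsteps)
for n in range(nsteps):         # leapfrog (kick-drift-kick)
    P += 0.5 * dt * accel(X)
    X += dt * P
    P += 0.5 * dt * accel(X)
    Xs[n] = X

E0 = 0.5 * 0.4**2 + mu * b(0.0)            # conserved energy
X_turn = np.sqrt((E0 - mu * b(0.0)) / mu)  # turning point: E0 = mu b(X_turn)
E_end = 0.5 * P**2 + mu * b(X)
```

The leapfrog integrator conserves the energy to ${\cal O}(dt^2)$ and the computed bounce amplitude agrees with the turning point.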
$\Pi_3$ being a {\sl slow} variable, that is, of the same magnitude as the first
adiabatic correction, the order ${l_B}$ term $[\tau(J-\sigma_3)-{\cal F}J/2]
\Pi_3$ may be identified as a magnetic-like interaction and reabsorbed
into the zero order Hamiltonian as a gauge potential. The guiding center
Hamiltonian is rewritten in this way in the familiar form
\begin{eqnarray}
{\cal H}/\hbar\omega_B=\,{1\over2}\,(\Pi_3-{l_B}\,A(\vec X))^2
+V(\vec X)
+{\cal O}({l_B}^2),
\label{Hend}
\end{eqnarray}
with
\begin{eqnarray}
A(\vec X)&=& (J-\sigma_3)\,\tau(\vec X)-{J\over2}\,{\cal F}(\vec X)\label{A},\\
V(\vec X)&=& (J +g\, \sigma_3)\,b(\vec X)\label{V}.
\end{eqnarray}
As it might be expected from the general discussion of section 3 the
magnetic field line torsion $\tau=\mbox{\boldmath$e$}_3\cdot\mbox{\boldmath${\cal A}$}$ appears
as (a part of) a gauge potential in the effective slow dynamics,
taking into account the anholonomy produced by the non trivial
parallel transport of the magnetic line bundle ${\cal M}$.
Maybe unexpected, at least from this point of view, is the contribution given
by the Frobenius invariant. Let us in fact
compare the guiding center motion of a charged particle
along a magnetic field line with the propagation of light in a
coiled optical fiber \cite{C&W86-T&C86} or with the motion of an electron
constrained to a twisted line \cite{T&T92}. In both cases---sharing
the same mathematical background---the adiabatic dynamics
is coupled with an effective vector potential proportional to the torsion of
the optical fiber or of the twisted line. The analogous contribution appears
in the guiding center motion, $(J-\sigma_3)\,\tau(\vec X)$,
but it is not the whole story.
The particle, being confined not to a single line but only to a
neighborhood of it, is sensitive to the variation of the geometry of
the magnetic
field lines surrounding the one on which the guiding center is located.
If all the field lines had the same geometry,
the foliation of $R^3$ in terms of them would be trivial, the Frobenius
invariant would vanish, and the situation would be analogous to the examples above.
The geometry of this foliation being in general non trivial it yields
a further contribution to the gauge potential $A(\vec X)$ proportional
to the Frobenius invariant, $J{\cal F}(\vec X)/2$.
It is obviously not possible, in the general case, to remove the
gauge potential (\ref{A}) by means of a suitable choice of gauge.
In order to complete the identification of (\ref{Hend}) with a Hamiltonian
of one (or two, if we want to consider $X^1$, $X^2$ as dynamical) degrees
of freedom, it is necessary to replace the $\Pi_i$s, $X^i$s
by a set of canonical operators. The task is achieved by introducing a
Darboux coordinate frame $\mbox{x}^i=\mbox{x}^i({\vec x})$, $i=1,2,3$,
bringing the closed skew symmetric two-form $b_{ij}({\vec x})$ in the
canonical form,
\begin{eqnarray}
b_{kl}({\vec x}){\partial x^k\over\partial\mbox{x}^i}{\partial x^l\over\partial\mbox{x}^j}=\,\varepsilon_{ij}
\end{eqnarray}
($\mbox{x}^1({\vec x})$, $\mbox{x}^2({\vec x})$ may be identified with a pair of Euler
potentials \cite{St70}
for the magnetic field $\mbox{\boldmath$b$}({\vec x})$, $\mbox{\boldmath$\nabla$}\mbox{x}^1\wedge\mbox{\boldmath$\nabla$}\mbox{x}^2
=\mbox{\boldmath$b$}$, and $\mbox{x}^3({\vec x})$ with the arc length of the magnetic field
lines). Defining $\mbox{X}^i=\mbox{x}^i(\vec X)$, $i=1,2,3$, we get the
canonical commutation relations $[\Pi_3,\mbox{X}^i]=-i{l_B}\delta_3^i$ and
$[\mbox{X}^i,\mbox{X}^j]=-i{l_B}^2\varepsilon^{ij}$ allowing the identification of
the operators describing the slow and the very-slow degrees of freedom.
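As a deliberately simple illustration of ours: for the divergence-free field $\mbox{\boldmath$b$}=(0,0,x)$ one may take the Euler potentials $\mbox{x}^1=x^2/2$ and $\mbox{x}^2=y$, with $\mbox{x}^3=z$ the arc length along the straight field lines; the defining identity $\mbox{\boldmath$\nabla$}\mbox{x}^1\wedge\mbox{\boldmath$\nabla$}\mbox{x}^2=\mbox{\boldmath$b$}$ can be checked symbolically.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
bvec = sp.Matrix([0, 0, x])              # divergence-free, z-directed field

chi1, chi2 = x**2 / 2, y                 # candidate Euler potentials
def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

cross = grad(chi1).cross(grad(chi2))     # should reproduce bvec
divb = sum(sp.diff(bvec[i], v) for i, v in enumerate((x, y, z)))
```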
It is in principle possible to start from the beginning by introducing
such curvilinear coordinates in $R^3$ and work out the problem by using
a canonical set of operators \cite{Ga59}.\footnote{The introduction of
Darboux coordinates
would automatically produce the framing of $T_{{\vec x}}R^3$ by means of an
adapted frame $\{\mbox{\boldmath$e$}_1({\vec x}),\mbox{\boldmath$e$}_2({\vec x}),\mbox{\boldmath$e$}_3({\vec x})\}$. Our method, on the
other hand, consists in adapting the frame of $T_{{\vec x}}R^3$ without
introducing the curvilinear coordinates.}
Nevertheless, whereas the existence of a Darboux coordinate frame is
always guaranteed by Darboux theorem, it is hardly ever possible to find it
explicitly and to proceed to the construction of the $\mbox{X}^i$s.
For this reason---though the $\Pi_i$s, $\mbox{X}^i$s appear as the most
natural variables for the problem---the explicit construction
of a set of non-canonical operators appears as a better
strategy.
\section{Conclusions}
The main difficulty in addressing the separation of fast and slow
degrees of freedom in the study of an adiabatic system consists
generally in finding a suitable set of variables adapted with
sufficient accuracy to the different time-scale behaviours of the
system. Starting from the homogeneous case and the
canonical commutation relations (\ref{cr2.2}) we showed
how the analysis of the canonical structure of a charged spinning
particle moving in an external inhomogeneous magnetic field
leads naturally to the construction---as power series in the
magnetic length ${l_B}$---of a suitable set of non-canonical operators
allowing us to systematically take into account the coupling between
spatial and spin degrees of freedom. The new variables fulfil
the very compact commutation relations (\ref{cr4.5}) clearly displaying the
dependence of the canonical structure on the norm and direction of the
external magnetic field. In terms of the new operators the Pauli Hamiltonian
rewrites as a power series in the adiabatic parameter ${l_B}$ which
may be brought into a particularly simple form by operating suitable
unitary transformations. In this way the {\sl fast} degree of freedom
of the system representing classically the rapid rotation of the particle
around the guiding center and the spin are separated from the remaining
degrees of freedom up to terms of order ${l_B}^2$. The resulting effective
guiding center dynamics displays geometric-magnetism:
the coupling with the geometry-induced gauge potential (\ref{A}), depending
on the magnetic field line torsion (\ref{torsion}) and on the Frobenius
invariant (\ref{Frobenius}), and with the scalar potential (\ref{V}),
proportional to the magnetic field norm. This fully extends to the
quantum domain
the previous classical treatments of the problem, showing that the anholonomy
first studied by R.G.\ Littlejohn in the classical guiding center theory
plays an equivalent role in the discussion of the quantum problem. It is
a feature of the canonical structure of the system after all. The
geometrical mechanism responsible for the appearance of induced gauge
structures has also been analyzed in some detail and formalized in the
geometry of the magnetic line bundle ${\cal M}$.
In concluding, we observe that our discussion in some sense gives the
solution of only half of the problem. The guiding center dynamics is still
characterized by the presence of a {\sl fast} and a {\sl slow} time scale
({\sl slow} $\rightarrow$ {\sl fast}, {\sl very-slow} $\rightarrow$
{\sl slow}) and is therefore amenable to a treatment by means of adiabatic
techniques. Nevertheless, the remaining problem is not of as deep a
geometrical nature as the original one and probably does not
admit a treatment in general terms.
\section*{Acknowledgments}
It is a genuine pleasure to thank M.V.\ Berry for hospitality
and a very stimulating ``few days discussion about {\sl fast} and {\sl slow}''
in Bristol. I am also indebted to J.H.\ Hannay, E.\ Onofri and
J.M.\ Robbins for very helpful discussions on related topics.
\newpage
\section{Introduction}
The non-linear $O(3)$ $\sigma$-model in (3+1) dimensional
space-time is a scalar field theory whose target space is
$S^2$.
The static fields are maps ${\Bbb R}^3 \cup \{\infty \} \mapsto S^2$
and can be classified by a homotopy invariant which is called
the Hopf number.
Such a model in three space dimensions
must include higher order terms in the field
gradient in order to allow non-singular, topologically non-trivial,
static solutions.
The corresponding ``sigma model with a Skyrme term'' was proposed
long ago by L.D. Faddeev \cite{Fad:76}.
For this model the Hopf number provides a lower topological bound
on the energy \cite{WaK}.
Early studies on ``Hop{f}ions'' (soliton solutions of Hopf number unity)
in classical field theory, including estimates for their size and mass,
were carried out by de~Vega\cite{deV}.
Subsequently it was suggested to employ them in
an effective chiral theory describing low-energy hadron dynamics;
in that respect they are similar to Skyrmions \cite{Gip/Tze:80}.
It was later shown by Kundu and Rybakov \cite{akr}
that Hop{f}ions in the $O(3)$ $\sigma$-model are of closed vortex type.
Models with non-zero Hopf number
have also been investigated in condensed matter physics
for the description of three-dimensional ferromagnets and
superfluid ${}^3$He\cite{DzI,VoM}.
These are effective theories of Ginzburg-Landau type
where the fields are interpreted as physical order parameters.
However, a field configuration which is a solution of the
full equations of motion has not been found for any of the
mentioned theories.
In this paper we mainly study classical static Hop{f}ions.
Our model is defined in section \ref{Hmaps} where also
an ansatz of azimuthal symmetry is introduced which is later
used for numerical computations.
In section \ref{Nres} we present our numerical results which are
minima of the energy functional for
Hopf number one and two.
We discuss their shapes and binding energies as well
as their relation to (2+1) dimensional solitons.
Our model has a self-interaction
coupling parameter and we study the
dependence of the energy on this coupling.
In addition, the effect of a symmetry breaking
potential term is described.
In section \ref{Srot} we give a simple
approximation for the excitation spectrum
of a Hop{f}ion slowly rotating around its axis of symmetry.
We conclude with section \ref{Conc} where we \mbox{also remark on
possible further investigations.}
\section{Hopf maps and toroidal ansatz}\label{Hmaps}
We are almost exclusively interested in static solutions
and therefore define
our model by the following energy functional on ${\Bbb R}^3$
\begin{equation}
\label{en}
E_{stat}\left[ \bbox{\phi} \right] =
\Lambda\int_{{\Bbb R}^3} d{\bf x}\;
\frac{1}{2}
\left(\partial_i \bbox{\phi} \right)^2 +
\frac{g_1}{8}\left(\partial_i \bbox{\phi} \times
\partial_j \bbox{\phi} \right)^2 +
\frac{g_2}{8}\left(\partial_i \bbox{\phi} \right)^2
\left( \partial_j \bbox{\phi}\right)^2 \,.
\end{equation}
For $g_2=0$ this is equivalent to the static
energy of the Faddeev-Skyrme model \cite{Fad:76,WaK}.
The field $\bbox{\phi}$ is a three component vector in
iso-space, subject to the constraint $\bbox{\phi}^2=1$.
The cross product is taken in internal space and the coordinate
indices $i,j$ run from 1 to 3.
For $g_1=g_2=0$
minima of $E$ (eq.~(\ref{en})) are harmonic maps from ${\Bbb R}^3$
to $S^2.$ As shown in ref.~\cite{Bai/Woo:88}, all non-constant
harmonic maps are orthogonal projections
${\Bbb R}^3 \mapsto {\Bbb R}^2$, followed by a
harmonic map ${\Bbb R}^2 \mapsto S^2$ and therefore have infinite energy.
Consistently,
simple scaling arguments along the line of the Hobart-Derrick theorem
\cite{Hob:63/der} show that the fourth order terms in the energy functional
are required to stabilize the soliton against shrinkage. We include here
the most general combination of global $O(3)$-invariant fourth order terms.
The parameter $\Lambda$ is a constant of dimension energy/length and
determines the model's energy unit.
The couplings $g_1$ and $g_2$ are of dimension (length)${}^2.$
The ratio $g_1/g_2$ is the only physically relevant coupling
since an overall scaling of $g_1$ and $g_2$
can be absorbed by a rescaling of length and energy units.
Using $\left(\partial_i \bbox{\phi} \times
\partial_j \bbox{\phi} \right)^2$ = $
\left(\partial_i \bbox{\phi}\right)^2\left(\partial_j \bbox{\phi}\right)^2-
\left(\partial_i \bbox{\phi} \cdot \partial_j \bbox{\phi}\right)^2$
and the inequality
\begin{equation}
2\sum_{i j}
\left(\partial_i \bbox{\phi} \cdot \partial_j \bbox{\phi}\right)^2
\geq
\sum_{i j}
\left(\partial_i \bbox{\phi}\right)^2\left(\partial_j \bbox{\phi}\right)^2
\geq
\sum_{i j}
\left(\partial_i \bbox{\phi} \cdot \partial_j \bbox{\phi}\right)^2\,,
\end{equation}
one sees that the allowed ranges for the coupling constants are
$g_2 \geq 0$ and $g_1 > -2g_2.$
For finite energy solutions one requires $\bbox{\phi} \to {\bf n}$
as $\left|{\bf r}\right| \to \infty$, where ${\bf n}$ is a constant
unit vector.
Thus ${\Bbb R}^3$ can be one-point compactified to
$S^3$ and the fields $\bbox{\phi}$ are maps
\begin{equation}
\bbox{\phi} : \quad S^3 \mapsto S^2.
\end{equation}
Because $\pi_3(S^2)={\Bbb Z}$, every $\bbox{\phi}$ falls into a
class of topologically equivalent maps, where each class
is characterized by an integer: the Hopf number $H$.
Although it is not a simple ``winding number'', $H$ has an
elementary geometric interpretation.
The pre-image of
every point of the target space $S^2$ is homeomorphic to a circle.
All those circles are interlinked with each other in the
sense that any circle intersects the disc spanned by
any other one.
The Hopf number just equals the multiplicity by which two
arbitrary circles are linked.
$H$ also has a differential geometric representation~\cite{Bott82}:
If $f$ is a generator of the de-Rham cohomology
$H^2_{DR}(S^2)$, its pullback $F$ under
$\bbox{\phi}$ is necessarily exact since $H^2_{DR}(S^3)=0.$
Hence a 1-form $A$ with $F = dA$ exists and
$H \sim \int A\wedge F.$
In coordinate language, the dual of $F$ is
$B_i=\varepsilon_{ijk}\,\bbox{\phi}\cdot\partial_j\bbox{\phi} \times
\partial_k\bbox{\phi}$
and
\begin{equation}\label{hopfnr}
H = -\frac{1}{(8\pi)^2}\int_{{\Bbb R}^3} d{\bf x}\;
{\bf B \cdot A}\,.
\end{equation}
It was proved in \cite{WaK} that
the energy eq.~(\ref{en}) has a lower topological bound
in terms of $H$. For $g_1 \geq 0$ it is given by
\begin{equation}\label{topbound}
E_{stat} \geq \Lambda k H^{3/4}\,,
\end{equation}
where $k=\sqrt{2g_1}(2\pi)^23^{3/8}$ \cite{akr}.
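For orientation, the bound is straightforward to evaluate numerically. The following Python sketch is purely illustrative (the function name and default values are ours); it evaluates the bound for the pure Skyrme coupling $g_1=1$, $g_2=0$ in units of $\Lambda$:

```python
import math

def hopf_bound(H, g1=1.0, Lam=1.0):
    """Topological lower bound E >= Lam * k * H**(3/4)
    with k = sqrt(2*g1) * (2*pi)**2 * 3**(3/8), valid for g1 >= 0."""
    k = math.sqrt(2.0 * g1) * (2.0 * math.pi) ** 2 * 3.0 ** 0.375
    return Lam * k * H ** 0.75

print(round(hopf_bound(1), 1), round(hopf_bound(2), 1))  # 84.3 141.8
```

These values lie well below the numerical soliton energies quoted in section \ref{Nres}, consistent with the Hop{f}ions not being of Bogomol'nyi type.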
The variational equations resulting from eq.~(\ref{en}) are coupled
non-linear partial differential equations.
It would be useful to find a parametrization of $\bbox{\phi}$
which carries non-zero Hopf charge and
allows the equations to be reduced to ordinary
differential equations.
There have been two proposals for such fields in the
literature.
One of them uses
spherical coordinates and is a composition of the standard Hopf map
and a map $S^2 \mapsto S^2$ for which a hedgehog ansatz is employed
\cite{VoM,fot/zhs}.
Alternatively, a closed vortex ansatz in toroidal
coordinates was suggested
\cite{deV,Gip/Tze:80,Meis:85,Wu/Zee:89}.
However, as shown in \cite{Kund:86},
even for $g_2=0$ none of these proposals
allows a consistent separation of variables in
the variational equations derived from eq.~(\ref{en}).
At this point it is instructive to look at the symmetries of the field.
It was shown in ref.~\cite{akr}
that the maximal subgroup of
$O(3)_X \otimes O(3)_I$ under which
fields with non-vanishing
Hopf number can be invariant is
\begin{equation}\label{symm}
G =
\text{diag} \left[ O(2)_X \otimes O(2)_I \right].
\end{equation}
Here $O(2)_X$ and $O(2)_I$ denote rotations
about a fixed axis in space and iso-space respectively.
We choose the $z$- and $\phi_3$-axis as the axes of symmetry.
According to the Coleman-Palais theorem we expect to find
the minimal energy solution in the class of
$G$-invariant configurations~\cite{Mak/Ryb:SkMod}.
Therefore we
use the most general $G$-invariant ansatz,
written in terms of two functions $w(\xi_1, \xi_2)$ and
$v(\xi_1,\xi_2).$ They depend on coordinates $\xi_1$ and
$\xi_2$ which form an orthogonal coordinate system
together with $\alpha$, the angle around the $z$-axis:
\begin{equation}\label{ansatz1}
\phi_1 + i \phi_2 = \sqrt{1- w^2(\xi_1, \xi_2)}
e^{i(N\alpha + v(\xi_1, \xi_2))}
\,, \qquad \phi_3 = w(\xi_1, \xi_2).
\end{equation}
We have checked the consistency of this ansatz with the
variational equations derived from eq.~(\ref{en}).
The components $\phi_1$ and $\phi_2$ have to vanish along the $z$-axis
for the field to be well-defined.
This is realized
by setting $\bbox{\phi}(0,0,z)$ =
{\bf n} = (0, 0, 1), which
also defines the vacuum state of the theory.
In order to describe
a non-trivial map, $\bbox{\phi}$ has to
be surjective.
Hence there is at least one point ${\bf r}_0$ with
$\bbox{\phi}({\bf r}_0) = - {\bf n}$.
Under the action of $G$, ${\bf r}_0$ represents a full circle
around the $z$-axis.
We fix our coordinate system such that this circle lies in the
$xy$-plane and define $a \equiv \left| {\bf r}_0 \right|$.
On every trajectory from the circle to the $z$-axis or
infinity, $w(\xi_1, \xi_2)$ runs at least once from $-1$ to $1.$
Therefore the surfaces of constant $w$ are homeomorphic to tori.
This structure prompts us to choose toroidal coordinates
$(\eta, \beta, \alpha)$,
related to cylindrical
coordinates $(r,z,\alpha)$ as follows
\begin{equation}
r = \frac{a \sinh \eta}{\tau}, \qquad
z = \frac{a \sin \beta}{\tau}\,,
\end{equation}
where $\tau = \cosh \eta - \cos \beta$.
Surfaces of constant $\eta$ describe tori about the
$z$-axis, while
each of these tori is parametrized by the two angles
$(\beta,\alpha)$.
The two cases $\eta =0$ and $\eta =\infty$ correspond to
degenerate tori,
$\eta = 0$ being the $z$-axis and
$\eta = \infty$ the
circle of radius $a$ in the $xy$-plane.
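The coordinate map and its degenerate limits are easy to check numerically. This short Python sketch (illustrative only, with $a=1$) implements the relation above:

```python
import math

def toroidal_to_cylindrical(eta, beta, a=1.0):
    """Map toroidal coordinates (eta, beta) to cylindrical (r, z)
    for a ring of radius a, with tau = cosh(eta) - cos(beta)."""
    tau = math.cosh(eta) - math.cos(beta)
    return a * math.sinh(eta) / tau, a * math.sin(beta) / tau

# eta = 0 gives r = 0, i.e. the z-axis; eta -> infinity degenerates
# to the circle r = a, z = 0 in the xy-plane.
print(toroidal_to_cylindrical(0.0, 1.0)[0])   # 0.0
r, z = toroidal_to_cylindrical(20.0, 1.0)
print(round(r, 6), round(z, 6))               # 1.0 0.0
```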
The function $w(\eta,\beta)$ is subject to the boundary conditions
$w(0, \beta) =1, w(\infty, \beta)=-1$ and is periodic in
$\beta.$
$v(\eta,\beta)$ is an angle around $\phi_3$ and can include windings
around $\beta.$ Therefore we set $v(\eta,\beta)=M\beta+v_0(\eta,\beta)$
where $v_0(.,\beta): S^1\mapsto S^1$ is homotopic to the constant map.
Since $v$ is ill-defined for $w=\pm 1,$ it is not restricted by
any boundary condition at $\eta=0,\infty.$
The ``potential'' ${\bf A}$ and the ``field strength''
${\bf B}$ for this ansatz are given by
\begin{eqnarray}\label{AB}
A_\alpha = 2\frac{\tau}{a \sinh\eta}N(w-1),
\qquad
A_\beta = 2\frac{\tau}{a}(M+\dot{v}_0)(w+1),\qquad
A_\eta = 2\frac{\tau}{a} v_0^\prime(w+1),\nonumber\\
B_\alpha = 2\frac{\tau^2}{a^2}
( w^\prime(M+ \dot{v}_0) - v_0^\prime\dot{w}),\qquad
B_\beta = -2\frac{\tau^2}{a^2\sinh\eta}N w^\prime,\qquad
B_\eta = 2\frac{\tau^2}{a^2\sinh\eta}N
\dot{w},
\end{eqnarray}
where the dot and prime denote derivatives
with respect to $\beta$ and $\eta$ respectively.
Note that the field $\bf A$ is well defined on all of ${\Bbb R}^3.$
The gauge has been chosen such that
$A_\alpha$ vanishes for $\eta=0$ (where the coordinate $\alpha$
is undefined) and
analogously $A_\beta$ vanishes for $\eta=\infty.$
Eq.~(\ref{hopfnr}) then gives
$H=N\,M$ in accordance with the linking number
definition given above.
The energy eq.~(\ref{en}) of ansatz eq.~(\ref{ansatz1}) is given by
\begin{eqnarray}\label{en2}
E[w(\eta,\beta),v(\eta,\beta),a] & = &\pi{\Lambda}
\int d\eta\,d\beta\;\frac{a^3 \sinh\eta}{\tau^3}
\left\{
\frac{(\nabla w)^2}{1-w^2}+(1-w^2)\left((\nabla v)^2+\frac{N^2\tau^2}{a^2
\sinh^2\eta}\right) \right. \nonumber\\
&&+\left.\frac{g_1}{2}\left(\frac{N^2\tau^2}{a^2\sinh^2\eta}(\nabla w)^2+
(\nabla w\times\nabla v)^2\right)\right. \nonumber\\
&&+\left.\frac{g_2}{4}\left[
\frac{(\nabla w)^2}{1-w^2}+(1-w^2)\left((\nabla v)^2+\frac{N^2\tau^2}{a^2
\sinh^2\eta}\right)\right]^2
\right\}\,.
\end{eqnarray}
In toroidal coordinates the gradient includes a factor $a^{-1}$.
Hence the term quadratic in the gradients is proportional to $a$
while the quartic terms are inversely proportional to it.
For soliton solutions, the energy functional has to be varied
with respect to $w, v$ and $a$.
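The balance between the two scaling behaviours can be made explicit with a one-parameter toy energy $E(a) = E_2\,a + E_4/a$ (the coefficients $E_2$, $E_4$ here are hypothetical stand-ins for the integrated quadratic and quartic contributions):

```python
import math

def virial_minimum(E2, E4):
    """Minimize E(a) = E2*a + E4/a over the size a > 0.  The minimum
    sits at a* = sqrt(E4/E2), where the quadratic and quartic
    contributions are equal -- the virial balance that stabilizes
    the soliton against shrinkage."""
    a_star = math.sqrt(E4 / E2)
    return a_star, E2 * a_star + E4 / a_star

a_star, E_min = virial_minimum(3.0, 12.0)
print(a_star, E_min)   # 2.0 12.0 ; both terms contribute 6.0
```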
\section{Numerical Results}\label{Nres}
The variational equations for eq.~(\ref{en2}) are highly
nonlinear coupled PDEs and numerically hard to tackle.
Therefore we solved the problem by
a minimization of the energy functional
which was discretized on an $(\eta,\beta)$ grid.
The search for the minimum in a high-dimensional space
is feasible using the NETLIB routine \texttt{ve08} with
an algorithm described in \cite{Gri/Toi:82}.
This method is applicable if the objective function is a sum
$f({\bf x})=\sum f_i({\bf x})$
of simpler functions $f_i,$ each of which is
non-constant only for a few components of the (multi-dimensional)
vector {\bf x}.
Thus the Hessian matrix is very sparse and can be updated
locally. This saves a considerable amount of memory and time
compared to a more naive implementation of a conjugate gradient search.
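The structure being exploited can be illustrated with a toy partially separable objective. This sketch uses plain gradient descent rather than the quasi-Newton update of the actual \texttt{ve08} routine; each term of the sum couples only two neighbouring unknowns, so the Hessian is tridiagonal (sparse):

```python
def minimize_partially_separable(x, step=0.1, iters=2000):
    """Gradient descent on f(x) = sum_i (x[i+1]-x[i])**2 + x[i]**2,
    a toy partially separable energy: each term f_i touches only two
    neighbouring components, so the Hessian is tridiagonal."""
    n = len(x)
    for _ in range(iters):
        g = [0.0] * n
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            g[i] += -2.0 * d + 2.0 * x[i]
            g[i + 1] += 2.0 * d
        for i in range(n):
            x[i] -= step * g[i]
    return x

x = minimize_partially_separable([1.0, -2.0, 3.0, 0.5])
print(all(abs(v) < 1e-6 for v in x))  # True: the unique minimum is x = 0
```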
We obtain
field configurations as displayed in
Fig.~\ref{fig1}(a) where the Hopf number equals 1.
In this plot the field $\bbox{\phi}$ is viewed
from above the north pole of target $S^2$.
Iso-vectors in the northern hemisphere terminate in a cross, those
in the southern hemisphere in a dot.
The toroidal structure of the fields is clearly visible.
Also note that the fields in the southern hemisphere
indeed span a torus.
There is an interesting interpretation of such configurations
in terms of the
$O(3)$ $\sigma$-model in (2+1) dimensions, the solutions of which
we call (anti-) baby Skyrmions.
The fields in the positive and negative $x$-halfplane
of Fig.~\ref{fig1} are baby Skyrmions and
anti-baby Skyrmions respectively.
This can be understood in the following way.
Wilczek and Zee \cite{Wil/Zee:83} show that a
(2+1)-dimensional configuration of Hopf number one can be produced
by creating a baby Skyrmion/anti-baby Skyrmion pair from the vacuum,
rotating the (anti-) Skyrmion adiabatically
by $2\pi$ and then annihilating the pair.
In our model time corresponds to the third space dimension,
hence Fig.~\ref{fig1}(a) displays a ``snapshot'' at the time
when the anti-baby Skyrmion is rotated by $\pi$.
Baby Skyrmions are classified by a homotopy invariant
$Q \in {\Bbb Z}$ due to $\pi_2(S^2) = {\Bbb Z}$.
The analytic expression for $Q$ is given by
\begin{equation}\label{baby}
Q = \frac{1}{4\pi} \int_{{\Bbb R}^2}\, d{\bf x}\,
\bbox{\phi}\cdot\partial_1 \bbox{\phi}\times \partial_2 \bbox{\phi}\,,
\end{equation}
where 1 and 2 denote cartesian coordinates in ${\Bbb R}^2$.
The topological charge density is half the $\alpha$-component
of {\bf B}.
The integral over the whole plane vanishes because the contributions
for negative and for positive $x$ exactly cancel.
However, if integrated over the positive
half-plane only, eq.~(\ref{baby}) yields the
baby Skyrmion number for ansatz ~(\ref{ansatz1}):
\begin{equation}
Q = \frac{1}{8\pi}
\int_0^{2\pi} d \beta\,\int_0^\infty d\eta\,
\frac{a^2}{\tau^2}B_\alpha = M\,,
\end{equation}
where we use $B_\alpha$ of eq.~(\ref{AB}).
Next we turn to Hop{f}ions of topological charge two.
For parametrisation eq.~(\ref{ansatz1}) there are two
ways of creating a Hop{f}ion with $H=2$, namely by setting
either $N$ or $M$ to 2.
Both cases correspond to two Hop{f}ions sitting on top of each other.
In order to determine which configuration represents the true
ground state we computed their energies
and found that the configuration with $N=2, M=1$ yields the
lower energy for all couplings.
The interpretation of the $H=2$ solutions in terms
of a (2+1)-dimensional soliton/anti-soliton pair
is equivalent to the
one given above for the 1-Hop{f}ion.
Because the multiplicity of the azimuthal
rotation is $N=2$ for the 2-Hop{f}ion, the
anti-baby Skyrmion
in the negative $x$-halfplane (see Fig.~1(b))
has a relative angle of $\pi$ compared to the anti-baby Skyrmion
of Fig.~1(a).
It is instructive to investigate how the inclusion of a potential
term $V[\bbox{\phi}]$ alters the configuration.
The potential energy can be lowered by rescaling
${\bf x} \to \lambda {\bf x}$ ($\lambda \to 0$),
under which $V \to \lambda^3 V$.
This means that the potential term induces a ``shrinkage'' of the
configuration in the sense that the favoured position of the
fields is closer to their vacuum value.
This effect
is counter-balanced by the higher order derivatives
in the energy functional eq.~(\ref{en}).
Any potential explicitly breaks the model's global $O(3)$ symmetry
because $O(3)$ acts transitively on the target space.
We chose
\mbox{$V=m^2\int d{\bf x}\,(1-{\bf n}\cdot\bbox{\phi})$},
where the parameter $m$ is of dimension (length)${}^{-1}$
and, in a quantum version of the theory, becomes
the mass of the elementary excitations.
The minimum energy solution for $m=4$ can be seen in
Fig.~1(c).
The tube-like region where the field is in the
southern hemisphere has clearly shrunk.
Adding a linear potential term also means that
the fields fall off exponentially at large distances.
The reason is that in the asymptotic limit the equations of motion
reduce to the massive Klein-Gordon equation.
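The resulting tail is of Yukawa type, $\sim e^{-mr}/r$. A quick numerical check (illustrative only) that this profile solves the linearized radial equation $f'' + \frac{2}{r}f' - m^2 f = 0$:

```python
import math

def yukawa(r, m=4.0):
    """Asymptotic tail exp(-m*r)/r of the linearized massive
    Klein-Gordon equation; m is the mass parameter of the potential."""
    return math.exp(-m * r) / r

# finite-difference check of f'' + (2/r) f' - m**2 f = 0 at r = 2
m, r, h = 4.0, 2.0, 1e-4
f2 = (yukawa(r + h, m) - 2 * yukawa(r, m) + yukawa(r - h, m)) / h**2
f1 = (yukawa(r + h, m) - yukawa(r - h, m)) / (2 * h)
residual = f2 + 2.0 / r * f1 - m**2 * yukawa(r, m)
print(abs(residual) < 1e-6)  # True
```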
The fields of minimal energy correspond, via eq.~(\ref{en}), to
energy distributions which are displayed in Fig.~2.
Despite the toroidal structure of the fields, we find that
the energy for the Hop{f}ion of $H=1$ is lump-shaped,
see Fig.~2(a).
Although unexpected, this is not entirely surprising, because the
field changes far more rapidly within the disc
$\left|{\bf r}\right| \leq a$ than outside it.
Hence the gradient energy can be concentrated in the
vicinity of the origin.
If the potential term becomes very large compared to the gradient terms
one expects the energy to become more localized around the
filament where the fields are far away from the vacuum.
We observe this transition to a toroidal energy distribution
at $m \approx 4$ for $g_1=1, g_2=0.$
The energy distribution of the 2-Hop{f}ion is
of toroidal shape (for all $m$), as shown in Fig.~2(b).
It is a common feature in many soliton theories that
solutions of topological charge two are tori, notably
Skyrmions, baby Skyrmions and magnetic monopoles.
It is interesting to ask whether the 2-Hop{f}ion is in a
stable state or likely to decay into two Hop{f}ions of charge one.
As an estimate for short range interactions one can compare the
energy per Hop{f}ion for the solution of $H=1$ and $H=2$
and conclude from the
sign of the energy gap whether there is a repulsive
or attractive channel.
Our results are plotted in Fig.~3(a), which also
shows the topological bound eq.~(\ref{topbound}).
For a pure Skyrme coupling we obtain
energies of $197\Lambda$ and $2\times 158\Lambda$ for the 1-Hop{f}ion and
2-Hop{f}ion respectively.
Moreover, it turns out that for all couplings the 2-Hop{f}ion
has a lower energy per topological unit than the
1-Hop{f}ion.
This indicates that there is a range where the forces
are attractive and that the 2-Hop{f}ion can be stable at least
under small perturbations.
Of course, there can be a range in which the forces
are repulsive, however, an investigation of
such interactions would require a full
(3+1)-dimensional simulation which is beyond our present means.
Also note that the gap between the energies per Hop{f}ion
is largest when the fourth order terms are purely the Skyrme term.
On the other hand,
for $g_1 \to -2g_2$ (i.e.\ $g\to 1$) the energy of the quartic terms tends to
zero.
Hence the energy of the soliton vanishes
as a consequence of the above-mentioned Hobart-Derrick
theorem.
\section{Spinning Hopfions}\label{Srot}
Finally, we study the effect of a slow rotation
around the axis of symmetry.
For this we use a Lorentz-invariant extension of our
model into (3+1) dimensional space-time.
The energy of the rotating Hop{f}ion is $E=E_{rot}+E_{stat}$, where
$E_{stat}$ is the static energy given by eq.~(\ref{en}) and $E_{rot}$ is the
rotation energy functional:
\begin{equation}
E_{rot}\left[ \bbox{\phi} \right] =
\Lambda\int_{{\Bbb R}^3} d{\bf x}\;
\frac{1}{2}
\left(\partial_t \bbox{\phi} \right)^2 +
\frac{g_1}{8}\left(\partial_t \bbox{\phi} \times
\partial_i \bbox{\phi} \right)^2 +
\frac{g_2}{8}\left(\partial_t \bbox{\phi} \right)^2
\left( \partial_i \bbox{\phi}\right)^2 +
O\left(\left(\partial_t \bbox{\phi}\right)^4\right) \,.
\end{equation}
In the spirit of a moduli space approximation
we assume that the configuration does not alter its shape
due to the rotation (``rigid rotor''), i.e.
it is given at any time by a static solution (see \cite{Mak/Ryb:SkMod}
for a review on similar treatment of the Skyrmion).
We impose time dependence on the
azimuthal angle by \mbox{$\alpha \to \alpha + \frac{\omega}{N} t$}
with constant velocity $\omega.$
$E_{rot}$ leads to a term in the energy that is proportional to
$\omega^2$
\begin{equation}
E = E_{stat} + \frac{J}{2} \omega^2\,,
\end{equation}
where terms $O(\omega^4)$ are neglected.
$J$ is the moment of inertia and, using eq.~(\ref{ansatz1}), given by
\begin{equation}\label{moi}
J = 2\pi\Lambda \int \, d \eta d\beta\,
\left[ 1 + \frac{g_1}{2}\frac{(\nabla w)^2}{1- w^2} +
\frac{g_2}{2} \left(
\frac{(\nabla w)^2}{1- w^2} +
\left((\nabla v)^2+\frac{N^2\tau^2}{a^2\sinh^2\eta}\right)(1-w^2)\right)\right]
(1-w^2)\,.
\end{equation}
$J$ can be evaluated explicitly for each individual solution.
We plotted the values for $H=1$ and $H=2$ in Fig.~3(b).
The moment of inertia per Hop{f}ion is always larger for the
$H=1$ solution, with an increasing gap for decreasing $g$.
This should be compared with the dependence of
$E_{stat}$ on $g$.
The functional $E_{stat}$
(eq.~(\ref{en})) is invariant under $\alpha$-rotations
while the fields of ansatz~(\ref{ansatz1}) are clearly not.
Therefore, upon quantization,
the coordinate $\alpha$ describes a zero-mode and
requires treatment as a collective coordinate.
This is similar to the problem of the rotating radially symmetric
Skyrmion.
In analogy to the Skyrme model we
therefore use, as a first approximation, the spectrum obtained
by a straightforward quantization.
The canonical momentum is $l = i\frac{d}{d\alpha}$ ($\hbar=1$) and
the rotational energy $E_{rot} = l^2/2J$.
It is then trivial to solve the eigenvalue problem
$E_{rot}\psi = \lambda \psi$, which gives
$\lambda_n = \frac{n^2}{2J}$.
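For illustration, the first few rigid-rotor levels for an assumed moment of inertia (the value $J=2$ below is arbitrary, not taken from the computed solutions):

```python
def rotational_levels(J, n_max=3):
    """First few rigid-rotor levels lambda_n = n**2 / (2*J) from the
    straightforward quantization of the zero-mode alpha."""
    return [n * n / (2.0 * J) for n in range(n_max + 1)]

print(rotational_levels(2.0))  # [0.0, 0.25, 1.0, 2.25]
```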
\section{Conclusions}\label{Conc}
We have studied topological solitons in a generalized non-linear
$O(3)$ $\sigma$-model in three space dimensions.
Physically one may think of them as a model
for hadronic matter or
topological defects in a condensed matter system.
By using a general
ansatz for the fields we
obtained explicit numerical solutions for soliton number one and two.
Unexpectedly, the energy of the 1-Hop{f}ion is distributed as a lump.
We also observed that two solitons sitting on top of each other
attract, thus indicating a stable configuration.
There are several interesting questions which remain unanswered.
In particular, the stability of Hop{f}ions of higher topological charge
deserves some scrutiny. It is worthwhile asking
how multi-solitons which sit on top of each other,
or at least are very close, behave under induced perturbations.
In analogy to planar $O(3)$ $\sigma$-models
there might be several decay channels
into less symmetric configurations \cite{Piet/Schr:95}.
At the opposite end of the scale, it would be instructive to look in
greater detail at the interaction potential of two or more
well-separated Hop{f}ions.
This is also interesting in comparison to the well-studied
dynamics of Skyrmions and monopoles.
Clearly, a first step in such an investigation would be to determine
the asymptotic fields of the Hopf soliton. It seems obvious that
inter-soliton forces will depend on the orientation of the
Hop{f}ions.
The complete description of Hop{f}ion dynamics
would require a huge numerical effort which can, however,
possibly be reduced by an appropriate approximation scheme.
For Bogomol'nyi solitons, the low-energy behaviour can be approximated
via the truncation of the dynamics to the moduli space.
Although our numerical results show that
Hop{f}ions are not of Bogomol'nyi type, given that the static forces
between them are weak, there is a chance that
their dynamics can be described by
some kind of moduli space approximation, in analogy to
Skyrmions (which are also not of Bogomol'nyi type).
Finally, it seems worthwhile to study spinning
Hop{f}ions in a more sophisticated way.
This should include an assessment of the back reaction of the
rotation on the matter fields.
From this one expects a non-trivial shift of the energy levels in the
rotation spectrum and
possibly radiation of excess energy.
\acknowledgements
It is a pleasure to thank Wojtek Zakrzewski, Jacek Dziarmaga and
Rob deJeu for helpful discussions.
We also wish to
thank Bernd Schroers for making reference \cite{WaK} available to
us.
JG acknowledges an EPSRC grant No.~94002269.
MH is supported by
Deutscher Akademischer Austauschdienst.
\section*{Algebraic (geometric) $n$-stacks}
Carlos Simpson
\bigskip
In the introduction of Laumon-Moret-Bailly (\cite{LaumonMB} p. 2) they
refer to a
possible theory of algebraic $n$-stacks:
\begin{inset}
Let us note in passing that Grothendieck proposes in turn to enlarge the
preceding framework by replacing $1$-stacks with $n$-stacks (roughly speaking,
sheaves of $n$-categories on $(Aff)$ or on an arbitrary site), and there is
hardly any doubt that a useful notion of algebraic $n$-stacks exists \ldots .
\end{inset}
The purpose of this paper is to propose such a theory. I guess that the main
reason why Laumon and Moret-Bailly didn't want to get into this theory was for
fear of getting caught up in a horribly technical discussion of $n$-stacks of
groupoids over a general site. In this paper we simply {\em assume} that a
theory of $n$-stacks of groupoids exists. This is not an unreasonable
assumption, first of all because there is a relatively good substitute---the
theory of simplicial presheaves or presheaves of spaces
(\cite{Brown} \cite{BrownGersten} \cite{Joyal} \cite{Jardine1}
\cite{kobe} \cite{flexible})---which should be equivalent, in an appropriate
sense, to any eventual theory of $n$-stacks; and second of all because it seems
likely that a real theory of $n$-stacks of $n$-groupoids could be developed in
the near future (\cite{BreenAsterisque}, \cite{Tamsamani}).
Once we decide to ignore the technical complications involved in theories of
$n$-stacks, it is a relatively straightforward matter to generalize Artin's
definition of algebraic $1$-stack. The main observation is that there is an
inductive structure to the definition whereby the ingredients for the
definition of algebraic $n$-stack involve only $n-1$-stacks and so are already
previously defined.
This definition came out of discussions with C. Walter in preparation for the
Trento school on algebraic stacks (September 1996). He made the remark that the
definition of algebraic stack made sense in any category where one has a
reasonable notion of smooth morphism, and suggested a general terminology of
``geometric stack'' for this notion. One immediately realizes that the notion
of smooth morphism makes sense---notably---in the ``category'' of algebraic
stacks and therefore according to Walter's remark, one could define the notion
of geometric stack in the category of algebraic stacks. This is the notion of
algebraic $2$-stack. It is an easy step to go from there to the general
inductive definition of algebraic $n$-stack. Walter informs me that he had
also come upon the notion of algebraic $2$-stack at the same time (just
before the Trento school).
Now a note about terminology: I have chosen to write the paper using Walter's
terminology ``geometric $n$-stack'' because this seems most closely to reflect
what is going on: the definition is made so that we can ``do geometry'' on the
$n$-stack, since in a rather strong sense it looks locally like a scheme. For
the purposes of the introduction, the terminology ``algebraic $n$-stack'' would
be better because this fits with Artin's terminology for $n=1$. There is
another place where the terminology ``algebraic'' would seem to be useful:
this is when we start to look at geometric stacks on the analytic site, which
we call ``analytic $n$-stacks''. In fact one could interchange the
terminologies and in case of confusion one could even say ``algebraic-geometric
$n$-stack''.
In
\cite{RelativeLie} I proposed a notion of {\em presentable $n$-stack}
stable under homotopy fiber products and truncation. One key part of the notion
of algebraic stack is the smoothness of the morphism $X\rightarrow T$ from a
scheme. This is lost under truncation (e.g. the sheaf $\pi _0$ of an algebraic
stack may no longer be an algebraic stack); this indicates that the notion of
``geometric stack'' is something which combines together the various homotopy
groups in a fairly intricate way. In particular, the notion of presentable
$n$-stack is not the same as the notion of geometric $n$-stack (however a
geometric $n$-stack will be presentable). This is a little bit analogous to the
difference between constructible and locally closed subsets in the theory of
schemes.
We will work over the site ${\cal X}$ of schemes of finite type over $Spec (k)$ with
the etale topology, and with the notion of smooth morphism. The definitions and
basic properties should also work for any site in which fiber products exist,
provided with a certain class of morphisms analogous to the smooth morphisms.
Rather than carrying this generalization through in the discussion, we leave it
to the reader. Note that there are several examples which readily come to mind:
\newline
---the site of schemes of finite type with the etale topology and the class of
etale morphisms: this gives a notion of what might be called a ``Deligne-Mumford
$n$-stack'';
\newline
---the site of schemes of finite type with the fppf topology and
the class of flat morphisms: this gives a notion of what might be called a
``flat-geometric $n$-stack'';
\newline
---the site of schemes of finite type with the qff topology and the class of
quasi-finite flat morphisms: this gives a notion of what might be called
a ``qff-geometric $n$-stack''.
Whereas Artin proves \cite{ArtinInventiones} that
flat-geometric $1$-stacks are also smooth-geometric stacks (i.e. those defined
as we do here using smooth morphisms)---his proof is recounted in
\cite{LaumonMB}---it seems unlikely that the same would be true for $n$-stacks.
Artin's method also shows that qff-geometric $1$-stacks are Deligne-Mumford
stacks. However it looks like
Deligne-Mumford $n$-stacks are essentially just gerbes over Deligne-Mumford
$1$-stacks, while on the other hand in characteristic $p$ one could apply
Dold-Puppe (see below) to a complex of finite flat group schemes to get a fairly
non-trivial qff-algebraic $n$-stack. This seems to show that the
implication ``qff-geometric
$\Rightarrow $ Deligne-Mumford'' no longer holds for $n$-stacks. This is why
it seems unlikely that Artin's reasoning for the implication ``flat-geometric
$\Rightarrow$ smooth-geometric'' will work for $n$-stacks.
Here is the plan of the paper. In \S 1 we give the basic definitions of
geometric $n$-stack and smooth morphism of geometric $n$-stacks. In \S 2 we
give some basic properties which amount to having a good notion of geometric
morphism between $n$-stacks (which are themselves not necessarily geometric).
In \S 3 we briefly discuss some ways one could obtain geometric $n$-stacks by
glueing. In \S 4 we show that geometric $n$-stacks are presentable in the sense
of \cite{RelativeLie}. This is probably an important tool if one wants to do
any sort of Postnikov induction, since presentable $n$-stacks are closed under
the truncation processes which make up the Postnikov tower (whereas the
notion of geometric $n$-stack is not closed under truncation). In \S 5 we do a
preliminary version of what should be a more general Quillen theory. We treat
only the $1$-connected case, and then go on in the subsection ``Dold-Puppe'' to
treat the relative (i.e. over a base scheme or base $n$-stack) stable (in the
sense of homotopy theory) case in a
different way. It would be nice to have a unified version including a
reasonable notion of differential graded Lie algebra over an $n$-stack $R$
giving an algebraic approach to relatively $1$-connected $n$-stacks over $R$,
but this seems a bit far off in a technical sense.
In \S 6 we look at maps from a projective variety (or a smooth formal category)
into a geometric $n$-stack. Here again it would be nice to have a fairly
general theory covering maps into any geometric $n$-stack but we can only say
something interesting in the easiest case, that of maps into {\em connected
very presentable $T$}, i.e. $n$-stacks with $\pi _0(T)=\ast$, $\pi _1(T)$ an
affine algebraic group scheme and $\pi _i(T)$ a vector space for $i\geq 2$.
(The terminology ``very presentable'' comes from \cite{RelativeLie}). At the
end we speculate on how one might generalize to various other classes of $T$.
In \S 7 we briefly present an approach to defining the tangent stack to a
geometric $n$-stack. This is a generalization of certain results in the last
chapter of \cite{LaumonMB} although we don't refer to the cotangent complex.
In \S 8 we explain how to use geometric $n$-stacks as a framework for looking
at de Rham theory for higher nonabelian cohomology. This is sort of a
synthesis of things that are in \cite{SantaCruz} and \cite{kobe}.
\bigskip
We assume known an adequate theory of $n$-stacks of groupoids over a site ${\cal X}$.
The main thing we will need is the notion of fiber product (which of course
means---as it always shall below---what one would often call the
``homotopy fiber
product'').
We work over an algebraically closed field $k$ of characteristic zero,
and sometimes directly over the field $k={\bf C}$ of complex numbers.
Note however that the
definition makes sense over an arbitrary base scheme, and the ``Basic
properties'' hold true there.
The term ``connected'' when applied to an $n$-stack means that the sheaf $\pi
_0$ is the final object $\ast$ (represented by $Spec (k)$). In the case of a
$0$-stack represented by $Y$ this should not be confused with
connectedness of the scheme $Y$ which is a different question.
\numero{Definitions}
Let ${\cal X}$ be the site of schemes of finite type over $Spec (k)$ with the etale
topology. We will define the following notions: that an $n$-stack $T$ be {\em
geometric}; and that a morphism $T\rightarrow Z$ from a geometric $n$-stack to a
scheme be {\em smooth}. We define these notions together by induction on $n$.
Start by saying that a $0$-stack (sheaf of sets) is {\em geometric} if it is
represented by an algebraic space. Say that a morphism $T\rightarrow Z$ from a
geometric $0$-stack to a scheme is {\em smooth} if it is smooth as a morphism of
algebraic spaces.
Now we give the inductive definitions:
say that an $n$-stack
$T$ is {\em geometric} if: \newline
GS1---for any schemes $X$ and $Y$ and morphisms $X\rightarrow T$,
$Y\rightarrow T$ the fiber product $X\times _TY$ (which is an $n-1$-stack)
is geometric using the inductive definition; and
\newline
GS2---there is a scheme $X$ and a morphism of $n$-stacks
$f:X\rightarrow T$ which is surjective on $\pi _0$ with the property that for
any scheme $Y$ and morphism $Y\rightarrow T$, the morphism
$$
X\times _TY\rightarrow Y
$$
from a geometric $n-1$-stack to the scheme $Y$ is smooth (using the inductive
definition).
If $T$ is a geometric $n$-stack we say that a morphism $T\rightarrow Y$ to a
scheme is {\em smooth} if for at least one
morphism $X\rightarrow T$ as in
GS2, the composed morphism $X\rightarrow Y$ is a smooth morphism of schemes.
This completes our inductive pair of definitions.
For $n=1$ we recover the notion of algebraic stack, and in fact our definition
is a straightforward generalization to $n$-stacks of Artin's definition of
algebraic stack.
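To illustrate (a sketch of why the $n=1$ case coincides with Artin's notion):
for a geometric $1$-stack $T$, GS2 provides a smooth surjection $X\rightarrow
T$ from a scheme, and GS1 (applied to the two maps $X\rightarrow T$) says that
$$
R:= X\times _TX
$$
is a geometric $0$-stack, i.e. an algebraic space; taking $Y=X$ in GS2 shows
that the two projections $R\rightrightarrows X$ are smooth. Thus $T$ is
presented by a smooth groupoid in algebraic spaces, $T\simeq [X/R]$, which is
exactly the data appearing in Artin's definition of algebraic stack.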
The following lemma shows that the phrase ``for at least one'' in the definition
of smoothness can be replaced by the phrase ``for any''.
\begin{lemma}
\label{independence}
Suppose $T\rightarrow Y$ is a morphism from an $n$-stack to a scheme which
is smooth according to the previous definition,
and suppose that $U\rightarrow T$
is a morphism from a scheme such that for any scheme $Z\rightarrow T$, $U\times
_TZ\rightarrow Z$ is smooth (again according to the previous definition, as
a morphism from an $n-1$-stack to a scheme).
Then
$U\rightarrow Y$ is a smooth morphism of schemes.
\end{lemma}
{\em Proof:}
We prove this for
$n$-stacks by induction on $n$. Let $X\rightarrow T$ be the morphism as in GS2
such that $X\rightarrow Y$ is smooth. Let $R= X\times _TU$. This is an
$n-1$-stack and the morphisms $R\rightarrow X$ and $R\rightarrow U$ are both
smooth as morphisms from $n-1$-stacks to schemes according to the above
definition. Let $W\rightarrow R$ be a surjection from a scheme as in property
GS2. By the present lemma applied inductively for $n-1$-stacks, the morphisms
$W\rightarrow X$ and $W\rightarrow U$ are smooth morphisms of schemes. But
the condition that $X\rightarrow Y$ is smooth implies that $W\rightarrow Y$ is
smooth, and then since $W\rightarrow U$ is smooth and surjective we get that
$U\rightarrow Y$ is smooth as desired. This argument doesn't work when $n=0$
but then $R$ is itself an algebraic space and the maps $R\rightarrow X$ (hence
$R\rightarrow Y$) and $R\rightarrow U$ are smooth maps of algebraic spaces;
this implies directly that $U\rightarrow Y$ is smooth.
\hfill $\Box$\vspace{.1in}
The following lemma shows that these definitions don't change if we think of an
$n$-stack as an $n+1$-stack etc.
\begin{lemma}
\label{ntom}
Suppose $T$ is an $n$-stack which, when considered as an $m$-stack for some
$m\geq n$, is a geometric $m$-stack. Then $T$ is a geometric $n$-stack.
Similarly smoothness of a morphism $T\rightarrow Y$ to a scheme when $T$ is
considered as an $m$-stack implies smoothness when $T$ is considered as an
$n$-stack.
\end{lemma}
{\em Proof:} We prove this by induction on $n$ and then
$m$. The case $n=0$ and $m=0$ is clear. First treat the case $n=0$ and any $m$:
suppose $T$ is a sheaf of sets which is a geometric $m$-stack. There is a
morphism $X\rightarrow T$ with $X$ a scheme, such that if we set $R= X\times
_TX$ then $R$ is an $m-1$-stack smooth over $X$. However $R$ is again a sheaf of
sets so by the inductive statement for $n=0$ and $m-1$ we have that $R$ is an
algebraic space. Furthermore the smoothness of the morphism $R\rightarrow X$
with $R$ considered as an $m-1$-stack implies smoothness with $R$ considered as
a $0$-stack. In particular $R$ is an algebraic space whose two projection
maps to $X$ are smooth. Since the quotient of an algebraic space by a smooth equivalence
relation is again an algebraic space, we get that $T$ is an algebraic space
i.e. a geometric $0$-stack (and note by the way that $X\rightarrow T$ is a
smooth surjective map of algebraic spaces). This proves the first statement for
$(0,m)$. For the second statement, suppose $T\rightarrow Y$ is a morphism to a
scheme $Y$ which is smooth as a morphism from an $m$-stack. Then choose the
smooth surjective morphism
$X\rightarrow T$; as we have seen above this is a smooth morphism of algebraic
spaces. The definition of smoothness now is that $X\rightarrow Y$ is smooth.
But this implies that $T\rightarrow Y$ is smooth. This completes the
inductive step for $(0,m)$.
Now suppose we want to show the lemma for $(n,m)$ with $n\geq 1$ and suppose we
know it for all $(n', m')$ with $n'<n$, or with $n'=n$ and $m'<m$. Let $T$ be an
$n$-stack which is geometric considered as an $m$-stack. If $X,Y\rightarrow T$
are maps from schemes then $X\times _TY$ is an $n-1$-stack which is geometric
when considered as an $m-1$-stack; by the induction hypothesis it is geometric
when considered as an $n-1$-stack, which verifies GS1. Choose a smooth
surjection $X\rightarrow T$ from a scheme as in property GS2 for $m$-stacks.
Suppose $Y\rightarrow T$ is any morphism from a scheme. Then $X\times _TY$ is an
$n-1$-stack with a map to $Y$ which is smooth considered as a map from
$m-1$-stacks. Again by the induction hypothesis it is smooth considered as a map
from an $n-1$-stack to a scheme, so we get GS2 for $n$-stacks. This completes
the proof that $T$ is geometric when considered as an $n$-stack.
Finally suppose $T\rightarrow Y$ is a morphism from an $n$-stack to a scheme
which is smooth considered as a morphism from an $m$-stack. Choose a surjection
$X\rightarrow T$ as in property GS2 for $m$-stacks; we have seen above that it
also satisfies the same property for $n$-stacks. By definition of smoothness
of our original morphism from an $m$-stack, the morphism $X\rightarrow Y$ is
smooth as a morphism of schemes; this gives smoothness of $T\rightarrow Y$
considered as a morphism from an $n$-stack to a scheme. This finishes the
inductive proof of the lemma.
\hfill $\Box$\vspace{.1in}
{\em Remarks:}
\newline
(1)\, We can equally well make a definition of {\em Deligne-Mumford $n$-stack}
by
replacing ``smooth'' in the previous definition with ``etale''. This gives an
$n$-stack whose homotopy group sheaves are finite...
\newline
(2)\, We could also make definitions of flat-geometric or qff-geometric
$n$-stack, by replacing the smoothness condition by flatness or quasifinite
flatness. If all of these notions are in question then we will denote the
standard one by ``smooth-geometric $n$-stack''. Not to be confused with ``smooth
geometric $n$-stack'' which means a smooth-geometric $n$-stack which is smooth!
We now complete our collection of basic definitions in some obvious ways.
We
say that a morphism of $n$-stacks $R\rightarrow T$ is {\em geometric}
if for any scheme $Y$ and map $Y\rightarrow T$ the fiber product $R\times _TY$
is a geometric $n$-stack.
We
say that a geometric morphism of $n$-stacks $R\rightarrow T$ is {\em smooth} if
for any scheme $Y$ and map $Y\rightarrow T$ the morphism $R\times
_TY\rightarrow Y$ is a smooth morphism in the sense of our inductive
definition.
\begin{lemma}
If $T\rightarrow Z$ is a morphism from an $n$-stack to a scheme then it is
smooth and geometric in the sense of the previous paragraph, if and only if $T$
is geometric and the morphism is smooth in the sense of our inductive
definition.
\end{lemma}
{\em Proof:}
Suppose that the morphism is smooth and geometric in the
sense of the previous paragraph. Then applying those conditions to the scheme $Z$ itself
we obtain that $T$ is geometric and that the morphism is smooth in the sense of the inductive definition.
On the other hand, suppose the morphism is smooth in the sense of the inductive
definition. Let $X\rightarrow T$ be a surjection as in GS2. Thus $X\rightarrow
Z$ is smooth. For any scheme $Y\rightarrow Z$ we have that $X\times
_ZY\rightarrow T\times _ZY$ is surjective and smooth in the sense of the
previous
paragraph; but in this case (and using the direction we have proved above) this
is exactly the statement that it satisfies the conditions of GS2 for the stack
$T\times _ZY$. On the other hand $X\times _ZY\rightarrow Y$ is smooth. This
implies (via the independence of the choice in the original definition of
smoothness which comes from Lemma \ref{independence}) that $T\times
_ZY\rightarrow Y$ is smooth in the original sense. As this works for all $Y$, we
get that $T\rightarrow Z$ is smooth in the new sense.
\hfill $\Box$\vspace{.1in}
\numero{Basic properties}
We assume that the propositions, lemmas and corollaries in this section are
known for $n-1$-stacks and we are proving them all in a gigantic induction for
$n$-stacks. On the other hand, in proving any statement we can use the
{\em previous} statements for the same $n$, too.
\begin{proposition}
\label{fiberprod}
If $R$, $S$ and $T$ are geometric $n$-stacks with morphisms
$R,T\rightarrow S$ then the fiber product
$R\times _ST$ is a geometric $n$-stack.
\end{proposition}
{\em Proof:}
Suppose $R$, $S$ and $T$ are geometric $n$-stacks with morphisms $R\rightarrow
S$ and $T\rightarrow S$. Let $X\rightarrow R$, $Y\rightarrow S$ and
$Z\rightarrow T$ be smooth surjective morphisms from schemes.
Choose a smooth surjective morphism $W\rightarrow X\times _SZ$ from a scheme
(possible since $X\times _SZ$ is a geometric $n-1$-stack by axiom GS1 for $S$).
By base change of the morphism $Z\rightarrow T$, the morphism $X\times
_SZ\rightarrow X\times _ST$ is a geometric smooth surjective morphism.
We first claim that the morphism $W\rightarrow X\times _ST$ is smooth.
To prove this, suppose $A\rightarrow X\times _ST$ is a morphism from a scheme.
Then $W\times _{X\times _ST}A\rightarrow A$ is the composition of
$$
W\times _{X\times _ST}A \rightarrow (X\times _SZ)\times _{X\times _ST}A
\rightarrow A.
$$
Both morphisms are geometric and smooth, and all three terms are $n-1$-stacks
(note that in the middle $(X\times _SZ)\times _{X\times _ST}A= A\times _TZ$).
By the composition result for $n-1$-stacks (Corollary \ref{geocomposition} below
with our global induction hypothesis) the composed morphism
$W\times _{X\times _ST}A\rightarrow A$ is smooth, and this for any $A$. Thus
$W\rightarrow X\times _ST$ is smooth.
Next we claim that the morphism $W\rightarrow R\times _ST$ is smooth. Again
suppose that $A\rightarrow R\times _ST$ is a morphism from a scheme. The two
morphisms
$$
W\times _{R\times _ST} A \rightarrow (X\times _ST)\times _{R\times _ST}A =
X\times _RA \rightarrow A
$$
are smooth and geometric by base change. Again this is a composition of
morphisms of $n-1$-stacks so by Corollary \ref{smoothcomposition2} and our
global
induction hypothesis the composition is smooth and geometric. Finally the
morphism $W\rightarrow R\times _ST$ is the composition of three surjective
morphisms so it is surjective. We obtain a morphism as in GS2 for $R\times
_ST$.
We turn to GS1.
Suppose $X\rightarrow R\times _ST$ and $Y\rightarrow R\times _ST$
are morphisms from schemes. We would like to check that
$X\times _{R\times _ST}Y$ is a geometric $n-1$-stack. Note that calculating
$X\times _{R\times _ST}Y$ is basically the same thing as calculating in usual
homotopy theory the path space between two points $x$ and $y$ in a product of
fibrations $r\times _st$. From this point of view we see that
$$
X\times _{R\times _ST} Y =
(X\times _RY)\times _{X\times _SY}(X\times _TY).
$$
Note that the three components in the big fiber product on the right are
geometric $n-1$-stacks, so by our inductive hypothesis (i.e. assuming the
present proposition for $n-1$-stacks) we get that the right hand side is a
geometric $n-1$-stack; this gives the desired statement for GS1.
\hfill $\Box$\vspace{.1in}
\begin{corollary}
If $R$ and $T$ are geometric $n$-stacks then any morphism between them is
geometric. In particular
an $n$-stack $T$ is geometric
if and only if the structural morphism $T\rightarrow \ast$ is geometric.
\end{corollary}
\hfill $\Box$\vspace{.1in}
\begin{lemma}
\label{smoothcomposition1}
If $R\rightarrow S\rightarrow T$ are morphisms of geometric $n$-stacks and if
each morphism is smooth then the composition is smooth.
\end{lemma}
{\em Proof:}
We have already proved this for morphisms $X\rightarrow T \rightarrow Y$
where $X$ and $Y$ are schemes (see Lemma \ref{independence}). Suppose
$U\rightarrow T\rightarrow Y$ are smooth morphisms of geometric $n$-stacks
with
$Y$ a scheme. We prove that the composition is smooth, by induction on $n$ (we
already know it for $n=0$). If
$Z\rightarrow T$ is a smooth surjective morphism from a scheme then the
morphism
$$
U\times _TZ \rightarrow Z
$$
is smooth by the definition of smoothness of $U\rightarrow T$. Also the map
$Z\rightarrow Y$ is smooth by definition of smoothness of $T\rightarrow Y$.
Choose a smooth surjection $V\rightarrow U\times _TZ$ from a scheme $V$ and note
that the map $V\rightarrow Z$ is smooth by definition, so (since these are
morphisms of schemes) the composition $V\rightarrow Y$ is smooth.
On the other
hand $$ U\times _TZ \rightarrow U
$$
is smooth and surjective, by base change from
$Z\rightarrow T$.
We claim that the morphism $V\rightarrow U$ is smooth and surjective---actually
surjectivity is obvious. To prove that it is smooth, let $W\rightarrow U$ be a
morphism from a scheme; then
$$
W\times _UV \rightarrow W\times _U(U\times _TZ) = W\times _TZ \rightarrow W
$$
is a composable pair of morphisms of $n-1$-stacks each of which is smooth
by base
change. By our induction hypothesis the composition is smooth. This shows by
definition that $V\rightarrow U$ is smooth.
In particular the map $V\rightarrow U$ is admissible as in GS2, and then we
can conclude that the map $U\rightarrow Y$ is smooth by the original definition
using $V$. This completes the proof in the current case.
Suppose finally that $U\rightarrow T \rightarrow R$ are smooth
morphisms of geometric $n$-stacks. Then for any scheme $Y$ the morphisms
$U\times _RY\rightarrow T\times _RY \rightarrow Y$ are smooth by base change;
thus from the case treated above their composition is smooth, and this
is the definition of smoothness of $U\rightarrow R$.
\hfill $\Box$\vspace{.1in}
\begin{lemma}
\label{descendgeometric}
Suppose $S\rightarrow T$ is a geometric smooth surjective morphism of
$n$-stacks,
and suppose that $S$ is geometric. Then $T$ is geometric.
\end{lemma}
{\em Proof:}
We first show GS2. Let $W\rightarrow S$ be a smooth geometric surjection from a
scheme. We claim that the morphism $W\rightarrow T$ is surjective (easy),
geometric and smooth. To show that it is geometric, suppose $Y\rightarrow T$
is a morphism from a scheme. Then since $S\rightarrow T$ is geometric we have
that $Y\times _TS$ is a geometric $n$-stack. On the other hand,
$$
Y\times _TW = (Y\times _TS)\times _SW,
$$
so by Proposition \ref{fiberprod} $Y\times _TW$ is geometric. Finally to show
that $W\rightarrow T$ is smooth, note that
$$
Y\times _TW\rightarrow Y\times _TS \rightarrow Y
$$
is a composable pair of smooth (by base change) morphisms of geometric
$n$-stacks, so by the previous lemma the composition is smooth. The morphism
$W\rightarrow T$ thus works for condition GS2.
To show GS1, suppose $X,Y\rightarrow T$ are morphisms from schemes. Then
$$
(X\times _TY)\times _TW = (X\times _TW)\times _W (Y\times _TW).
$$
The geometricity of the morphism $W\rightarrow T$ implies that $X\times _TW$
and $Y\times _TW$ are geometric, whereas of course $W$ is geometric. Thus by
Proposition \ref{fiberprod} we get that
$(X\times _TY)\times _TW$ is geometric. Now note that the morphism
$$
(X\times _TY)\times _TW \rightarrow X\times _TY
$$
of $n-1$-stacks is geometric, smooth and surjective (by base change of the same
properties for $W\rightarrow T$). By the inductive version of the present
lemma for $n-1$ (noting that the lemma is automatically true for $n=0$) we
obtain that $X\times _TY$ is geometric. This is GS1.
\hfill $\Box$\vspace{.1in}
\begin{corollary}
\label{localization}
Suppose $Y$ is a scheme and $T\rightarrow Y$ is a morphism from an $n$-stack. If
there is a smooth surjection $Y' \rightarrow Y$ such that $T':=Y'\times
_YT\rightarrow Y'$ is geometric then the original morphism is geometric.
\end{corollary}
{\em Proof:}
The morphism $T'\rightarrow T$ is geometric, smooth and surjective (all by
base-change from the morphism $Y'\rightarrow Y$). By \ref{descendgeometric},
the
fact that $T'$ is geometric implies that $T$ is geometric.
\hfill $\Box$\vspace{.1in}
This corollary is particularly useful to do etale localization. It implies
that the property of a morphism of $n$-stacks being geometric, is etale-local
over the base.
\begin{corollary}
\label{fibration}
If $R\rightarrow T$ is a geometric morphism of $n$-stacks and $T$ is
geometric, then $R$ is geometric.
\end{corollary}
{\em Proof:}
Let $X\rightarrow T$ be the geometric smooth surjective morphism from a scheme
given by GS2 for $T$. By base change, $X\times _TR \rightarrow R$ is a geometric
smooth surjective morphism. However, by the geometricity of the morphism
$R\rightarrow T$ the fiber product $X\times _TR$ is geometric; thus by the
previous lemma, $R$ is geometric.
\hfill $\Box$\vspace{.1in}
\begin{corollary}
\label{geocomposition}
The composition of two geometric morphisms is
again geometric.
\end{corollary}
{\em Proof:}
Suppose $U\rightarrow T\rightarrow R$ are geometric morphisms, and suppose
$Y\rightarrow R$ is a morphism from a scheme. Then
$$
U\times _RY= U\times _T(T\times _RY).
$$
By hypothesis $T\times _RY$ is geometric. On the other hand $U\times
_RY\rightarrow T\times _RY$ is geometric (since the property of being geometric
is obviously stable under base change). By the previous Proposition
\ref{fiberprod} we get that $U\times _RY$ is geometric. Thus the morphism
$U\rightarrow R$ is geometric.
\hfill $\Box$\vspace{.1in}
\begin{corollary}
\label{smoothcomposition2}
The composition of two geometric smooth morphisms is
geometric and smooth.
\end{corollary}
{\em Proof:}
Suppose $R\rightarrow S \rightarrow T$ is a pair of geometric smooth morphisms.
Suppose $Y\rightarrow T$ is a morphism from a scheme. Then
(noting by the previous corollary that $R\rightarrow T$ is geometric)
$R\times _TY$ and $S\times _TY$ are geometric. The composable pair
$$
R\times _TY \rightarrow S\times _TY \rightarrow Y
$$
of smooth morphisms now falls into the hypotheses of Lemma
\ref{smoothcomposition1} so the composition is smooth. This implies that our
original composition was smooth.
\hfill $\Box$\vspace{.1in}
In a relative setting we get:
\begin{corollary}
Suppose $U\stackrel{a}{\rightarrow}T\stackrel{b}{\rightarrow}R$ is a composable
pair of morphisms of $n$-stacks. If $a$ is geometric, smooth and surjective
and $ba$ is geometric (resp. geometric and smooth) then $b$ is geometric (resp.
geometric and smooth).
\end{corollary}
{\em Proof:}
Suppose $Y\rightarrow R$ is a morphism from a scheme. Then
$$
Y\times _RU = (Y\times _RT)\times _TU.
$$
The map $Y\times _RU\rightarrow Y\times _RT$ is geometric, smooth and
surjective (since those properties are obviously---from the form of their
definitions---invariant under base change). The fact that $ba$ is geometric
implies that $Y\times _RU$ is geometric, which by the previous lemma implies
that $Y\times _RT$ is geometric. Suppose furthermore that $ba$ is smooth.
Choose a smooth surjection $W\rightarrow Y\times _RT$ from a scheme. Then
the morphism
$$
W\times _{Y\times _RT} (Y\times _RU)\rightarrow Y\times _RU
$$
is smooth by basechange and the morphism $Y\times _RU\rightarrow Y$ is smooth
by hypothesis. Thus $W\times _{Y\times _RT} (Y\times _RU)\rightarrow Y$
is smooth. Choosing a smooth surjection from a scheme
$$
V\rightarrow W\times _{Y\times _RT} (Y\times _RU)
$$
we get that $V\rightarrow Y$ is a smooth morphism of schemes.
On the other hand, the morphism
$$
W\times _{Y\times _RT} (Y\times _RU)\rightarrow W
$$
is smooth and surjective, so $V\rightarrow W$ is smooth and surjective.
Therefore $W\rightarrow Y$ is smooth. This proves that if $ba$ is smooth then
$b$ is smooth.
\hfill $\Box$\vspace{.1in}
{\em Examples:}
Corollary \ref{fibration} allows us to construct many
examples. The main examples we shall look at below are the {\em connected
presentable $n$-stacks}. These are connected $n$-stacks $T$ such that
(choosing a basepoint $t\in T(Spec ({\bf C} ))$) the homotopy group sheaves
$\pi _i (T, t)$ are represented by
group schemes of finite type. We apply \ref{fibration} inductively to
show that such a $T$ is geometric. Let $T\rightarrow \tau _{\leq n-1}T$ be the
truncation morphism. The fiber over a morphism $Y\rightarrow \tau _{\leq
n-1}T$ is (locally in the etale topology of $Y$ where there exists a
section---this is good enough by \ref{localization}) isomorphic to $K(G/Y, n)$
for a smooth group scheme of finite type $G$ over $Y$. Using the following
lemma and induction on $n$, we conclude that $T$ is geometric.
\begin{lemma}
\label{eilenbergExample}
Fix $n$, suppose $Y$ is a scheme and suppose $G$ is a smooth group scheme over
$Y$. If $n\geq 2$ require $G$ to be abelian. Then $K(G/Y, n)$ is a geometric
$n$-stack and the morphism $K(G/Y,n)\rightarrow Y$ is smooth.
\end{lemma}
{\em Proof:}
We prove this by induction on $n$. For $n=0$ we simply have $K(G/Y,0)=G$ which
is a scheme and hence geometric---also note that by hypothesis it is smooth over
$Y$. Now for any $n$, consider the basepoint section $Y\rightarrow K(G/Y,n)$.
We claim that this is a smooth geometric map. If $Z\rightarrow K(G/Y,n)$ is
any morphism then it corresponds to a map $Z\rightarrow Y$ and a class in
$H^n(Z,G|_Z)$. Since we are working with the etale topology, by definition this
class vanishes on an etale surjection $Z'\rightarrow Z$ and for our claim it
suffices to show that $Y\times _{K(G/Y,n)}Z'$ is smooth and geometric over
$Z'$. Thus we may assume that our map $Z'\rightarrow K(G/Y,n)$ factors through
the basepoint section $Y\rightarrow K(G/Y,n)$. In particular it suffices to
prove that $Y\times _{K(G/Y,n)}Y\rightarrow Y$ is smooth and geometric. But $$
Y\times _{K(G/Y,n)}Y= K(G/Y,n-1)
$$
so by our induction hypothesis this is geometric and smooth over $Y$. This
shows that $K(G/Y,n)$ is geometric and furthermore the basepoint section is a
choice of map as in GS2. Now the composed map $Y\rightarrow K(G/Y,n)\rightarrow
Y$ is the identity, in particular smooth, so by definition $K(G/Y,n)\rightarrow
Y$ is smooth.
\hfill $\Box$\vspace{.1in}
Note that stability under fiber products (Proposition \ref{fiberprod}) implies
that if $T$ is a geometric $n$-stack then $Hom (K, T)$ is geometric for any
finite CW complex $K$. See (\cite{kobe} Corollary 5.6) for the details of the
argument---which was in the context of presentable $n$-stacks but the argument
given there only depends on stability of our class of $n$-stacks under fiber
product. We can apply this in particular to the geometric $n$-stacks
constructed just above, to obtain some non-connected examples.
If $T= BG$ for an algebraic group $G$ and $K$ is connected with basepoint
$k$ then $Hom (K, T)$ is the moduli stack of representations $\pi
_1(K,k)\rightarrow G$ up to conjugacy.
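As a sketch of the simplest case, take $K=S^1$ (one $0$-cell and one $1$-cell)
and $T=BG$. Computing $Hom (S^1, T)$ as a homotopy fiber product of two copies
of the diagonal gives
$$
Hom (S^1, BG) = BG\times _{BG\times BG}BG = G/G,
$$
the quotient stack of $G$ by its conjugation action; its points are elements
of $G=Hom ({\bf Z} , G)$, i.e. representations of $\pi _1(S^1)={\bf Z}$, taken
up to conjugacy.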
\numero{Locally geometric $n$-stacks}
The theory we have described up till now concerns objects {\em of finite type}
since we have assumed that the scheme $X$ surjecting to our $n$-stack $T$ is of
finite type. We can obtain a definition of ``locally geometric'' by relaxing
this to the condition that $X$ be locally of finite type (or equivalently that
$X$ be a disjoint union of schemes of finite type). To be precise we say that an
$n$-stack $T$ is {\em locally geometric} if there exists a sheaf
which is a disjoint union of schemes of finite type, with a morphism
$$
\varphi : X=\coprod
X_i \rightarrow T
$$
such that $\varphi$ is smooth and geometric.
Note that if $X$ and $Y$ are schemes of finite type mapping to $T$ we still
have that $X\times _TY$ is geometric (GS1).
All of the previous results about fibrations, fiber products, and so on still
hold for locally geometric $n$-stacks.
One might want also to relax the definition even further by only requiring
that $X\times _TY$ be itself locally geometric (and so on) even for schemes of
finite type. We can obtain a notion that we call {\em slightly
geometric} by replacing ``scheme of finite type'' by ``scheme locally
of finite type'' everywhere in the preceding definitions. This notion may be
useful in the sense that a lot more $n$-stacks will be ``slightly geometric''.
However it seems to remove us somewhat from the realm where geometric reasoning
will work very well.
\numero{Glueing}
We say that a morphism $U\rightarrow T$ of geometric stacks is a {\em
Zariski open subset} (resp. {\em etale open subset}) if for every scheme $Z$
and $Z\rightarrow T$ the fiber product $Z\times _TU$ is a Zariski open subset
of $Z$ (resp. an algebraic space with etale map to $Z$).
If we have two
geometric $n$-stacks $U$ and $V$ and a geometric $n$-stack $W$ with morphisms
$W\rightarrow U$ and $W\rightarrow V$ each of which is a Zariski open subset,
then we can glue $U$ and $V$ together along $W$ to get a geometric $n$-stack
$T$ with Zariski open subsets $U\rightarrow T$ and $V\rightarrow T$ whose
intersection is $W$. If one wants to glue several open sets it has to be done
one at a time (this way we avoid having to talk about higher cocycles).
As a more general result we have the following. Suppose $\Phi$ is a functor
from the simplicial category $\Delta$ to the category of $n$-stacks (say a
strict functor to the category of simplicial presheaves, for example).
Suppose that each $\Phi _k$ is a geometric $n$-stack, and suppose that
the two morphisms $\Phi _1 \rightarrow \Phi _0$ are smooth. Suppose furthermore
that $\Phi $ satisfies the Segal condition that
$$
\Phi _k \rightarrow \Phi _1\times _{\Phi _0} \ldots \times _{\Phi _0}\Phi _1
$$
is an equivalence (i.e. Illusie weak equivalence of simplicial presheaves).
Finally suppose that for any element of $\Phi _1(X)$ there is, up to
localization over $X$, an ``inverse'' (for the multiplication on $\Phi _1$ that
comes from Segal's condition as in \cite{Segal}) up to homotopy.
Let $T$
be the realization of $\Phi$ over the simplicial variable into a presheaf of
spaces (i.e. we obtain a bisimplicial presheaf and take the diagonal).
\begin{proposition}
In the above situation, $T$ is a geometric $n+1$-stack.
\end{proposition}
{\em Proof:}
There is a surjective map $\Phi _0 \rightarrow T$ and we have by definition that
$\Phi _0 \times _T\Phi _0 = \Phi _1$. From this one can see that $T$ is
geometric.
\hfill $\Box$\vspace{.1in}
As an example of how to apply the above result, suppose $U$ is a geometric
$n$-stack and suppose we have a geometric $n$-stack $R$ with
$R\rightarrow U\times U$. Suppose furthermore that we have a multiplication
$R\times _{p_2, U, p_1}R\rightarrow R$ which is associative and such that
inverses exist up to homotopy. Then we can set $\Phi _k = R\times _U\ldots
\times _UR$ with $\Phi _0 = U$. We are in the above situation, so we obtain the
geometric $n$-stack $T$. We call this the {\em $n$-stack associated to the
descent data $(U, R)$}.
The original result about glueing over Zariski open
subsets can be interpreted in this way.
The simplicial version of this descent
with any $\Phi$ satisfying Segal's condition is a way to avoid having to talk
about strict associativity of the composition on $R$.
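To sketch how the Zariski glueing fits into this framework: given $U$, $V$ and
the common open subset $W$, set
$$
\Phi _0 = U\sqcup V, \qquad \Phi _1 = U\sqcup W\sqcup W\sqcup V,
$$
where the two maps $\Phi _1\rightarrow \Phi _0$ are the identity on $U$ and
$V$ and, on the two copies of $W$, the given open immersions $W\rightarrow U$
and $W\rightarrow V$ in the two possible orders. The higher $\Phi _k$ are then
forced by the Segal condition, inverses come from interchanging the two copies
of $W$, and the realization $T$ is the glued $n$-stack with open subsets $U$
and $V$ intersecting in $W$.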
\numero{Presentability}
Recall first of all that the category of {\em vector sheaves} over a scheme $Y$
is the smallest abelian category of abelian sheaves on the big site of
schemes over $Y$ containing the structure sheaf (these were called
``$U$-coherent
sheaves'' by Hirschowitz in \cite{Hirschowitz}, who was the first to define
them). A vector sheaf may be presented as the kernel of a sequence of $3$
coherent sheaves which is otherwise exact on the big site; or dually as the
cokernel of an otherwise-exact sequence of $3$ {\em vector schemes} (i.e. duals
of coherent sheaves). The nicest thing about the category of vector sheaves is
that duality is involutive.
Recall that we have defined in \cite{RelativeLie} a notion of {\em presentable
group sheaf} over any base scheme $Y$. We will not repeat the definition
here, but just remark (so as to give a rough idea of what is going on) that if
$G$ is a presentable group sheaf over $Y$ then it admits a Lie algebra object
$Lie (G)$ which is a vector sheaf with bilinear Lie bracket operation
(satisfying Jacobi).
In \cite{RelativeLie} a definition was then made of {\em presentable $n$-stack};
this involves a certain condition on $\pi _0$ (for which we refer to
\cite{RelativeLie})
and the condition that the higher homotopy group sheaves (over any base scheme)
be presentable group sheaves.
For our purposes we shall often be interested in the slightly more restrictive
notion of {\em very presentable $n$-stack}. An $n$-stack $T$ is defined (in
\cite{RelativeLie}) to be very presentable if it is presentable, and if
furthermore:
\newline
(1)\, for $i\geq 2$ and for any scheme $Y$ and $t\in T(Y)$ we have
that $\pi _i (T|_{{\cal X} /Y}, t)$ is a vector sheaf over $Y$; and
\newline
(2)\, for any artinian scheme $Y$ and $t\in T(Y)$ the group of sections
$\pi _1(T|_{{\cal X} /Y},t)(Y)$ (which is naturally an algebraic group scheme over
$Spec (k)$) is affine.
For our purposes here we will mostly stick to the case of connected $n$-stacks
in the coefficients. Thus we review what the above definitions mean for $T$
connected (i.e. $\pi _0(T)=\ast $). Assume that $k$ is algebraically closed
(otherwise one has to take what is said below possibly with some Galois
twisting). In the connected case there is essentially a unique basepoint $t\in
T(Spec (k))$. A group sheaf over $Spec (k)$ is presentable if and only if it is
an algebraic group scheme (\cite{RelativeLie}), so $T$ is presentable if
and only
if $\pi _i(T,t)$ are represented by algebraic group schemes. Note that a
vector sheaf over $Spec (k)$ is just a vector space, so $T$ is very presentable
if and only if the $\pi _i (T,t)$ are vector spaces for $i\geq 2$ and $\pi
_1(T,t)$ is an affine algebraic group scheme (which can of course act on the
$\pi _i$ by a representation which---because we work over the big
site---is automatically algebraic).
\begin{proposition}
\label{presentable}
If $T$ is a geometric $n$-stack on ${\cal X}$ then $T$ is presentable in the sense
of \cite{RelativeLie}.
\end{proposition}
{\em Proof:}
Suppose $X\rightarrow R$ is a smooth morphism from a scheme $X$ to a geometric
$n$-stack $R$. Note that the morphism
$R\rightarrow \pi _0(R)$ satisfies the lifting properties $Lift _n(Y, Y_i)$,
since by localizing in the etale topology we get rid of any cohomological
obstructions to lifting coming from the higher homotopy groups. On the other
hand the morphism $X\rightarrow R$, being smooth, satisfies the lifting
properties (for example, one can say that the map $X\times _RY\rightarrow Y$
is smooth and admits a smooth surjection from a scheme smooth over $Y$;
with this one gets the lifting properties, recalling of course that a smooth
morphism between schemes is vertical). Thus we get that
$X\rightarrow \pi _0(R)$ is vertical.
Now suppose $T$ is geometric and choose a smooth surjection $u:X\rightarrow T$.
We get from above that $X\rightarrow \pi _0(T)$ is vertical. Note that
$$
X\times _{\pi _0(T)}X = im (X\times _TX \rightarrow X\times X).
$$
Let $G$ denote the group sheaf $\pi _1(T|_{{\cal X} /X}, u)$ over $X$.
We have that $G$ acts freely on $\pi _0(X\times _TX)$ (relative to the first
projection $X\times _TX\rightarrow X$) and the quotient is the image
$X\times _{\pi _0(T)}X$. Thus, locally over schemes mapping into
the target, the morphism
$$
\pi _0(X\times _TX) \rightarrow X\times _{\pi _0(T)}X
$$
is the same as the morphism
$$
G\times _X(X\times _{\pi _0(T)}X)\rightarrow X\times _{\pi _0(T)}X
$$
obtained by base-change. Since $G\rightarrow X$ is a group sheaf it is an
$X$-vertical morphism (\cite{RelativeLie} Theorem 2.2 (7)), therefore its base
change is again an $X$-vertical morphism. Since verticality is local over
schemes mapping into the target, we get that
$$
\pi _0(X\times _TX) \rightarrow X\times _{\pi _0(T)}X
$$
is an $X$-vertical morphism. On the other hand, since $T$ is
geometric, by definition we obtain a smooth
surjection $R\rightarrow X\times _TX$ from a scheme
$R$, and by the previous discussion this gives a $Spec({\bf C} )$-vertical
surjection
$$
R\rightarrow \pi _0(X\times _TX).
$$
Composing we get the $X$-vertical surjection $R\rightarrow X\times _{\pi
_0(T)}X$. We have now proven that $\pi _0(T)$ is $P3\frac{1}{2}$ in the
terminology of \cite{RelativeLie}.
Suppose now that $v:Y\rightarrow T$ is a point. Let $T':= Y\times _TY$.
Then $\pi _0(T')= \pi _1(T|_{{\cal X} /Y}, v)$ is the group sheaf we are interested
in looking at over $Y$. We will show that it is presentable.
Note that $T'$ is
geometric; we apply the same argument as above, choosing a smooth surjection
$X\rightarrow T'$. Recall that this gives a $Spec ({\bf C} )$-vertical (and hence
$Y$-vertical) surjection $X\rightarrow \pi _0(T')$. Choose a smooth surjection
$R\rightarrow X\times _{T'}X$. In the previous proof the group sheaf denoted $G$
on $X$ is actually pulled back from a group sheaf $\pi _2(T|_{{\cal X} /Y}, v)$ on
$Y$. Therefore the morphism
$$
\pi _0(X\times _{T'}X)\rightarrow X\times _{\pi _0(T')}X
$$
is a quotient by a group sheaf over $Y$, in particular it is $Y$-vertical.
As usual the morphism $R\rightarrow \pi _0(X\times _{T'}X)$ is $Spec ({\bf C}
)$-vertical so in particular $Y$-vertical. We obtain a $Y$-vertical surjection
$$
R\rightarrow X\times _{\pi _0(T')}X.
$$
This finishes the proof that $\pi _1(T|_{{\cal X} /Y}, v)$
satisfies property $P4$ (and, since it
is a group sheaf, property $P5$, i.e. it is presentable) with respect to $Y$.
Now note that $\pi _i(T|_{{\cal X} /Y}, v) = \pi _{i-1}(T'|_{{\cal X} /Y}, d)$
where $d: Y \rightarrow T' := Y\times _TY$ is the diagonal morphism.
Hence (as $T'$ is itself geometric) we obtain by induction that all of the
$\pi _i(T|_{{\cal X} /Y}, v)$ are presentable group sheaves over $Y$.
This shows that $T$ is presentable in the terminology of \cite{RelativeLie}.
\hfill $\Box$\vspace{.1in}
Note that presentability in \cite{RelativeLie} is a slightly stronger condition
than the condition of presentability as it is referred to in \cite{kobe}, so all
of the results stated in \cite{kobe} hold here; and of course all of the
results of \cite{RelativeLie} concerning presentable $n$-stacks hold for
geometric $n$-stacks. The example given below which shows that the class of
geometric $n$-stacks is not closed under truncation, implies that the class of
presentable $n$-stacks is strictly bigger than the class of geometric ones,
since the class of presentable $n$-stacks is closed under truncation
\cite{RelativeLie}.
The results announced (some with sketches of proofs) in
\cite{kobe} for presentable $n$-stacks hold for geometric $n$-stacks.
Similarly the basic results of \cite{RelativeLie} hold for geometric $n$-stacks.
For example, if $T$ is a geometric $n$-stack and $f:Y\rightarrow T$ is a
morphism
from a scheme then $\pi _i (T|_{{\cal X} /Y}, f)$ is a presentable group sheaf, so
it has a Lie algebra object $Lie\, \pi _i (T|_{{\cal X} /Y}, f)$ which is a {\em
vector sheaf} (or ``$U$-coherent sheaf'' in the terminology of
\cite{Hirschowitz}) with Lie bracket operation.
{\em Remark:} By Proposition \ref{presentable}, the condition of being
geometric
is stronger than the condition of being presentable given in
\cite{RelativeLie}.
Note from the example given below showing that geometricity is not compatible
with truncation (whereas by definition presentability is compatible with
truncation),
the condition of being geometric is {\em strictly} stronger than the condition
of being presentable.
Of course in the connected case, presentability and geometricity are the same
thing.
\begin{corollary}
A connected $n$-stack $T$ is geometric if and only if the $\pi _i(T,t)$ are
group schemes of finite type for all $i$.
\end{corollary}
{\em Proof:}
We show in \cite{RelativeLie} that presentable groups over $Spec (k)$ are just
group schemes of finite type. Together with the previous result this shows
that if $T$ is connected and geometric then the $\pi _i(T,t)$ are
group schemes of finite type for all $i$. On the other hand, if $\pi
_0(T)=\ast$ and the $\pi _i(T,t)$ are
group schemes of finite type for all $i$ then by the Postnikov decomposition
of $T$
and using \ref{fibration}, we conclude that $T$ is geometric (note that
for a group scheme of finite type $G$,
$K(G,n)$ is geometric).
\hfill $\Box$\vspace{.1in}
\numero{Quillen theory}
Quillen in \cite{Quillen} associates to every $1$-connected rational
space $U$ a {\em differential graded Lie algebra (DGL)} $L_{\cdot} = \lambda
(U)$: a DGL is a graded Lie algebra
(over ${\bf Q}$ for our purposes) $L_{\cdot} = \bigoplus _{p\geq 1}L_p$
(with all elements of strictly positive degree)
with differential $\partial : L_p \rightarrow L_{p-1}$ compatible in the usual
(graded) way with the Lie bracket. Note our conventions that the indexing is
downstairs, by positive numbers and the differential has degree $-1$. The
homology groups of $\lambda (U)$ are the homotopy groups of $U$ (shifted by one
degree).
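With these conventions the Quillen correspondence between homotopy and homology
reads, for $i\geq 2$,
$$
\pi _i(U) \cong H_{i-1}(\lambda (U), \partial ),
$$
the shift by one degree corresponding to passage to the loop space.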
This construction gives an equivalence between the homotopy theory of DGL's and
that of rational spaces. Let $L_{\cdot} \mapsto |L_{\cdot}| $ denote the
construction going in the other direction. We shall assume for our purposes
that there exists such a realization functor from the category of DGL's to
the category of $1$-connected spaces, compatible with finite direct products.
Let $DGL_{{\bf C} , n}$ denote the category of $n$-truncated ${\bf C}$-DGL's of finite
type (i.e. with homology groups which are finite dimensional vector
spaces, vanishing in degree $\geq n$) and free as graded Lie algebras.
We define a realization functor $\rho ^{\rm pre}$ from $DGL_{{\bf C} ,n}$ to the
category of presheaves of spaces over ${\cal X}$. If $L_{\cdot}\in DGL_{{\bf C}
,n}$ then
for any $Y\in {\cal X}$ let
$$
\rho ^{\rm pre}(L_{\cdot})(Y):= | L_{\cdot} \otimes _{{\bf C}}{\cal O} (Y) |.
$$
Then let $\rho (L_{\cdot})$ be the $n$-stack associated to the presheaf of
spaces
$\rho ^{\rm pre}(L_{\cdot})$. This construction is functorial and compatible
with direct products (because we have assumed the same thing about the
realization functor $|L_{\cdot}|$).
Note that $\pi _0^{\rm pre}(\rho ^{\rm pre}(L_{\cdot}))=\ast$
and in fact we can choose a basepoint $x$ in $\rho ^{\rm pre}(L_{\cdot})(Spec
({\bf C} ))$.
We have
$$
\pi _i ^{\rm pre}(\rho ^{\rm pre}(L_{\cdot}), x)= H_{i-1}(L_{\cdot})
$$
(in other words the presheaf on the left is represented by the vector space on
the right). This gives the same result on the level of associated stacks and
sheaves:
$$
\pi _i (\rho (L_{\cdot}), x)= H_{i-1}(L_{\cdot}).
$$
In particular note that a morphism of DGL's induces an equivalence of
$n$-stacks if and only if it is a quasiisomorphism. Note also
that $\rho (L_{\cdot})$ is a $1$-connected $n$-stack whose higher homotopy
groups are complex vector spaces, thus it is a very presentable
geometric $n$-stack.
\begin{theorem}
The above construction gives an equivalence between the homotopy
category $ho\, DGL_{{\bf C} , n}$ and the homotopy category of $1$-connected very
presentable $n$-stacks.
\end{theorem}
{\em Proof:}
Let $(L,M)$ denote the set of homotopy classes of maps from $L$ to $M$ (either
in the world of DGL's or in the world of $n$-stacks on ${\cal X}$). Note that if $L$
and $M$ are DGL's then $L$ should be free as a Lie algebra (otherwise we have
to replace it by a quasiisomorphic free one). We prove that the map
$$
(L,M)\rightarrow (\rho (L), \rho (M))
$$
is an isomorphism. First we show this for the case where $L=V[n-1]$ and
$M=U[m-1]$ are finite dimensional vector spaces in degrees $n-1$ and $m-1$. In
this case (where unfortunately $L$ isn't free, so it has to be replaced by a free
DGL) we have
$$
(V[n-1], U[m-1])= Hom (Sym ^{m/n}(V), U)
$$
where the symmetric product is in the graded sense (i.e. alternating or
symmetric according to parity) and defined as zero when the exponent is not
integral. Note that $\rho (V[n-1])= K(V, n)$ and $\rho (U[m-1])= K(U,m)$.
The Breen calculations in characteristic zero (easier than the case treated in
\cite{BreenIHES}) show that
$$
(K(V,n), K(U,m))= Hom (Sym ^{m/n}(V), U)
$$
so our claim holds in this case.
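For instance, when $m=n$ the exponent is $1$ and the formula reduces to
$$
(K(V,n), K(U,n)) = Hom (V, U),
$$
while for $m$ not divisible by $n$ both sides vanish. The first nontrivial case
is $m=2n$, where $Sym ^2(V)$ taken in the graded sense (symmetric for $n$ even,
alternating for $n$ odd) corresponds on the topological side to the cup product
$H^n\otimes H^n\rightarrow H^{2n}$.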
Next we treat the case of arbitrary $L$ but $M= U[m-1]$ is again a vector space
in degree $m-1$. In this case we are calculating the cohomology of $L$ or
$\rho (L)$ in degree $m$ with coefficients in $U$. Using a Postnikov
decomposition of $L$ and the appropriate analogues of the Leray spectral
sequence on both sides we see that our functor induces an isomorphism on these
cohomology groups.
Finally we get to the case of arbitrary $L$ and arbitrary $M$. We proceed by
induction on the truncation level $m$ of $M$. Let $M'=\tau _{\leq m-1}M$ be the
truncation (coskeleton) with the natural morphism $M\rightarrow M'$. The fiber
is of the form $U[m-1]$ (we index our truncation by the usual homotopy groups
rather than the homology groups of the DGL's which are shifted by $1$). Note
that $\rho (M')= \tau _{\leq m-1}\rho (M)$ (since the construction $\rho$ is
compatible with homotopy groups so it is compatible with the truncation
operations). The fibration $M\rightarrow M'$ is classified by a map
$f:M'\rightarrow U[m]$ and the fibration $\rho (M)\rightarrow \rho (M')$
by the corresponding map $\rho (f): \rho (M')\rightarrow K(U, m+1)$.
The image of
$$
(L,M)\rightarrow (L, M')
$$
consists of the morphisms whose composition into $U[m]$ is homotopic to the
trivial morphism $L\rightarrow U[m]$. Similarly the image of
$$
(\rho (L),\rho (M))\rightarrow (\rho (L),\rho (M'))
$$
is the morphisms whose composition into $K(U,m+1)$ is homotopic to the
trivial morphism. By our inductive hypothesis
$$
\rho : (L,M')\rightarrow (\rho (L), \rho (M'))
$$
is an isomorphism. The functor $\rho$ is an
isomorphism on the images, because we know the statement for targets $U[m]$.
Suppose we are given a map $a:L\rightarrow M'$ which is in the image.
The inverse
image of this homotopy class in $(L,M)$ is the quotient of the set of liftings
of $a$ by the action of the group of self-homotopies of the map $a$. The set of
liftings is a principal homogeneous space under $(L, U[m-1])$.
Similarly the inverse image of the homotopy class of $\rho (a)$ in $(\rho (L),
\rho (M))$ is the quotient of the set of liftings of $\rho (a)$ by the group of
self-homotopies of $\rho (a)$. Again the set of liftings is a principal
homogeneous space under $(\rho (L), K(U,m))$.
The actions in the principal homogeneous spaces come from maps
$$
U[m-1]\times M\rightarrow M
$$
over $M'$ and
$$
K(U, m)\times \rho (M)\rightarrow \rho (M)
$$
over $\rho (M')$, the second of which is the image under $\rho$ of the first.
Since $\rho : (L, U[m-1])\cong (\rho (L), K(U,m))$, we will get that $\rho$
gives an isomorphism of the fibers if we can show that the images of the actions
of the groups of self-homotopy equivalences are the same. Notice that since
these actions are on principal homogeneous spaces they factor through the
abelianizations of the groups of self-homotopy equivalences.
In general if $A$ and $B$ are spaces then $(A\times S^1, B)$ is the disjoint
union over $(A,B)$ of the sets of conjugacy classes of the groups of
self-homotopies of the maps from $A$ to $B$. On the other hand a map of groups
$G\rightarrow G'$ which induces an isomorphism on sets of conjugacy classes is
surjective on the level of abelianizations. Thus if we know that a certain
functor gives an isomorphism on $(A,B)$ and on $(A\times S^1, B)$ then it is a
surjection on the abelianizations of the groups of self-homotopies of the maps.
Applying this principle in the above situation, and noting that we know
by our induction hypothesis that $\rho$ induces isomorphisms on $(L, M')$ and
$(L\times \lambda (S^1)\otimes _{{\bf Z}}k, M')$, we find that $\rho$ induces a
surjection from the abelianization of the group of self-homotopies of the map
$a:L\rightarrow M'$ to the abelianization of the group of self-homotopies of
$\rho (a)$. This finally allows us to conclude that $\rho$ induces an
isomorphism from the inverse image of the class of $a$ in $(L,M)$ to the
inverse image of the class of $\rho (a)$ in $(\rho (L), \rho (M))$. We have
completed our proof that
$$
\rho : (L,M)\cong (\rho (L), \rho (M)).
$$
In order to obtain that $\rho$ induces an isomorphism on homotopy categories we
just have to see that any $1$-connected very presentable $n$-stack $T$ is of
the form $\rho (L)$. We show this by induction on the truncation level.
Put $T'=\tau _{\leq n-1}T$. By the induction hypothesis there is a DGL $L'$
with $\rho (L')\cong T'$ (and we may actually write $\rho (L')=T'$). Now the
fibration $T\rightarrow T'$ is classified by a map $f:T'\rightarrow K(V,n+1)$.
From the above proof this map comes from a map $b:L'\rightarrow V[n]$, that is
$f=\rho (b)$. In turn this map classifies a new
DGL $L$ over $L'$. The fibration $\rho (L)\rightarrow \rho (L')=T'$ is
classified by the map $\rho (b)=f$ so $\rho (L)\cong T$.
\hfill $\Box$\vspace{.1in}
\subnumero{Dold-Puppe}
Eventually it would be nice to have a relative version of the previous theory,
over any $n$-stack $R$. The main problem in trying to do this is to
have the right notion of complex of sheaves over an $n$-stack $R$. Instead of
trying to do this, we will simply use the notion of {\em spectrum over $R$} (to
be precise, I will use the word ``spectrum'' for what is usually called an
``$\Omega$-spectrum''). For our purposes we are only interested in spectra with
homotopy groups which are rational and vanish outside of a bounded interval. In
absolute terms such a spectrum is equivalent to a complex of rational vector
spaces, so in the relative case over a presheaf of spaces $R$ this gives a
generalized notion of complex over $R$.
For our spectra with homotopy groups nonvanishing only in a bounded region, we
can replace the complicated general theory by the simple consideration of
supposing that we are in the stable range. Thus we fix numbers
$N, M$ with $M$ bigger than the length of any complex we want to consider and
$N$ bigger than $2M+2$. For example if we are only interested in dealing with
$n$-stacks then we cannot be interested in complexes of length bigger than $n$
so we could take $M>n$.
A {\em spectrum} (in our setting) is then simply an $N$-truncated
rational space with a basepoint, and which is $N-M-1$-connected. More
generally if $R$ is an $n$-stack with $n \leq N$ then a {\em spectrum over
$R$} is just an $N$-stack $S$ with morphism $p:S\rightarrow R$ and section
denoted $\xi : R\rightarrow S$ such that $S$ is rational and $N-M-1$-connected
relative to $R$. A morphism of spectra is a morphism of spaces (preserving the
basepoint).
Suppose $S$ is a spectrum; we define the {\em complex associated to $S$}
by setting $\gamma (S)^i$ to be the singular $N-i$-chains on $S$. The
differential $d: \gamma (S)^i\rightarrow \gamma (S)^{i+1}$ is the same as the
boundary map on chains (which switches direction because of the change in
indexing). Note that we have normalized things so that the complex starts in
degree $0$. The homotopy theory of spectra is the same as that of complexes of
rational vector spaces indexed in degrees $\geq 0$,
with cohomology nonvanishing only in degrees $\leq M$.
If $C^{\cdot}$ is a complex as above then let $\sigma (C^{\cdot})$ denote the
corresponding spectrum.
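Unwinding the indexing (and using the rational Hurewicz theorem, valid here
since we are in the stable range $N>2M+2$), the dictionary between a spectrum
$S$ and its complex is
$$
H^i(\gamma (S)^{\cdot}) \cong \pi _{N-i}(S), \qquad 0\leq i\leq M .
$$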
This can be generalized to the case where the base is a $0$-stack. If $Y$ is
a $0$-stack (notably for example a scheme) and if $S$ is a spectrum over $Y$
then we obtain a complex of presheaves of rational vector spaces
$\gamma (S/Y)$ over $Y$. Conversely if $C^{\cdot}$ is a complex of presheaves
of rational vector spaces over $Y$ then we obtain a spectrum denoted $\sigma
(C^{\cdot}/Y)$. These constructions are an equivalence in homotopy theories,
where the weak equivalence between complexes means quasiisomorphism (i.e.
morphisms inducing isomorphisms on associated cohomology sheaves).
If $S$ is a spectrum and $n \leq N$ then we can define the {\em realization}
$\kappa (S, n)$ to be the $N-n$-th loop space $\Omega ^{N-n}S$ (the loops are
taken based at the given basepoint). Similarly if $S$ is a spectrum over an
$n'$-stack $R$ then we obtain the {\em realization} $\kappa
(S/R,n)\rightarrow R$
as the $N-n$-th relative loop space based at the given section $\xi$.
Taken together we obtain the following construction: if $C^{\cdot}$ is a
complex of vector spaces then $\kappa (\sigma (C^{\cdot}), n)$ is an $n$-stack.
If $C^{\cdot}$ is a complex of presheaves of rational vector spaces over a
$0$-stack (presheaf of sets) $Y$ then $\kappa (\sigma (C^{\cdot}/Y)/Y, n)$
is an $n$-stack over $Y$. These constructions are what is known as {\em
Dold-Puppe}. They are compatible with the usual Eilenberg-MacLane constructions:
if $V$ is a presheaf of rational vector spaces over $Y$ considered as a complex
in degree $0$ then $$
\kappa (\sigma (V/Y)/Y, n)= K(V/Y, n).
$$
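More generally, if $C^{\cdot}$ is the complex consisting of $V$ placed in
degree $i$ (with $0\leq i\leq n$) and zero elsewhere, the same computation of
homotopy groups gives
$$
\kappa (\sigma (C^{\cdot}/Y)/Y, n)= K(V/Y, n-i),
$$
recovering the displayed formula when $i=0$.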
The basic idea behind our notational system is that we think of spectra over
$R$ as being complexes of rational presheaves over $R$ starting in degree $0$.
The operation $\kappa (S/R, n)$ is the {\em Dold-Puppe} realization from a
``complex'' to a space relative to $R$.
We can do higher direct images in this context. If $f:R\rightarrow T$
is a morphism of $n$-stacks and if $S$ is a spectrum over $R$ then define
$f_{\ast}(S)$ to be the $N$-stack $\Gamma (R/T, S)$ of sections relative to $T$.
This is compatible with realizations: we have
$$
\Gamma (R/T, \kappa (S, n))= \kappa (f_{\ast}(S),n).
$$
Suppose that $f: X\rightarrow Y$ is a morphism of $0$-stacks. Then for a
complex of rational presheaves $C^{\cdot}$ on $X$ the direct image construction
in terms of spectra is the same as the usual higher direct image of complexes
of sheaves (applied to the sheafification of the complex):
$$
f_{\ast}(\sigma (C^{\cdot}/X))= \sigma (({\bf R}f_{\ast}C^{\cdot})/Y).
$$
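In particular, taking homotopy group sheaves of a relative section stack
recovers the higher direct images: for a presheaf of rational vector spaces $V$
on $X$ we have
$$
\pi _i \Gamma (X/Y, K(V/X,n)) = R^{n-i}f_{\ast}(V),
$$
where the right hand side is computed from the sheafification as above.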
We extend this just a little bit, in a special case in which it still makes
sense to talk about complexes. Suppose $X$ is a $1$-stack and $Y$ is a
$0$-stack, with $f: X\rightarrow Y$ a morphism. Suppose $V$ is a local system
of presheaves on $X$ (i.e. for each $Z\in {\cal X}$, $V(Z)$ is a local system of
rational vector spaces on $X(Z)$). Another way to put this is that $V$ is an
abelian group object over $X$. We can think of $V$ as being a complex of
presheaves over $X$ (even though we have not defined this notion in general) and
we obtain the spectrum which we denote by $\sigma (V/X)$ over $X$ (even
though this doesn't quite fit in with the general definition of $\sigma$
above), and its realization $\kappa (\sigma (V/X)/X, n)\rightarrow X$ which is
what we would otherwise denote as $K(V/X,n)$. The higher
direct image ${\bf R}f_{\ast}(V)$ makes sense as a complex of presheaves on $Y$,
and we have the compatibilities
$$
f_{\ast} \sigma (V/X) = \sigma ({\bf R}f_{\ast}(V))
$$
and
$$
\Gamma (X/Y, \kappa (\sigma (V/X)/X, n))= \kappa (\sigma ({\bf
R}f_{\ast}(V)),n).
$$
\begin{proposition}
\label{ComplexOfVB}
Suppose $R$ is an $n$-stack and $S$ is a spectrum over $R$ such that for every
map $Y\rightarrow R$ from a scheme, there is (locally over $Y$ in the etale
topology) a complex of vector bundles $E^{\cdot}_Y$ over $Y$ with $S\times
_RY\cong \sigma (E^{\cdot}_Y/Y)$. Then the realization $\kappa (S/R,n)$ is
geometric over $R$. In particular if $R$ is geometric then so is $\kappa
(S/R,n)$. \end{proposition}
{\em Proof:}
In order to prove that the morphism $\kappa (S/R,n)\rightarrow R$ is geometric,
it suffices to prove that for every base change to a scheme $Y\rightarrow R$,
the fiber product $\kappa (S/R, n)\times _RY$ is geometric. But
$$
\kappa (S/R,n)\times _RY= \kappa (\sigma (E^{\cdot}_Y/Y)/Y, n),
$$
so it suffices to prove that for a scheme $Y$ and a complex of vector bundles
$E^{\cdot}$ on $Y$, we have $\kappa (\sigma (E^{\cdot}/Y)/Y, n)$ geometric.
Note that $\kappa (\sigma (E^{\cdot}), n)$ only depends on the part of the
complex
$$
E^0\rightarrow E^1 \rightarrow \ldots \rightarrow E^n\rightarrow E^{n+1}
$$
so we assume that it stops there or earlier. Now we proceed by induction on the
length of the complex.
Define a complex $F^i= E^{i+1}$ for $i\geq 0$, which has length strictly
smaller than that of $E^{\cdot}$. Let $E^0$ denote the first vector bundle of
$E^{\cdot}$ considered as a complex in degree $0$ only. The differential gives a
morphism of complexes $E^0\rightarrow F^{\cdot}$, and $E^{\cdot}$ is the shifted
mapping cone (i.e. the fiber). Thus
$$
\sigma (E^{\cdot}/Y) = \sigma (E^0/Y)\times _{\sigma (F^{\cdot}/Y)}Y
$$
with $Y\rightarrow \sigma (F^{\cdot}/Y)$ the basepoint section. We get
$$
\kappa (\sigma (E^{\cdot}/Y)/Y,n) = K(E^0/Y,n)\times _{\kappa (\sigma
(F^{\cdot}/Y)/Y, n)}Y.
$$
By our induction hypothesis, $\kappa (\sigma
(F^{\cdot}/Y)/Y, n)$ is geometric. Note that $E^0$ is a smooth group
scheme over
$Y$, so by Lemma \ref{eilenbergExample}, $K(E^0/Y,n)$ is geometric over $Y$. By
\ref{fiberprod}, \linebreak $\kappa (\sigma (E^{\cdot}/Y)/Y,n)$ is geometric.
\hfill $\Box$\vspace{.1in}
{\em Remark:} This proposition is a generalisation to $n$-stacks of
(\cite{LaumonMB} Construction 9.19, Proposition 9.20). Note that
if $E^{\cdot}$ is a complex where $E^i$ are vector bundles for $i<n$ and $E^n$
is a vector scheme (i.e. something of the form ${\bf V}({\cal M} )$ for a coherent
sheaf ${\cal M}$ in the notation of \cite{LaumonMB}) then we can express $E^n$ as
the kernel of a morphism $U^n\rightarrow U^{n+1}$ of vector bundles (this would
be dual to the presentation of ${\cal M}$ if we write $E^n = {\bf V}({\cal M} )$).
Setting $U^i= E^i$ for $i<n$ we get $\kappa (\sigma (E^{\cdot}), n)=
\kappa (\sigma (U^{\cdot}),n)$. In this way we recover Laumon's and
Moret-Bailly's construction in the case $n=1$.
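The simplest instance of the induction in the proof is a two-term complex
$d:E^0\rightarrow E^1$, for which the fiber product takes the form
$$
\kappa (\sigma (E^{\cdot}/Y)/Y,n) = K(E^0/Y,n)\times _{K(E^1/Y,n)}Y,
$$
an $n$-stack over $Y$ whose homotopy group sheaves are $\pi _n = \ker (d)$ and
$\pi _{n-1}= {\rm coker}\, (d)$.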
\begin{corollary}
Suppose $f:X\rightarrow Y$ is a projective flat morphism of schemes, and
suppose that $V$ is a vector bundle on $X$. Then
$\Gamma (X/Y, K(V/X,n))$ is a geometric $n$-stack lying over $Y$.
\end{corollary}
{\em Proof:}
By the discussion at the start of this subsection,
$$
\Gamma (X/Y, K(V/X,n)) = \kappa (\sigma ({\bf R} f_{\ast}(V)/Y)/Y, n).
$$
But by Mumford's method \cite{Mumford}, ${\bf R} f_{\ast}(V)$ is
quasiisomorphic (locally over $Y$) to a complex of vector bundles. By
Proposition \ref{ComplexOfVB} we get that $\Gamma (X/Y, K(V,n))$
is geometric over $Y$.
\hfill $\Box$\vspace{.1in}
Recall that a {\em formal groupoid} is a stack $X_{\Lambda}$ associated to a
groupoid of formal schemes whose object of objects is a scheme $X$ and whose
object of morphisms is a formal scheme $\Lambda \rightarrow X\times X$ with support
along the diagonal. We say it is {\em smooth} if the projections
$\Lambda \rightarrow X$ are formally smooth. In this case the cohomology
of the stack $X_{\Lambda}$ with coefficients in vector bundles over
$X_{\Lambda}$ (i.e. vector bundles on $X$ with $\Lambda$-structure meaning
isomorphisms between the two pullbacks to $\Lambda$ satisfying the cocycle
condition on $\Lambda \times _X\Lambda$) is calculated by the {\em de Rham
complex} $\Omega ^{\cdot}_{\Lambda}\otimes _{{\cal O}}V$ of locally free sheaves
associated to the formal scheme \cite{Illusie} \cite{Berthelot}.
We say that $X_{\Lambda}\rightarrow Y$ is a smooth formal groupoid over $Y$ if
$X_{\Lambda}$ is a smooth formal groupoid mapping to $Y$ and if $X$ is flat over
$Y$.
\begin{corollary}
Suppose $f: X_{\Lambda} \rightarrow Y$ is a projective smooth formal groupoid
over a scheme $Y$. Suppose that $V$ is a vector bundle on $X_{\Lambda}$ (i.e. a
vector bundle on $X$ with $\Lambda$-structure). Then
$\Gamma (X_{\Lambda}/Y, K(V/X_{\Lambda},n))$ is a geometric $n$-stack lying over
$Y$.
\end{corollary}
{\em Proof:}
By the ``slight extension'' in the discussion at the start of this subsection,
$$
\Gamma (X_{\Lambda}/Y, K(V/X_{\Lambda},n)) = \kappa (\sigma ({\bf R}
f_{\ast}(V)/Y)/Y, n).
$$
But
$$
{\bf R}f_{\ast}(V) = {\bf R} f'_{\ast}(\Omega ^{\cdot}_{\Lambda}\otimes _{{\cal O}}V)
$$
where $f': X\rightarrow Y$ is the morphism on underlying schemes.
Again by Mumford's method \cite{Mumford},
${\bf R} f'_{\ast}(\Omega ^{\cdot}_{\Lambda}\otimes _{{\cal O}}V)$ is
quasiisomorphic (locally over $Y$) to a complex of vector bundles. By the
Proposition \ref{ComplexOfVB} we get that $\Gamma (X_{\Lambda}/Y,
K(V/X_{\Lambda},n))$ is geometric over $Y$.
\hfill $\Box$\vspace{.1in}
\numero{Maps into geometric $n$-stacks}
\begin{theorem}
\label{maps}
Suppose $X\rightarrow S$ is a projective flat morphism. Suppose $T$ is a
connected $n$-stack which is very presentable (i.e. the fundamental group is
represented by an affine group scheme of finite type denoted $G$ and the higher
homotopy groups are represented by finite dimensional vector spaces). Then the
morphism $Hom (X/S,T) \rightarrow Bun _G(X/S)= Hom (X/S, BG)$ is a geometric
morphism. In particular $Hom (X/S, T)$ is a locally geometric $n$-stack.
\end{theorem}
{\em Proof:}
Suppose $V$ is a finite dimensional vector space. Let
$$
{\cal B} (V,n)= BAut (K(V,n))
$$
be the classifying $n+1$-stack for fibrations with fiber $K(V,n)$. It is
connected with fundamental group $GL(V)$ and homotopy group $V$ in dimension
$n+1$ and zero elsewhere. The truncation morphism
$$
{\cal B} (V,n)\rightarrow B\, GL(V)
$$
has fiber $K(V,n+1)$ and
admits a canonical section $o: BGL(V)\rightarrow {\cal B} (V,n)$ (which corresponds
to the trivial fibration with given action of $GL(V)$ on $V$---this fibration
may itself be constructed as ${\cal B} (V, n-1)$ or in case $n= 2$ as $B(GL(V)
\semidirect V)$). The fiber of the morphism $o$ is $K(V, n)$, and $BGL(V)$ is
the universal object over ${\cal B} (V, n)$.
Note that $BGL(V)$ is a geometric $1$-stack (i.e. algebraic stack) and by
Proposition \ref{fibration} applied to the truncation fibration, ${\cal B} (V, n)$
is a geometric $n+1$-stack.
If $X\rightarrow S$ is a projective flat morphism then $Hom (X/S, BGL(V))$ is
a locally geometric $1$-stack (via the theory of Hilbert schemes). We show
that $p:Hom (X/S, {\cal B} (V, n))\rightarrow Hom (X/S, BGL(V))$ is a geometric
morphism. For this it suffices to consider a morphism $\zeta :Y\rightarrow Hom
(X/S, BGL(V))$ from a scheme $Y/S$ which in turn corresponds
to a vector bundle $V_{\zeta}$ on $X\times _SY$. The fiber
of the map $p$ over $\zeta$ is $\Gamma (X\times _SY/Y; K(V_{\zeta}, n+1))$
which as we have seen above is geometric over $Y$. This shows that $p$ is
geometric. In particular $Hom (X/S, {\cal B} (V, n))$ is locally geometric.
We now turn to the situation of a general connected geometric and very
presentable $n$-stack $T$. Consider the truncation morphism $a:T\rightarrow
T':=\tau
_{\leq n-1}T$. We may assume that the theorem is known for the $n-1$-stack
$T'$. The morphism $a$ is a fibration with fiber $K(V, n)$ so it comes from a
map $b:T' \rightarrow {\cal B} (V, n)$ and more precisely we have
$$
T = T' \times _{{\cal B} (V,n)} BGL(V).
$$
Thus
$$
Hom (X/S, T)= Hom (X/S, T') \times _{Hom (X/S,{\cal B} (V,n))} Hom (X/S,BGL(V)).
$$
But we have just checked that $Hom (X/S,BGL(V))$
and $Hom (X/S,{\cal B} (V,n))$ are locally geometric, and by hypothesis
$Hom (X/S, T')$ is locally geometric. Therefore by the version of
\ref{fiberprod} for locally geometric $n$-stacks, the fiber product is locally
geometric. This completes the proof. \hfill $\Box$\vspace{.1in}
\begin{theorem}
\label{smoothformal}
Suppose $(X,\Lambda )\rightarrow S$ is a smooth
projective morphism with smooth formal category structure relative to $S$.
Let $X_{\Lambda}\rightarrow S$ be the resulting family of stacks.
Suppose $T$ is a connected $n$-stack which is very presentable
(with fundamental group scheme denoted $G$). Then the morphism $Hom
(X_{\Lambda }/S,T) \rightarrow
Hom (X_{\Lambda }/S, BG)$ is a geometric morphism.
In particular $Hom (X_{\Lambda }/S, T)$ is a locally
geometric $n$-stack.
\end{theorem}
{\em Proof:}
The same as before. Note here also that $Hom (X_{\Lambda }/S, BG)$ is an
algebraic stack locally of finite type.
\hfill $\Box$\vspace{.1in}
{\em Remark:} In the above theorems the base $S$ can be taken to be any
$n$-stack; one looks at morphisms having the required properties after base
change to any scheme $Y\rightarrow S$.
\subnumero{Semistability}
Suppose $X\rightarrow S$ is a projective flat morphism, with fixed ample class,
and suppose $G$ is an affine algebraic group. We get a notion of semistability
for $G$-bundles (for example, fix the convention that we speak of Gieseker
semistability). Fix also a collection of Chern classes which we denote $c$. We
get a Zariski open substack
$$
Hom ^{\rm se}_c(X/S, BG)\subset Hom (X/S, BG)
$$
(just the moduli $1$-stack of semistable $G$-bundles with Chern classes $c$).
The boundedness property for semistable $G$-bundles with fixed Chern classes
shows that $Hom ^{\rm se}_c(X/S, BG)$ is a geometric $1$-stack.
Now if $T$ is a connected very presentable $n$-stack, let $G$ be the
fundamental group scheme and let $c$ be a choice of Chern classes for
$G$-bundles. Define
$$
Hom ^{\rm se}_c(X/S, T):= Hom (X/S, T)\times _{Hom (X/S, BG)} Hom ^{\rm
se}_c(X/S, BG).
$$
Again it is a Zariski open substack of $Hom (X/S, T)$ and it is a geometric
$n$-stack rather than just locally geometric.
We can do the same in the case of a smooth formal category $X_{\Lambda}
\rightarrow S$. Make the convention in this case that we ask the Chern classes
to be zero (there is no mathematical need to do this, it is just to conserve
indices, since practically speaking this is the only case we are interested in
below). We obtain a Zariski open substack
$$
Hom ^{\rm se}(X_{\Lambda}/S, BG)\subset Hom (X_{\Lambda}/S, BG),
$$
the moduli stack for semistable $G$-bundles on $X_{\Lambda}$ with vanishing
Chern classes. See \cite{Moduli} for the construction (again the methods given
there suffice for the construction, although stacks are not explicitly
mentioned). Again for any connected very presentable $T$ with fundamental
group scheme $G$ we put
$$
Hom ^{\rm se}(X_{\Lambda}/S, T):= Hom (X_{\Lambda}/S, T)\times _{Hom (X
_{\Lambda}/S, BG)} Hom ^{\rm
se}(X_{\Lambda}/S, BG).
$$
It is a geometric $n$-stack.
Finally we note that in the case of the relative de Rham formal category
$X_{DR/S}$ semistability of principal $G$-bundles is automatic (as is the
vanishing of the Chern classes). Thus
$$
Hom ^{\rm se}(X_{DR/S}/S, T)= Hom (X_{DR/S}/S, T)
$$
and $Hom (X_{DR/S}/S, T)$ is already a geometric $n$-stack.
\subnumero{The Brill-Noether locus}
Suppose $G$ is an algebraic group and $V$ is a representation. Define the
$n$-stack $\kappa (G,V,n)$ as the fibration over $K(G,1)$ with fiber
$K(V,n)$ where $G$ acts on $V$ by the given representation and such that
there is a section. Let $X$ be a projective variety. We have a morphism
$$
Hom (X, \kappa (G,V,n))\rightarrow Hom (X, K(G,1))= Bun _G(X).
$$
The fiber over a point $S\rightarrow Bun _G(X)$ corresponding to a principal
$G$-bundle $P$ on $X\times S$ is the relative section space
$$
\Gamma (X\times S/S, K(P\times ^GV/X\times S, n)).
$$
By the compatibilities given at the start of the section on Dold-Puppe, this
relative section space is the $n$-stack corresponding to the direct image
$Rp_{1,\ast}(P\times ^GV)$ which is a complex over $S$. Note that this
complex is quasiisomorphic to a complex of vector bundles. Thus we have:
\begin{corollary}
\label{BN}
The morphism
$$
Hom (X, \kappa (G,V,n))\rightarrow Bun _G(X)
$$
is a morphism of geometric $n$-stacks.
\end{corollary}
\hfill $\Box$\vspace{.1in}
{\em Remark:} The $Spec ({\bf C} )$-valued points of $Hom (X, \kappa
(G,V,n))$ are the pairs $(P, \eta )$ where $P$ is a principal $G$-bundle
on $X$ and $\eta \in H^n(X, P\times ^GV)$.
Thus $Hom (X, \kappa
(G,V,n))$ is a geometric $n$-stack whose $Spec ({\bf C} )$-points are the
Brill-Noether set of vector bundles with cohomology classes on $X$.
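{\em Example:} (an illustration only; this special case is not used below)
Take $G= GL(r)$ and let $V= {\bf C} ^r$ be the standard representation. A
principal $G$-bundle $P$ on $X$ is the same thing as a rank $r$ vector bundle
$E= P\times ^GV$, so the $Spec ({\bf C} )$-points of $Hom (X, \kappa (G,V,n))$
become the pairs
$$
(E, \eta ), \;\;\; \eta \in H^n(X, E).
$$
For $n=0$ such a pair is a bundle together with a global section, which is the
situation of classical Brill-Noether theory (loci of bundles with a prescribed
number of sections).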
\subnumero{Some conjectures}
We give here some conjectures about the possible extension of the above results
to any (not necessarily connected) geometric $n$-stacks $T$.
\begin{conjecture}
If $T$ is a geometric $n$-stack which is very presentable in the sense of
\cite{RelativeLie} (i.e. the fundamental groups over artinian base are affine,
and the higher homotopy groups are vector sheaves) then for any smooth (or just
flat?) projective morphism $X\rightarrow S$ we have that $Hom (X/S, T)$ is
locally geometric. \end{conjecture}
\begin{conjecture}
\label{KGm2}
If $T= K({\bf G}_m , 2)$ then for a flat projective morphism $X\rightarrow S$,
$Hom (X/S, T)$ is locally geometric. Similarly if $G$ is {\em any} group scheme
of finite type (e.g. an abelian variety)
then $Hom (X/S, BG)$ is locally geometric.
\end{conjecture}
Putting this together with the previous conjecture, we can make:
\begin{conjecture}
If $T$ is a geometric $n$-stack whose $\pi _i$ are vector sheaves for $i\geq 3$
then $Hom (X/S, T)$ is locally geometric.
\end{conjecture}
Note that Conjecture \ref{KGm2} cannot be true if $K({\bf G}_m, 2)$ is
replaced by $K({\bf G}_m, i)$ for $i\geq 3$, for in that case the morphism
stacks will themselves be only locally of finite type. Instead we will get a
``slightly geometric'' $n$-stack as discussed in \S 3. One could make the
following conjecture:
\begin{conjecture}
If $T$ is any geometric (or even locally or slightly geometric) $n$-stack and
$X\rightarrow S$ is a flat projective morphism then $Hom (X/S, T)$ is slightly
geometric.
\end{conjecture}
After these somewhat improbable-sounding conjectures, let us finish by making a
more reasonable statement:
\begin{conjecture}
If $T$ is a very presentable geometric $n$-stack and $X$ is a smooth projective
variety then $Hom (X_{DR}, T)$ is again geometric.
\end{conjecture}
Here, we have already announced the finite-type result in the statement that
\linebreak
$Hom (X_{DR}, T)$ is very presentable \cite{kobe} (I have not yet circulated
the proof, still checking the details...).
\subnumero{GAGA}
Let ${\cal X} ^{\rm an}$ be the site of complex analytic spaces with the etale (or
usual--it's the same) topology. We can make similar definitions of geometric
$n$-stack on ${\cal X} ^{\rm an}$ which we will now denote by {\em analytic
$n$-stack} (in case of confusion...). There are similar definitions of
smoothness and so on.
There is a morphism of sites from the analytic to the algebraic sites.
If $T$ is a geometric $n$-stack on ${\cal X}$ then its pullback by this morphism (cf
\cite{realization}) is an analytic $n$-stack which we denote by $T^{\rm an}$.
We have:
\begin{theorem}
\label{gaga}
Suppose $T$ is a connected very presentable geometric $n$-stack. Suppose
$X\rightarrow S$ is a flat projective morphism (resp. suppose
$X_{\Lambda}\rightarrow S$ is the morphism associated to a smooth formal
category over $S$). Then the natural morphism
$$
Hom (X/S, T)^{\rm an} \rightarrow Hom (X^{\rm an}/S^{\rm an}, T^{\rm an})
$$
$$
\left( \mbox{resp.} Hom (X_{\Lambda}/S, T)^{\rm an} \rightarrow Hom
(X_{\Lambda}^{\rm
an}/S^{\rm an}, T^{\rm an})
\right)
$$
is an isomorphism of analytic $n$-stacks.
\end{theorem}
{\em Proof:}
Just following through the proofs of the facts that $Hom (X/S, T)$
or $Hom (X_{\Lambda}/S, T)$ are geometric, we can keep track of the analytic
case too and see that the morphisms are isomorphisms along with the main
induction.
\hfill $\Box$\vspace{.1in}
{\em Remarks:}
\newline
(1)\, This GAGA theorem holds for $X_{DR}$ with coefficients in any very
presentable $T$ (not necessarily connected) \cite{kobe}.
\newline
(2)\, In \cite{kobe} we also give a ``GFGA'' theorem for $X_{DR}$ with
coefficients in a very presentable $n$-stack.
\newline
(3)\, The GAGA theorem does not hold with coefficients in $T= K({\bf G}_m ,
2)$. Thus the condition that the higher homotopy group sheaves of $T$ be vector
sheaves is essential. Maybe it could be weakened by requiring just that the
fibers over artinian base schemes be unipotent (but this might also be
equivalent to the vector sheaf condition). \newline
(4)\, Similarly the GAGA theorem does not hold with coefficients in
$T= BA$ for an abelian variety $A$; thus again the hypothesis that the fibers of
the fundamental group sheaf over artinian base be affine group schemes, is
essential.
\numero{The tangent spectrum}
We can treat a fairly simple case of the conjectures outlined above: maps from
the spectrum of an Artin local algebra of finite type.
\begin{theorem}
\label{mapsFromArtinian}
Let $X=Spec (A)$ where
$A$ is artinian, local, and of finite type over $k$. Suppose $T$ is a
geometric $n$-stack. Then $Hom (X,T)$ is a geometric $n$-stack. If
$T\rightarrow T'$ is a geometric smooth morphism of $n$-stacks then $Hom (X,
T)\rightarrow Hom (X, T')$ is a smooth geometric morphism of $n$-stacks.
\end{theorem}
{\em Proof:}
We prove the following statement: if $Y$ is a scheme and $A$ as in the theorem,
and if $T\rightarrow Y\times Spec (A)$ is a geometric (resp. smooth geometric)
morphism of $n$-stacks then $\Gamma (Y\times Spec (A)/Y, T)$ is geometric
(resp.
smooth geometric) over $Y$. The proof is by induction on $n$; note that it
works for $n=0$. Now in general choose a smooth surjection $X\rightarrow T$
from a scheme. Then $\Gamma (Y\times Spec (A)/Y, X)$
is a scheme over $Y$, and if $X$ is smooth over $Y$ then the section scheme is
smooth over $Y$. We have a surjection
$$
a:\Gamma (Y\times Spec (A)/Y, X)\rightarrow
\Gamma (Y\times Spec (A)/Y, T),
$$
and for $Z\rightarrow \Gamma (Y\times Spec (A)/Y, X)$ (which amounts to
a section morphism $Z\times Spec (A)\rightarrow T$) the fiber product
$$
\Gamma (Y\times Spec (A)/Y, X)\times _{ \Gamma (Y\times Spec (A)/Y, T)}
Z
$$
is equal to
$$
\Gamma (Z\times Spec (A)/Z, X\times _T(Z\times Spec (A))).
$$
But $X\times _T(Z\times Spec (A))$ is a smooth $n-1$-stack over
$Z\times Spec (A)$ so by induction this section stack is geometric and smooth
over $Z$. Thus our surjection $a$ is a smooth geometric morphism so
$\Gamma (Y\times Spec (A)/Y, T)$ is geometric. The smoothness statement
follows immediately.
\hfill $\Box$\vspace{.1in}
We apply this to define the {\em tangent spectrum} of a geometric
$n$-stack. This is a generalization of the discussion at the end of
(\cite{LaumonMB} \S 9), although we use a different approach because I
don't have
the courage to talk about cotangent complexes!
Recall from \cite{Adams} Segal's infinite loop space machine: let $\Gamma$ be
the category whose objects are finite sets and where the morphisms from
$\sigma$ to $\tau$ are maps $P(\sigma )\rightarrow P(\tau )$ preserving
disjoint unions (here $P(\sigma )$ is the set of subsets of $\sigma$).
A morphism is determined, in fact, by the map $\sigma \rightarrow P(\tau )$
taking different elements of $\sigma$ to disjoint subsets of $\tau$ (note that
the empty set must go to the empty set). Let $[n]$
denote the set with $n$ elements. There is a functor $s:\Delta \rightarrow
\Gamma$ sending the ordered set $\{ 0,\ldots , n\}$ to the finite set $\{ 1,
\ldots , n\}$---see \cite{Adams} p. 64 for the formulas for the morphisms.
Segal's version of an
$E_{\infty}$-space (i.e. infinite loop space) is a contravariant functor
$\Phi : \Gamma \rightarrow Top$ such that the associated simplicial
space (the composition $\Phi \circ s$) satisfies Segal's condition
\cite{Adams}. In order to really get an infinite loop space it is also required
that $\Phi (\emptyset )$ be a point (although this condition seems to have been
lost in Adams' very brief treatment).
Segal's machine is then a classifying space functor $B$ from
special $\Gamma$-spaces to special $\Gamma$-spaces. This actually works even
without the condition that $\Phi (\emptyset )$ be a point, however the
classifying space construction is the inverse to the {\em relative} loop space
construction over $\Phi (\emptyset )$. Note that since $\emptyset$ is a final
object in $\Gamma$ the components of a $\Gamma$-space are provided with a
section from $\Phi (\emptyset )$. If
$\Phi$ is a special $\Gamma$-space then $B^n\Phi$ is again a special
$\Gamma$-space with
$$
B^n\Phi (\emptyset )= \Phi (\emptyset )
$$
and
$$
\Omega ^n(B^n\Phi ([1])/\Phi (\emptyset ))= \Phi ([1]).
$$
The notion of $\Gamma$-space (say with $\Phi ([1])$ rational over $\Phi
(\emptyset )=R$) is another replacement for our notion of spectrum over $R$; we
get to our notion as defined above by looking at $B^N\Phi ([1])$.
The above discussion makes sense in the context of presheaves of spaces over
${\cal X}$ hence in the context of $n$-stacks.
We now try to apply this in our situation to construct the tangent spectrum.
For any object $\sigma \in \Gamma$ let ${\bf A}^\sigma$ be the affine space
over $k$
with basis the set $\sigma$. An element of ${\bf A}^\sigma$ can be written as
$\sum _{i\in \sigma} a_ie_i$ where $e_i$ are the basis elements and $a_i\in k$.
Given a map $f:\sigma \rightarrow P(\tau )$ we define a map
$$
{\bf A}^f : {\bf A}^{\sigma} \rightarrow {\bf A}^{\tau}
$$
$$
\sum a_i e_i \mapsto \sum _{i\in \sigma} \sum _{j\in f(i)\subset \tau}
a_i e_j.
$$
For example there are four morphisms from $[1]$ to $[2]$, sending $1$ to
$\emptyset$, $\{ 1\}$, $\{ 2\}$ and $\{ 1,2\}$ respectively. These correspond
to the constant morphism, the two coordinate axes, and the diagonal from ${\bf
A}^1$ to ${\bf A}^2$. We get a covariant functor from $\Gamma$ to the category
of affine schemes.
For a finite set $\sigma$ let $D^{\sigma}$ denote the subscheme of ${\bf
A}^{\sigma}$ defined by the square of the maximal ideal defining the origin.
These fit together into a covariant functor from $\Gamma$ to the category of
artinian local schemes of finite type over $k$.
If $T$ is a geometric $n$-stack thought of as a strict presheaf of spaces, then
the functor
$$
\Theta :\sigma \mapsto Hom (D^{\sigma}, T)
$$
is a contravariant functor from $\Gamma$ to the category of
geometric $n$-stacks, with $\Theta (\emptyset )=T$. To see that it satisfies
Segal's condition we have to check that the map
$$
Hom (D^n, T)\rightarrow Hom (D^1, T)\times _T \ldots \times _T Hom (D^1, T)
$$
is an equivalence. Once this is checked we obtain a spectrum over $T$ whose
interpretation in our terms is as the $N$-stack $B^N\Theta ([1])$.
In the statement of the following theorem we will normalize our relationship
between complexes and spectra in a different way from before---the most natural
way for our present purposes.
\begin{theorem}
\label{tangent}
Suppose $T$ is a geometric $n$-stack. The above construction gives a spectrum
$\Theta (T)\rightarrow T$ which we call the {\em tangent spectrum of $T$}.
If $Y\rightarrow T$ is a morphism from a scheme then $\Theta (T)\times _TY$
is equivalent to $\sigma (E^{\cdot}/Y)$ for a complex
$$
E^{-n}\rightarrow \ldots \rightarrow E^0
$$
with $E^i$ vector bundles ($i<0$) and $E^0$ a vector scheme over $Y$.
Furthermore if $T$ is smooth then $E^0$ can be assumed to be a vector bundle.
In particular, $\kappa (\Theta (T)/T, n)$ is geometric, and if $T$ is smooth
then $\Theta (T)$ is geometric.
\end{theorem}
{\em Proof of \ref{tangent}:}
The first task is to check the above condition for $\Theta$ to be a special
$\Gamma$-space. Suppose in general that $A,B\subset C$ are closed artinian
subschemes of an artinian scheme with the extension property that for any
scheme $Y$ the morphisms from $C$ to $Y$ are the same as the pairs of morphisms
$A,B\rightarrow Y$ agreeing on $A\cap B$. We would like to show that for any
geometric stack $T$,
$$
Hom (C,T)\rightarrow Hom (A,T)\times _{Hom(A\cap B, T)}Hom (B,T)
$$
is an equivalence. We have a similar relative statement for sections of a
geometric morphism $T\rightarrow Y\times C$ for a scheme $Y$. We prove the
relative statement by induction on the truncation level $n$, but for
simplicity use the notation of the absolute statement. Let $X\rightarrow
T$ be a
smooth geometric morphism from a scheme. Then consider the diagram
$$
\begin{array}{ccc}
Hom (C,X) &\stackrel{\cong}{\rightarrow}&
Hom (A,X)\times _{Hom(A\cap B, X)}Hom (B,X) \\
\downarrow && \downarrow \\
Hom (C,T)&\rightarrow &Hom (A,T)\times _{Hom(A\cap B, T)}Hom (B,T).
\end{array}
$$
It suffices to prove that for a map from a scheme $Y\rightarrow
Hom (C,T)$ the morphism on fibers is an equivalence. The fiber on the left is
$$
Hom (C,X)\times _{Hom (C,T)}Y= \Gamma (Y\times C, X\times _{T}(Y\times C)),
$$
whereas the fiber on the right is
$$
\Gamma (Y\times A, X\times _{T}(Y\times A))
\times _{\Gamma (Y\times (A\cap B), X\times _{T}(Y\times (A\cap B)))}
\Gamma (Y\times B, X\times _{T}(Y\times B)).
$$
By the relative version of the statement for the $n-1$-stack
$X\times _{T}(Y\times C)$ over $Y\times C$, the map of fibers is an equivalence,
so the map
$$
Hom (C,T)\rightarrow Hom (A,T)\times _{Hom(A\cap B, T)}Hom (B,T)
$$
is an equivalence.
Apply this inductively with $C= D^n$, $A= D^1$ and $B= D^{n-1}$ (so $A\cap
B=D^0$). We obtain the required statement, showing that $\Theta$ is a special
$\Gamma$-space relative to $T$. It integrates to a spectrum which we denote
$\Theta (T)\rightarrow T$.
Note that if $T=X$ is a scheme considered as an $n$-stack then $\Theta (X)$ is
just the spectrum associated to the complex consisting of the tangent vector
scheme of $X$ in degree $0$. We obtain the desired statement in this case.
If $R\rightarrow T$ is a morphism of geometric $n$-stacks then
we obtain a morphism of spectra
$$
\Theta (R) \rightarrow \Theta (T)\times _T R .
$$
The cofiber (i.e. $B$ of the fiber) we denote by $\Theta (R/T)$.
We prove more generally---by induction on $n$---that if $T\rightarrow Y$ is a
geometric morphism from an $n$-stack to a scheme, and if $Y\rightarrow T$ is a
section then $\Theta (T/Y)\times _TY$ is associated (locally on $Y$) to a
complex
of vector bundles and a vector scheme at the end; with the last vector scheme
being a bundle if the morphism is smooth. Note that it is true for $n=0$. For
any $n$ choose a smooth geometric morphism $X\rightarrow T$ and we may assume
(by etale localization) that there is a lifting of the section to $Y\rightarrow
X$. Now there is a triangle of spectra (i.e. associated to a triangle of
complexes in the derived category)
$$
\Theta (X)\times _XY \rightarrow \Theta (T)\times _TY \rightarrow B\Theta
(X/T)\times _XY.
$$
On the other hand,
$$
B\Theta (X/T)\times _XY=B\Theta (X\times _TY/Y)\times _{X\times _TY}Y.
$$
By induction this is associated to a complex as desired, and we know already
that $\Theta (X)\times _XY$ is associated to a complex as desired. Therefore
$\Theta(T)\times _TY$ is an extension of complexes of the desired form, so it
has the desired form. Note that since $X\times _TY\rightarrow Y$ is smooth,
by the induction hypothesis we get that $B\Theta (X/T)\times _XY$ is associated
to a complex of bundles.
If the morphism $T\rightarrow Y$ is smooth then the last term in the complex
will be a bundle (again following through the same induction).
\hfill $\Box$\vspace{.1in}
If $T$ is a smooth geometric $n$-stack and $P: Spec (k)\rightarrow T$ is a
point then we say that the {\em dimension of $T$ at $P$} is the alternating sum
of the dimensions of the vector spaces in the complex making up the
complex associated to $P^{\ast} (\Theta (T))$. This could, of course, be
negative.
For example if $G$ is an algebraic group then the dimension of $BG$ at any
point is $-dim (G)$.
More generally if $A$ is an abelian group scheme smooth over a base $Y$ then
$$
dim (K(A/Y, n))= dim (Y) + (-1)^n dim (A).
$$
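As a consistency check combining the two formulas above (under the assumption,
not verified here, that dimensions are additive in a fibration sequence), the
stack $\kappa (G,V,n)$ of the previous section, fibered over $BG= K(G,1)$ with
fiber $K(V,n)$, should have dimension
$$
dim \, \kappa (G,V,n) = -dim (G) + (-1)^n dim (V)
$$
at any point.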
\numero{De Rham theory}
We will use certain geometric $n$-stacks as coefficients to look at the de Rham
theory of a smooth projective variety. The answers come out to be geometric
$n$-stacks. (One could also try to look at de Rham theory {\em for} geometric
$n$-stacks, a very interesting problem but not what is meant by the title of
the present section).
If $X$ is a smooth projective variety let $X_{DR}$ be the stack (which is
actually a sheaf of sets) associated to the formal category whose object of objects
is $X$ and whose morphism object is the formal completion of the diagonal in
$X\times X$. Another cheaper definition is just to say
$$
X_{DR}(Y):= X(Y^{\rm red}).
$$
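For example, if $Y= Spec (A)$ for an artinian local algebra $A$ with residue
field $k$, then $Y^{\rm red}= Spec (k)$ and so $X_{DR}(Y)= X(k)$: the sheaf
$X_{DR}$ is insensitive to infinitesimal thickenings of the test scheme, which
is why maps out of it encode differential-geometric data such as integrable
connections (see below).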
If $f:X\rightarrow S$ is a smooth morphism, let
$$
X_{DR/S}:= X_{DR}\times _{S_{DR}}S.
$$
It is the stack associated to a smooth formal groupoid over $S$ (take the formal
completion of the diagonal in $X\times _SX$).
The cohomology of $X_{DR}$ with coefficients in an algebraic group scheme is
the same as the de Rham cohomology of $X$ with those coefficients.
We treat this in the case of coefficients in a vector space, or in case of
$H^1$ with coefficients in an affine group scheme. Actually the statement is a
more general one about formal categories. Suppose $(X,\Lambda )\rightarrow S$
is a smooth formal groupoid over $S$ which
we can think of as a smooth scheme $X/S$ with a formal scheme $\Lambda$ mapping
to $X\times _SX$ and provided with an associative product structure. There
is an associated {\em de Rham complex} $\Omega ^{\cdot} _{\Lambda}$ on $X$
(cf \cite{Berthelot} \cite{Illusie})---whose components are locally free
sheaves on $X$ and where the differentials are first order differential
operators. Let $X_{\Lambda}$ denote the stack associated to the formal
groupoid. It is the stack associated to the presheaf of groupoids which to $Y\in
{\cal X}$ associates the groupoid whose objects are $X(Y)$ and whose morphisms are
$\Lambda (Y)$.
Suppose $V$ is a vector bundle over $X_{\Lambda}$, that is a vector bundle
on $X$ together with isomorphisms $p_1^{\ast} V\cong p_2^{\ast} V$ on $\Lambda$
satisfying the cocycle condition on $\Lambda \times _X \Lambda$.
We can define the cohomology sheaves on $S$, $H^i(X_{\Lambda}/S, V)$ which will
be equal to $\pi _0(\Gamma (X_{\Lambda }/S, K(V, i)))$ in our notations. These
cohomology sheaves can be calculated using the de Rham complex: there is a
twisted de Rham complex $\Omega ^{\cdot}_{\Lambda} \otimes _{{\cal O}}V$ whose
hypercohomology is $H^i(X_{\Lambda}/S, V)$.
When applied to the de Rham formal category (the trivial example introduced in
\cite{Berthelot} in characteristic zero) whose associated stack is the sheaf of
sets $X_{DR/S}$, we obtain the usual de Rham complex $\Omega ^{\cdot}_{X/S}$
relative to $S$. A vector bundle $V$ over $X_{DR/S}$ is the same thing as a
vector bundle on $X$ with integrable connection, and the twisted de Rham
complex is the usual one. Thus in this case we have
$$
\pi _0(\Gamma (X_{DR/S}/S, K(V,i)))= {\bf H}^i(X/S, \Omega ^{\cdot}_{X/S}\otimes
V).
$$
We can describe more precisely $\Gamma (X_{DR/S}/S, K(V,i))$
as being the $i$-stack obtained by applying Dold-Puppe to the right derived
direct
image complex $Rf _{\ast} (\Omega ^{\cdot}_{X/S}\otimes
V)[i]$ (appropriately shifted).
For the first cohomology with coefficients in an affine algebraic group $G$,
note that a principal $G$-bundle on $X_{DR}$ is the same thing as a principal
$G$-bundle on $X$ with integrable connection. We have that the $1$-stack
$\Gamma (X_{DR/S}/S, BG)$ on $S$ is the moduli stack of principal $G$-bundles
with relative integrable connection on $X$ over $S$. For $X\rightarrow S$
projective this is constructed in \cite{Moduli} (in fact, there we construct
the representation scheme of framed principal bundles; the moduli stack is
immediately obtained as an algebraic stack, the quotient stack by the action
of $G$ on the scheme of framed bundles).
Of course we have seen in \ref{smoothformal} that for any smooth formal category
$(X,\Lambda )$ over $S$ and any connected very presentable $n$-stack $T$,
the morphism $n$-stack $Hom (X_{\Lambda} /S, T)$ is a locally geometric
$n$-stack. Recall that we have defined the {\em semistable} morphism stack
$Hom ^{\rm se}(X_{\Lambda} /S, T)$ which is geometric; but in our case all
morphisms $X_{DR/S}\rightarrow BG$ (i.e. all principal $G$-bundles with
integrable connection) are semistable, so in this case we find that $Hom
(X_{DR/S}/S, T)$ is a geometric $n$-stack. In fact it is just a successive
combination of the above discussions applied according to the Postnikov
decomposition of $T$.
\subnumero{De Rham theory on the analytic site}
The same construction works for smooth objects in the analytic site.
Suppose $f:X\rightarrow S$ is a smooth analytic morphism.
Here
we would like to consider any connected $n$-stack $R$ whose homotopy
groups are represented by analytic Lie groups. Such an $R$ is automatically an
analytic $n$-stack (by the analytic analogue of \ref{fibration}). We call these
the ``good connected analytic $n$-stacks'' since we haven't yet proven that
every
connected analytic $n$-stack must be of this form (I suspect that to be true but
don't have an argument).
If $G$ is an analytic Lie group, a map $X_{DR/S}\rightarrow
BG$ is a principal
$G$-bundle $P$ on $X$ together with an integrable connection
relative to $S$.
Suppose $A$ is an analytic abelian Lie group with action of $G$. Then we can
form the analytic $n$-stack $\kappa (G, A, n)$ with fundamental group $G$ and
$\pi _n= A$. Given a map $X_{DR/S}\rightarrow BG$ corresponding to a
principal bundle $P$, we would like to study the liftings into $\kappa (G,A,n)$.
We obtain the twisted analytic Lie group $A_P:= P\times ^GA$ over $X$ with
integrable connection relative to $S$. Let $V$ denote the universal
covering group of $A$ (isomorphic to $Lie (A)$, thus $G$ acts here) and let
$L\subset V$ denote the kernel of the map to $A$. Note that $V$ is a complex
vector space and $L$ is a lattice isomorphic to $\pi _1(A)$. Again $G$ acts on
$L$. We obtain after twisting $V_P$ and $L_P$. Note that $V_P$ is provided
with an integrable connection relative to $S$. The following Deligne-type
complex calculates the cohomology of $A_P$:
$$
C^{\cdot}_{{\cal D}} (A_P):= \{ L_P \rightarrow V_P \rightarrow \Omega
^1_{X/S}\otimes
_{{\cal O}} V_P \rightarrow \ldots \} .
$$
The $n$-stack $\Gamma (X_{DR/S}/S, K(A_P,n))$ is again obtained by applying
Dold-Puppe to the shifted right derived direct image complex
$Rf_{\ast}(C^{\cdot}_{{\cal D}}(A_P))[n]$. We can write
$ C^{\cdot}_{{\cal D}} (A_P)$ as the mapping cone of a map of complexes
$L_P \rightarrow U^{\cdot}_P$.
If $f$ is a projective morphism then applying GAGA and the argument of Mumford
(actually I think there is an argument of Grauert which treats this for any
proper map), we get that
$$
Rf_{\ast}(U^{\cdot}_P)
$$
is quasiisomorphic to a complex of analytic Lie groups (vector bundles in this
case). On the other hand, locally on the base the direct image
$Rf_{\ast}(L_P)$ is a trivial complex, so quasiisomorphic to a
complex of (discrete) analytic Lie groups. The direct image
$Rf_{\ast}(C^{\cdot}_{{\cal D}}(A_P))$ is the mapping cone of a map of these
complexes, so the associated spectrum fits into a fibration sequence. The base
and the fiber are analytic $N$-stacks, so the total space is also an analytic
$N$-stack. Thus the spectrum associated to
$Rf_{\ast}(C^{\cdot}_{{\cal D}}(A_P))$ is analytic over $S$. In particular its
realization $\Gamma (X_{DR/S}/S, K(A_P,n))$ is a
geometric $n$-stack over $S$.
For $Hom (X_{DR/S}/S, BG)$ we can use the Riemann-Hilbert correspondence (see
below) to see that it is an analytic $1$-stack.
The same argument as in Theorem \ref{maps} now shows
that for any good connected analytic $n$-stack $T$, the $n$-stack of morphisms
$Hom (X_{DR/S}/S, T)$ is an analytic $n$-stack over $S$.
If the base $S$ is a point we don't need to make use of Mumford's argument,
so the same holds true for any proper smooth analytic space $X$.
{\em Caution:} There is (at least) one gaping hole in the above argument,
because we are applying Dold-Puppe for complexes of ${\bf Z}$-modules such as $L_P$
or its higher direct image, which are not complexes of rational vector spaces.
Thus this doesn't fit into the previous discussion of Dold-Puppe, spectra etc.
as we have set it up. In particular there may be problems with torsion, finite
groups or subgroups of finite index in the above discussion. The reader is
invited to try to figure out how to fill this in (and to let me know if he
does).
\subnumero{The Riemann-Hilbert correspondence}
We can extend to our cohomology stacks the classical Riemann-Hilbert
correspondence. We start with a statement purely in the analytic case.
In order to avoid confusion between the analytic situation and the algebraic
one, we will append the superscript {\em `an'} to objects in the analytic site,
even if they don't come from objects in the algebraic site. We will make clear
in the hypothesis whenever our analytic objects actually come from algebraic
ones.
\begin{theorem}
\label{analyticRiemannHilbert}
Suppose $T^{\rm an}$ is a good connected analytic $n$-stack, and suppose
$X^{\rm an}$ is a smooth proper complex analytic space. Define $X^{\rm an}_{DR}$
as above. Let $X^{\rm an}_B$ denote the $n$-stack associated to the constant
presheaf of spaces which to each $Y^{\rm an}$ associates the topological space
$X^{\rm top}$. Then there is a natural equivalence of analytic $n$-stacks $Hom
(X^{\rm an}_{DR}, T^{\rm an}) \cong Hom (X^{\rm an}_B, T^{\rm an})$.
\end{theorem}
{\em Proof:}
By following the same outline as the argument given in \ref{maps}, it
suffices to
see this for the cases $T^{\rm an} = BG^{\rm an}$ for an analytic Lie group
$G^{\rm an}$, and $T^{\rm an}= {\cal B} (A^{\rm an}, n)$ for an abelian analytic Lie
group $A^{\rm an}$. In the
second case we reduce to the case of cohomology with coefficients in a twisted
version of $A^{\rm an}$. We now leave it to the reader to verify these cases
(which are standard examples of using analytic de Rham cohomology to calculate
singular cohomology).
\hfill $\Box$\vspace{.1in}
{\em Remark:} For convenience we have stated only the absolute version. We
leave it to the reader to obtain a relative version for a smooth projective
morphism $f: X\rightarrow S$.
Now we turn to the algebraic situation.
We can combine the above result with GAGA to obtain:
\begin{theorem}
\label{algebraicRiemannHilbert}
Suppose $T$ is a connected very presentable algebraic $n$-stack, and suppose
$X$ is a smooth projective variety. Define $X_{DR}$ as above.
Let $X_B$ denote the $n$-stack associated to the constant presheaf of spaces
which to each $Y$ associates the topological space $X^{\rm top}$. Then
there is a
natural equivalence of analytic $n$-stacks
$Hom (X_{DR}, T)^{\rm an} \cong Hom (X_B, T)^{\rm an}$.
\end{theorem}
{\em Proof:}
By GAGA we have
$$
Hom (X_{DR}, T)^{\rm an} \cong Hom (X^{\rm an}_{DR}, T^{\rm an}).
$$
Similarly the calculation of $Hom (X_B,T)$ using a cell decomposition of
$X_B$ and fiber products yields the equivalence
$$
Hom (X_B,T)^{\rm an}\cong Hom (X^{\rm an}_B, T^{\rm an}).
$$
Putting these together we obtain the desired equivalence.
\hfill $\Box$\vspace{.1in}
\subnumero{The Hodge filtration}
Let $H:= {\bf A}^1/{\bf G}_m$ be the quotient stack of the affine line modulo
the action of the multiplicative group. This has a Zariski open substack which
we denote $1\subset H$; note that $1\cong Spec ({\bf C} )$. There is a closed
substack $0\subset H$ with $0\cong B{\bf G}_m$.
As in \cite{SantaCruz} we can define a smooth formal category $X_{\rm
Hod}\rightarrow H$ whose fiber over $1$ is $X_{DR}$ and whose fiber over $0$ is
$X_{Dol}$.
Suppose $T$ is a connected very presentable $n$-stack. Then we obtain the
relative semistable morphism stack
$$
Hom ^{\rm se}(X_{\rm Hod}/H, T) \rightarrow H.
$$
In the case $T=BG$ this was interpreted as the {\em Hodge filtration on ${\cal M}
_{DR}=Hom (X_{DR}, BG)$}. Following this interpretation, for any connected very
presentable $T$ we call this relative morphism stack the {\em Hodge filtration
on the higher nonabelian cohomology stack $Hom (X_{DR}, T)$}.
Note that when $T= K({\cal O} , n)$ we recover the usual Hodge filtration on the
algebraic de Rham cohomology, i.e. the cohomology of $X_{DR}$ with coefficients
in ${\cal O}$.
The above general definition is essentially just a mixture of the case $BG$ and
the cases $K({\cal O} , n)$ but possibly with various twistings.
{\em The analytic case:} The above discussion works equally well for a smooth
proper analytic variety $X$. For any good connected analytic $n$-stack $T$ we
obtain the relative morphism stack
$$
Hom (X_{\rm Hod}/H^{\rm an}, T) \rightarrow H^{\rm an}.
$$
Note that there is no question of semistability here. The moduli stack of flat
principal $G$-bundles $Hom (X_{\rm Hod}/H^{\rm an}, BG)$ is still an analytic
$n$-stack because in the analytic category there is no distinction between
finite type and locally finite type.
In case $X$ is projective and $G= \pi _1(T)$ affine algebraic we can
put in the semistability condition and get
$$
Hom ^{\rm se}(X_{\rm Hod}/H^{\rm an}, T) \rightarrow H^{\rm an}.
$$
If $T$ is the analytic stack associated to an algebraic geometric $n$-stack
then this analytic morphism stack is the analytic stack associated to the
algebraic morphism stack.
\subnumero{The Gauss-Manin connection}
Suppose $X\rightarrow S$ is a smooth projective morphism and $T$ a connected
very presentable $n$-stack. The formal category $X_{DR/S}\rightarrow S$ is
pulled back from the morphism $X_{DR}\rightarrow S_{DR}$ via the map
$S\rightarrow S_{DR}$. Thus
$$
Hom (X_{DR/S}/S,T) = Hom (X_{DR}/S_{DR}, T)\times _{S_{DR}}S.
$$
Thus the morphism stack $Hom (X_{DR/S}/S,T)$ descends down to an $n$-stack over
$S_{DR}$. If $Y\rightarrow S_{DR}$ is a morphism from a scheme then locally in
the etale topology it lifts to $Y\rightarrow S$. We have
$$
Hom (X_{DR}/S_{DR},T)\times _{S_{DR}}Y=
Hom (X_{DR}\times _{S_{DR}}Y/Y, T)=
$$
$$
Hom (X_{DR/S}\times _SY/Y,T)= Hom
(X_{DR/S}/S,T)\times _SY.
$$
The right hand side is a geometric $n$-stack, so this shows that the morphism
$$
Hom (X_{DR}/S_{DR},T)\rightarrow S_{DR}
$$
is geometric. This descended structure is the {\em Gauss-Manin connection} on
$Hom (X_{DR/S}/S,T)$. In the case $T=BG$ this gives the Gauss-Manin connection
on the moduli stack of $G$-bundles with flat connection (cf \cite{Moduli},
\cite{SantaCruz}). In the case $T= K(V,n)$ this gives the Gauss-Manin
connection on algebraic de Rham cohomology.
In \cite{SantaCruz} we have indicated, for the case $T=BG$, how to obtain the
analogues of {\em Griffiths transversality} and {\em regularity} for the
Hodge filtration and Gauss-Manin connection.
Exactly the same constructions work
here. We briefly review how this works. Suppose $X\rightarrow S$ is a
smooth projective family
over a quasiprojective base (smooth, let's say) which extends to a family
$\overline{X}\rightarrow \overline{S}$ over a normal crossings compactification
of the base. Let $D= \overline{X}-X$ and $E= \overline{S}-S$.
Recall that $\overline{X}_{\rm Hod}(\log D)\rightarrow H$ is the
smooth formal category whose underlying space (stack, really, since we have
replaced ${\bf A}^1$ by its quotient $H$) is $\overline{X}\times H$ and whose associated
de Rham complex is $(\Omega ^{\cdot}_{\overline{X}}(\log D), \lambda d)$ where
$\lambda $ is the coordinate on $H$ (actually to be correct we have to twist
everything by line bundles on $H$ to reflect the quotient by ${\bf G}_m$ but I
won't put this into the notation). Similarly we obtain the formal category
$\overline{S}_{\rm Hod}(\log E)\rightarrow H$, with a morphism
$$
\overline{X}_{\rm Hod}(\log D) \rightarrow
\overline{S}_{\rm Hod}(\log E).
$$
If we pull back by $\overline{S}\rightarrow \overline{S}_{\rm Hod}(\log E)$
then we get a smooth formal category over $\overline{S}$. Thus by
\ref{smoothformal} for any connected very presentable $n$-stack $T$ the
morphism $$
Hom (\overline{X}_{\rm Hod}(\log D) /
\overline{S}_{\rm Hod}(\log E), T)\rightarrow
\overline{S}_{\rm Hod}(\log E)
$$
is a geometric morphism. The existence of this extension (which over the open
subset $S_{DR}\subset \overline{S}_{\rm Hod}(\log E)$ is just the
Gauss-Manin family $Hom (X_{DR}/S_{DR}, T)$) expresses both the Griffiths
transversality of the Hodge filtration and regularity of the Gauss-Manin
connection.
This is discussed in more detail in \cite{SantaCruz} in the case $T=BG$ or
particularly $BGL(r)$---I just wanted to make the point here that the same
thing goes through for any connected very presentable $n$-stack $T$.
The same thing will work in an analytic setting, but in this case we can use
any good connected analytic $n$-stack $T$ as coefficients.
\newcommand{\resection}[1]{\setcounter{equation}{0} \section{#1}}
\newcommand{\appsection}{\addtocounter{section}{1} \setcounter{equation}{0}
\section*{Appendix \Alph{section}}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\textwidth 159mm
\textheight 230mm
\begin{document}
\oddsidemargin 5mm
\setcounter{page}{0}
\newpage
\setcounter{page}{0}
\begin{titlepage}
\begin{flushright}
ISAS/EP/96/105
IC/96/168
\end{flushright}
\vspace{0.5cm}
\begin{center}
{\large {\bf On the Form Factors of Relevant Operators \\
and their Cluster Property}}\\
\vspace{1.5cm}
{\bf C. Acerbi$^{a,b}$, G. Mussardo$^{a,b,c}$ and A. Valleriani$^{a,b}$
}\\
\vspace{0.5cm}
{\em $^a$ International School for Advanced Studies, \\
Via Beirut 3, 34014 Trieste, Italy} \\
\vspace{1mm}
{\em $^b$ Istituto Nazionale di Fisica Nucleare, \\
Sezione di Trieste, Italy} \\
\vspace{1mm}
{\em $^c$ International Centre for Theoretical Physics,\\
Strada Costiera 11, 34014 Trieste, Italy} \\
\end{center}
\vspace{3mm}
\begin{abstract}
\noindent
We compute the Form Factors of the relevant scaling operators in a class of
integrable models without internal symmetries by exploiting their cluster
properties. Their identification is established by computing the
corresponding anomalous dimensions by means of the Delfino--Simonetti--Cardy
sum--rule and further confirmed by comparing some universal ratios of the
nearby non--integrable quantum field theories with their independent
numerical determination.
\end{abstract}
\vspace{5mm}
\end{titlepage}
\newpage
\setcounter{footnote}{0}
\resection{Introduction}
In this work we present a detailed investigation of the matrix elements
\begin{equation}
F^{\cal O}_{a_1,a_2,\ldots,a_n}(\beta_1,\ldots,\beta_n) \,=\,
\langle 0 \mid {\cal O}(0)\mid A_{a_1}(\beta_1) \ldots A_{a_n}(\beta_n)\rangle
\label{FF}
\end{equation}
(the so--called Form Factors (FF)) in a class of integrable two--dimensional
quantum field theories. Our specific aim is to check some new theoretical
ideas which concern the relationships between three different regimes which
two--dimensional quantum field theories may have, namely the ones ruled by
conformal invariance, integrable or non--integrable dynamics.
Conformal Field Theories (CFT) and the associated off-critical
Integrable Models (IM) have been extensively studied in recent years:
as a result of these analyses a good deal of information has been obtained
particularly on correlation functions of a large number of statistical
mechanical models in their scaling limit and on physical quantities
related to them (see for instance [1-8]). In this context, a crucial
problem often consists in the determination of the spectrum of the
scaling operators away from criticality, namely their correct identification
by means of the set of their Form Factors. This is one of the issues
addressed in this work.
Form Factors also play a crucial role in estimating non--integrable
effects. Let us first recall that the above CFT and IM regimes cannot
obviously exhaust all possible behaviours that statistical models
and quantum field
theories can have since typically they do not possess an infinite
number of conservation laws. This means that in general we have to face
all kinds of phenomena and complications associated to Non--Integrable Models
(NIM). The scattering amplitudes and the matrix elements of the quantum
operators will have in these cases a pattern of analytic singularities due
both to the presence of higher thresholds and to the appearance of
resonances. A first step forward in their analysis has been recently
taken in ref.\,\cite{NIM} where it has been shown that some interesting
examples of NIM may be obtained as deformations of integrable models.
The action of such theories can correspondingly be written as
\begin{equation}
{\cal A} \,=\,{\cal A}_{int} +
\sum_i\,\lambda_i\int d^{\,2} x \, \Psi_i(x) \,\,\,,
\label{action}
\end{equation}
${\cal A}_{int}$ being the action of the integrable model.
Since the exact expressions (\ref{FF}) of the Form Factors of the integrable
theories are all assumed calculable, in particular the ones of the fields
$\Psi_i(x)$ entering eq.\,(\ref{action}), one is inclined to study the
non--integrable effects by using the Born series based on the Form Factors.
Although at first sight this still remains a difficult task (and in general
it is indeed so), there may be favorable circumstances where the analysis
simplifies considerably. For instance, as long as there is only a soft
breaking of integrability, it has been shown in \cite{NIM} that
the complications of the higher terms in the series can often be avoided
since the most relevant corrections only come from the lowest approximation.
If this is the case, one can extract a considerable amount of
information with relatively little effort: a significant set of physical
quantities to look at is provided for instance by universal ratios, like
the ones relative to the variations of the masses or of the vacuum energy
density ${\cal E}_{vac}$: if the breaking of integrability is realized
by means of a single field $\Psi(x)$, those are expressed by
\begin{equation}
\begin{array}{l}
\begin{displaystyle}
\frac{\delta m_i}{\delta m_j} \,=\, \frac{m_j^{(0)}}{m_i^{(0)}}
\frac{F_{ii}^{\Psi}(i \pi)}{F_{jj}^{\Psi}(i \pi)} \,\, , \end{displaystyle} \\
\\
\begin{displaystyle} \frac{\delta {\cal E}_{vac}}{m_1^{(0)} \delta m_1} \, = \,
\frac{\langle 0 \mid \Psi\mid 0\rangle}{F_{11}^{\Psi}(i \pi)} \,\, ,
\end{displaystyle}
\end{array}
\label{nif}
\end{equation}
where $m_i^{(0)}$ refers to the (unperturbed) mass spectrum of the
original integrable theory. It is thus evident that, in order to
estimate the non--integrable effects associated to a given operator $\Psi(x)$,
one must also face the problem of correctly identifying its FF's.
Two new results on the relationship between CFT and IM have been recently
derived by Delfino, Simonetti and Cardy \cite{DSC}. Briefly stated, the
first result consists in a new sum--rule which relates the conformal dimension
$\Delta^{\phi}$ of the operator $\phi(x)$ to the off--critical (connected)
correlator $\langle \Theta(x) \phi(0) \rangle_c$, where $\Theta(x)$ is the
trace of the stress--energy tensor\footnote{The sum--rule
in the form of eq.\,(\ref{sumrule1}) may be violated by effect of
renormalization of the operators outside the critical point, as
clarified in the original reference \cite{DSC}. This is however
not the case for the field theories and the operators considered
in this work.}
\begin{equation}
\Delta^{\phi} \,=\, -\frac{1}{4 \pi \langle \phi\rangle}
\int d^2x \, \langle \Theta(x) \phi(0) \rangle_c \,\,\, .
\label{sumrule1}
\end{equation}
This sum--rule is closely related to the analogous expression for the
conformal central charge $c$ \cite{ctheorem}
\begin{equation}
c\,=\, \frac{3}{4 \pi}
\int d^2x \, \mid x\mid^2 \langle \Theta(x) \Theta(0) \rangle_c \,\,\, .
\label{sumrule2}
\end{equation}
Equations (\ref{sumrule1}) and (\ref{sumrule2}) express elegant relationships between conformal
and off--critical data, but more importantly, they provide very concrete and
efficient tools to characterise the scaling limit of the off-critical models.
As for the second result, it has been suggested by the aforementioned
authors of ref. \cite{DSC}, that the Form Factors of the relevant scaling
fields\footnote{Hereafter we are using the short expression ``scaling
fields" to actually denote the off-critical operators which reduce to
the scaling fields in the conformal limit.} of an integrable field theory
-- in absence of internal symmetries -- are in one--to--one correspondence
with the independent solutions of the so called {\em cluster equations}
\begin{eqnarray}
\lefteqn{\lim_{\Lambda \rightarrow \infty}
F^\Phi_{a_1,a_2,\ldots,a_k,a_{k+1},\ldots,a_{k+l}}
(\theta_1,\theta_2,\ldots,\theta_k,\Lambda + \theta_{k+1},
\ldots,\Lambda + \theta_{k+l}) \,=\, } \nonumber \\
& & \frac{1}{\langle \Phi\rangle}\,
F^\Phi_{a_1,a_2,\ldots,a_k}(\theta_1,\theta_2,\ldots,\theta_k)\,
F^\Phi_{a_{k+1},\ldots,a_{k+l}}(\theta_{k+1},\ldots,\theta_{k+l})
\label{cluster}
\end{eqnarray}
These equations can be imposed on the Form Factors {\em in addition} to the
customary functional and residue equations which they satisfy (see in this
respect also \cite{Smirnov,SG}). If this {\em cluster hypothesis}
is valid, we would have
a clear method to identify the matrix elements of all the
relevant operators, at least in the case of theories without
symmetries. It must be stressed that until now this task has often
been a matter of keen guess--work and mostly based on physical intuition.
It turns out that a check of the above {\em cluster hypothesis} provides
a well--suited testing ground for several theoretical aspects. In fact, the
most direct way of confirming the above idea is firstly to solve the
general functional equations of the Form Factors with the additional
constraints of the cluster equations (\ref{cluster}) and to see whether
{\em the number of independent solutions equals the number of relevant
fields} in the corresponding Kac table. If the above check turns out to
be positive, one may use the sum--rule (\ref{sumrule1}) in order
to achieve the correct identification of the (supposed) primary relevant
operators $\phi_i$: from the values of the partial sums one can in fact
infer the value of the anomalous dimension and correspondingly
recognize the operator. An additional confirmation
may also come from the employment of eqs.\,(\ref{nif})
relative to non--integrable field theories. In fact, one can regard the
primary field $\phi_i(x)$ under investigation as that
operator which spoils the integrability of the original theory and
therefore compare the predictions (\ref{nif}) based on its Form Factors
with their independent numerical determinations which may be obtained by
means of the truncation method \cite{TCS}. Note that a successful
test of this kind could also be interpreted the other way around, namely as
a further proof of the effectiveness of the formulas (\ref{nif}) in
estimating non--integrable effects.
The models on which we have chosen to test the above considerations
are integrable deformations of the first representatives of the
non--unitary
conformal series\footnote{The conformal weights and central charge
are given respectively by
\begin{eqnarray}
&& \Delta_{1,a} = - \frac{(a - 1)\, (2n -a)}{2 \,(2n +1)}, \hspace{3mm}
a = 1,2,\ldots, 2n \nonumber \\ && \nonumber \\
&& c = -\frac{2 \,(6 n^2 - 7 n +1)}{2 n+1} \,\, , \hspace{5mm} n=2,3,\ldots
\nonumber
\end{eqnarray}
}
${\cal M}(2,2\,n+1)$, $n \geq 2$.
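As a quick sanity check of the conformal data above, the two footnote formulas can be evaluated exactly. The following sketch (in Python, purely illustrative and not part of the paper's apparatus) reproduces the dimensions quoted below for ${\cal M}(2,7)$ and ${\cal M}(2,9)$, together with the standard Yang--Lee values $\Delta = -1/5$, $c = -22/5$:

```python
from fractions import Fraction

def delta(n, a):
    # Conformal weight Delta_{1,a} of M(2, 2n+1), with a = 1, ..., 2n
    return -Fraction((a - 1) * (2 * n - a), 2 * (2 * n + 1))

def central_charge(n):
    # Central charge c of M(2, 2n+1)
    return -Fraction(2 * (6 * n * n - 7 * n + 1), 2 * n + 1)

# n = 2: Yang--Lee model M(2,5)
assert delta(2, 2) == Fraction(-1, 5) and central_charge(2) == Fraction(-22, 5)

# n = 3: M(2,7), weights -2/7 and -3/7 (Section 3)
assert [delta(3, a) for a in (2, 3)] == [Fraction(-2, 7), Fraction(-3, 7)]

# n = 4: M(2,9), weights -1/3, -5/9 and -2/3 (Section 4)
assert [delta(4, a) for a in (2, 3, 4)] == \
    [Fraction(-1, 3), Fraction(-5, 9), Fraction(-2, 3)]
```

Exact rational arithmetic makes the comparison with the Kac-table values unambiguous.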
They belong to the class of universality of solvable RSOS lattice models
{\em \`{a} la} Andrews--Baxter--Forrester although with negative Boltzmann
weights \cite{ABF,Riggs}: their simplest example is given by the quantum
field theory associated to the so--called Yang--Lee model which describes
the distribution of zeros in the grand canonical partition function of the
Ising model in a complex magnetic field \cite{Fisher,CMYL}. These models
do not have any internal symmetry and all their fields are relevant
operators: hence, they are ideal for our purposes. Moreover,
the nature of their massive and conformal phases is simple enough. The
price to pay for their relative simplicity is however the presence of
typical non--unitary phenomena, such as imaginary coupling constants or negative
values of the anomalous dimensions and central charge, together
with the anomalous poles in the $S$--matrix which induce an
unusual analytic structure in the Form Factors \cite{BPZ,CMYL,KMM}.
The paper is organized as follows. In Section 2 we discuss the general
strategy which can be employed in order to compute the FF's
of the relevant operators in the integrable deformations of the models
${\cal M}(2,2\,n+1)$. In Section 3 and 4 we present a detailed analysis
of the FF's of the models ${\cal M}(2,7)$ and ${\cal M}(2,9)$, which are
the first non--trivial examples on which to check all the theoretical
ideas discussed above. In fact, for the first model ${\cal M}(2,5)$
the {\em cluster hypothesis} is easily verified: the only solution of the
Form Factor equations is the sequence of functions determined in ref. \cite{YL}
which indeed fulfill the cluster equations (\ref{cluster}) and are easily
identified with the matrix elements of the only relevant field
of the Yang--Lee model. The two models ${\cal M}(2,7)$ and ${\cal M}(2,9)$
represent, in a sense, the best playground for our purposes
because they give rise to integrable models under each of their
possible (individual) deformations and also because they keep within
manageable size the lengthy numerical output which we present for the
solutions of the non--linear equations.
is in principle no obstacle to extend the analysis to all the models
${\cal M}(2,2\,n+1)$, these are the simplest cases from a computational
point of view since the larger the value of the index $n$,
the higher the order of the system of algebraic equations to be
solved for determining the Form Factors. Finally, our conclusions
are in Section 5. Two appendices complete the paper: Appendix A
gathers all important formulas relative to the parameterization
of the two--particle Form Factors and Appendix B collects the
$S$--matrices of the models analysed.
\resection{Outline of Our Strategy}
In this section we discuss the general strategy needed in order to obtain
the Form Factors of the scaling primary fields of the integrable
deformations $\phi_{1,k}$ of the conformal models ${\cal M}(2,2n+1)$
(hereafter denoted by the shorthand notation $[{\cal M}(2,2n+1)]_{(1,k)}$).
The deforming field $\phi_{1,k}$ can be one of the operators $\phi_{1,2}$,
$\phi_{1,3}$ or possibly some other primary field which gives rise to an
integrable deformation.
The starting point in the computation of the Form Factors is
the correct parameterization of the two--particle ones, which is given
in detail in Appendix A. This is a non--trivial task in the case of
non--unitary models because the exact $S$--matrices of
these models are usually plagued by a plethora of anomalous poles
\cite{KMM}. By this we mean for example simple poles which are not
related to any bound state, or, more generally, any poles which
apparently do not admit the standard diagrammatic interpretation of
refs.\,\cite{multiple}. Consider for example the $S$--matrices
listed in the tables of Appendix B relative
to the integrable deformations of the models ${\cal M}(2,7)$ and
${\cal M}(2,9)$ where the anomalous poles have been labelled
with ${\cal B}$, ${\cal D}$ or $*$. The origin of these poles
may be explained according to the ideas put forward in \cite{generalized}.
In particular, poles of type ${\cal B}$ and ${\cal D}$
are due to multiparticle processes of the kind described by
the ``butterfly'' and ``dragonfly'' diagrams drawn in
Figures 2 and 3, respectively. These multi--loop processes
induce in the $S$--matrix simple poles rather than higher order
ones because the internal lines of these diagrams cross at relative
rapidity values corresponding to zeros of the associated
two--particle $S$--matrix element: this gives rise to a partial
cancellation of the poles.
The adopted parameterization for two--particle FF's is directly related
to the pole structure of the $S$--matrix. This leads to the
expression (\ref{Fab}), whose functional form is fixed except for the
coefficients $a_{ab,\Phi}^{(k)}$ appearing in the expansion (\ref{qgen})
of the polynomials $Q_{ab}^{\Phi}(\theta)$. The degree $k_{ab,\Phi}^{\rm max}$
of these polynomials is fixed by the asymptotic behavior of the FF's
for large rapidities which depends, of course, on the field $\Phi$ \cite{DM}.
For the case of two--particle FF's of cluster operators, it is easy to
see that for large $\theta$ they can approach at most a constant
limit\footnote{The limit may vanish in the presence of symmetries.}.
In fact, for two--particle FF's eqs. (\ref{cluster}) read
\begin{equation} \label{cluster2}
\lim_{\theta \rightarrow \infty} F_{ab}^\Phi(\theta) =
F_a^\Phi \, F_b^\Phi\,\,.
\end{equation}
Hereafter we deal with dimensionless cluster operators
which are normalized in such a way as to have a vacuum expectation value
equal to one\footnote{Since the relevant primary operators will be identified
with the cluster ones
(except their dimensional factors which can be easily restored),
in the sequel we will adopt the same normalization also for them.}
\begin{equation}
\langle 0 | \Phi (0) | 0 \rangle = F_0^\Phi = 1 \,\,.
\end{equation}
In order to fully determine the FF's of the cluster operators
we have chosen to focus on the set of all one-- and two--particle FF's.
Listing all the relations among them, one obtains a system
of equations in the unknown parameters $F_a^\Phi$ and $a_{ab,\Phi}^{(k)}$.
Let us see then all information we have on the FF's.
The first equations that one must consider are the {\em dynamical residue
equations} resulting from the detailed analysis of the poles they are
endowed with. These equations relate FF's with different external particles
and may have a different origin. In particular, for every simple bound state
pole of the amplitude $S_{ab}$ at angle $\theta = i u_{ab}^{c}$ relative
to the particle $A_c$ (see Figure 1), we have
\begin{equation}
\label{boundfpole}
\lim_{\theta \rightarrow i u_{ab}^{c}}(\theta -iu_{ab}^{c})
F^{\Phi}_{ab}(\theta)= i \,\Gamma_{ab}^{c} F^{\Phi}_{c} \,\, ,
\end{equation}
where the on--mass--shell three--point coupling constant $\Gamma_{ab}^c$
is given by the residue on the pole of the $S$--matrix
\begin{equation}
\label{gamma}
\lim_{\theta\rightarrow i u_{ab}^{c}}
(\theta - iu_{ab}^{c})
S_{ab}(\theta)= i\, (\Gamma_{ab}^{c})^2 \,\,\,.
\end{equation}
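As a concrete illustration of eq.\,(\ref{gamma}), one can extract $(\Gamma_{11}^{1})^2$ numerically for the Yang--Lee model, whose single amplitude has the well-known form $S(\theta)=(\sinh\theta + i\sin\frac{2\pi}{3})/(\sinh\theta - i\sin\frac{2\pi}{3})$ (taken here as an assumption, since the tables of Appendix B are not reproduced in this section). The residue at the bound-state pole $\theta = 2\pi i/3$ gives $(\Gamma_{11}^1)^2 = -2\sqrt{3}$, negative as expected from the non--unitary nature of the model:

```python
import cmath, math

def S_yanglee(theta):
    # Assumed Yang--Lee amplitude: S = (sinh th + i s)/(sinh th - i s),
    # with s = sin(2 pi / 3)
    s = math.sin(2 * math.pi / 3)
    return (cmath.sinh(theta) + 1j * s) / (cmath.sinh(theta) - 1j * s)

u = 2 * math.pi / 3                       # bound-state pole at theta = i u
eps = 1e-6
residue = eps * S_yanglee(1j * u + eps)   # ~ lim (theta - iu) S = i Gamma^2
gamma_sq = (residue / 1j).real

assert abs(gamma_sq + 2 * math.sqrt(3)) < 1e-4   # Gamma^2 = -2 sqrt(3) < 0
```

The small-offset evaluation of the residue avoids any symbolic manipulation and converges linearly in `eps`.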
Dynamical residue equations are also provided by double order poles and
simple order poles of type ${\cal B}$. Both of them are related
to diagrams of the kind shown in Figure 2. For each such diagram,
one can write the following equation
\begin{equation}
\label{doubfpole}
\lim_{\theta_{ab}\rightarrow i\varphi}(\theta_{ab}-i\varphi)
F_{ab}^{\Phi}(\theta_{ab}) \,=\,i\,
\Gamma_{ad}^c \,\Gamma_{db}^e \,F^{\Phi}_{ce}(i\gamma)\,\, ,
\end{equation}
where $\gamma=\pi - u_{cd}^{a}- u_{de}^{b}$. In the case of ${\cal B}$ poles
one can always verify that the amplitude $S_{ce}(\theta)$ has a simple zero at
$\theta = i\gamma$. More complicated residue equations can in general be
obtained for ${\cal D}$ poles and higher order ones, whose explicit
expressions -- not reported here -- can however be easily written once
the corresponding multi--scattering diagrams have been identified.
It must be stressed that the above set of equations depends only on the
dynamics of the model through its $S$--matrix and holds identically for
every operator $\Phi (x)$. Therefore, in general, some residual freedom
on the parameters is still expected after imposing these equations,
because they must be satisfied by the FF's of all operators compatible
with the assumed asymptotic behaviour.
Adding to this system of {\em linear} equations the {\em non--linear}
cluster equations (\ref{cluster2}) of the two--particle FF's, one obtains
in general a redundant set of compatible equations in all the unknown
parameters of the one-- and two--particle FF's. Due to its non--linearity,
the system allows a multiplicity of solutions which define the
so--called {\em cluster operators} of the theory\footnote{In all cases
analyzed, the smallest system of equations among different FF's which
is sufficient to determine their coefficients turns out to involve just
a subset of the two--particle FF's. This suggests that also in the general
case it should be possible to predict the final number of cluster
solutions already from a ``minimal'' system, avoiding in this way to deal
with systems of equations involving a huge number of unknown variables.}.
If the number of solutions of the system matches the cardinality
of the Kac table of the model one is led to identify them with the families
of FF's of the relevant primaries.
Among the cluster solutions, one can first of all identify the FF's
of the deforming field $\phi_{1,k}$. This operator is known to
be essentially the trace of the energy--momentum tensor $\Theta(x)$
since
\begin{equation} \label{Th}
\Theta (x) = 4 \pi\, {\cal E}_{vac} \, \phi_{1,k} \,\, ,
\end{equation}
${\cal E}_{vac}$ being the vacuum energy density which
can be easily computed by TBA computations \cite{TBA}
\begin{equation}
{\cal E}_{vac} = - \frac{m_1^{\,2}}{8 \,\sum_{x \in P_{11}} \sin (\pi x)}
\,\,\, .
\end{equation}
Here the set $P_{11}$ is defined in eq.\,(\ref{Sab}) and $m_1$ is the
lightest particle mass. In view of the proportionality (\ref{Th}), the
FF's of $\phi_{1,k}$ can be identified among the cluster solutions by
checking the peculiar equations which characterize the two--particle
FF's of $\Theta (x)$ in virtue of the conservation of the energy--momentum
tensor, namely the normalization of the diagonal two--particle FF's
\begin{equation}
\label{FThetadiag}
F^\Theta_{aa}(i \pi) = 2 \pi m_a^2 \:,
\end{equation}
and the factorization of the polynomial $Q^{\Theta}_{ab}$
for non--diagonal two--particle FF's ($a \neq b$) into
\begin{equation}
\label{FThetanondiag}
Q^{\Theta}_{ab} (\cosh \theta) \,= \,
\left( 2\,m_a \,m_b \:\cosh \theta + m_a^2 +
m_b^2 \right) \: R^{\Theta}_{ab}(\cosh\theta)\, ,
\end{equation}
where $R^{\Theta}_{ab}$ is a suitable polynomial \cite{Smirnov,DM}.
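For the Yang--Lee model the TBA formula above becomes completely explicit: assuming $P_{11} = \{2/3\}$ (the single block of its $S$--matrix; an assumption here, since eq.\,(\ref{Sab}) and Appendix B carry the general definition), one recovers the known value ${\cal E}_{vac} = -m_1^2/(4\sqrt{3})$. A minimal sketch:

```python
import math

def vacuum_energy(m1, P11):
    # E_vac = - m1^2 / (8 * sum_{x in P11} sin(pi x))
    return -m1 ** 2 / (8 * sum(math.sin(math.pi * x) for x in P11))

# Yang--Lee: P11 = {2/3} (assumed), so
# E_vac = -m1^2 / (8 sin(2 pi/3)) = -m1^2 / (4 sqrt(3))
e = vacuum_energy(1.0, [2.0 / 3.0])
assert abs(e + 1.0 / (4.0 * math.sqrt(3.0))) < 1e-12
```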
Knowing the FF's of $\Theta (x)$, one is then enabled to make use of
the sum--rule (\ref{sumrule1}) to compute the conformal dimension
of the operators defined by the remaining cluster solutions in order
to identify them with all the relevant primaries of the theory.
This sum--rule can be evaluated by using the spectral representation
of the correlator
\begin{equation}
\label{formexp}
\langle\,\Theta(x) \phi(0)\,\rangle_c\, =
\sum_{n=1}^{\infty} \sum_{a_i}\int_{\theta_1>\theta_2\ldots>\theta_n}
\frac{\mbox{d}^n\theta}
{(2\pi)^n}\, F_{a_1,\ldots,a_n}^{\Theta}
({\bf\theta}) \,F_{a_1,\ldots,a_n}^{\phi}
(i\pi-\theta)\,
e^{-|x|\sum_{k=1}^{n}m_k\cosh\theta_k} \,\,\, .
\end{equation}
In all the models we have studied, the corresponding series for the
sum--rule (\ref{sumrule1}) displays a very fast convergence behaviour
for any of the cluster operators. The truncated sums obtained by
including just very few contributions have proved sufficient to attain
a good approximation of all the values expected by the Kac table of
conformal dimensions. In this way, the one--to--one correspondence
between cluster solutions and primary relevant operators can be
easily established.
Finally, having obtained the FF's of all the relevant fields in each
integrable deformation, as a further check of their correct identification,
one may employ the formulas (\ref{nif}) relative to the universal ratios
of the nearby non--integrable quantum field theories. These predictions
can then be compared against their numerical estimates obtained from
the Truncated Conformal Space (TCS) approach developed in \cite{TCS}.
The agreement between numerical estimates and theoretical predictions
of the non--integrable effects may provide additional confirmation
and may remove all possible remaining doubts about the validity of
the cluster hypothesis for these models.
\resection{Integrable Deformations of ${\cal M}{(2,7)}$}
The minimal conformal model ${\cal M}{(2,7)}$ has, in addition to
the identity operator $\phi_{1,1}$, only two primary operators,
$\phi_{1,2}$ and $\phi_{1,3}$, both of them relevant with conformal
weights given by $-2/7$ and $-3/7$ respectively \cite{BPZ}.
The perturbations of the conformal action either by the
``magnetic operator'' $\phi_{1,2}$ or by the ``thermal operator''
$\phi_{1,3}$ are both known to be, separately, integrable
\cite{KMM}. The $S$--matrices and the mass ratios of the
two integrable models are given in tables B1 and B2.
In their massive phase, both perturbations have two stable
massive particles denoted by $A_1$ and $A_2$, with a mass
ratio and a scattering matrix which depend on the integrable
direction considered. In each case, we expect to find two
non--trivial independent families of Form Factors solutions to
the cluster equations (\ref{cluster}) (in addition to the family of
the null Form Factors relative to the identity operator).
The Form Factors of the primary operators of the model relative
to the thermal deformation have already been considered in
\cite{koubek}. Here, we have performed an {\em ab--initio} calculation
by imposing the cluster equations: our result has been in perfect
agreement with the FF's of ref.\,\cite{koubek}, proving in this way that
these cluster solutions are also unique.
The results of the computation of Form Factors in the
two integrable deformations $[{\cal M}(2,7)]_{(1,2)}$
and $[{\cal M}(2,7)]_{(1,3)}$ are summarised in tables
1--2 and 3--4 respectively where we list the values of
the one--particle FF's and the coefficients $a_{ab,\phi}^{(k)}$
of the two--particle FF's relative to some of the lightest
two--particle states. As expected, we find two non--trivial
families of Form Factor solutions. In each deformation, the FF's
of the deforming operator, suitably rescaled by (\ref{Th}),
can be immediately identified because they satisfy
the peculiar equations characterizing the trace of the
energy--momentum tensor (\ref{FThetadiag}) and (\ref{FThetanondiag}).
This is further confirmed by employing the spectral representation
of the correlator $\langle\Theta (x) \Theta (0)\rangle_c$ in
the sum--rule (\ref{sumrule2}), which provides in both deformations
the value of the central charge with a very high precision
(the relative error being of order $10^{-4}$--$10^{-5}$).
The identification of both the solutions with the primaries
$\phi_{1,2}$ and $\phi_{1,3}$ is easily established after computing for
each solution its UV anomalous dimension by means of the sum
rule (\ref{sumrule1}). The contributions to this sum rule coming from the
dominant lightest multiparticle states are given in
Tables 5 and 6 for the two deformations (the contributions
are ordered according to increasing values of the Mandelstam variable
$s$ of the multi--particle state). The agreement of the truncated sums
with the known values of the anomalous dimensions is very satisfactory
given the fast convergence of the spectral series.
In the computation of these sum rules, some three--particle FF
contributions have been inserted as well, although we do not give
their exact expressions here for the sake of simplicity (their general
parameterization follows the one adopted, for instance,
in \cite{DM}). It should be noticed that the oscillating
behaviour of these sums is typical of non--unitary theories
where one expects, in general, both positive and negative terms.
\subsection{Non--Integrable Deformations of ${\cal M}(2,7)$}
For each possible integrable deformation of the model, the addition of
a further orthogonal deformation breaks its integrability leading,
among other things, to corrections of the mass spectrum and of
the vacuum energy. Both corrections can be independently computed
by performing a numerical diagonalization of the off--critical
Hamiltonian by means of the so--called Truncation Method \cite{TCS}.
We have carried out this analysis by comparing these non--integrable data
with the theoretical predictions of eqs.\,(\ref{nif}). Let us briefly
describe the output of these studies.
The double non--integrable deformation
\[
[{\cal M}(2,7)]_{(1,3)} + \varepsilon \phi_{1,2}\, ,
\]
for small values of $\varepsilon m_1^{2\Delta_{1,2} -2}$ has
already been studied in \cite{NIM}, where a good agreement between
numerical and theoretical values has been found. Having obtained the
FF's for the $\phi_{1,2}$ deformation, we are now able to complete
the analysis by testing the opposite deformation
\[
[{\cal M}(2,7)]_{(1,2)} + \varepsilon \phi_{1,3}\,\,\,.
\]
The numerical determination of the two universal ratios of eq.\,(\ref{nif})
(for small values of $\varepsilon m_1^{2\Delta_{1,3} -2}$)
gives $\frac{\delta m_1}{\delta m_2}= 0.675$ and
$\frac{\delta {\cal E}_{vac}}{\delta m_1}= -0.244 \, m_1^{(0)}$
with a precision estimated to be of a few percent. These values fully
agree with the computed theoretical values $\frac{\delta m_1}{\delta m_2}
= 0.68404$ and $\frac{\delta {\cal E}_{vac}}{\delta m_1} = -0.24365 \,
m_1^{(0)}$ (see Figures 4 and 5, where the data relative to
the ratios $\frac{\delta m_1}{\delta m_2}$ and
$\frac{\delta {\cal E}_{vac}}{\delta m_1}$ respectively
are reported for different values of $\varepsilon$).
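The quoted agreement can be made quantitative with a one-line computation of the relative deviations between the TCS estimates and the Form Factor predictions (values taken from the text):

```python
# (numerical TCS estimate, Form Factor prediction), as quoted above
ratios = {
    "dm1/dm2":   (0.675, 0.68404),
    "dEvac/dm1": (-0.244, -0.24365),   # in units of m1^(0)
}

deviations = {name: abs(num - th) / abs(th)
              for name, (num, th) in ratios.items()}

# Both deviations (~1.3% and ~0.1%) sit within the few-percent TCS precision
assert all(d < 0.02 for d in deviations.values())
```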
\resection{Integrable Deformations of ${\cal M}(2,9)$}
In this section, we turn our attention to the ${\cal M}{(2,9)}$ minimal
model which displays a richer structure in the RG space of relevant couplings.
This model has in fact, besides the identity, three primary operators
$\phi_{1,2}$, $\phi_{1,3}$ and $\phi_{1,4}$ which are all relevant with
conformal dimensions $-1/3$, $-5/9$ and $-2/3$ respectively. These fields
taken separately give rise to different integrable deformations of the
conformal model, each of them characterized by a different mass spectrum and
$S$--matrix (see tables B3, B4 and B5 in Appendix B). In particular, the
first two deformations produce three--particle mass spectra (with different
mass ratios) while the last one gives a four--particle spectrum.
The FF's of the primary operators in the $\phi_{1,3}$--deformation had
already been obtained in ref.\,\cite{koubek} and were known to satisfy
the cluster property. Again, our derivation of these FF's as solutions
of the cluster equations proves that the FF's found in \cite{koubek}
are the only possible cluster solutions.
The Form Factors of the cluster solutions for each of the three
above mentioned deformations have been computed according to the
strategy explained in Section 2. The resulting one--particle FF's and
two--particle FF's coefficients are given in tables 7--8, 9--10 and
11--12 respectively. The important result is that in each integrable
deformation of this model, three families of non--trivial solutions
have been found. Among the solutions, we have first identified the FF's of
the deforming field by checking the exact fulfillment of
eqs.\,(\ref{FThetadiag}) and (\ref{FThetanondiag}), after the
appropriate rescaling (\ref{Th}). Moreover, the $c$--sum--rule
(\ref{sumrule2}) can be easily shown to give very precise approximations
of the central charge in each of the three separate deformations.
As for the other solutions, they have been successfully identified
with the FF's of the primary operators by computing their anomalous
dimension by means of eq. (\ref{sumrule1}). The first contributions
to these sums are given in tables 13, 14 and 15. In all cases the
agreement with the expected anomalous dimensions of the primaries is
established, even though the convergence of the series is observed
to be noticeably faster for lower absolute values of the anomalous
dimension of the deforming field. This observed trend is indeed expected
from the short--distance behavior of the correlator (\ref{formexp}),
as predicted by the Operator Product Expansion of the fields. In fact,
in the models ${\cal M}{(2,2\,n+1)}$ where the fields have negative
anomalous dimensions, this correlator displays a zero at the origin
whose power law exponent is larger for lower absolute values of the
anomalous dimension of $\Theta(x)$; correspondingly, the small $x$ region of
integration in (\ref{sumrule1}) is less relevant making the lightest
multiparticle states more dominant in the series.
\subsection{Non--Integrable Deformations of ${\cal M}(2,9)$}
The availability of the FF's of all the primary fields of the model
has allowed us, in each of the three separate integrable deformations,
to consider two different orthogonal non--integrable deformations.
We have had then the possibility of testing the theoretical values
obtained for the universal quantities (\ref{nif}) versus their numerical
TCS estimates in six different multiple deformations, exploring in this
way the non--integrable region around the conformal point of the model.
The outcome of the analysis in all the deformations is summarized in
Table 16. Since the precision of the TCS data is expected to be
a few percent, the comparison with the computed theoretical values is in
all cases quite satisfactory.
\resection{Conclusions}
The main purpose of this work has been to substantiate by means of concrete
{\em ab--initio} calculations the cluster hypothesis for the Form Factors
of the relevant operators in integrable quantum field theories obtained as
deformation of a conformal action. We have studied, in particular, the
matrix elements of the primary operators in the integrable deformations
of the first models of the non--unitary series ${\cal M}(2,2n+1)$.
In all cases analysed, we have confirmed the cluster hypothesis since
we have found a one--to--one correspondence between the independent
solutions of the cluster equations and the relevant fields.
It should be said that the absence of internal symmetries of the above models
has played an important role in carrying out our computations. In fact, in
this situation one can exploit the cluster equations (\ref{cluster}) in their
full generality. It would be interesting to see how the results of
ref.\,\cite{DSC} generalize to the case of quantum field theories with
internal symmetries which induce selection rules on the matrix
elements. Another important open problem is also to understand the meaning of
the cluster properties in quantum field theories which cannot be
regarded as deformation of conformal models. A complete understanding of all
these aspects of the Form Factors would allow us to better understand the
asymptotic high--energy regime of quantum theories and their operator
content.
\vspace{3mm}
{\em Acknowledgements.} We are grateful to G. Delfino and P. Simonetti
for useful discussions.
\newpage
\section{Thomas-Fermi Theory}
The Thomas-Fermi method \cite{T} \cite{F}
was designed for the calculation of
the electron density in a heavy atom, by treating the electrons
as locally free. Lieb and Simon \cite{LS}
showed that the treatment is exact in the limit when
the atomic number goes to infinity.
Application to a confined Bose condensate was pioneered
by Goldman, Silvera, and Leggett \cite{GSL}, and by Oliva \cite{O},
and recently reconsidered by Chou, Yang, and Yu \cite{CYY}.
I shall describe some work on this subject, done in collaboration with
E. Timmermans and P. Tommasini \cite{TTH}.
First, let us review the original method of Thomas and Fermi. Suppose
$V(r)=-e\Phi(r)$ denotes the effective potential energy of an electron
in an atom, at a distance $r$ from the nucleus. (See Fig.1).
\medskip
\begin{figure}[htbp]
\centerline{\BoxedEPSF{kfig1.eps scaled 500}}
\caption{Potential energy of an electron in atom.}
\end{figure}
\medskip
\noindent The condition that
the electron is in a bound orbit is that
\begin{equation}
{\hbar^2 k^2\over 2m} +V(r) \le 0
\end{equation}
where $k$ is the local wave vector. Assume that all available states
are occupied. Then the local Fermi momentum $\hbar k_F(r)$ is given by
\begin{equation}
{\hbar^2 k^2_F(r)\over 2m} +V(r) = 0
\end{equation}
which gives
\begin{equation}
\hbar k_F=\sqrt{2me\Phi(r)}
\end{equation}
We know that $k_F(r)$ is related to the local density $n(r)$ by
\begin{equation}
{2\over (2\pi)^3}\,{4\pi\over 3}\, k_F^3(r)=n(r)
\end{equation}
where the factor 2 comes from spin. This relates the density to
the potential $\Phi(r)$. We now use the Poisson equation
\begin{equation}
\nabla^2 \Phi(r) =-4\pi[Ze\delta({\bf r}) -e n(r)]
\end{equation}
For $r\ne 0$ this equation is of the form
\begin{equation}
\nabla^2\Phi(r) + C \Phi^{3/2}(r)=0
\end{equation}
where $C$ is a constant. One can solve this equation, and then obtain
$n(r)$. A comparison between Thomas-Fermi and Hartree results
for Rb is sketched in Fig.2.
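In rescaled variables this equation becomes the universal Thomas-Fermi equation $\chi''(x)=\chi^{3/2}/\sqrt{x}$ with $\chi(0)=1$, $\chi(\infty)=0$. As an illustration added here (not part of the original text; step sizes and brackets are arbitrary choices), the sketch below integrates it by shooting on the initial slope and recovers the well-known value $\chi'(0)\approx-1.588$.

```python
from math import sqrt

def rhs(x, chi, dchi):
    # Dimensionless Thomas-Fermi equation: chi'' = chi^{3/2} / sqrt(x)
    return dchi, max(chi, 0.0) ** 1.5 / sqrt(x)

def shoot(slope, x_max=8.0, h=2e-3):
    # Start slightly off the origin using the series chi = 1 + B x + (4/3) x^{3/2}
    x = 1e-3
    chi = 1.0 + slope * x + (4.0 / 3.0) * x ** 1.5
    dchi = slope + 2.0 * sqrt(x)
    while x < x_max:                                   # classical RK4 march
        k1 = rhs(x, chi, dchi)
        k2 = rhs(x + h / 2, chi + h / 2 * k1[0], dchi + h / 2 * k1[1])
        k3 = rhs(x + h / 2, chi + h / 2 * k2[0], dchi + h / 2 * k2[1])
        k4 = rhs(x + h, chi + h * k3[0], dchi + h * k3[1])
        chi += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dchi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
        if chi < 0.0:
            return -1            # crossed zero: initial slope too steep
        if chi > 1.2:
            return +1            # running away: initial slope too shallow
    return 0                     # still decaying at x_max: (near-)critical

lo, hi = -2.0, -1.0              # bracket for the physical initial slope
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 0:
        lo = mid
    else:
        hi = mid
# mid now approximates the known Thomas-Fermi slope chi'(0) = -1.5881
```

Bisection works because too steep a slope drives $\chi$ through zero while too shallow a slope makes it blow up; only the critical slope yields the decaying atomic solution.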
\medskip
\begin{figure}[htbp]
\centerline{\BoxedEPSF{kfig2.eps scaled 800}}
\caption{Electron density in Rb: Comparison between Thomas-Fermi and
Hartree approximations.}
\end{figure}
\medskip
The essence of the method is that one assumes there is a local chemical
potential $\mu_{\rm eff}(r)$, related to the true chemical potential $\mu$
by
\begin{equation}
\mu_{\rm eff}(r) = \mu-V(r)
\end{equation}
In the earlier discussion, the chemical potential (Fermi energy) was
taken to be zero.
\section{Ideal Bose Gas in External Potential}
Can we apply this idea to a Bose gas? Let us first consider
an ideal Bose gas in an external potential, at a temperature $T$
above the transition point. Take the potential to be harmonic:
\begin{equation}
V(r) = {1\over 2}m\omega^2 r^2
\end{equation}
For the Thomas-Fermi idea to be applicable, we require
\begin{equation}
{\hbar\omega\over kT}\ll1
\end{equation}
Above the transition temperature, the density is related to
the fugacity $z=\exp(\mu/kT)$ through
\begin{equation}
n={1\over \lambda^3} g_{3/2}(z)
\end{equation}
where $\lambda=\sqrt{2\pi\hbar^2/mkT}$ is the thermal wavelength, and
\begin{equation}
g_{3/2}(z)=\sum_{\ell=1}^\infty {z^\ell\over \ell^{3/2}}
\end{equation}
This suggests that in the presence of an external potential we
take the local density to be
\begin{equation}
n(r) = {1\over \lambda^3} g_{3/2}\left(z e^{-\beta V(r)}\right)
\end{equation}
where $\beta=1/kT$. Integrating both sides over all space, we obtain
an expression for the total number of particles:
\begin{equation}
N = {1\over \lambda^3} \int d^3r g_{3/2}\left(z e^{-\beta V(r)}\right)
\end{equation}
We know that $g_{3/2}(z)$ is bounded for $0\le z\le 1$. So the right
side is bounded. This forces Bose-Einstein condensation when $N$
exceeds the bound. The number of atoms in the condensate $N_0$ is
given through
\begin{equation}
N = N_0 + {1\over \lambda^3} \int d^3r g_{3/2}\left(z e^{-\beta V(r)}\right)
\end{equation}
In this intuitive approach, however, the Bose condensate was not described
accurately.
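The boundedness argument above can be checked numerically. The sketch below (added here for illustration, in units $\hbar=m=\omega=kT=1$ so that $\beta V(r)=r^2/2$) truncates the $g_{3/2}$ series and evaluates the bounded trap integral; the result reproduces the critical number $N_c=\zeta(3)(kT/\hbar\omega)^3$, i.e. $\zeta(3)\approx1.202$ in these units.

```python
import numpy as np

def g32(z, terms=20000):
    # g_{3/2}(z) = sum_{l>=1} z^l / l^{3/2}, bounded by zeta(3/2) ~ 2.612 for z <= 1
    l = np.arange(1, terms + 1)
    return float(np.sum(z ** l / l ** 1.5))

# Critical atom number for V(r) = r^2/2 in units hbar = m = omega = kT = 1:
# N_c = (1/lambda^3) Int d^3r g_{3/2}(e^{-V(r)}), expected to equal zeta(3).
lam3 = (2.0 * np.pi) ** 1.5                      # thermal wavelength cubed
r = np.linspace(0.0, 10.0, 2001)
vals = 4.0 * np.pi * r**2 * np.array([g32(np.exp(-0.5 * ri**2)) for ri in r])
N_c = np.sum(vals) * (r[1] - r[0]) / lam3        # endpoints vanish, so a plain sum suffices
```

Once $N$ exceeds this bounded integral, the excess atoms must go into the condensate term $N_0$.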
As it turns out, the problem can be solved exactly [5]:
\begin{equation}
n(r) = {z_1\over 1-z_1}|\psi_0(r)|^2+{1\over\lambda^3}G(z_1,r)
\end{equation}
where $\psi_0(r)$ is the ground-state wave function in the potential,
and
\begin{equation}
G(z_1,r) = {1\over (2\epsilon)^{3/2}}\sum_{\ell=1}^\infty z_1^\ell
\left\{ {\exp[-(r/r_0)^2 \tanh(\epsilon\ell/2)]\over
[1-\exp(-2\epsilon\ell)]^{3/2} }
-\exp[-(r/r_0)^2] \right\}
\end{equation}
with
\begin{eqnarray}
z_1 &=& z e^{-3\hbar\omega/2kT}\nonumber\\
\epsilon &=& \hbar\omega/kT\nonumber\\
r_0&=&\sqrt{kT/2\pi m\omega^2}
\end{eqnarray}
The explicit occurrence of $\psi_0(r)$ shows that Bose condensation
occurs in the ground state $\psi_0$ of the potential. The zero-momentum state
is irrelevant here. The total number of particles is
\begin{equation}
N = {z_1\over 1-z_1}+{1\over\lambda^3}\int d^3r G(z_1,r)
\end{equation}
which shows that the condensation is a continuous process, though
it may appear to be abrupt, when $N$ is so large that the first
term can be neglected except near $z_1=1$.
The Thomas-Fermi approximation is good when $\epsilon \ll 1$. In that case
we have
\begin{equation}
G(z_1,r)\approx g_{3/2}\left( z_1 e^{-\beta V(r)}\right)
\end{equation}
and therefore
\begin{equation}
n(r) \approx {z_1\over 1-z_1}|\psi_0(r)|^2+{1\over\lambda^3}
g_{3/2}\left( z_1 e^{-\beta V(r)}\right)
\end{equation}
which is similar to the naive formula, except for a better representation
of the condensate. (The replacement of $z$ by $z_1$ is inconsequential.)
The lesson is that a purely intuitive approach is not satisfactory, and we
need a systematic method.
\section{Uniform Dilute Interacting Bose Gas}
The underlying idea of the Thomas-Fermi approach is to treat
a nonuniform condensate as locally uniform, with a slowly varying
density. I will first review the properties of a uniform Bose gas
in the dilute limit, with interparticle interactions taken into
account through a scattering length $a\ge 0$.
The annihilation operator $a_k$ of a particle of
momentum $\hbar{\bf k}$ satisfies the commutation relation
\begin{equation}
[a_k, a_{k'}^\dagger]=\delta_{k k'}
\end{equation}
We make a Bogolubov transformation to quasiparticle operators $\eta_k$:
\begin{equation}
a_k=x_k \eta_k -y_k \eta_{-k}^\dagger \quad ({\bf k}\ne 0)
\end{equation}
and require that the transformation be canonical, {\it i.e.}
\begin{equation}
[\eta_k, \eta_{k'}^\dagger]=\delta_{k k'}
\end{equation}
This leads to the condition
\begin{equation}
x_k^2-y_k^2=1
\end{equation}
which can be satisfied by putting
\begin{eqnarray}
x_k &=& \cosh\sigma_k\nonumber\\
y_k &=& \sinh \sigma_k
\end{eqnarray}
This is a convenient parametrization, because interesting quantities acquire
simple expressions:
\begin{eqnarray}
\rho_k&\equiv&\langle a_k^\dagger a_k\rangle
={1\over 2}[\cosh (2\sigma_k) -1]\nonumber\\
\Delta_k&\equiv& -\langle a_k a_{-k}\rangle ={1\over 2}\sinh(2\sigma_k)
\end{eqnarray}
where $\rho_k$ measures the depletion of the unperturbed condensate:
\begin{equation}
N_0=N-\sum_{{\bf k}\ne 0}\rho_k
\end{equation}
and $\Delta_k$ is a measure of off-diagonal long-range order.
In the Bogolubov method, the annihilation operator for the
zero-momentum state $a_0$ is equated to the c-number $\sqrt{N}$.
Explicit solution of the problem gives
\begin{equation}
\tanh(2\sigma_k)={\mu\over (\hbar^2 k^2/2m) +\mu}
\end{equation}
where $\mu$ is the chemical potential:
\begin{equation}
\mu={4\pi a\hbar^2n\over m}\left(1+{32\over 3}\sqrt{na^3\over \pi}\right)
\end{equation}
with $n$ the particle density. Note that this cannot be
continued to negative $a$; apparently, new physics arises when the
scattering length turns negative. The excitation energy of a quasiparticle
of momentum ${\bf p}$ is given by
\begin{equation}
\epsilon_p =\sqrt{\left({p^2\over 2m} +\mu\right)^2-\mu^2}
\end{equation}
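As a numerical aside added here ($\hbar=m=1$, with illustrative values of $n$ and $a$), the two limits of this dispersion are easy to exhibit: a phonon branch $\epsilon_p\approx cp$ with $c=\sqrt{\mu/m}$ at small $p$, and a free-particle branch $\epsilon_p\approx p^2/2m+\mu$ at large $p$.

```python
import numpy as np

hbar = m = 1.0
n, a = 1.0, 0.01          # illustrative density and scattering length (hbar = m = 1)

# Chemical potential including the first beyond-mean-field correction
mu = (4.0 * np.pi * a * hbar**2 * n / m) * (1.0 + (32.0 / 3.0) * np.sqrt(n * a**3 / np.pi))

def eps(p):
    # Quasiparticle energy eps_p = sqrt((p^2/2m + mu)^2 - mu^2)
    kin = p**2 / (2.0 * m)
    return np.sqrt((kin + mu)**2 - mu**2)

c = np.sqrt(mu / m)       # sound velocity of the phonon branch
```

The crossover between the two regimes occurs at momenta of order $\sqrt{m\mu}$, i.e. at the inverse healing length.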
\section{Quasiparticle Field Operator}
In the uniform case, the field operator $\Psi({\bf r})$ can be put in the form
\begin{equation}
\Psi({\bf r})=a_0+\psi({\bf r})
\end{equation}
with
\begin{equation}
\psi({\bf r})=\Omega^{-1/2}\sum_{{\bf k}\ne 0} a_k e^{i{\bf k}\cdot{\bf r}}
\end{equation}
where $\Omega$ is the spatial volume. We have
\begin{equation}
[\psi({\bf r}),\psi^\dagger({\bf r}')]=\delta({\bf r}-{\bf r}')
\end{equation}
since $a_0$ is treated as a c-number. We can introduce a quasiparticle
field operator:
\begin{equation}
\xi({\bf r})\equiv\Omega^{-1/2}\sum_{{\bf k}\ne 0} \eta_k e^{i{\bf k}\cdot{\bf r}}
\end{equation}
Note that the relation between $\psi$ and $\xi$ is non-local:
\begin{equation}
\psi({\bf x})= \int d^3 y [X({\bf x}-{\bf y})\xi({\bf y})-Y^\ast({\bf x}-{\bf y})\xi^\dagger({\bf y})]
\end{equation}
where
\begin{eqnarray}
X({\bf x}-{\bf y}) &=& \Omega^{-1/2}\sum_{{\bf k}\ne 0} x_k e^{i{\bf k}\cdot({\bf x}-{\bf y})}\nonumber\\
Y({\bf x}-{\bf y}) &=& \Omega^{-1/2}\sum_{{\bf k}\ne 0} y_k e^{i{\bf k}\cdot({\bf x}-{\bf y})}
\end{eqnarray}
For a non-uniform Bose gas, we write
\begin{equation}
\Psi({\bf r})=\phi({\bf r}) +\psi({\bf r})
\end{equation}
where $\phi({\bf r})$ is a c-number function, such that
\begin{equation}
\langle\psi({\bf r})\rangle =0
\end{equation}
where $\langle\rangle$ denotes ground state expectation value. We
transform to quasiparticle operators by putting
\begin{equation}
\psi({\bf x})= \int d^3 y [X({\bf x},{\bf y})\xi({\bf y})-Y^\ast({\bf x},{\bf y})\xi^\dagger({\bf y})]
\end{equation}
The requirement
\begin{equation}
[\xi({\bf r}),\xi^\dagger({\bf r}')]=\delta({\bf r}-{\bf r}')
\end{equation}
leads to the condition
\begin{equation}
\int d^3z [X({\bf x},{\bf z}) X({\bf z},{\bf y})-Y({\bf x},{\bf z}) Y({\bf z},{\bf y})]=\delta({\bf x}-{\bf y})
\end{equation}
The fulfillment of this condition in a simple fashion will guide our
formulation of the Thomas-Fermi approximation.
\section{Wigner Representation}
In quantum mechanics, the Wigner distribution associated with a
wave function $\psi({\bf r})$ is defined by
\begin{equation}
\rho_W({\bf R},{\bf p})\equiv\int d^3r \psi^\ast({\bf R}+{\bf r}/2)\psi({\bf R}-{\bf r}/2)e^{i{\bf p}\cdot{\bf r}/\hbar}
\end{equation}
That is, we take the off-diagonal density at two different
points in space, and Fourier analyze with respect to the
relative separation. This is illustrated in Fig.3.
\medskip
\begin{figure}[htbp]
\centerline{\BoxedEPSF{kfig3.eps scaled 650}}
\caption{To get the Wigner distribution, Fourier analyze with respect
to relative coordinate.}
\end{figure}
\medskip
\noindent The Wigner distribution
is not positive-definite, and hence not a probability; but it
acts as a quasi-distribution function in phase space:
\begin{eqnarray}
(\psi,f\psi)&\equiv&\int d^3r \psi^\ast({\bf r}) f({\bf r}) \psi({\bf r})=
\int{d^3R d^3p\over h^3} f({\bf R})\rho_W({\bf R},{\bf p})\nonumber\\
(\psi,{\bf p}\psi)&\equiv&\int d^3 r \psi^\ast({\bf r}) {\hbar\over i}\nabla\psi({\bf r})=
\int{d^3R d^3p\over h^3}\, {\bf p}\,\rho_W({\bf R},{\bf p})
\end{eqnarray}
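As a one-dimensional illustration added here ($\hbar=1$): for the harmonic-oscillator ground state $\psi_0(x)=\pi^{-1/4}e^{-x^2/2}$ the definition gives the Gaussian $\rho_W(R,p)=2\,e^{-R^2-p^2}$, while for the first excited state $\rho_W(0,0)=-2$, showing explicitly that the distribution is not positive.

```python
import numpy as np

def wigner(psi, R, p, r):
    # rho_W(R, p) = Int dr psi*(R + r/2) psi(R - r/2) e^{i p r}   (hbar = 1, 1D)
    vals = np.conj(psi(R + r / 2.0)) * psi(R - r / 2.0) * np.exp(1j * p * r)
    return float(np.real(np.sum(vals)) * (r[1] - r[0]))

psi0 = lambda x: np.pi ** -0.25 * np.exp(-x**2 / 2.0)                     # ground state
psi1 = lambda x: np.sqrt(2.0) * np.pi ** -0.25 * x * np.exp(-x**2 / 2.0)  # first excited state
r = np.linspace(-20.0, 20.0, 4001)    # quadrature grid for the relative coordinate

R, p = 0.7, -0.3
exact = 2.0 * np.exp(-R**2 - p**2)    # closed form for the ground state
```

The simple Riemann sum is essentially exact here because the integrand is an entire Gaussian that has decayed to machine precision at the grid ends.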
For a function $X({\bf x},{\bf y})$ that depends on two spatial points,
we define its Wigner transform as
\begin{equation}
X_W({\bf R},{\bf p})\equiv\int d^3r X({\bf R}+{\bf r}/2, {\bf R}-{\bf r}/2) e^{i{\bf p}\cdot{\bf r}}
\end{equation}
with the inverse transform
\begin{equation}
X({\bf x},{\bf y})=\int{d^3p\over (2\pi)^3}e^{-i{\bf p}\cdot ({\bf x}-{\bf y})}X_W(({\bf x}+{\bf y})/2,{\bf p})
\end{equation}
If $C({\bf x},{\bf y})$ has the form
\begin{equation}
C({\bf x},{\bf y})=\int d^3z A({\bf x},{\bf z}) B({\bf z},{\bf y})
\end{equation}
then its Wigner transform takes the form
\begin{eqnarray}
C_W({\bf R},{\bf p})&=&A_W({\bf R},{\bf p}) B_W({\bf R},{\bf p}) +
{1\over 2i}\sum_{j=1}^3
\left( {\partial A_W\over\partial R_j}{\partial B_W\over\partial p_j}
-{\partial B_W\over\partial R_j}{\partial A_W\over\partial p_j}\right)\nonumber\\
&&-{1\over 8}\sum_{j=1}^3
\left( {\partial^2 A_W\over\partial R_j^2}{\partial^2 B_W\over\partial p_j^2}
+{\partial^2 B_W\over\partial R_j^2}{\partial^2 A_W\over\partial p_j^2}
-2{\partial^2 A_W\over\partial R_j\partial p_j}
{\partial^2 B_W\over\partial R_j\partial p_j}\right)\nonumber\\
&&+\cdots
\end{eqnarray}
The second term is the classical
Poisson bracket $\{A_W,B_W\}_{\rm PB}$. It and the subsequent terms all
depend on spatial derivatives, and would be small if the system is
nearly uniform. Thus our version of the
Thomas-Fermi approximation consists of
keeping only the first term. Errors incurred
can be estimated by calculating the next non-vanishing term.
In terms of the Wigner transform, we can write
\begin{equation}
\int d^3z X({\bf x},{\bf z}) X({\bf z},{\bf y})=X_W({\bf R},{\bf p})X_W({\bf R},{\bf p})
+{1\over 2i}\{X_W,X_W\}_{\rm PB}+\cdots
\end{equation}
where the second term vanishes identically. Thus,
for such an integral, the errors incurred in using the
Thomas-Fermi approximation start with the subsequent terms.
The condition on $X$ and $Y$ therefore reads
\begin{equation}
X_W^2({\bf R},{\bf p})-Y_W^2({\bf R},{\bf p})\approx 1
\end{equation}
and is solved by setting
\begin{eqnarray}
X_W({\bf R},{\bf p})&=&\cosh\sigma({\bf R},{\bf p})\nonumber\\
Y_W({\bf R},{\bf p})&=&\sinh\sigma({\bf R},{\bf p})
\end{eqnarray}
This makes the problem very similar to the uniform case.
At zero temperature, the criterion for the validity of
the Thomas-Fermi approximation is
\begin{equation}
\hbar\omega/ \mu\ll1
\end{equation}
where $\hbar\omega$ is the characteristic energy of the external
potential, and $\mu$ is the chemical potential. For the dilute interacting
Bose gas, $\mu$ is proportional to the scattering length. Thus, the Thomas-Fermi
approximation can be used only when there are interparticle interactions.
\section{Variational Calculation}
We study the system defined by the Hamiltonian $H$, with
\begin{equation}
H-\mu N = \int d^3x \Psi^\dagger h\Psi +{1\over 2}
\int d^3x d^3y \Psi^\dagger\Psi^\dagger V\Psi\Psi
\end{equation}
where $V({\bf x})$ is the interparticle potential, and
\begin{equation}
h=-{\hbar^2\over 2m}\nabla^2 + V_{\rm ext}({\bf x}) -\mu
\end{equation}
with $V_{\rm ext}({\bf x})$ the external potential. The ground state free
energy is
\begin{equation}
F=\langle H-\mu N\rangle
\end{equation}
where $\langle\rangle$ means expectation value with respect to
the ground state of $H-\mu N$. As mentioned before, we displace the field
by writing $\Psi=\phi+\psi$, where $\phi$ is a c-number function,
such that $\langle\psi\rangle=0$.
We assume a trial form for the ground state, so that $\langle F\rangle$
has the same form as in mean-field theory, {\it i.e.}, we can put
\begin{equation}
\langle\psi^\dagger({\bf y})\psi^\dagger({\bf x})\psi({\bf x})\psi({\bf y})\rangle
=\Delta^\ast({\bf y},{\bf x})\Delta({\bf x},{\bf y})+\rho({\bf y},{\bf x})\rho({\bf x},{\bf y})+\rho({\bf y},{\bf y})\rho({\bf x},{\bf x})
\end{equation}
where
\begin{eqnarray}
\rho({\bf x},{\bf y})&=&\langle\psi^\dagger({\bf x})\psi({\bf y})\rangle\nonumber\\
\Delta({\bf x},{\bf y})&=&-\langle\psi({\bf x})\psi({\bf y})\rangle
\end{eqnarray}
The ground state free energy $F[\phi,\rho,\Delta;\mu]$ is
a functional of $\phi$, $\rho$, and $\Delta$, and also depends on $\mu$
as a parameter. The requirement $\langle\psi\rangle=0$ means that there
are no terms in $F$ linear in $\phi$.
Although we do not need the trial state explicitly,
it can be explicitly constructed if desired. One can show that the wave
functional of this state is of Gaussian form \cite{HT}. Thus, we have a true
variational problem.
We rewrite the functions $\rho$ and $\Delta$ in $F[\phi,\rho,\Delta;\mu]$
in terms of their Wigner transforms, and implement our
version of the Thomas-Fermi approximation, as explained before.
We transform to quasiparticle field operators, and find that,
as in the uniform case, $\rho_W$ and $\Delta_W$ are parametrized by a
single function:
\begin{eqnarray}
\rho_W({\bf R},{\bf p})&=&{1\over 2}[\cosh(2\sigma({\bf R},{\bf p}))-1]\nonumber\\
\Delta_W({\bf R},{\bf p})&=&{1\over 2}\sinh(2\sigma({\bf R},{\bf p}))
\end{eqnarray}
The free energy reduces to the form $F=\int d^3R f({\bf R})$.
We obtain equations for $\sigma$ and $\phi$
by minimizing $F$. The equation for $\phi$ is a modified
Gross-Pitaevskii or non-linear Schr\"odinger equation (NLSE):
\begin{equation}
\left[-{\hbar^2\over 2m}\nabla^2 +V_{\rm ext}({\bf r})+U({\bf r})-\mu
+v(0)\phi^2({\bf r})\right]\phi({\bf r})=0
\end{equation}
where $U({\bf r})$ is a self-consistent potential that depends on $\sigma$.
It is unimportant for low densities.
\section{Dilute Interacting Gas in Harmonic Trap}
I will just quote some results for a dilute gas in a harmonic trap. The
external potential is
\begin{equation}
V_{\rm ext}={\hbar\omega\over 2}\left(r\over L\right)^2
\end{equation}
For particles of mass $m$,
\begin{equation}
L =\sqrt{\hbar/m\omega}
\end{equation}
which is the extent of the ground-state wave function.
For the interparticle interaction, we use a pseudopotential
\begin{equation}
{4\pi a\hbar^2\over m}\delta({\bf r}){\partial\over \partial r}r
\end{equation}
The sole effect of the differential operator above is
the removal of a divergence in the ground
state energy. The three important lengths in the problem are
\begin{eqnarray}
L&\quad&\mbox{(Extent of the ground-state wave function)}\nonumber\\
a&\quad&\mbox{(Scattering length)}\nonumber\\
R_0&\quad&\mbox{(Extent of the condensate)}
\end{eqnarray}
They are illustrated in Fig.4.
\medskip
\begin{figure}[htbp]
\centerline{\BoxedEPSF{kfig4.eps scaled 650}}
\caption{Length scales in the atomic trap. The ground-state wave function is
$\psi_0(r)$; the condensate wave function is $\phi_0(r)$.}
\end{figure}
\medskip
For low densities, the non-linear coupled equations for $\sigma$ and
$\phi$ are solved by iteration, and one iteration suffices.
The chemical potential is found to be
\begin{equation}
\mu={\hbar\omega\over 2}\left(15 a N\over L\right)^{2/5}
\left[1+{\sqrt{2}\over 60}\left(15a\over L\right)^{6/5} N^{1/5}\right]
\end{equation}
The requirement $\hbar\omega/\mu\ll1$ means
\begin{equation}
{L\over a N}\ll1
\end{equation}
The extent of the condensate is given by
\begin{equation}
{R_0\over L}=\left(15aN\over L\right)^{1/5}
\end{equation}
For the method to be valid, we must have $R_0\gg L$.
By neglecting the term $\nabla^2\phi$ in the NLSE, we find
\begin{equation}
\phi^2(r)={R_0^2\over 8\pi aL^4}\left[1-\left(r\over R_0\right)^2 \right]
\end{equation}
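As a consistency check added here (using the trap numbers quoted in the caption of Fig.5), this inverted parabola integrates back to the total particle number, $\int d^3r\,\phi^2 = R_0^5/(15aL^4) = N$, once $R_0=L(15aN/L)^{1/5}$.

```python
import numpy as np

L = 1.0e-4     # extent of the ground-state wave function (cm), as in Fig.5
a = 5.0e-7     # scattering length (cm), as in Fig.5
N = 1.0e6

R0 = L * (15.0 * a * N / L) ** 0.2                            # extent of the condensate
r = np.linspace(0.0, R0, 100001)
phi2 = R0**2 / (8.0 * np.pi * a * L**4) * (1.0 - (r / R0) ** 2)
N_check = np.sum(4.0 * np.pi * r**2 * phi2) * (r[1] - r[0])   # should recover N
```

With these numbers $R_0/L\approx 9$, so the validity condition $R_0\gg L$ is comfortably met.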
In Fig.5 we show the shape of the condensate and estimated errors.
Fig.5(a) shows $\phi(r)$ as a function of $r$ in units of $L$, for
$N=10^3$ and $10^6$.
Fig.5(b) shows the errors arising from the
neglect of $\nabla^2\phi$. This is ``trivial,'' as
it can be corrected through numerical computation.
Fig.5(c) shows the errors incurred due to the Thomas-Fermi approximation,
which are intrinsic to the method. They are small except at the edge of
the condensate.
\medskip
\begin{figure}[htbp]
\centerline{\BoxedEPSF{kfig5.eps scaled 750}}
\caption{(a) Condensate wave functions for $N=10^3$ and $10^6$;
(b) Error incurred in neglecting kinetic term in NLSE;
(c) Error incurred in Thomas-Fermi approximation.
Length scale on horizontal axis is in units of $L$,
the extent of the ground-state wave function. Calculations are
done for $L=10^{-4}$ cm, scattering length=$5\times 10^{-7}$ cm.}
\end{figure}
\medskip
\section{Quasiparticle Excitation}
The local excitation energy should be measured from the
chemical potential:
\begin{equation}
\epsilon_p(r)=\mu+\sqrt{\left[{p^2\over 2m}+\mu_{\rm eff}(r)\right]^2
-\mu_{\rm eff}^2(r)}
\end{equation}
where
\begin{equation}
\mu_{\rm eff}(r)=\mu-V_{\rm ext}(r)
\end{equation}
It describes a phonon with a position-dependent sound velocity.
The density of states of the system is given by
\begin{equation}
g(\epsilon)=\sum_i\delta(\epsilon-\epsilon_i)
\end{equation}
where $\epsilon_i$ is the energy of the $i$th excited state. In the
spirit of the Thomas-Fermi approximation, we take
\begin{equation}
g(\epsilon)=\int{d^3r d^3p\over h^3}\delta(\epsilon-\epsilon_p(r))
\end{equation}
The results for $N=10^3$ and $10^6$ are shown in Fig.6 and Fig.7,
with comparison to the ideal gas.
\medskip
\begin{figure}[p]
\centerline{\BoxedEPSF{kfig6.eps scaled 750}}
\caption{Density of states in harmonic trap, for $N=10^3$.}
\end{figure}
\medskip
\medskip
\begin{figure}[p]
\centerline{\BoxedEPSF{kfig7.eps scaled 750}}
\caption{Same as Fig.6, but for $N=10^6$.}
\end{figure}
\medskip
Further details can be found in \cite{TTH}.
This work was supported in part by funds provided by
the U.S. Department of Energy under cooperative agreement
\# DE-FC02-94ER40818.
\newpage
\chapter{Introduction}
The reasons for studying 2-dimensional $N=2$ superconformal field
theories are numerous and well known (e.g. see \cite{nw}): the
areas of application include string theory, mirror symmetry,
topological field theories, exactly solvable models, quantum
and $W$-gravity.
Since holomorphic factorization represents
a fundamental property of many of these models \cite{fms},
it is particularly
interesting to have a
field theoretic approach in which
holomorphic factorization is realized in a manifest way by virtue of
an appropriate parametrization of the basic variables.
The goal of the present work is to develop such an
approach to the superspace formulation
of (2,2) and (2,0) superconformal models.
In order to describe this approach and its relationship
to other formulations in more detail,
it is useful to summarize briefly previous
work in this field.
The $d=2, N=2$ superconformally invariant coupling of matter fields
to gravity was first discussed in the context of the fermionic
string \cite{ade, bris}. Later on, the analogous
(2,0) supersymmetric theory has been introduced and
sigma-model couplings have been investigated \cite{hw,
bmg, evo}. Some of
this work has been done in component field formalism,
some other in superspace formalism.
The latter has the advantage
that {\em supersymmetry is manifestly realized}
and that {\em field-dependent symmetry algebras are avoided}.
(Such algebras usually occur in the component field formalism (WZ-gauge)
\cite{geo}.)
The geometry of $d=2, N=2$ superspace and the classification
of irreducible multiplets has been analyzed by the authors of
references \cite{ghr, hp, glo, ggw}.
As is well known \cite{ggrs, pcw}, the
quantization of supergravity in superspace requires the explicit
solution of the constraints imposed on the geometry
in terms of prepotential superfields.
In two dimensions, these prepotentials (parametrizing superconformal
classes of metrics) represent superspace expressions of the
Beltrami differentials \cite{gg}. The determination of an
explicit solution
for the (2,0) and (2,2) constraints has been studied
in references \cite{eo, xu, kl, l} and \cite{gz, ot, gw},
respectively.
On the other hand, a field theoretic approach to (ordinary)
conformal models in which holomorphic factorization is
manifestly realized
was initiated by R.Stora and developed
by several authors
\cite{ls, cb}.
This formalism comes in two versions.
One may formulate the theory on a Riemannian manifold
in which case one has to deal with Weyl rescalings of the
metric and with conformal classes of metrics parametrized
by Beltrami coefficients. Alternatively, one may work
on a Riemann surface in which case one simply deals with
complex structures which are equivalent to conformal
classes of metrics.
This Riemannian surface approach
enjoys the following properties.
{\em Locality} is properly taken into account, {\em holomorphic
factorization} is realized manifestly due to a judicious
choice of variables and the theory is {\em globally defined} on a
compact Riemann surface of arbitrary genus. Furthermore,
the fact of working right away on a
Riemann surface (i.e. with a conformal class of metrics)
renders this approach more
{\em economical} since there is no need for introducing
Weyl rescalings and eliminating these degrees of freedom in the
sequel.
The Riemannian manifold approach \cite{cb} has been generalized to
the $N=1$ supersymmetric case in reference
\cite{bbg} and to the $(2,2)$ and $(2,0)$ supersymmetric cases
in references \cite{ot} and \cite{kl}, respectively.
The Riemann surface approach \cite{ls} has
been extended to the $N=1$ supersymmetric theory in reference \cite{dg}
and was used to prove the
superholomorphic factorization theorem for partition functions on Riemann
surfaces \cite{agn}.
Both of these approaches to superconformal models
are formulated in terms of Beltrami
superfields
(`prepotentials') and their relationship
with the usual (Siegel-Gates like) solution of supergravity constraints
has been discussed in references \cite{dg} and \cite{gg}.
We will come back to this issue in the concluding section where
we also mention further applications. It should be noted
that the generalization to $N=2$ supersymmetry is more subtle
than the one to the $N=1$ theory
due to the appearance of an extra U(1)-symmetry.
Our paper is organized as follows.
We first consider the (2,0) theory since
it allows for simpler
notation and calculations.
Many results for the
$z$-sector of the (2,0) theory have the same form
as those of the $z$-sector of the (2,2) theory
(the corresponding
results for the $\bar z$-sector being obtained by complex conjugation).
After a detailed presentation of the (2,0) theory, we simply
summarize the results for the (2,2) theory. Comparison
of our results with those of other approaches will be made within
the text and in
the concluding section.
\chapter{$N=2$ Superconformal symmetry}
In this chapter, we introduce $N=2$ superconformal transformations
and some related notions
\cite{f, cr, jc, bmg, pcw}.
To keep supersymmetry manifest,
all considerations will be carried out in superspace
\cite{wb, ggrs, pcw, geo}, but the
projection of the results to ordinary space will be outlined
in the end.
\section{Superconformal transformations and SRS's}
\subsubsection{Notation and basic relations}
An $N=2$ super Riemann surface (SRS)
is locally parametrized by coordinates
\begin{equation}
\label{coo}
( {\cal Z} ; {\bar{{\cal Z}}} ) \equiv (z, \theta, \bar{\theta} ;
\bar z , \theta^-, {\bar{\theta}}^- ) \equiv
(x^{++} , \theta^+ , \bar{\theta}^+ ;
x^{--} , \theta^-, {\bar{\theta}}^- )
\ \ ,
\end{equation}
with $z, \bar z$ even and $\theta, \bar{\theta}, \theta^-, {\bar{\theta}}^-$ odd.
The variables are complex and related by complex conjugation (denoted
by $\ast$):
\[
z^{\ast} = \bar z \qquad , \qquad
(\theta^+ )^{\ast} = \theta ^- \qquad , \qquad
(\bar{\theta}^+ )^{\ast} = \bar{\theta} ^-
\ \ .
\]
As indicated in (\ref{coo}),
we will omit the plus-indices of $\theta^+$ and $\bar{\theta}^+$ to simplify
the notation.
The canonical basis of the
tangent space is defined by
$( \partial , \,
D , \,
\bar D ; \,
\bar{\partial} , \,
D_- , \,
\bar D _- )$ with
\begin{eqnarray}
\label{1}
\partial & =& \frac{\partial}{\partial z}
\quad , \quad
D \ = \ \frac{\partial}{\partial \theta} + \frac{1}{2} \, \bar{\theta} \partial
\qquad \ , \quad \
\bar D \ = \ \frac{\partial}{\partial \bar{\theta}} + \frac{1}{2} \, \theta \partial
\\
\bar{\partial} & =& \frac{\partial}{\partial \bar z}
\quad , \quad
D_- \ = \ \frac{\partial}{\partial \theta^-} + \frac{1}{2} \, {\bar{\theta}}^- \bar{\partial}
\ \ , \quad
\bar D_- \ = \ \frac{\partial}{\partial {\bar{\theta}}^-} + \frac{1}{2} \, \theta^- \bar{\partial}
\ \ .
\nonumber
\end{eqnarray}
The graded Lie brackets between these vector fields are given by
\begin{equation}
\label{2}
\{ D , \bar D \} = \partial
\qquad , \qquad
\{ D_- , \bar D_- \} = \bar{\partial}
\ \ \ ,
\end{equation}
all other brackets being zero; in particular,
\begin{equation}
\label{3}
D^2 = 0 = \bar D^2
\qquad , \qquad
( D_- ) ^2 = 0 =
( \bar D_- ) ^2
\ \ .
\end{equation}
For later reference, we note that this set of equations implies
\begin{equation}
[D,\bar D ]^2 = \partial^2
\qquad , \qquad
[D_-,\bar D_- ]^2 = \bar{\partial} ^2
\ \ .
\end{equation}
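These algebraic relations can be checked mechanically. The following minimal sketch (our own encoding, not part of the original derivation) represents a generic $z$-sector superfield $F = a + \theta\alpha + \bar{\theta}\beta + \theta\bar{\theta}\, b$ by its component tuple and implements $D$ and $\bar D$ of (\ref{1}) as operations on the components, using Python/sympy:

```python
import sympy as sp

z = sp.symbols('z')
a, al, be, b = [sp.Function(s)(z) for s in ('a', 'alpha', 'beta', 'b')]

# A z-sector superfield F = a + theta*alpha + thetabar*beta + theta*thetabar*b
# is represented by its component tuple (a, alpha, beta, b).

# D = d/dtheta + (1/2) thetabar d/dz, acting on such a tuple:
def D(F):
    a, al, be, b = F
    return (al, sp.S.Zero, b + sp.diff(a, z)/2, -sp.diff(al, z)/2)

# Dbar = d/dthetabar + (1/2) theta d/dz:
def Dbar(F):
    a, al, be, b = F
    return (be, sp.diff(a, z)/2 - b, sp.S.Zero, sp.diff(be, z)/2)

F = (a, al, be, b)

# D^2 = 0 = Dbar^2, checked componentwise:
assert all(c == 0 for c in D(D(F)))
assert all(c == 0 for c in Dbar(Dbar(F)))

# {D, Dbar} = d/dz, checked componentwise:
anticomm = [sp.simplify(x + y - sp.diff(c, z))
            for x, y, c in zip(D(Dbar(F)), Dbar(D(F)), F)]
print(anticomm)  # -> [0, 0, 0, 0]
```

The $\bar z$-sector relations follow in the same way with $\theta, \bar{\theta}, \partial$ replaced by $\theta^-, \bar{\theta}^-, \bar{\partial}$.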
The cotangent vectors which are dual to the canonical
tangent vectors (\ref{1})
are given by the 1-forms
\begin{eqnarray}
\label{9}
e^z & =& dz + \frac{1}{2} \, \theta d\bar{\theta} +
\frac{1}{2} \, \bar{\theta} d\theta
\qquad \quad , \quad \
e^{\theta} = d \theta
\quad \ \ \ , \quad \
e^{\bar{\theta}}= d \bar{\theta}
\\
e^{\bar z} & = & d \bar z
+ \frac{1}{2} \, \theta^- d{\bar{\theta}}^- +
\frac{1}{2} \, {\bar{\theta}}^- d\theta^-
\ \ , \quad
e^{\theta^-} = d \theta^-
\quad , \quad
e^{{\bar{\theta}}^-}= d {\bar{\theta}}^-
\nonumber
\end{eqnarray}
and the graded commutation relations (\ref{2})(\ref{3}) are
equivalent to the {\em structure equations}
\begin{eqnarray}
\label{10}
0 & = & de^z + e^{\theta} \, e^{\bar{\theta}}
\quad \quad , \quad \quad \
de^{\theta} = 0 =
de^{\bar{\theta}}
\\
0 & = & de^{\bar z} + e^{\theta^-} \, e^{{\bar{\theta}}^-}
\quad , \qquad
de^{\theta^-} = 0 =
de^{{\bar{\theta}}^-}
\ \ .
\nonumber
\end{eqnarray}
\subsubsection{Superconformal transformations}
By definition of the SRS,
any two sets of local coordinates, say $( {\cal Z} ; \bar{\cal Z} )$
and $( {\cal Z}^{\prime} ; \bar{\cal Z}^{\prime} )$,
are related by a
superconformal transformation, i.e.
a mapping for which
$D, \, \bar D$ transform
among themselves and similarly
$D_-, \, \bar D_-$:
\begin{eqnarray}
\label{3f}
D & = & [\,
D \theta^{\prime} \, ] \, D ^{\prime} \, + \,
[ \, D \bar{\theta}^{\prime} \, ] \, \bar D ^{\prime}
\quad , \quad
D_- \ = \ [\,
D_- \theta^{-\prime} \, ] \, D_- ^{\prime} \, + \,
[ \, D_- \bar{\theta}^{-\prime} \, ] \, \bar D_- ^{\prime}
\\
\bar D & = & [\,
\bar D \theta^{\prime} \, ] \, D ^{\prime} \, + \,
[ \, \bar D \bar{\theta}^{\prime} \, ] \, \bar D ^{\prime}
\quad , \quad
\bar D_- \ = \ [\,
\bar D_- \theta^{-\prime} \, ] \, D_- ^{\prime} \, + \,
[ \, \bar D_- \bar{\theta}^{-\prime} \, ] \, \bar D_- ^{\prime}
\ \ .
\nonumber
\end{eqnarray}
These properties are equivalent to the following two conditions :
\noindent (i)
\begin{eqnarray}
\label{4f}
{\cal Z}^{\prime} & = & {\cal Z} ^{\prime} ( {\cal Z} )
\quad \Longleftrightarrow \quad
D_- {\cal Z}^{\prime} = 0 =
\bar D_- {\cal Z}^{\prime}
\\
\bar{\cal Z} ^{\prime} & = & \bar{\cal Z} ^{\prime} (\bar{\cal Z} )
\quad \Longleftrightarrow \quad
D \bar{\cal Z} ^{\prime} = 0 =
\bar D \bar{\cal Z} ^{\prime}
\ \ ,
\nonumber
\end{eqnarray}
\noindent (ii)
\begin{eqnarray}
\label{5f}
D z^{\prime} & = & \frac{1}{2}
\theta ^{\prime} (D \bar{\theta} ^{\prime} ) + \frac{1}{2}
\bar{\theta} ^{\prime} (D \theta ^{\prime} )
\qquad \qquad , \quad
\bar D z^{\prime} \ = \ \frac{1}{2}
\theta ^{\prime} (\bar D \bar{\theta} ^{\prime} ) + \frac{1}{2}
\bar{\theta} ^{\prime} (\bar D \theta ^{\prime} )
\\
D_- \bar z^{\prime} & = & \frac{1}{2}
\theta ^{-\prime} (D_- \bar{\theta} ^{-\prime} ) + \frac{1}{2}
\bar{\theta} ^{-\prime} (D_- \theta ^{-\prime} )
\ \ , \ \
\bar D_- \bar z^{\prime} \ = \ \frac{1}{2}
\theta ^{-\prime} (\bar D_- \bar{\theta} ^{-\prime} ) + \frac{1}{2}
\bar{\theta} ^{-\prime} (\bar D_- \theta ^{-\prime} )
.
\nonumber
\end{eqnarray}
Application of the algebra (\ref{2})(\ref{3}) to eqs.(\ref{5f})
yields a set of integrability conditions,
\begin{eqnarray}
0 & = &
(D \theta^{\prime} \, ) \,
( D \bar{\theta}^{\prime} \, )
\nonumber
\\
0 & = &
(\bar D \bar{\theta}^{\prime} \, ) \,
( \bar D \theta^{\prime} \, )
\label{3h}
\\
0 & = &
(D \theta^{\prime} ) \,
( \bar D\bar{\theta}^{\prime} ) +
(D \bar{\theta}^{\prime} ) \,
( \bar D \theta^{\prime} ) \, - \,
\left[ \, \partial z^{\prime}
+ \frac{1}{2} \, \bar{\theta}^{\prime} \, \partial \theta^{\prime}
+ \frac{1}{2} \, \theta^{\prime} \, \partial \bar{\theta}^{\prime}
\, \right]
\nonumber
\end{eqnarray}
(and similarly for the $\bar z$-sector).
Obviously, there are four ways to satisfy the first two
of these equations. The two solutions $D\theta^{\prime} = 0 =
\bar D \theta^{\prime}$ and
$\bar D \bar{\theta}^{\prime} = 0 =
D\bar{\theta}^{\prime}$
are not acceptable, because they would
imply that the change of coordinates
is non-invertible (the associated Berezinian would vanish).
The third possibility,
$D\theta^{\prime} = 0 =
\bar D \bar{\theta}^{\prime}$
amounts to interchanging the r\^ole of $\theta$ and $\bar{\theta}$, since
it leads to
$D \propto \bar D ^{\prime}$ and
$\bar D \propto D^{\prime}$.
The remaining solution is
\begin{equation}
\label{3i}
D \bar{\theta}^{\prime} \ = \ 0 \ =
\bar D \theta^{\prime}
\ \ \ ,
\end{equation}
which implies that $D$ and $\bar D$ separately transform
into themselves. The resulting transformation laws can be written as
\begin{eqnarray}
D ^{\prime} & = & {\rm e} ^w \ D
\nonumber
\\
\bar D ^{\prime} & = & {\rm e} ^{\bar{w}} \ \bar D
\label{7a}
\\
\partial ^{\prime} & = &
\{ D^{\prime} , \bar D ^{\prime} \} =
{\rm e} ^{w+\bar{w}} \, [ \partial +
( \bar D w ) D +
(D \bar{w} ) \bar D ]
\nonumber
\end{eqnarray}
with
\begin{eqnarray}
\label{8a}
{\rm e}^{-w} & \equiv & D \theta^{\prime}
\ \ \ \ \ , \ \ \ \ \
D w \ = \ 0
\\
{\rm e}^{-\bar{w}} & \equiv & \bar D \bar{\theta}^{\prime}
\ \ \ \ \ , \ \ \ \ \
\bar D \bar{w} \ = \ 0
\ \ \ .
\nonumber
\end{eqnarray}
The last equation in (\ref{3h}) then leads to
\begin{equation}
\label{w}
{\rm e}^{-w-\bar{w}} \ = \
\partial z^{\prime}
\, + \, \frac{1}{2} \, \bar{\theta}^{\prime} \, \partial \theta^{\prime}
\, + \, \frac{1}{2} \, \theta^{\prime} \, \partial \bar{\theta}^{\prime}
\ \ \ .
\end{equation}
In the remainder of the text, {\em superconformal transformations}
are assumed to satisfy conditions (\ref{4f})(\ref{5f}) and
(\ref{3i}). Analogous equations hold in the $\bar z$-sector,
\begin{eqnarray}
\label{8b}
D_- ^{\prime} &=& {\rm e}^{w^-} D_-
\qquad , \qquad
{\rm e}^{-w^-} \equiv D_-\theta^{-\prime}
\qquad , \qquad
D_-w^- =0
\\
\bar D_- ^{\prime} &=& {\rm e}^{\bar{w} ^-} \bar D_-
\qquad , \qquad
{\rm e}^{-\bar{w} ^-} \equiv \bar D_- \bar{\theta}^{-\prime}
\qquad , \qquad
\bar D_- \bar{w} ^- =0
\nonumber
\end{eqnarray}
with the relation
\begin{equation}
\label{ww}
{\rm e}^{-w^--\bar{w} ^-} = \bar{\partial} \bar z^{\prime}
+{1\over 2} \bar{\theta}^{-\prime} \bar{\partial} \theta^{-\prime}
+{1\over 2} \theta^{-\prime} \bar{\partial} \bar{\theta}^{-\prime}
\ \ .
\end{equation}
To conclude our discussion,
we note that
the superconformal transformations
of the canonical 1-forms read
\begin{eqnarray}
\label{12a}
e^{z^{\prime}} & = & {\rm e} ^{-w-\bar{w}} \, e^z
\qquad \qquad \qquad , \qquad
e^{\bar z^{\prime}} \ \ = \ {\rm e} ^{-w^--\bar{w}^-} \, e^{\bar z}
\\
e^{\theta ^{\prime}} & = & {\rm e} ^{-w} \, [ e^{\theta} - e^z
( \bar D w ) ]
\qquad , \qquad
e^{\theta ^{-\prime}} \ = \ {\rm e} ^{-w^-} \, [ e^{\theta^-} - e^{\bar z}
( \bar D_- w^- ) ]
\nonumber
\\
e^{\bar{\theta}^{\prime}} & = & {\rm e} ^{-\bar{w}} \, [ e^{\bar{\theta}} - e^{z}
(D \bar{w} ) ]
\qquad , \qquad
e^{\bar{\theta}^{-\prime}} \ = \ {\rm e} ^{-\bar{w}^-} \, [e^{{\bar{\theta}}^-} - e^{\bar z}
(D_- \bar{w}^- ) ]
\nonumber
\end{eqnarray}
with
$w, \bar{w}$ and $w^-, \bar{w}^-$ given by eqs.(\ref{8a}) and (\ref{8b}),
respectively.
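As a simple illustration (a standard example, spelled out here for convenience), a rigid supertranslation with constant odd parameters $\epsilon , \bar{\epsilon}$,

```latex
z^{\prime} \ = \ z \, + \, \frac{1}{2} \, \theta \bar{\epsilon}
\, + \, \frac{1}{2} \, \bar{\theta} \epsilon
\quad , \quad
\theta^{\prime} \ = \ \theta + \epsilon
\quad , \quad
\bar{\theta}^{\prime} \ = \ \bar{\theta} + \bar{\epsilon}
\ \ ,
```

satisfies all of the conditions (\ref{4f})(\ref{5f}) and (\ref{3i}): indeed,
$D \bar{\theta}^{\prime} = 0 = \bar D \theta^{\prime}$,
$D z^{\prime} = \frac{1}{2} \bar{\epsilon} + \frac{1}{2} \bar{\theta}
= \frac{1}{2} \bar{\theta}^{\prime} (D \theta^{\prime})$,
and ${\rm e}^{-w} = D \theta^{\prime} = 1$, i.e. $w = 0 = \bar{w}$.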
\subsubsection{$U(1)$-symmetry and
complex conjugation}
The $N=2$ supersymmetry algebra
admits a $U(1) \otimes U(1)$ automorphism group.
In the {\em Minkowskian framework},
the latter may be viewed
as $SO(1,1) \otimes SO(1,1)$, in which case the Grassmannian coordinates
$\theta, \bar{\theta}, \theta^- , \bar{\theta}^-$ are all real and independent,
or as $SO(2) \otimes SO(2)$, in which case the
Grassmannian coordinates are complex and related by
$\theta^{\ast} = \bar{\theta}$ and
$(\theta^-)^{\ast} = \bar{\theta}^-$.
\section{Projection to component fields}
A generic $N=2$ superfield admits the $\theta$-expansion
\begin{eqnarray}
\nonumber
F({\cal Z}\, ; \bar{{\cal Z}} ) &=&
a+ \theta \alpha + \bar{\theta} \beta + \theta^- \gamma + {\bar{\theta}}^- \delta \\
\nonumber &&+
\theta\bar{\theta} b+ \theta\theta^- c+\theta{\bar{\theta}}^- d+\bar{\theta}\theta^- e+\bar{\theta}{\bar{\theta}}^- f+\theta^-{\bar{\theta}}^- g \\
\nonumber &&+
\theta \bar{\theta}
\theta^- \epsilon +\theta\bar{\theta}{\bar{\theta}}^- \zeta +\theta\theta^-{\bar{\theta}}^- \eta +\bar{\theta}\theta^-{\bar{\theta}}^- \lambda \\
&&+
\theta\bar{\theta}\theta^-{\bar{\theta}}^- h
\ \ ,
\end{eqnarray}
where the component fields $a,\alpha, \beta,...$ depend on
$z$ and $\bar z$. Equivalently, these space-time fields can be introduced
by means of projection,
\begin{eqnarray}
F \! \mid &=& a
\nonumber
\\
DF \! \mid &=& \alpha \ \ \ \ , \ \ \ \ \bar D F \! \mid =\beta \ \
\ \ , \ \ \ \ D_- F \! \mid \, = \gamma
\ \ \ \ , \ \ \ \ \bar D_- F \! \mid \, = \delta
\nonumber
\\
{[ D,\bar D ]} F \! \mid &=& -2b \ \ \ \ \ \ , \ \ \ \ \ \
DD_- F \! \mid \, =-c \ \ \ \ \ \ , \ \ \ \ \ \ \ D\bar D_- F \!
\mid \, = -d
\nonumber
\\
\bar D D_-F\! \mid &=&
-e \ \ \ \ \ \ \ \, , \ \ \ \ \ \ \bar D \bar D_-F\!
\mid \, =-f
\ \ \ \ \ ,\ \ \ \ \ \ \ [ D_-,\bar D_-] F\! \mid \, = -2g
\nonumber \\
{[ D,\bar D ]} D_- F \! \mid &=& -2 \epsilon
\ \ \ \ \ \ , \ \ \ \ \ \
{[ D,\bar D ]} \bar D_- F\! \mid \, = -2 \zeta
\nonumber \\
D [ D_-,\bar D_-] F \! \mid &=& -2 \eta \ \ \ \ \ \; , \ \ \ \
\bar D [ D_-,\bar D_-] F\! \mid \, = -2 \lambda
\\
{[ D,\bar D ] [ D_-,\bar D_-]} F\! \mid &=& 4h \ \ \ ,
\nonumber
\end{eqnarray}
where the vertical bar denotes the projection onto the lowest component
of the corresponding superfield.
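As a quick consistency check of these sign conventions, restricted to the $z$-sector (i.e. setting $\theta^- = 0 = \bar{\theta}^-$; the component encoding is our own, not part of the original text), one may verify e.g. $[D , \bar D] F \! \mid \, = -2b$ with sympy:

```python
import sympy as sp

z = sp.symbols('z')
a, al, be, b = [sp.Function(s)(z) for s in ('a', 'alpha', 'beta', 'b')]

# z-sector superfield F = a + theta*alpha + thetabar*beta + theta*thetabar*b,
# stored as the component tuple (a, alpha, beta, b); the projection "|"
# keeps the lowest (theta-independent) component.
def D(F):
    a, al, be, b = F
    return (al, sp.S.Zero, b + sp.diff(a, z)/2, -sp.diff(al, z)/2)

def Dbar(F):
    a, al, be, b = F
    return (be, sp.diff(a, z)/2 - b, sp.S.Zero, sp.diff(be, z)/2)

lowest = lambda G: G[0]   # the projection G|

F = (a, al, be, b)
print(lowest(D(F)))       # alpha(z)
print(lowest(Dbar(F)))    # beta(z)
comm = sp.expand(lowest(D(Dbar(F))) - lowest(Dbar(D(F))))
print(comm)               # -2*b(z)
```

The remaining projections involving $D_-, \bar D_-$ follow analogously from the full $N=2$ expansion.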
\chapter{(2,0) Theory}
In this chapter, we discuss
(2,0) SRS's
and super Beltrami differentials.
The projection of superspace results
to ordinary space will be performed in the end.
\section{(2,0) Super Riemann Surfaces}
A $(2,0)$ SRS
is locally parametrized by coordinates
$(z, \bar z , \theta , \bar{\theta} )$, the notation being the same as
for the $N=2$ theory discussed in the previous chapter.
The basic geometric quantities and
relations are obtained from those of the $N=2$ theory by dropping the
terms involving $\theta^-$ and ${\bar{\theta}}^-$. Thus, in the
$z$-sector, one has the same equations as in the $N=2$ case.
For later reference, we now summarize all relations which hold
in the present case in terms of a generic system of coordinates
$(Z, \bar Z , \Theta , \bar{\Theta} )$.
The canonical basis of the tangent space and of the cotangent space
are respectively given by
\begin{equation}
\partial_Z = \frac{\partial}{\partial Z}
\quad , \quad
\partial_{\bar Z} = \frac{\partial}{\partial \bar Z}
\quad , \quad
D_{\Theta} = \frac{\partial}{\partial \Theta} + {1\over 2} \, \bar{\Theta} \partial_Z
\quad , \quad
D_{\bar{\Theta}} = \frac{\partial}{\partial \bar{\Theta}} + {1\over 2} \, \Theta \partial_Z
\end{equation}
and
\begin{equation}
e^Z = dZ + {1\over 2} \, \Theta d\bar{\Theta} + {1\over 2} \, \bar{\Theta} d\Theta
\quad , \quad
e^{\bar Z} = d\bar Z
\quad , \quad
e^{\Theta} = d\Theta
\quad , \quad
e^{\bar{\Theta}} = d\bar{\Theta}
\ \ ,
\label{cota}
\end{equation}
the {\em structure relations} having the form
\begin{equation}
\{D_{\Theta} , D_{\bar{\Theta}} \} = \partial_Z
\qquad , \qquad
(D_{\Theta}) ^2 \ = 0 = ( D_{\bar{\Theta}}) ^2
\qquad , \quad
...
\end{equation}
and
\begin{equation}
\label{strr}
0 = de^Z + e^{\Theta} e^{\bar{\Theta}}
\qquad , \qquad
0 = de^{\bar Z} = de^{\Theta} = de^{\bar{\Theta}}
\ \ .
\end{equation}
A change of coordinates
$(Z, \bar Z , \Theta, \bar{\Theta}) \to
(Z^{\prime} , \bar Z^{\prime} , \Theta^{\prime} , \bar{\Theta}^{\prime})$
is a {\em superconformal transformation} if it
satisfies the conditions
\begin{eqnarray}
Z^{\prime} & = & Z^{\prime}(Z , \Theta , \bar{\Theta})
\quad \Longleftrightarrow \quad
0=
\partial_{\bar Z} Z^{\prime}
\nonumber
\\
\Theta^{\prime} & =& \Theta^{\prime}(Z , \Theta , \bar{\Theta})
\quad \Longleftrightarrow \quad
0= \partial_{\bar Z} \Theta^{\prime}
\label{4}
\\
\bar{\Theta}^{\prime} & = & \bar{\Theta}^{\prime}(Z , \Theta , \bar{\Theta})
\quad \Longleftrightarrow \quad
0= \partial_{\bar Z} \bar{\Theta}^{\prime}
\nonumber
\\
\bar Z^{\prime} & = & \bar Z ^{\prime} (\bar Z )
\quad \quad \ \,
\quad \Longleftrightarrow \quad
0=
D_{\Theta} \bar Z^{\prime} =
D_{\bar{\Theta}} \bar Z^{\prime}
\nonumber
\end{eqnarray}
and
\begin{eqnarray}
\label{5}
D_{\Theta} Z^{\prime} & = & \frac{1}{2} \,
\Theta ^{\prime} (D_{\Theta} \bar{\Theta} ^{\prime} ) + \frac{1}{2} \,
\bar{\Theta} ^{\prime} (D_{\Theta} \Theta ^{\prime} )
\\
D_{\bar{\Theta}} Z^{\prime} & = & \frac{1}{2} \,
\Theta ^{\prime} (D_{\bar{\Theta}} \bar{\Theta} ^{\prime} ) + \frac{1}{2} \,
\bar{\Theta} ^{\prime} (D_{\bar{\Theta}} \Theta ^{\prime} )
\ \ ,
\nonumber
\end{eqnarray}
as well as
\begin{equation}
\label{3c}
D_{\Theta} \bar{\Theta}^{\prime} \ = \ 0 \ =
D_{\bar{\Theta}} \Theta^{\prime}
\ \ .
\end{equation}
The induced change of the canonical tangent and cotangent vectors reads
\begin{eqnarray}
D_{\Theta} ^{\prime} & = & {\rm e} ^W \, D_{\Theta}
\qquad , \qquad
\partial_{Z} ^{\prime} \ = \
{\rm e} ^{W+\bar{W}} \, [ \partial _Z +
(D_{\bar{\Theta}} W ) D_{\Theta} +
(D_{\Theta} \bar{W} ) D_{\bar{\Theta}} ]
\nonumber
\\
D_{\bar{\Theta}} ^{\prime} & = & {\rm e} ^{\bar{W}} \, D_{\bar{\Theta}}
\qquad , \qquad
\partial_{\bar Z}^{\prime} \ = \ (\partial_{\bar Z} \bar Z^{\prime} )^{-1} \,
\partial_{\bar Z}
\label{12}
\end{eqnarray}
and
\begin{eqnarray}
e^{Z^{\prime}} & = & {\rm e} ^{-W-\bar{W}} \, e^Z
\qquad \, , \qquad
e^{\Theta ^{\prime}} \ = \ {\rm e} ^{-W} \, [ e^{\Theta} - e^Z \,
(D_{\bar{\Theta}} W ) ]
\nonumber
\\
e^{\bar Z^{\prime}} & = &
(\partial_{\bar Z} \bar Z^{\prime} ) \, e^{\bar Z}
\qquad \ , \qquad\,
e^{\bar{\Theta} ^{\prime}} \ = \ {\rm e} ^{-\bar{W}} \, [ e^{\bar{\Theta}} - e^Z \,
(D_{\Theta} \bar{W} ) ]
\label{12m}
\end{eqnarray}
with
\begin{eqnarray}
\label{8}
{\rm e}^{-W} & \equiv & D_{\Theta} \Theta^{\prime}
\ \ \ \ \ , \ \ \ \ \
D_{\Theta} W \, = \, 0
\\
{\rm e}^{-\bar{W}} & \equiv & D_{\bar{\Theta}} \bar{\Theta}^{\prime}
\ \ \ \ \ , \ \ \ \ \
D_{\bar{\Theta}} \bar{W} \, = \, 0
\nonumber
\end{eqnarray}
and
\begin{equation}
{\rm e}^{-W-\bar{W}} =
\partial_Z Z^{\prime}
+ \frac{1}{2} \, \bar{\Theta}^{\prime} \partial_Z \Theta^{\prime}
+ \frac{1}{2} \, \Theta^{\prime} \partial_Z \bar{\Theta}^{\prime}
\ \ .
\end{equation}
In the Euclidean framework, $\Theta$ and $\bar{\Theta}$ are independent
complex variables and the action functional will also represent a
complex quantity. In the Minkowskian setting,
one either deals with real independent coordinates $\Theta$ and $\bar{\Theta}$
($SO(1,1)$ automorphism group) or with complex conjugate
variables $\Theta$ and $\Theta^{\ast} = \bar{\Theta}$ ($SO(2)$ automorphism group).
\section{Beltrami superfields and U(1)-symmetry}
Beltrami (super)fields parametrize (super)conformal structures
with respect to a given (super)conformal structure.
Thus, we start from a reference complex structure corresponding
to a certain choice of local coordinates
$(z , \bar z , \theta , \bar{\theta} )$ for which we denote the canonical tangent
vectors by
\[
\partial = \frac{\partial}{\partial z} \ \ \ , \ \ \
\bar{\partial} = \frac{\partial}{\partial \bar z} \ \ \ , \ \ \
D
\equiv D_{\theta} = \frac{\partial}{\partial \theta} + \frac{1}{2} \, \bar{\theta} \partial
\ \ \ , \ \ \
\bar D \equiv D_{\bar{\theta}} = \frac{\partial}{ \partial \bar{\theta}} + \frac{1}{2} \, \theta
\partial
\ \ .
\]
Then, we pass over to an arbitrary complex structure (corresponding
to local coordinates
$(Z , \bar Z , \Theta , \bar{\Theta} )$) by a smooth change of coordinates
\begin{equation}
\label{13}
(z , \bar z , \theta , \bar{\theta})
\longrightarrow
\left( Z
(z , \bar z , \theta , \bar{\theta}) ,
\bar Z
(z , \bar z , \theta , \bar{\theta}) ,
\Theta
(z , \bar z , \theta , \bar{\theta}) ,
\bar{\Theta}
(z , \bar z , \theta , \bar{\theta}) \right)
\ \ .
\end{equation}
To simplify the notation, we label the small coordinates
by small indices $a,\, b $, e.g.
$(e^a ) = ( e^z , e^{\bar z} , e^{\theta} , e^{\bar{\theta}} ) ,\
(D_a ) = ( \partial , \bar{\partial} , D , \bar D )$
and the capital
coordinates by capital indices $A,\, B$.
The transformation of the canonical 1-forms induced by the change
of coordinates (\ref{13}) reads
\[
e^B \ = \ \sum_{a=z,\bar z ,\theta , \bar{\theta}} e^a \, E_a ^{\ B}
\ \ \ \ \ \ \ {\rm for} \ \ \ B \, = \, Z , \bar Z , \Theta , \bar{\Theta}
\ \ \ .
\]
Here, the $E_a ^{\ B}$ are superfields whose explicit form is easy to
determine from the expressions (\ref{cota}) and $d=e^a D_a$:
for $a=z,\bar z,\theta,\bar{\theta}$, one finds
\begin{eqnarray}
E_a ^{\ Z} & = & D_a Z \, - \, \frac{1}{2} \, (D_a \Theta) \bar{\Theta}
\, - \, \frac{1}{2} \, (D_a \bar{\Theta}) \Theta
\label{13a}
\\
E_a ^{\ \Theta} & = & D_a \Theta \quad , \quad
E_a ^{\ \bar{\Theta}} \ = \ D_a \bar{\Theta}\quad , \quad
E_a ^{\ \bar Z} \ = \ D_a \bar Z
\ \ .
\nonumber
\end{eqnarray}
Since $e^Z$ and $e^{\bar Z}$ transform homogeneously under the
superconformal transformations (\ref{4})-(\ref{3c}), one can extract
from them some Beltrami variables
$H_a^{\ b}$ which are inert under these transformations: to do so, we
factorize $E_z ^{\ Z}$ and $E_{\bar z} ^{\ \bar Z}$ in
$e^Z$ and $e^{\bar Z}$, respectively :
\begin{equation}
\label{14}
e^Z = [ \, e^z \, + \sum_{a\neq z}
e^{a} \, H_{a} ^{\ z} \, ]
\, E_z ^{\ Z}
\quad , \quad
e^{\bar Z} = [ \, e^{\bar z} \, + \sum_{a\neq \bar z}
e^{a} \, H_{a} ^{\ \bar z} \, ]
\, E_{\bar z} ^{\ \bar Z}
\end{equation}
with
\begin{equation}
\label{15}
H_a ^{\ z} \equiv \frac{E_a ^{\ Z}}{E_z ^{\ Z}} \ \ \ \
{\rm for} \ a = \bar z , \theta , \bar{\theta}
\quad {\rm and} \quad
H_a ^{\ \bar z} \equiv \frac{E_a ^{\ \bar Z}}{E_{\bar z} ^{\ \bar Z}} \ \ \ \
{\rm for} \ a = z , \theta , \bar{\theta}
\ .
\end{equation}
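For orientation, we note (a sketch of the purely bosonic reduction, with all odd coordinates and their differentials set to zero) that the first of eqs.(\ref{14}) then collapses to the familiar Beltrami parametrization of a complex structure,

```latex
dZ \ = \ \left( \, dz \, + \, \mu \, d\bar z \, \right) \, \partial_z Z
\qquad {\rm with} \qquad
\mu \ \equiv \ H_{\bar z} ^{\ z} \ = \
\frac{\partial_{\bar z} Z}{\partial_z Z}
\ \ ,
```

so that the superfields $H_a ^{\ z}$ generalize the ordinary Beltrami coefficient $\mu$.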
By construction,
$E_a ^{\ Z}$ and $E_a ^{\ \bar Z}$ vary homogeneously under the
transformations (\ref{4})-(\ref{3c}), in particular
\[
E_z ^{\ Z^{\prime}} \ = \ {\rm e}^{-W-\bar{W}} \ E_z ^{\ Z}
\ \ \ .
\]
This transformation law
and the index structure of $E_z^{\ Z}$ suggest decomposing
this complex variable as
\begin{equation}
\label{16}
E_z ^{\ Z} \ \equiv \ \Lambda_{\theta} ^{\ \Theta} \; \bar{\LA}_{\bar{\theta}} ^{\ \bar{\Theta}}
\ \equiv \ \Lambda \; \bar{\LA}
\end{equation}
with $\Lambda , \bar{\LA}$ transforming according to
\begin{equation}
\label{17}
\Lambda ^{\Theta^{\prime}}\ = \ {\rm e}^{-W} \
\Lambda ^{\Theta}
\ \ \ \ \ , \ \ \ \ \
\bar{\LA} ^{\bar{\Theta}^{\prime}}\ = \ {\rm e}^{-\bar{W}} \
\bar{\LA} ^{\bar{\Theta}}
\ \ \ .
\end{equation}
Then, we can use $\Lambda$ and $\bar{\LA}$ to extract Beltrami coefficients
from
$e^{\Theta}$ and $e^{\bar{\Theta}}$, respectively, in analogy to $N=1$ supersymmetry
\cite{dg} :
\begin{equation}
H_a ^{\ \theta} \ = \ \frac{1}{\Lambda} \ [ \, E_a ^{\ \Theta} \; - \;
H_a ^{\ z} \, E_z ^{\ \Theta} \, ]
\ \ \ \ , \ \ \ \
H_a ^{\ \bar{\theta}} \ = \ \frac{1}{\bar{\LA}} \ [ \, E_a ^{\ \bar{\Theta}} \; - \;
H_a ^{\ z} \, E_z ^{\ \bar{\Theta}} \, ]
\ \ \ \ \ {\rm for} \ \ a \, = \, \bar z , \theta , \bar{\theta}
\ \ .
\label{ana}
\end{equation}
The {\em final result} is best summarized in matrix form,
\begin{equation}
\label{18}
\left( \ e^Z \ ,\ e^{\bar{Z}} \ ,\ e^{\Theta} \ , \ e^{\bar{\Theta}} \ \right) \ =\
\left( \ e^z \ ,\ e^{\bar{z}} \ ,\ e^{\theta} \ ,\ e^{\bar{\theta}} \ \right) \ \cdot
M \cdot Q
\end{equation}
with
\begin{equation}
\label{19}
M \ = \ \left( \begin{array}{clccr}
1 & {H_z}^{\bar z} & 0 & 0 \\
{H_{\bar z}}^z & 1 & {H_{\bar z}}^{\theta} & H_{\bar z} ^{\ \bar{\theta}} \\
{H_{\th}}^z & {H_{\theta}}^{\bar z}&{H_{\theta}}^{\theta}& {H_{\theta}}^{\bar{\theta}}
\\
{H_{\tb}}^z & {H_{\bar{\theta}}}^{\bar z}&{H_{\bar{\theta}}}^{\theta}& {H_{\bar{\theta}}}^{\bar{\theta}}
\end{array} \right)
\ \ \ \ \ , \ \ \ \ \
Q \ = \ \left( \begin{array}{clccr}
\Lambda \bar{\LA} & 0 & \tau & \bar{\tau} \\
0 & \Omega & 0 & 0 \\
0 & 0 & \Lambda & 0 \\
0 & 0 & 0 & \bar{\LA}
\end{array} \right)
\end{equation}
where
\begin{equation}
\label{20}
\Omega \equiv \Omega_{\bar z} ^{\ \bar Z}
\equiv E_{\bar z} ^{\ \bar Z}
\ \ \ \ , \ \ \ \
\tau \equiv
\tau_{z} ^{\ \Theta} \equiv
E_{z} ^{\ \Theta}
\ \ \ \ , \ \ \ \
\bar{\tau} \equiv
\bar{\tau}_{z} ^{\ \bar{\Theta}} \equiv
E_{z} ^{\ \bar{\Theta}}
\ \ \ .
\end{equation}
All the `$H$' are invariant under the superconformal transformations
(\ref{4})-(\ref{3c}). Under the latter, the factors $\Lambda, \bar{\LA}$ change
according to
eqs.(\ref{17}) while
$\Omega$ and $\tau , \bar{\tau}$ vary according to
$\Omega^{\bar Z^{\prime}} = \Omega^{\bar Z} \partial \bar Z^{\prime} / \partial \bar Z$ and
\begin{eqnarray}
\label{21}
\tau^{\Theta ^{\prime}} & = & {\rm e} ^{-W} \ [ \, \tau ^{\Theta} \ - \
\Lambda ^{\Theta} \, \bar{\LA}^{\bar{\Theta}} \ (D_{\bar{\Theta}} W ) \,]
\\
\bar{\tau}^{\bar{\Theta} ^{\prime}} & = & {\rm e} ^{-\bar{W}} \ [ \, \bar{\tau} ^{\bar{\Theta}} \ - \
\Lambda ^{\Theta} \, \bar{\LA}^{\bar{\Theta}} \ (D_{\Theta} \bar{W} ) \,]
\ \ \ .
\nonumber
\end{eqnarray}
Obviously, the decomposition (\ref{16}) has introduced a U(1)-symmetry
which leaves $e^Z , e^{\bar Z} , e^{\Theta} , e^{\bar{\Theta}}$ invariant and
which is given by
\begin{eqnarray}
\label{22}
\Lambda ^{\prime} & = & {\rm e} ^K \ \Lambda
\ \ \ \ \ \ \ \ \ , \ \ \ \ \ \ \ \ \ \ \ \
\bar{\LA} ^{\prime} \ = \ {\rm e} ^{-K} \ \bar{\LA}
\\
(H_a ^{\ \bar{\theta}} )^{\prime} & = &
{\rm e}^{K} \
H_a ^{\ \bar{\theta}}
\ \ \ \ \ \ , \ \ \ \ \ \ \
(H_a ^{\ \theta} )^{\prime} \ = \
{\rm e}^{-K} \
H_a ^{\ \theta}
\ \ \ \ \ {\rm for} \ \ a \, = \, \bar z , \theta , \bar{\theta}
\ \ ,
\nonumber
\end{eqnarray}
where $K$ is an unconstrained superfield. In the sequel, we will
encounter this symmetry in other places and forms.
Besides the transformations we have considered so far, there are the
superconformal variations of the small coordinates under which the
basis 1-forms change according to
\begin{eqnarray}
\label{23}
e^{z^{\prime}} & = & {\rm e} ^{-w-\bar{w}} \ e^z
\qquad , \qquad
e^{\theta ^{\prime}} \ = \ {\rm e} ^{-w} \ [ \, e^{\theta} \ - \ e^z \
(\bar D w ) \,]
\\
e^{\bar z^{\prime}} & = &
e^{\bar z} \ \bar{\partial} \bar z^{\prime}
\ \qquad \quad , \quad\quad
e^{\bar{\theta} ^{\prime}} \ =\ {\rm e} ^{-\bar{w}} \ [ \, e^{\bar{\theta}} \ - \ e^z \
(D \bar{w} ) \,]
\nonumber
\end{eqnarray}
with $D w \, = \, 0 \, = \, \bar D \bar{w}$.
The determination of the induced transformations of the `$H$' and of
$\Lambda, \bar{\LA} , \Omega , \tau , \bar{\tau}$ is straightforward
and we only present the results to which we will refer later on.
In terms of the quantity
\[
Y = 1 + (\bar D w) \, {H_{\th}}^z + (D \bar{w}) {H_{\tb}}^z
\ \ ,
\]
the combined
superconformal and $U(1)$ transformation laws have the form
\begin{eqnarray}
\Lambda ^{\prime} & = & {\rm e} ^{K} \,
{\rm e} ^{w} \, Y^{1/2} \, \Lambda
\qquad , \qquad
\bar{\Lambda} ^{\prime} \ = \ {\rm e} ^{-K} \,
{\rm e} ^{\bar w} \, Y^{1/2} \, \bar{\Lambda}
\qquad , \qquad
\Omega ^{\prime} \ = \ ( \bar{\partial} \bar z ^{\prime} )^{-1} \, \Omega
\nonumber \\
H_{\theta^{\prime}} ^{\ \, z^{\prime}} & = & {\rm e}^{- \bar w} \,
Y^{-1} \, {H_{\th}}^z
\qquad , \qquad
H_{\bar{\theta}^{\prime}} ^{\ \, z^{\prime}} \ = \ {\rm e}^{-w} \,
Y^{-1} \, {H_{\tb}}^z
\nonumber
\\
H_{\theta^{\prime}} ^{\ \, \bar{\theta}^{\prime}} & = & {\rm e}^{+K} \,
{\rm e}^{+w- \bar w} \,
Y^{-1/2} \, \left\{ H_{\theta} ^{\ \bar{\theta}} + Y^{-1}
[ \, (\bar D w)\, H_{\theta}^{\ \bar{\theta}} + (D\bar w ) H_{\bar{\theta}}^{\ \bar{\theta}}
] {H_{\th}}^z \right\}
\nonumber \\
H_{\bar{\theta}^{\prime}} ^{\ \, \theta^{\prime}} & = & {\rm e}^{-K} \,
{\rm e}^{-w + \bar w} \,
Y^{-1/2} \, \left\{ H_{\bar{\theta}} ^{\ \theta} + Y^{-1}
[ \, (D \bar w)\, H_{\bar{\theta}}^{\ \theta} + (\bar D w ) H_{\theta}^{\ \theta}
] {H_{\tb}}^z \right\}
\nonumber \\
H_{\bar{\theta}^{\prime}} ^{\ \, \bar{\theta}^{\prime}} & = &
{\rm e}^{+K} \,
Y^{-1/2} \, \left\{ H_{\bar{\theta}} ^{\ \bar{\theta}} + Y^{-1}
[ \, (D \bar w)\, H_{\bar{\theta}}^{\ \bar{\theta}} + (\bar D w ) H_{\theta}^{\ \bar{\theta}}
] {H_{\tb}}^z \right\}
\nonumber \\
H_{\theta^{\prime}} ^{\ \, \theta^{\prime}} & = & {\rm e}^{-K} \,
Y^{-1/2} \, \left\{ H_{\theta}^{\ \theta} + Y^{-1}
[ \, (\bar D w)\, H_{\theta}^{\ \theta} + (D\bar w ) H_{\bar{\theta}}^{\ \theta}
] {H_{\th}}^z \right\}
\nonumber \\
H_{\bar z^{\prime}} ^{\ \, z^{\prime}} & = & {\rm e}^{-w-\bar{w}}
\, ( \bar{\partial} \bar z ^{\prime} )^{-1} \, Y^{-1} \, H_{\zb} ^{\ z}
\label{24}
\\
H_{\theta^{\prime}} ^{\ \, \bar z^{\prime}} & = & {\rm e}^{w}
\, ( \bar{\partial} \bar z ^{\prime} ) \, H_{\theta}^{\ \bar z}
\qquad , \qquad
H_{\bar{\theta}^{\prime}} ^{\ \, \bar z^{\prime}} \ = \ {\rm e}^{\bar w}
\, ( \bar{\partial} \bar z ^{\prime} ) \, H_{\bar{\theta}}^{\ \bar z}
\nonumber \\
H_{z^{\prime}} ^{\ \, \bar z^{\prime}} & = & {\rm e}^{w + \bar{w}}
\, ( \bar{\partial} \bar z ^{\prime} ) \left[ H_z^{\ \bar z} +
(\bar D w) H_{\theta} ^{\ \bar z} +
(D \bar w) H_{\bar{\theta}} ^{\ \bar z} \right]
\ \ .
\nonumber
\end{eqnarray}
The given variations of $\Lambda, \bar{\LA}$
and $H_a^{\ \theta} , H_a ^{\ \bar{\theta}}$ result from a symmetric splitting
of the transformation law
\[
(\Lambda \bar{\LA} )^{\prime} = {\rm e}^{w+ \bar w} Y (\Lambda \bar{\LA} )
\ \ .
\]
The ambiguity involved in this decomposition is precisely
the $U(1)$-symmetry (\ref{22}):
\[
\Lambda ^{\prime} = {\rm e}^K {\rm e}^w Y^{1/2} \Lambda
\quad , \quad
\bar{\LA} ^{\prime} = {\rm e}^{-K} {\rm e}^{\bar w} Y^{1/2} \bar{\LA}
\ \ .
\]
Due to the structure relations (\ref{strr}), not all of the
{\em super Beltrami coefficients} $H_a ^{\ b}$ and of the
{\em integrating factors}
$\Lambda, \bar{\LA} , \Omega , \tau , \bar{\tau}$ are independent variables.
For instance, the structure relation
$0 = d e^{\bar Z}$ is equivalent to the set of equations
\begin{eqnarray}
0 & = & ( \, D_a \, - \, H_{a} ^{\ \bar z} \, \bar{\partial} \, - \, \bar{\partial}
H_a ^{\ \bar z} \, ) \, \Omega
\ \ \ \ \ \ \ \ \ \ \ \ {\rm for } \ \ \ a = z , \theta , \bar{\theta}
\nonumber
\\
0 & = & D_a (H_z ^{\ \bar z} \Omega ) \ - \ \partial ( H_a ^{\ \bar z } \Omega )
\ \ \ \ \
\ \ \ \ \ \ \ \ \ \ {\rm for } \ \ \ a = \theta , \bar{\theta}
\nonumber
\\
0 & = & D (H_{\theta} ^{\ \bar z} \Omega )
\nonumber
\\
0 & = & \bar D (H_{\bar{\theta}} ^{\ \bar z} \Omega )
\label{app}
\\
0 & = & \bar D (H_{\theta} ^{\ \bar z} \Omega ) \ + \
D (H_{\bar{\theta}} ^{\ \bar z} \Omega ) \ - \ H_z ^{ \ \bar z} \Omega
\ \ .
\nonumber
\end{eqnarray}
The last equation can be solved for $H_z ^{\ \bar z}$ and the two
equations
preceding it provide constraints for the fields $H_{\theta} ^{\ \bar z},
\, H_{\bar{\theta}} ^{\ \bar z}$.
In summary, by solving all of the resulting
equations, which are algebraic, we
find the following result. In the
$\bar z$-sector, there is
one integrating factor
($\Omega$) and two independent
Beltrami superfields ($H_{\theta} ^{\ \bar z}$ and $H_{\bar{\theta}} ^{\ \bar z} $),
each of which satisfies a constraint that halves the number of its
independent component fields.
In section 3.9,
the constraints on $H_{\theta}^{\ \bar z}$ and $H_{\bar{\theta}}^{\ \bar z}$
will be explicitly solved in terms of `prepotential' superfields
$H^{\bar z}$ and $\hat{H} ^{\bar z}$.
In the
$z$-sector, there are two
integrating factors
($\Lambda, \, \bar{\LA}$) and four independent and unconstrained Beltrami variables
($H_{\bar z} ^{\ z}, \, {H_{\th}}^z , \, {H_{\tb}}^z $ and a non-U(1)-invariant
combination of
$H_{\theta} ^{\ \theta} , \,
H_{\bar{\theta}} ^{\ \bar{\theta}}$, e.g.
$H_{\theta} ^{\ \theta} /
H_{\bar{\theta}} ^{\ \bar{\theta}}$).
The dependent Beltrami fields depend only on the other Beltrami
fields and {\em not} on the integrating factors.
This is an important point, since
the integrating factors represent non-local
functionals of the `$H$' by virtue of the differential
equations that they satisfy (see below).
To be more explicit,
in the $z$-sector, one finds
\begin{eqnarray}
H_{\bar{\theta}} ^{\ \theta} H_{\bar{\theta}} ^{\ \bar{\theta}}
& = & -\,
(\bar D - {H_{\tb}}^z \partial )
{H_{\tb}}^z
\ \ \ \ \ , \ \ \ \ \
H_{\theta} ^{\ \bar{\theta}} H_{\th} ^{\ \th}
\ = \ -\, (D - {H_{\th}}^z \partial )
{H_{\th}}^z
\nonumber
\\
H_{\th} ^{\ \th} H_{\bar{\theta}} ^{\ \bar{\theta}} \, + \, H_{\bar{\theta}} ^{\ \theta} H_{\theta} ^{\ \bar{\theta}}
& = & 1\, - \,
( \bar D - {H_{\tb}}^z \partial ) {H_{\th}}^z \, - \,
( D - {H_{\th}}^z \partial ) {H_{\tb}}^z
\nonumber
\\
H_{\zb} ^{\ \th} H_{\theta} ^{\ \bar{\theta}} +
H_{\bar z}^{\ \bar{\theta}} H_{\th} ^{\ \th}
& = &
( D - {H_{\th}}^z \partial ) H_{\zb} ^{\ z} \, - \,
( \bar{\partial} - H_{\zb} ^{\ z} \partial ) {H_{\th}}^z
\label{26}
\\
H_{\zb} ^{\ \th} H_{\bar{\theta}} ^{\ \bar{\theta}} +
H_{\bar z}^{\ \bar{\theta}} H_{\tb} ^{\ \th}
& = &
( \bar D - {H_{\tb}}^z \partial ) H_{\zb} ^{\ z} \, - \,
( \bar{\partial} - H_{\zb} ^{\ z} \partial ) {H_{\tb}}^z
\nonumber
\end{eqnarray}
and
\begin{eqnarray}
\label{tau}
\tau & = &
( H_{\th} ^{\ \th} H_{\bar{\theta}} ^{\ \bar{\theta}} + H_{\bar{\theta}} ^{\ \theta} H_{\theta} ^{\ \bar{\theta}}
)^{-1} \left[ ( \bar D - {H_{\tb}}^z \partial )
( H_{\th} ^{\ \th} \Lambda ) +
( D - {H_{\th}}^z \partial )
( H_{\bar{\theta}} ^{\ \theta} \Lambda ) \right]
\quad
\\
\bar{\tau} & = &
( H_{\th} ^{\ \th} H_{\bar{\theta}} ^{\ \bar{\theta}} + H_{\bar{\theta}} ^{\ \theta} H_{\theta} ^{\ \bar{\theta}}
)^{-1} \left[ ( D - {H_{\th}}^z \partial )
( H_{\bar{\theta}} ^{\ \bar{\theta}} \bar{\LA} ) +
( \bar D - {H_{\tb}}^z \partial )
( H_{\theta} ^{\ \bar{\theta}} \bar{\LA} ) \right] .
\nonumber
\end{eqnarray}
The determination of the independent fields in the set of equations
(\ref{26}) is best done by linearizing the variables according
to
$H_{\theta} ^{\ \theta} = 1+ h_{\theta} ^{\ \theta},
H_{\bar{\theta}} ^{\ \bar{\theta}} = 1+ h_{\bar{\theta}} ^{\ \bar{\theta}}$ and
$H_a ^{\ b} = h_a ^{\ b}$ otherwise. The conclusion is the one
summarized above.
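For concreteness, the linearized relations read (to first order in the $h$'s; we spell this out here, it is not written down explicitly above)

```latex
h_{\bar{\theta}} ^{\ \theta} \ = \ - \, \bar D \, h_{\bar{\theta}} ^{\ z}
\quad , \quad
h_{\theta} ^{\ \bar{\theta}} \ = \ - \, D \, h_{\theta} ^{\ z}
\quad , \quad
h_{\theta} ^{\ \theta} \, + \, h_{\bar{\theta}} ^{\ \bar{\theta}}
\ = \ - \, \bar D \, h_{\theta} ^{\ z} \, - \, D \, h_{\bar{\theta}} ^{\ z}
\\
h_{\bar z} ^{\ \bar{\theta}} \ = \ D \, h_{\bar z} ^{\ z}
\, - \, \bar{\partial} \, h_{\theta} ^{\ z}
\quad , \quad
h_{\bar z} ^{\ \theta} \ = \ \bar D \, h_{\bar z} ^{\ z}
\, - \, \bar{\partial} \, h_{\bar{\theta}} ^{\ z}
\ \ ,
```

which exhibits $h_{\bar z} ^{\ z} , h_{\theta} ^{\ z} , h_{\bar{\theta}} ^{\ z}$ and the difference $h_{\theta} ^{\ \theta} - h_{\bar{\theta}} ^{\ \bar{\theta}}$ as the independent variables.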
Let us complete our discussion
of the $z$-sector. The first
of the structure relations (\ref{strr}) yields, amongst others,
the following differential equation:
\begin{equation}
\label{26a}
0 \, = \, (\, D_a - H_a ^{\ z} \partial \, ) \, (\Lambda \bar{\LA} )
\, - \, (\partial H_a ^{\ z})\,
\Lambda \bar{\LA} \, - \, H_a ^{\ \bar{\theta}} \, \tau \, \bar{\LA} \, - \, H_a ^{\ \theta} \,
\Lambda \, \bar{\tau}
\ \ \ \ \ \ {\rm for} \ \ a \, = \, \bar z , \theta , \bar{\theta}
.
\end{equation}
We note that this equation also holds for $a=z$ if we
write the generic
elements of the Beltrami matrix $M$ of equation (\ref{19})
as $H_a^{\ b}$ so that
$H_z^{\ z} =1$ and
$H_z^{\ \theta} = 0 = H_z^{\ \bar{\theta}}$. The previous
relation can be decomposed in a symmetric way with respect to
$\Lambda$ and $\bar{\LA}$ which leads to the {\em integrating factor
equations} (IFEQ's)
\begin{eqnarray}
0 & = &
(\, D_a - H_a ^{\ z} \partial - \frac{1}{2} \, \partial H_a ^{\ z}
- V_a ) \, \Lambda
\, - \, H_a ^{\ \bar{\theta}} \, \tau
\nonumber
\\
0 & =& (\, D_a - H_a ^{\ z}
\partial - \frac{1}{2} \, \partial H_a ^{\ z}
+ V_a ) \, \bar{\LA}
\, - \, H_a ^{\ \theta} \, \bar{\tau}
\ \ .
\label{27}
\end{eqnarray}
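Conversely, this decomposition can be checked in one line: multiplying the first of the IFEQ's by $\bar{\LA}$, the second by $\Lambda$ and adding the results, the $V_a$-terms cancel (since $\Lambda , \bar{\LA}$ are Grassmann-even) and one recovers eq.(\ref{26a}),

```latex
\begin{eqnarray*}
0 & = &
\left[ \, (\, D_a - H_a ^{\ z} \partial - \frac{1}{2} \, \partial H_a ^{\ z}
- V_a ) \, \Lambda \, - \, H_a ^{\ \bar{\theta}} \, \tau \, \right] \bar{\LA}
\ + \
\Lambda \left[ \, (\, D_a - H_a ^{\ z} \partial - \frac{1}{2} \,
\partial H_a ^{\ z} + V_a ) \, \bar{\LA} \, - \, H_a ^{\ \theta} \, \bar{\tau} \, \right]
\\
& = &
(\, D_a - H_a ^{\ z} \partial \, ) \, (\Lambda \bar{\LA} )
\, - \, (\partial H_a ^{\ z})\, \Lambda \bar{\LA}
\, - \, H_a ^{\ \bar{\theta}} \, \tau \, \bar{\LA}
\, - \, H_a ^{\ \theta} \, \Lambda \, \bar{\tau}
\ \ ,
\end{eqnarray*}
```

where only the Leibniz rule $D_a (\Lambda \bar{\LA}) = (D_a \Lambda) \bar{\LA} + \Lambda \, (D_a \bar{\LA})$ has been used.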
The latter decomposition introduces a
vector field $V_a$ (with $V_z =0$)
which is to be interpreted
as a connection for the U(1)-symmetry
due to its transformation law under
U(1)-transformations (see next section).
It should be noted that
$V_a$ is not an independent variable; rather, it is determined
in terms of the `$H$' by the structure
equations:
\begin{eqnarray}
V_{\theta} & = &
\frac{-1}{H_{\theta} ^{\ \theta}} \ [ D - {H_{\th}}^z \partial
\, + \, \frac{1}{2} \, (\partial {H_{\th}}^z ) ] \, H_{\theta} ^{\ \theta}
\nonumber
\\
V_{\bar{\theta}} & = &
\frac{1}{H_{\bar{\theta}} ^{\ \bar{\theta}}} \ [ \bar D - {H_{\tb}}^z \partial
\, + \, \frac{1}{2} \, (\partial {H_{\tb}}^z ) ] \, H_{\bar{\theta}} ^{\ \bar{\theta}}
\label{28}
\\
V_{\bar z} & = &
\frac{1}{H_{\theta} ^{\ \theta}} \, \left\{ [ D - {H_{\th}}^z \partial
+ \frac{1}{2} \, (\partial {H_{\th}}^z ) + V_{\theta} ] \, H_{\zb} ^{\ \th} -
[ \bar{\partial} - H_{\zb} ^{\ z} \partial + \frac{1}{2} (\partial H_{\zb} ^{\ z} ) ] \, H_{\th} ^{\ \th}
\right\}
\nonumber
\\
& = &
\frac{-1}{H_{\bar{\theta}} ^{\ \bar{\theta}}} \, \left\{ [ \bar D - {H_{\tb}}^z \partial
+ \frac{1}{2} \, (\partial {H_{\tb}}^z ) - V_{\bar{\theta}} ] \, H_{\zb}^{\ \tb} -
[ \bar{\partial} - H_{\zb} ^{\ z} \partial + \frac{1}{2} (\partial H_{\zb} ^{\ z} ) ] \, H_{\tb}^{\ \tb}
\right\}
.
\nonumber
\end{eqnarray}
By virtue of the relations between the `$H$', the previous
expressions can be rewritten in various other ways, for
instance
\begin{eqnarray}
\label{rew}
- H_{\bar{\theta}} ^{\ \theta} \, V_{\bar{\theta}} & = & [ \bar D - {H_{\tb}}^z \partial
\, + \, \frac{1}{2} \, (\partial {H_{\tb}}^z ) ] \, H_{\bar{\theta}} ^{\ \theta}
\\
H_{\theta} ^{\ \bar{\theta}} \, V_{\theta} & = & [ D - {H_{\th}}^z \partial
\, + \, \frac{1}{2} \, (\partial {H_{\th}}^z ) ] \, H_{\theta} ^{\ \bar{\theta}}
\ \ .
\nonumber
\end{eqnarray}
This finishes our discussion of the $z$-sector.
In the $\bar z$-sector, we have
\begin{equation}
\label{26b}
H_z ^{\ \bar z} \, = \, ( \bar D \, - \, H_{\bar{\theta}} ^{\ \bar z} \bar{\partial} )
H_{\theta} ^{\ \bar z}
\, + \, ( D \, - \, H_{\theta} ^{\ \bar z} \bar{\partial} ) H_{\bar{\theta}} ^{\ \bar z}
\ \ ,
\end{equation}
where $H_{\theta}^{\ \bar z}$ and
$H_{\bar{\theta}}^{\ \bar z}$ satisfy the covariant chirality conditions
\begin{equation}
\label{26c}
( \, D - H_{\theta} ^{\ \bar z} \bar{\partial} \, ) \, H_{\theta}^{\ \bar z}
\ = \ 0 \ =\
(\, \bar D - H_{\bar{\theta}} ^{\ \bar z} \bar{\partial} \, )\, H_{\bar{\theta}}^{\ \bar z}
\ \ .
\nonumber
\end{equation}
The first condition simply relates the component fields of
$H_{\theta}^{\ \bar z}$ among themselves and the second those of
$H_{\bar{\theta}}^{\ \bar z}$. Thereby, each of these superfields contains one
independent bosonic and one independent fermionic space-time component.
The factor $\Omega$ satisfies the IFEQ's
\begin{equation}
\label{28a}
0 \ = \ ( \, D_a \, - \, H_{a} ^{\ \bar z} \, \bar{\partial} \, - \, \bar{\partial}
H_a ^{\ \bar z} \, ) \, \Omega
\ \ \ \ \ \ \ \ \ \ {\rm for } \ \ \ a = z , \theta , \bar{\theta}
\ \ \ ,
\end{equation}
the equation for $z$ being a consequence of the ones for $\theta$ and $\bar{\theta}$.
\section{Symmetry transformations}
To deduce the
transformation laws of the basic fields under
infinitesimal superdiffeomorphisms, we proceed as in the $N=0$ and
$N=1$ theories \cite{dg}. In the
course of this process, the U(1)-transformations manifest themselves
in a natural way.
Thus, we start from the ghost vector field
\[
\Xi \cdot \partial \ \equiv \
\Xi^{z} (z , \bar z , \theta , \bar{\theta} )\, \partial \ + \
\Xi^{\bar z} (z , \bar z , \theta , \bar{\theta} )\, \bar{\partial} \ + \
\Xi^{\theta} (z , \bar z , \theta , \bar{\theta} )\, D \ + \
\Xi^{\bar{\theta}} (z , \bar z , \theta , \bar{\theta} )\, \bar D
\ \ \ ,
\]
which generates
an infinitesimal change of the coordinates $(z, \bar z , \theta , \bar{\theta} )$.
Following C.~Becchi \cite{cb, ls},
we consider a reparametrization of
the ghosts,
\begin{equation}
\label{31}
\left( \, C^z \, ,\, C^{\bar z} \, ,\, C^{\theta} \, ,\, C^{\bar{\theta}}
\, \right) \ = \
\left( \, \Xi^z \, ,\, \Xi^{\bar z} \, ,\, \Xi ^{\theta} \, ,\,
\Xi ^{\bar{\theta}} \, \right) \cdot M
\ \ \ ,
\end{equation}
where $M$ denotes the Beltrami matrix introduced
in equation (\ref{19}). Explicitly,
\begin{eqnarray}
C^z & = & \Xi^z \ + \ \Xi^{\bar z} \, H_{\bar z} ^{\ z} \ + \
\Xi ^{\theta} \, {H_{\th}}^z \ + \ \Xi ^{\bar{\theta}} \, {H_{\tb}}^z
\nonumber
\\
C^{\bar z} & = & \Xi^{\bar z} \ + \ \Xi^z \, H_z ^{\ \bar z} \ + \
\Xi ^{\theta} \, H_{\theta} ^{\ \bar z} \ + \ \Xi ^{\bar{\theta}} \, H_{\bar{\theta}} ^{\ \bar z}
\nonumber
\\
C^{\theta} & = &
\Xi ^{\theta} \, H_{\th} ^{\ \th} \ + \ \Xi ^{\bar z} \, H_{\zb} ^{\ \th} \ + \
\Xi ^{\bar{\theta}} \, H_{\bar{\theta}} ^{\ \theta}
\label{32}
\\
C^{\bar{\theta}} & = &
\Xi ^{\bar{\theta}} \, H_{\bar{\theta}} ^{\ \bar{\theta}} \ +\ \Xi ^{\bar z} \, H_{\bar z} ^{\ \bar{\theta}} \ +\
\Xi ^{\theta} \, H_{\theta} ^{\ \bar{\theta}}
\ \ \ .
\nonumber
\end{eqnarray}
We note that
the U(1)-transformations of the `$H$', eqs.(\ref{22}), induce those
of the `$C$',
\[
(C^z )^{\prime} \, = \, C^z
\ \ \ , \ \ \
(C^{\bar z} )^{\prime} \, = \, C^{\bar z}
\ \ \ , \ \ \
(C^{\theta} )^{\prime} \, = \, {\rm e}^{-K} \, C^{\theta}
\ \ \ , \ \ \
(C^{\bar{\theta}} )^{\prime} \, = \, {\rm e}^{K} \, C^{\bar{\theta}}
\ \ ,
\]
but, for the time being, we will not consider this symmetry and will
restrict our attention to the superdiffeomorphisms.
Contraction of the basis 1-forms (\ref{18})
along the vector field $\Xi \cdot \partial$ gives
\begin{eqnarray}
i_{\Xi \cdot \partial} ( e^Z ) & = & \left[ \, \Xi^z +
\Xi^{\bar z} {H_{\bar z}}^z + \Xi^{\theta} {H_{\th}}^z +
\Xi^{\bar{\theta}} {H_{\tb}}^z \, \right] \Lambda_{\theta}^{\ \Theta} \bar{\LA}_{\bar{\theta}} ^{\ \bar{\Theta}}
\nonumber \\
& = & C^z \Lambda_{\theta} ^{\ \Theta} \bar{\LA}_{\bar{\theta}} ^{\ \bar{\Theta}}
\label{33}
\\
i_{\Xi \cdot \partial} ( e^{\Theta} ) & = & \left[ \,
\Xi^z + \Xi^{\bar z} H_{\bar z} ^{\ z} +
\Xi ^{\theta} {H_{\th}}^z + \Xi ^{\bar{\theta}} {H_{\tb}}^z \, \right] \tau_z ^{\ \Theta}
+ \left[ \, \Xi ^{\theta} H_{\th} ^{\ \th} + \Xi ^{\bar z} H_{\zb} ^{\ \th} +
\Xi ^{\bar{\theta}} H_{\bar{\theta}} ^{\ \theta} \, \right] \Lambda_{\theta} ^{\ \Theta}
\nonumber \\
& = & C^z \tau_z ^{\ \Theta} + C^{\theta} \Lambda_{\theta} ^{\ \Theta}
\nonumber
\end{eqnarray}
and similarly
\[
i_{\Xi \cdot \partial} ( e^{\bar{\Theta}} ) \ = \
C^z \, \bar{\tau}_z ^{\ \bar{\Theta}} \, + \, C^{\bar{\theta}} \, \bar{\LA}_{\bar{\theta}} ^{\ \bar{\Theta}}
\ \ \ \ \ , \ \ \ \ \
i_{\Xi \cdot \partial} ( e^{\bar Z} ) \ = \
C^{\bar z} \, \Omega_{\bar z} ^{\ \bar Z}
\ \ .
\]
Thereby\footnote{
In superspace, the BRS-operator $s$ is supposed to act as an
antiderivation from the right and the ghost-number is added
to the form degree, the Grassmann parity being $s$-inert \cite{geo}.},
\begin{eqnarray*}
s \Theta & = &
i_{\Xi \cdot \partial} \, d \Theta \, =\, i_{\Xi \cdot \partial} \, e^{\Theta}
\, = \,
C^z \tau + C^{\theta} \Lambda \\
sZ & = & i_{\Xi \cdot \partial} \,
d Z \, = \, i_{\Xi \cdot \partial} [\, e^Z -
\frac{1}{2} \, \bar{\Theta} e^{\Theta} -
\frac{1}{2} \, \Theta e^{\bar{\Theta}} \, ]
\, = \, C^z \Lambda \bar{\LA}
- \frac{1}{2} \, \bar{\Theta} ( s \Theta )
- \frac{1}{2} \, \Theta ( s \bar{\Theta} )
\end{eqnarray*}
and analogously
\[
s\bar{\Theta} \ = \ C^z \, \bar{\tau} \ + \ C^{\bar{\theta}} \, \bar{\LA}
\ \ \ \ \ , \ \ \ \ \
s\bar Z \ = \ C^{\bar z} \, \Omega
\ \ .
\]
From the nilpotency of the $s$-operation,
$0 = s^2 Z = s^2 \bar Z = s^2 \Theta = s^2 \bar{\Theta}$, we now deduce
\begin{eqnarray}
s C^{z} & = & -\, C^z \, (\Lambda \bar{\LA} )^{-1} \,
\left[ \, s(\Lambda \bar{\LA} ) \, - \, \, C^{\bar{\theta}} \, \bar{\LA} \, \tau \, - \,
C^{\theta} \, \Lambda \, \bar{\tau}
\, \right] \ - \ C^{\theta} \, C^{\bar{\theta}}
\nonumber
\\
s C^{\bar z} & = & -\, C^{\bar z} \, \Omega ^{-1} \, \left[ \, s\Omega \,
\right]
\nonumber
\\
s C^{\theta} & = & - \, \Lambda ^{-1} \, \left[ \, (sC^z ) \, \tau
\, + \, C^z \, (s\tau ) \, + \, C^{\theta} \,
(s \Lambda ) \, \right]
\label{34}
\\
s C^{\bar{\theta}} & = & - \, \bar{\LA} ^{-1} \, \left[ \, (sC^z ) \, \bar{\tau}
\, + \, C^z \, (s\bar{\tau} ) \, + \, C^{\bar{\theta}} \,
(s \bar{\LA} ) \, \right]
\ \ \ .
\nonumber
\end{eqnarray}
The transformation laws of the
integrating factors and Beltrami coefficients follow
by evaluating in two different ways the variations of the
differentials $dZ, d\bar Z, d\Theta , d\bar{\Theta}$; for instance\footnote{For
the action of the exterior differential
$d$ on ghost fields, see reference \cite{geo}.},
\[
s(d \Theta ) \ = \ -d (s \Theta ) \ = \
+ [ \, e^z \partial \ + \ e^{\bar z} \bar{\partial} \ + \ e^{\theta} D
\ + \ e^{\bar{\theta}} \bar D \, ] \, [ \,
C^z \, \tau \, + \, C^{\theta} \, \Lambda \, ]
\]
and
\begin{eqnarray*}
s(d \Theta ) = s e^{\Theta} & = &
\left[ e^z + e^{\bar z} \, H_{\zb} ^{\ z} + e^{\theta} \, {H_{\th}}^z +
e^{\bar{\theta}} \, {H_{\tb}}^z \right] s\tau +
\left[ e^{\bar z} \, s H_{\zb} ^{\ z} + e^{\theta} \, s {H_{\th}}^z +
e^{\bar{\theta}} \, s {H_{\tb}}^z \right] \tau
\\
& & +
\left[ e^{\theta} \, H_{\th} ^{\ \th} + e^{\bar z} \, H_{\zb} ^{\ \th} + e^{\bar{\theta}} \, H_{\tb} ^{\ \th}
\right] s \Lambda +
\left[ e^{\theta} s H_{\th} ^{\ \th} + e^{\bar z} s H_{\zb} ^{\ \th}
+ e^{\bar{\theta}} s H_{\tb} ^{\ \th} \right] \Lambda
\end{eqnarray*}
lead to the variations of $\tau$ and $ H_{\th} ^{\ \th} , {H_{\bar z}}^{\theta} ,
{H_{\bar{\theta}}}^{\theta}$.
More explicitly, comparison of the
coefficients of $e^z$ in both expressions for
$s(d\Theta)$ yields
\begin{eqnarray}
\label{35a}
s \tau & = & \partial \, ( \, C^z \tau + C^{\theta}
\Lambda \, )
\\
s \bar{\tau} & = & \partial \, ( \, C^z \bar{\tau} + C^{\bar{\theta}}
\bar{\LA} \, )
\ \ ,
\nonumber
\end{eqnarray}
where the second equation follows
from $s(d\bar{\Theta})$
along the same lines of reasoning.
From the coefficients of $e^z$ in $s(dZ)$, one finds
\begin{equation}
s \, (\Lambda \bar{\LA} ) \ = \ \partial \, ( C^z \Lambda \bar{\LA} )
\ + \ C^{\bar{\theta}} \, \bar{\LA} \, \tau
\ + \ C^{\theta} \, \Lambda \, \bar{\tau}
\ \ \ .
\end{equation}
In analogy to eqs.(\ref{26a})(\ref{27}), we decompose
this variation in a symmetric way,
\begin{eqnarray}
\label{35}
s \Lambda & = & C^z \, \partial \Lambda \ + \ \frac{1}{2} \ (\partial C^z) \, \Lambda
\ + \ C^{\bar{\theta}} \, \tau \ + \ K\, \Lambda
\\
s \bar{\LA} & = & C^z \, \partial \bar{\LA} \ + \ \frac{1}{2} \ (\partial C^z) \, \bar{\LA}
\ + \ C^{\theta} \, \bar{\tau} \ - \ K\, \bar{\LA}
\ \ \ ,
\nonumber
\end{eqnarray}
where $K$ denotes a ghost superfield. The $K$-terms
which naturally appear in this decomposition
represent an infinitesimal version of the
U(1)-symmetry (\ref{22}). The variation of the $K$-parameter
follows from the
requirement that the $s$-operator is nilpotent:
\begin{equation}
s K \, = \, - \left[ \, C^z \partial K - \frac{1}{2} \,
C^{\theta} (\partial C^{\bar{\theta}} ) + \frac{1}{2} \, C^{\bar{\theta}}
(\partial C^{\theta} ) \, \right]
\ \ .
\end{equation}
By
substituting the expressions (\ref{35a})-(\ref{35}) into eqs.(\ref{34}),
we get
\begin{eqnarray}
s C^z & = & - \left[ \, C^z \partial C^z + C^{\theta}
C^{\bar{\theta}} \, \right]
\nonumber
\\
s C^{\theta} & = & - \left[ \, C^z \partial C^{\theta} + \frac{1}{2}
\, C^{\theta} (\partial C^z ) - K C^{\theta} \, \right]
\label{36}
\\
s C^{\bar{\theta}} & = & - \left[ \, C^z \partial C^{\bar{\theta}} + \frac{1}{2}
\, C^{\bar{\theta}} (\partial C^z ) + K C^{\bar{\theta}} \, \right]
\ \ .
\nonumber
\end{eqnarray}
The variations of the Beltrami coefficients
follow by taking into account the previous relations, the
structure equations and eqs.(\ref{27}) where the vector field $V_a$
was introduced.
They take the form
\begin{eqnarray}
\label{37}
sH_a ^{\ z} & = &
(\, D_a - H_a ^{\ z} \partial
+ \partial H_a ^{\ z} \, )\, C^z - H_a ^{\ \theta}
C^{\bar{\theta}} - H_a ^{\ \bar{\theta}}
C^{\theta}
\\
s H_a ^{\ \theta} & = & ( \, D_a - H_a ^{\ z} \partial
+ \frac{1}{2} \, \partial H_a ^{\ z} + V_a \, ) \, C^{\theta}
+ C^z \partial H_a ^{\ \theta} -
\frac{1}{2} \,
H_a ^{\ \theta} ( \partial C^z ) - H_a ^{\ \theta} K
\nonumber
\\
s H_a ^{\ \bar{\theta}} & = & ( \, D_a - H_a ^{\ z} \partial
+ \frac{1}{2} \, \partial H_a ^{\ z} - V_a \, ) \, C^{\bar{\theta}}
+ C^z \partial H_a ^{\ \bar{\theta}} -
\frac{1}{2} \,
H_a ^{\ \bar{\theta}} ( \partial C^z ) + H_a ^{\ \bar{\theta}} K .
\nonumber
\end{eqnarray}
Finally, the variation of $V_a$ follows by requiring the
nilpotency of the $s$-operations (\ref{37}):
\begin{equation}
\label{38a}
sV_a = C^z \partial V_a + \frac{1}{2} \, H_a ^{\ \theta}
\partial C^{\bar{\theta}}
- \frac{1}{2} \, (\partial H_a ^{\ \theta}) C^{\bar{\theta}}
- \frac{1}{2} \, H_a ^{\ \bar{\theta}}
\partial C^{\theta}
+ \frac{1}{2} \, (\partial H_a ^{\ \bar{\theta}} ) C^{\theta} +
( D_a - H_a ^{\ z} \partial ) K .
\end{equation}
Equivalently, this transformation law
can be deduced from the variations of the
`$H$' since $V_a$ depends on these variables according to
equations (\ref{28}).
The derivative of $K$ in the
variation (\ref{38a})
confirms the interpretation of $V_a$
as a gauge field for the $U(1)$-symmetry.
In the $\bar z$-sector, the same procedure leads to the following
results:
\begin{eqnarray}
s H_a ^{\ \bar z} & = & ( \, D_a - H_a ^{\ \bar z} \bar{\partial}
+ \bar{\partial} H_a ^{\ \bar z} \, ) C^{\bar z}
\ \ \ \ \ \ \ \ {\rm for} \ \ a \, =\, z , \theta , \bar{\theta}
\nonumber
\\
s C^{\bar z} & = & - [ \, C^{\bar z} \bar{\partial} C^{\bar z} \, ]
\label{35b}
\\
s \Omega & = & C^{\bar z} \bar{\partial} \Omega + ( \bar{\partial} C^{\bar z}) \Omega
\nonumber
\ \ .
\end{eqnarray}
Altogether, the
numbers of symmetry parameters and of independent space-time fields
coincide and the correspondence between them is given by
\begin{equation}
\begin{array}{cccccc}
C^{z} & C^{\theta} & C^{\bar{\theta}} & K & ; & C^{\bar z} \\
H_{\bar z}^{\ z} & H_{\bar{\theta}}^{\ z} & H_{\theta}^{\ z} &
H_{\theta}^{\ \theta} / H_{\bar{\theta}}^{\ \bar{\theta}} & ; &
H_{\theta}^{\ \bar z} , H_{\bar{\theta}}^{\ \bar z} \ .
\end{array}
\end{equation}
Here, the superfields
$H_{\theta}^{\ \bar z}$ and $H_{\bar{\theta}}^{\ \bar z}$
are constrained by chirality-type
conditions which reduce the number of their components by half.
We note that
the {\em holomorphic factorization} is manifestly
realized for the $s$-variations (\ref{35a})-(\ref{35b})
which have explicitly been verified to be nilpotent.
The underlying {\em symmetry group} is the semi-direct product
of superdiffeomorphisms and $U(1)$ transformations:
this fact is best seen by rewriting the infinitesimal
transformations of the ghost fields in terms of the ghost vector
field $\Xi \cdot \partial \,$,
\begin{eqnarray}
s \, ( \Xi \cdot \partial ) & = & - {1 \over 2} \;
[ \, \Xi \cdot \partial \, , \, \Xi \cdot \partial \, ]
\nonumber
\\
s \hat K & = &
- \, (\Xi \cdot \partial) \, \hat K
\ \ .
\end{eqnarray}
Here, $[ \ , \ ]$ denotes the graded Lie bracket and
$\hat K = K - i_{\Xi \cdot \partial} V$
is a reparametrization of $K$ involving the
$U(1)$ gauge field $V= e^a V_a$.
More explicitly, we have
\begin{eqnarray}
s \Xi^z & = & - \left[
\, (\Xi \cdot \partial ) \, \Xi^z \, - \, \Xi^{\theta} \, \Xi^{\bar{\theta}} \, \right]
\\
s \Xi^a & = &
- \, (\Xi \cdot \partial) \, \Xi^a \qquad \qquad \qquad
{\rm for} \ \; a= \bar z, \theta,\bar{\theta}
\ \ ,
\nonumber
\end{eqnarray}
where the quadratic term
$\Xi^{\theta} \Xi^{\bar{\theta}}$ is due to the fact that the $\Xi^a$ are the
vector components with respect to the canonical tangent space basis
$(D_a)$ rather than the coordinate basis $(\partial_a)$.
Equations (\ref{36})(\ref{35b}) and some of the
variations (\ref{37})-(\ref{38a}) involve only space-time derivatives
and can be projected to component field expressions in a
straightforward way \cite{bbg, dg}. From the definitions
\begin{eqnarray}
\label{39}
{H_{\bar z}}^z \vert & \equiv & {\mu_{\bar z}}^z \ \ \ \ \ ,
\ \ \ \ \ H_{\zb} ^{\ \th} \vert \ \equiv \ {\alpha_{\bar z}}^{\theta}
\\
H_{z} ^{\ \bar z} \vert & \equiv & \bar{\mu}_{z} ^{\ \bar z}\ \ \ \ \ ,
\ \ \ \ \ H_{\bar z} ^{\ \bar{\theta}}
\vert \ \equiv \ \bar{\alpha}_{\bar z} ^{\ \bar{\theta}}
\ \ \ \ \ , \ \ \ \ \
V_{\bar z} \vert \ \equiv \ \bar{v}_{\bar z}
\nonumber
\end{eqnarray}
and
\begin{eqnarray}
C^z \vert & \equiv & c^z \ \equiv \ \xi^z \, + \, \xi^{\bar z} \,
{\mu_{\bar z}}^z \ \ \ \ , \ \ \ \
C^{\theta} \vert \ \equiv \ \epsilon^{\theta}
\ \equiv \ \xi^{\theta} \, + \, \xi^{\bar z} \, \alpha_{\bar z} ^{\ \theta}
\nonumber
\\
C^{\bar z} \vert & \equiv & \bar{c} ^{\bar z}
\ \equiv \ \xi^{\bar z} \, + \, \xi^z \,
\bar{\mu}_z ^{\ \bar z} \ \ \ \ , \ \ \ \
C^{\bar{\theta}} \vert \ \equiv \ \bar{\epsilon} ^{\bar{\theta}}
\ \equiv \ \xi^{\bar{\theta}} \, + \, \xi^{\bar z} \, \bar{\alpha}_{\bar z} ^{\ \bar{\theta}}
\label{40}
\\
K \vert & \equiv & k
\; \ \equiv \ \hat k \; + \, \xi^{\bar z} \, \bar{v}_{\bar z}
\ \ ,
\nonumber
\end{eqnarray}
we obtain the symmetry algebra of the ordinary Beltrami differentials
($\mu , \bar{\mu}$), of their fermionic partners (the Beltraminos
$\alpha, \bar{\alpha}$) and of the vector $\bar{v}$ :
\begin{eqnarray}
s \mu & = & ( \, \bar{\partial} - \mu \, \partial +
\partial \mu \, ) \, c - \bar{\alpha} \, \epsilon -
\alpha \, \bar{\epsilon}
\nonumber
\\
s \alpha & = &
( \, \bar{\partial} - \mu \, \partial + \frac{1}{2} \,
\partial \mu + \bar v \, ) \, \epsilon
+ c \, \partial \alpha + \frac{1}{2}\,
\alpha \, \partial c + k \, \alpha
\label{41}
\\
s \bar{\alpha} & = &
( \, \bar{\partial} - \mu \, \partial + \frac{1}{2}\,
\partial \mu - \bar v \, ) \, \bar{\epsilon} + c \, \partial
\bar{\alpha} + \frac{1}{2} \, \bar{\alpha} \,
\, \partial c - k \, \bar{\alpha}
\nonumber
\\
s \bar{v} & = &
c\, \partial \bar{v} + \frac{1}{2} \, \alpha \, \partial \bar{\epsilon} -
\frac{1}{2} \,
\bar{\epsilon} \, \partial \alpha
- \frac{1}{2} \,
\bar{\alpha} \, \partial \epsilon
+ \frac{1}{2} \,
\epsilon \, \partial \bar{\alpha}
- (\, \bar{\partial} - \mu \, \partial \, ) \, k
\nonumber
\\
\nonumber
\\
sc & = & c \, \partial c + \epsilon \, \bar{\epsilon}
\nonumber
\\
s \epsilon & = & c \, \partial \epsilon
- \frac{1}{2} \, \epsilon
\, \partial c + k \, \epsilon
\nonumber
\\
s \bar{\epsilon} &=& c \, \partial \bar{\epsilon}
- \frac{1}{2} \, \bar{\epsilon}
\, \partial c - k \, \bar{\epsilon}
\nonumber
\\
sk & = & c\, \partial k
+ \frac{1}{2} \, \epsilon \, \partial \bar{\epsilon}
- \frac{1}{2} \, \bar{\epsilon} \, \partial \epsilon
\nonumber
\end{eqnarray}
and, for the $\bar z$-sector,
\begin{eqnarray}
s \bar{\mu} & = & ( \, \partial - \bar{\mu} \, \bar{\partial} +
\bar{\partial} \bar{\mu} \, ) \, \bar{c}
\label{42}
\\
s\bar{c} & = & \bar{c} \, \bar{\partial} \bar{c}
\ \ .
\nonumber
\end{eqnarray}
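Nilpotency at the component level is readily verified; for instance, for the ghost $\bar{c}$, using the left-action conventions described in the footnote below together with $\bar{c} ^{\, 2} = 0 = (\bar{\partial} \bar{c})^2$, one finds

```latex
\[
s^2 \bar{c} \ = \ s \, ( \bar{c} \, \bar{\partial} \bar{c} )
\ = \ ( s \bar{c} ) \, \bar{\partial} \bar{c} \ - \ \bar{c} \, \bar{\partial} ( s \bar{c} )
\ = \ \bar{c} \, ( \bar{\partial} \bar{c} )^2 \ - \ \bar{c} \, ( \bar{\partial} \bar{c} )^2
\ - \ \bar{c} ^{\, 2} \, \bar{\partial} ^{\, 2} \bar{c} \ = \ 0
\ \ ,
\]
```

and similarly for $s^2 \bar{\mu}$.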
Thus,
the holomorphic factorization remains manifestly realized
at the component field level\footnote{
In equations (\ref{41})(\ref{42}),
$s$ is supposed to act from the
left as usual in component field formalism and the graduation is
given by the sum of the ghost-number and the Grassmann parity;
the signs following from the superspace algebra have been
modified so as to ensure nilpotency of the $s$-operation
with these conventions.}.
\section{Scalar superfields}
In
(2,0) supersymmetry, ordinary scalar fields $X^i (z, \bar z)$ generalize to
complex superfields ${\cal X}^i , \, \bar{{\cal X}} ^{\bar{\imath}}
= ({\cal X}^i)^{\ast}$
satisfying the (anti-) chirality conditions
\begin{equation}
\label{50}
D_{\bar{\Theta}} {\cal X}^i \, = \, 0 \, = \,
D_{\Theta} \bar{{\cal X}} ^{\bar{\imath}}
\ \ \ .
\end{equation}
The coupling of such fields to
a superconformal class of metrics
on the SRS ${\bf S\Sigma}$
is described by a sigma-model action
\cite{bmg, evo}:
\begin{eqnarray}
S_{inv} [ {\cal X}, \bar{{\cal X}} ] & = & -{i \over 2} \,
\int_{\bf {S\Sigma}}d^4 Z \
[ \, K_j ({\cal X}, \bar{{\cal X}} ) \,
\partial_{\bar Z}
{\cal X}^j \; - \;
\bar K _{\bar \jmath} ({\cal X}, \bar{{\cal X}} ) \,
\partial_{\bar Z}
\bar{{\cal X}}^{\bar \jmath} \, ]
\nonumber
\\
& = &
-{i \over 2} \,
\int_{\bf {S\Sigma}}d^4 Z \
K_j ({\cal X}, \bar{{\cal X}} ) \,
\partial_{\bar Z}
{\cal X}^j \; + \; {\rm h.c.}
\ \ .
\label{51}
\end{eqnarray}
Here,
$d^4 Z = dZ \, d\bar{Z} \, d\Theta \, d\bar{\Theta}$ and
$K_j$ denotes an arbitrary complex function (and
$\bar K _{\bar \jmath} = (K_j)^{\ast}$ in the Minkowskian setting).
The functional (\ref{51})
is invariant under superconformal changes of coordinates
for which the measure $d^4Z$ transforms with
$(D_{\Theta} \Theta ^{\prime} )^{-1} \,
(D_{\bar{\Theta}} \bar{\Theta} ^{\prime} )^{-1}$, i.e. the Berezinian associated to the
superconformal transformation (\ref{4})-(\ref{3c}).
We now rewrite the expression (\ref{51})
in terms of the reference coordinates $(z,\bar z,\theta,\bar{\theta})$ by means of
Beltrami superfields. The passage from the small to the capital
coordinates reads
\begin{equation}
\label{52}
\left( \begin{array}{c}
\partial_Z \\ \partial_{\bar Z} \\ D_{\Theta} \\ D_{\bar{\Theta}}
\end{array} \right)
\ = \ Q^{-1} \ M^{-1} \
\left( \begin{array}{c}
\partial \\ \bar{\partial} \\ D \\ \bar{D}
\end{array} \right)
\end{equation}
and the Berezinian of this change of variables is
\begin{equation}
\label{53}
\left| \frac {\partial (Z,\bar Z,\Theta,\bar{\Theta})}{\partial (z,\bar z,\theta,\bar{\theta})} \right|
\ =\ {\rm sdet}\, (M\,Q) \ = \ \Omega \, {\rm sdet}\, M \ \ .
\end{equation}
The inverse of $Q$ is easily determined:
\begin{equation}
\label{54}
Q^{-1} \ = \ \left( \begin{array}{cccc}
\Lambda^{-1}\bar{\LA}^{-1} & 0 & -\Lambda^{-2}\bar{\LA}^{-1} \tau & -\Lambda^{-1}\bar{\LA}^{-2}
\bar{\tau} \\
0 & \Omega^{-1} & 0 & 0 \\
0 & 0 & \Lambda^{-1} & 0 \\
0 & 0 & 0 & \bar{\LA}^{-1} \\
\end{array} \right)
\ \ .
\end{equation}
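For completeness, we display the matrix $Q$ itself, as fixed uniquely by inverting (\ref{54}) in the basis ordering of (\ref{52}):

```latex
\[
Q \ = \ \left( \begin{array}{cccc}
\Lambda \bar{\LA} & 0 & \tau & \bar{\tau} \\
0 & \Omega & 0 & 0 \\
0 & 0 & \Lambda & 0 \\
0 & 0 & 0 & \bar{\LA}
\end{array} \right)
\ \ .
\]
```

Its columns reproduce the contractions (\ref{33}) and, in particular, ${\rm sdet}\, Q = \Lambda \bar{\LA} \, \Omega \, ( \Lambda \bar{\LA} )^{-1} = \Omega$, in agreement with equation (\ref{53}).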
In order to calculate sdet\,$M$ and $M^{-1}$, we
decompose $M$ according to
\begin{equation}
\label{55}
M =
\left( \begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
h_{\th}^{\ z} & h_{\th}^{\ \zb} & 1 & 0 \\
h_{\tb}^{\ z } & h_{\tb}^{\ \zb} & 0 & 1 \\
\end{array} \right)
\left( \begin{array}{cccc}
1 & H_{z}^{\ \bar z} & 0 & 0 \\
H_{\zb} ^{\ z} & 1 & 0 & 0 \\
0 & 0 & h_{\th}^{\ \th} & h_{\th}^{\ \tb} \\
0 & 0 & h_{\tb}^{\ \th} & h_{\tb}^{\ \tb} \\
\end{array} \right)
\left( \begin{array}{cccc}
1 & 0 & h_z^{\ \th} & h_z^{\ \tb} \\
0 & 1 & h_{\zb}^{\ \th} & h_{\zb}^{\ \tb} \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{array} \right)
\ .
\end{equation}
The explicit expressions for the `$h$' are
\begin{equation}
\label{delt}
\begin{array}{lcl}
h_{\th}^{\ z} \ = \ \Delta^{-1}( {H_{\th}}^z - H_{\theta}^{\ \bar z} H_{\zb} ^{\ z} ) & ,
& h_{\th}^{\ \zb} \ = \ \Delta^{-1}(H_{\theta}^{\ \bar z}- {H_{\th}}^z H_{z}^{\ \bar z} ) \\
h_{\tb}^{\ z } \ = \ \Delta^{-1}( {H_{\tb}}^z - H_{\bar{\theta}}^{\ \bar z} H_{\zb} ^{\ z} ) & ,
& h_{\tb}^{\ \zb} \ = \ \Delta^{-1}(H_{\bar{\theta}}^{\ \bar z}- {H_{\tb}}^z H_{z}^{\ \bar z} )\\
h_{\th}^{\ \th} \ = \ H_{\th} ^{\ \th} - h_{\th}^{\ \zb} H_{\zb} ^{\ \th} & ,
& h_{\th}^{\ \tb} \ = \ H_{\theta}^{\ \bar{\theta}}- h_{\th}^{\ \zb} H_{\bar z}^{\ \bar{\theta}}\\
h_{\tb}^{\ \th} \ = \ H_{\tb} ^{\ \th} - h_{\tb}^{\ \zb} H_{\zb} ^{\ \th} & ,
& h_{\tb}^{\ \tb} \ = \ H_{\bar{\theta}}^{\ \bar{\theta}}- h_{\tb}^{\ \zb} H_{\bar z}^{\ \bar{\theta}}\\
h_z^{\ \th} \ = \ -\Delta^{-1}H_{z}^{\ \bar z} H_{\zb} ^{\ \th} & ,
& h_z^{\ \tb} \ = \ -\Delta^{-1}H_{z}^{\ \bar z}H_{\bar z}^{\ \bar{\theta}}\\
h_{\zb}^{\ \th} \ = \ \Delta^{-1} H_{\zb} ^{\ \th} & ,& h_{\zb}^{\ \tb} \ = \ \Delta^{-1}H_{\bar z}^{\ \bar{\theta}}
\ \ ,
\end{array}
\end{equation}
where $\Delta = 1 - H_z^{\ \bar z}
H_{\bar z}^{\ z}$.
It follows that sdet$\, M =\Delta/h$ with $h= h_{\th}^{\ \th} h_{\tb}^{\ \tb} - h_{\tb}^{\ \th} h_{\th}^{\ \tb} $
and that
\[
M^{-1} =
\left( \begin{array}{cccc}
1 & 0 & - h_z^{\ \th} & - h_z^{\ \tb} \\
0 & 1 & - h_{\zb}^{\ \th} & - h_{\zb}^{\ \tb} \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array} \right)
\qquad \qquad \qquad \qquad \qquad
\qquad \qquad \qquad\qquad \qquad
\]
\[
\qquad \qquad
\times
\left( \begin{array}{cccc}
1/\Delta & -H_{z}^{\ \bar z}/\Delta & 0 & 0 \\
- H_{\zb} ^{\ z} /\Delta & 1/\Delta & 0 & 0 \\
0 & 0 & h_{\tb}^{\ \tb} /h & - h_{\th}^{\ \tb} /h \\
0 & 0 & - h_{\tb}^{\ \th} /h & h_{\th}^{\ \th} /h
\end{array} \right)
\left( \begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
- h_{\th}^{\ z} & - h_{\th}^{\ \zb} & 1 & 0 \\
- h_{\tb}^{\ z } & - h_{\tb}^{\ \zb} & 0 & 1
\end{array} \right)
.
\]
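The superdeterminant quoted above can be read off directly from the decomposition (\ref{55}): the outer factors are unipotent triangular and thus have unit superdeterminant, while the middle factor is block-diagonal, whence

```latex
\[
{\rm sdet}\, M \ = \
\det \left( \begin{array}{cc} 1 & H_{z}^{\ \bar z} \\
H_{\zb} ^{\ z} & 1 \end{array} \right)
\left[ \, \det \left( \begin{array}{cc} h_{\th}^{\ \th} & h_{\th}^{\ \tb} \\
h_{\tb}^{\ \th} & h_{\tb}^{\ \tb} \end{array} \right) \, \right]^{-1}
\ = \ \frac{\Delta}{h}
\ \ .
\]
```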
From these results and equation (\ref{52}),
we can derive explicit expressions for
$\partial_Z, \partial_{\bar Z}, D_{\Theta}, D_{\bar{\Theta}}$ which imply
\begin{eqnarray}
D_{\bar{\Theta}} {\cal X}^i = 0 & & \Leftrightarrow \ \
h_{\theta} ^{\ \theta} ( \bar D - h_{\bar{\theta}}^{\ z} \partial - h_{\bar{\theta}}^{\ \bar z} \bar{\partial} )
{\cal X}^i \, = \,
h_{\bar{\theta}} ^{\ \theta} ( D - h_{\theta}^{\ z} \partial - h_{\theta}^{\ \bar z} \bar{\partial} )
{\cal X}^i \quad
\nonumber
\\
D_{\Theta} \bar{{\cal X}} ^{\bar \imath} = 0 & & \Leftrightarrow \ \
h_{\bar{\theta}} ^{\ \bar{\theta}} ( D - h_{\theta}^{\ z} \partial - h_{\theta}^{\ \bar z} \bar{\partial} )
\bar{{\cal X}} ^{\bar \imath} \, = \,
h_{\theta} ^{\ \bar{\theta}} ( \bar D - h_{\bar{\theta}}^{\ z} \partial - h_{\bar{\theta}}^{\ \bar z} \bar{\partial} )
\bar{{\cal X}} ^{\bar \imath}
. \quad
\end{eqnarray}
Furthermore,
by substituting $\partial_{\bar Z}$ into the action (\ref{51})
and taking into account the last relation for ${\cal X}^i$,
one obtains the {\em final result}
\begin{equation}
\label{57}
S_{inv} [ {\cal X}, \bar{{\cal X}} ] = -{i \over 2} \,
\int_{\bf {S\Sigma}} d^{4}z \,
K_j ({\cal X} , \bar{{\cal X}} )
\, \bar{\nabla} {\cal X}^j \, + \, {\rm h.c.}
\ \ ,
\end{equation}
where
$d^{4}z \, = \, dz \, d\bar{z} \, d\theta \, d\bar{\theta}$ and
\begin{equation}
\label{57a}
\bar{\nabla} =
\displaystyle{1 \over h} ( \bar{\partial} - H_{\zb} ^{\ z} \partial)
+ \displaystyle{1 \over h^2}
H_{\bar z}^{\ \theta} \left[ h_{\th}^{\ \tb}
(\bar{D}- h_{\tb}^{\ z } \partial- h_{\tb}^{\ \zb} \bar{\partial} ) - h_{\bar{\theta}}^{\ \bar{\theta}}
(D- h_{\th}^{\ z} \partial- h_{\th}^{\ \zb} \bar{\partial} )
\right]
\ .
\end{equation}
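As a simple consistency check, one may consider the flat background $H_{\theta}^{\ \theta} = 1 = H_{\bar{\theta}}^{\ \bar{\theta}}$ with all other Beltrami coefficients vanishing: equations (\ref{delt}) then give $\Delta = 1 = h$ and vanishing `$h$', the derivative (\ref{57a}) reduces to $\bar{\nabla} = \bar{\partial}$ and (\ref{57}) takes the superconformally flat form

```latex
\[
S_{inv} [ {\cal X}, \bar{{\cal X}} ] \ = \ -{i \over 2} \,
\int_{\bf {S\Sigma}} d^{4}z \
K_j ({\cal X} , \bar{{\cal X}} ) \, \bar{\partial} {\cal X}^j \ + \ {\rm h.c.}
\ \ .
\]
```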
\section{Intermediate coordinates}
If we disregard the complex conjugation relating $z$ and $\bar z$,
we can introduce the so-called
intermediate or `tilde' coordinates \cite{dg} by
\[
(z, \bar z , \theta, \bar{\theta} ) \ \stackrel{M_1 Q_1}{\longrightarrow} \
(\tilde z, \tilde{\bar z} , \tilde{\theta} , \tilde{\bar{\theta}} ) = (Z, \bar z , \Theta , \bar{\Theta} )
\ \stackrel{M_2 Q_2}{\longrightarrow}
\ (Z, \bar Z , \Theta , \bar{\Theta} )
\ \ .
\]
The matrix $M_1 Q_1$ describing the passage from
$(z, \bar z , \theta, \bar{\theta} )$ to
$(\tilde z , \tilde{\bar z} ,
\tilde{\theta} , \tilde{\bar{\theta}} )$ is easy to invert: in analogy
to eq.(\ref{52}), we thus obtain the tilde derivatives
\begin{eqnarray}
\tilde D & = & \frac{1}{\Lambda H} \ \left[ H_{\bar{\theta}}^{\ \bar{\theta}}
(D - {H_{\th}}^z \partial ) - H_{\theta}^{\ \bar{\theta}} (\bar D - {H_{\tb}}^z \partial ) \right]
\nonumber
\\
\tilde{\bar D} & = &
\frac{1}{\bar{\LA} H} \ \left[ H_{\theta}^{\ \theta}
(\bar D - {H_{\tb}}^z \partial ) - H_{\bar{\theta}}^{\ \theta} ( D - {H_{\th}}^z \partial ) \right]
\\
\tilde{\partial} & = &
\frac{1}{\Lambda \bar{\LA}} \ \left[ \partial - \tau \tilde D - \bar{\tau} \tilde{\bar D}
\right]
\nonumber
\\
\tilde{\pab} & = & ( \bar{\partial} - H_{\zb} ^{\ z} \partial) - \Lambda H_{\bar z} ^{\ \theta} \tilde D
- \bar{\LA} H_{\bar z} ^{\ \bar{\theta}} \tilde{\bar D}
\nonumber
\ \ ,
\end{eqnarray}
where $H =H_{\theta} ^{\ \theta} H_{\bar{\theta}} ^{\ \bar{\theta}} -
H_{\bar{\theta}} ^{\ \theta} H_{\theta} ^{\ \bar{\theta}}$.
For later reference, we note that
${\rm sdet} \, (M_1 Q_1) = H^{-1}$.
For the passage from the tilde to the capital coordinates,
we have
\begin{eqnarray}
D_{\Theta} & = & \tilde{D} - k_{\theta} ^{\ \bar z} \tilde{ \bar{\partial} }
\qquad , \qquad
\partial_Z \ = \ \tilde{\partial} - k_z ^{\ \bar z} \tilde{ \bar{\partial} }
\nonumber
\\
D_{\bar{\Theta}} & = & \tilde{\bar D} - k_{\bar{\theta}} ^{\ \bar z} \tilde{ \bar{\partial} }
\qquad , \qquad
\partial_{\bar Z} \ = \ \Omega^{-1} \tilde{ \bar{\partial} }
\ \ ,
\nonumber
\end{eqnarray}
where the explicit form of the `$k$' in terms of the `$H$'
and $\Lambda, \bar{\LA}$ follows from the condition
$MQ= (M_1 Q_1)(M_2 Q_2)$.
As a first application of the tilde coordinates, we prove
that the solutions of the IFEQ's (\ref{27}) for $\Lambda$ and $\bar{\LA}$
are determined up to superconformal transformations of the
capital coordinates, i.e. up to the rescalings (\ref{17}).
In fact, substitution of the expressions (\ref{tau}) for $\tau$ and
$\bar{\tau}$ into the IFEQ's (\ref{27})
shows that the homogeneous equations associated to the IFEQ's
can be rewritten as
\begin{eqnarray}
0 & = & \tilde D \, {\rm ln} \, \Lambda =
\tilde{ \bar{\partial} }
\, {\rm ln} \, \Lambda
\qquad \Longrightarrow \qquad
0 = D_{\Theta} \, {\rm ln} \, \Lambda =
\partial_{\bar Z}
\, {\rm ln} \, \Lambda
\\
\nonumber
0 & = & \tilde{\dab} \, {\rm ln} \, \bar{\LA} =
\tilde{ \bar{\partial} }
\, {\rm ln} \, \bar{\LA}
\qquad \Longrightarrow \qquad
0 = D_{\bar{\Theta}} \, {\rm ln} \, \bar{\LA} =
\partial_{\bar Z}
\, {\rm ln} \, \bar{\LA}
\ \ .
\end{eqnarray}
Hence, the solutions $\Lambda , \bar{\LA}$
of the IFEQ's are determined up to
the rescalings
\begin{eqnarray*}
\Lambda^{\prime} & = & {\rm e}^{\, f(Z, \Theta , \bar{\Theta} )} \Lambda
\qquad {\rm with} \quad D_{\Theta} f = 0
\\
\bar{\LA}^{\prime} & = & {\rm e}^{\, g(Z, \Theta ,\bar{\Theta} )} \bar{\LA}
\qquad {\rm with} \quad D_{\bar{\Theta}} g = 0
\ \ ,
\end{eqnarray*}
which correspond precisely to the superconformal transformations
(\ref{17}).
Another application of the tilde coordinates consists
of the determination of anomalies and effective actions and will be
presented in section 3.8.
Since the $z$- and $\bar z$-sectors do not play a symmetric r\^ole
in the (2,0)-theory, we can introduce a second set of
intermediate coordinates which will be referred to as `hat' coordinates:
\[
(z, \bar z , \theta, \bar{\theta} ) \ \stackrel{\hat M_1 \hat Q_1}{\longrightarrow} \
(\hat{z} , \hat{\bar z} , \hat{\theta} , \hat{\bar{\theta}} ) = (z, \bar Z , \theta , \bar{\theta} )
\ \stackrel{\hat M_2 \hat Q_2}{\longrightarrow}
\ (Z, \bar Z , \Theta , \bar{\Theta} )
\ \ .
\]
Using the hat derivatives
\begin{eqnarray}
\label{coz}
\hat D & = & D - H_{\theta} ^{\ \bar z} \bar{\partial}
\qquad , \qquad
\hat{\partial} \ = \ \partial - H_z^{\ \bar z} \bar{\partial}
\\
\hat{\bar D} & = &\bar D - H_{\bar{\theta}}^{\ \bar z} \bar{\partial}
\qquad , \qquad
\hat{ \bar{\partial} } \ = \ \Omega ^{-1} \bar{\partial}
\ \ ,
\nonumber
\end{eqnarray}
one proves that the ambiguity in the
solutions of the IFEQ's (\ref{28a}) for $\Omega$ consists precisely
of superconformal rescalings.
By construction, the derivatives (\ref{coz}) satisfy the same algebra
as the basic differential operators $(\partial, \bar{\partial} , D , \bar D)$,
in particular,
\begin{equation}
\label{coa}
\{ \hat D , \hat{\bar D} \} = \hat{\partial}
\qquad , \qquad
\hat D ^2 = 0 = \hat{\bar D} ^2
\qquad , \qquad
[ \hat D , \hat{\partial} ] = 0 = [ \hat{\bar D} , \hat{\partial}]
\ \ .
\end{equation}
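As an explicit check of these relations, abbreviate $a \equiv H_{\theta} ^{\ \bar z}$ and $b \equiv H_{\bar{\theta}} ^{\ \bar z}$; a short computation then yields

```latex
\begin{eqnarray*}
\hat D ^{\, 2} & = & ( D - a \, \bar{\partial} )^2
\ = \ - \, ( \, Da - a \, \bar{\partial} a \, ) \, \bar{\partial}
\ = \ - \, ( \hat D a ) \, \bar{\partial} \ = \ 0
\\
\{ \hat D , \hat{\bar D} \} & = &
\partial \ - \ ( \, \hat D b + \hat{\bar D} a \, ) \, \bar{\partial}
\ = \ \partial \ - \ H_z ^{\ \bar z} \, \bar{\partial} \ = \ \hat{\partial}
\ \ ,
\end{eqnarray*}
```

where the last steps follow from the chirality conditions (\ref{26c}) and from eq.(\ref{26b}), respectively.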
By virtue of these derivatives, the solution (\ref{26b})(\ref{26c})
of the structure relations in the $\bar z$-sector can be rewritten
in the compact form
\begin{equation}
\label{soa}
H_z^{\ \bar z} =
\hat{\bar D} H_{\theta} ^{\ \bar z} +
\hat D H_{\bar{\theta}} ^{\ \bar z}
\qquad , \qquad
\hat D H_{\theta} ^{\ \bar z}= 0 =
\hat{\bar D} H_{\bar{\theta}} ^{\ \bar z}
\ \ ,
\end{equation}
equations which will be further exploited in section 3.9.
\section{Restriction of the geometry}
In the study of the $N=1$ theory, it was noted that the choice
$ {H_{\th}}^z =0$ is invariant under superconformal
transformations so that there are no global obstructions to
restricting the geometry by this condition. In fact, this
choice greatly simplifies expressions involving Beltrami superfields
and it might even be
compulsory for the study of specific problems \cite{dg1, cco}.
As for the physical interpretation,
the elimination
of $ {H_{\th}}^z $ simply amounts to disregarding some
pure gauge fields.
In the following, we introduce the $(2,0)$-analogue of the
$N=1$ condition
$ {H_{\th}}^z =0$.
In the present case, we have a greater freedom to impose
conditions: this can be illustrated by the fact that a restriction
of the form
$DC^z =0$ on the superdiffeomorphism parameter $C^z$
does not imply $\partial C^z =0$ (i.e. a restricted space-time
dependence of $C^z$) as it does in the $N=1$ theory.
The analogue of the $N=1$ restriction of the geometry is defined by
the relations
\begin{equation}
\label{h1}
{H_{\th}}^z =0 = {H_{\tb}}^z
\qquad {\rm and} \qquad
H_{\theta}^{\ \theta} \, / \,
H_{\bar{\theta}}^{\ \bar{\theta}} = 1
\end{equation}
in the $z$-sector and
\begin{equation}
\label{h2}
H_{\bar{\theta}}^{\ \bar z} = 0
\end{equation}
in the $\bar z$-sector. (The latter condition could also be replaced by
$H_{\theta}^{\ \bar z} =0$ since equations (\ref{app}) following
from the structure relations in the $\bar z$-sector are
symmetric with respect to $\theta$ and $\bar{\theta}$.)
Conditions (\ref{h1}) and (\ref{h2}) are compatible
with the superconformal transformation laws (\ref{24}).
In the remainder of the text, we will consider the geometry constrained
by equations (\ref{h1}) and (\ref{h2}) which will be
referred to as the
{\em restricted geometry}. In this case, there is one
unconstrained Beltrami superfield in the $z$-sector, namely
$ H_{\zb} ^{\ z} $, and one superfield in the $\bar z$-sector, namely
$H_{\theta}^{\ \bar z}$, subject to the condition
$(D-
H_{\theta}^{\ \bar z} \bar{\partial} )
H_{\theta}^{\ \bar z} =0$.
The relations which hold for the other variables become
\begin{eqnarray}
\nonumber
D\Lambda \! & = & \! 0 \quad , \quad \tau = \bar D \Lambda \quad , \quad
H_{\theta}^{\ \theta} = 1 \quad , \quad H_{\bar{\theta}} ^{\ \theta} =0 \quad , \quad
H_{\zb} ^{\ \th} = \bar D H_{\zb} ^{\ z} \\
\nonumber
\bar D\bar{\LA} \! & = & \! 0 \quad , \quad \bar{\tau} = D \bar{\LA} \quad , \quad
H_{\bar{\theta}}^{\ \bar{\theta}} = 1 \quad , \quad H_{\theta} ^{\ \bar{\theta}} =0 \quad , \quad
H_{\bar z}^{\ \bar{\theta}} = D H_{\zb} ^{\ z}
\\
V_{\theta} \! & = & \! 0
\quad , \quad
V_{\bar{\theta}} = 0
\quad \ \ , \quad \ \,
V_{\bar z} = {1 \over 2} \, [ D,\bar D ] H_{\zb} ^{\ z}
\\
&&
\nonumber
\\
\bar D \Omega & = & 0 \quad , \quad
H_z^{\ \bar z} \, = \, \bar D H_{\theta} ^{\ \bar z} \quad , \quad
(D -
H_{\theta} ^{\ \bar z} \bar{\partial} )
H_{\theta} ^{\ \bar z} \, = \, 0
\ \ ,
\nonumber
\end{eqnarray}
while
the superconformal transformation laws now read
\begin{eqnarray}
\Lambda ^{\prime} & = & {\rm e}^w \, \Lambda
\quad , \quad
\bar{\LA} ^{\prime} \, = \, {\rm e}^{\bar w} \, \bar{\LA}
\quad , \quad
H_{\bar z^{\prime}} ^{\ \, z^{\prime}} \, = \, {\rm e}^{-w-\bar{w}}
\, ( \bar{\partial} \bar z ^{\prime} )^{-1} \, H_{\zb} ^{\ z}
\nonumber
\\
&&
\nonumber
\\
\Omega^{\prime} & = &
\, ( \bar{\partial} \bar z ^{\prime} )^{-1} \, \Omega
\quad , \quad
H_{\theta^{\prime}} ^{\ \, \bar z^{\prime}} \, = \, {\rm e}^{w}
\, ( \bar{\partial} \bar z ^{\prime} ) \, H_{\theta}^{\ \bar z}
\ \ .
\nonumber
\end{eqnarray}
Furthermore,
from (\ref{ana}) and (\ref{13a}), we get the local expressions
\begin{eqnarray*}
\Lambda & = & D \Theta \qquad , \qquad \bar{\LA} = \bar D \bar{\Theta}
\\
& &
\\
\Omega & = & \bar{\partial} \bar Z
\qquad ({\rm as} \ {\rm before})
\ \ .
\end{eqnarray*}
In order to be consistent, we have to require that the
conditions (\ref{h1}) and (\ref{h2}) be invariant under the
BRS transformations. This determines the symmetry parameters $C^{\theta},
\, C^{\bar{\theta}}, \, K$ in terms of $C^z$ and eliminates some
components of $C^{\bar z}$:
\begin{eqnarray}
C^{\theta} & = & \bar D C^z
\quad , \quad
C^{\bar{\theta}} \, = \, D C^z
\quad , \quad
K \, = \, {1 \over 2} \, [ D, \bar D ] C^z
\nonumber
\\
&&
\nonumber
\\
\bar D C^{\bar z} & = & 0
\ \ .
\end{eqnarray}
The $s$-variations of the basic variables in the $z$-sector then take
the form
\begin{eqnarray}
s H_{\zb} ^{\ z} &=& [ \, \bar{\partial} - H_{\zb} ^{\ z} \partial - (\bar D H_{\zb} ^{\ z} ) D - (D H_{\zb} ^{\ z} ) \bar D
+( \partial H_{\zb} ^{\ z} ) \, ] \, C^z
\nonumber
\\
s \Lambda &=& [ \, C^z \partial + (DC^z ) \bar D \, ] \, \Lambda \, + \,
(D \bar D C^z ) \, \Lambda
\nonumber
\\
s \bar{\LA} &=& [ \, C^z \partial + (\bar D C^z ) D \, ] \, \bar{\LA} \, + \,
(\bar D D C^z ) \, \bar{\LA}
\\
s C^z & = & - \, [ \, C^z \partial C^z + (\bar D C^z )( DC^z ) \, ]
\ \ ,
\nonumber
\end{eqnarray}
while those in the $\bar z$-sector are still given by equations
(\ref{35b}).
Finite superdiffeomorphisms can be discussed
along the lines of the $N=1$ theory \cite{dg}. Here, we only note
that the restriction (\ref{h1})(\ref{h2})
on the geometry reduces the symmetry
group ${\rm sdiff} \, {\bf S \Sigma} \otimes U(1)$
to a subgroup thereof.
\section{Component field expressions}
In the restricted geometry (defined in the previous section),
the basic variables of the $z$-sector
are the superfields $ H_{\zb} ^{\ z} $ and $C^z$ which have the following
$\theta$-expansions:
\begin{eqnarray}
H_{\zb} ^{\ z} & = & \mu_{\bar z}^{\ z} + \theta \, \bar{\alpha}_{\bar z}^{\ \bar{\theta}}
+ \bar{\theta} \, \alpha_{\bar z} ^{\ \theta} + \bar{\theta} \theta \, \bar v _{\bar z}
\nonumber \\
\label{cf}
C^z & = & c^z + \theta \, \bar{\epsilon}^{\bar{\theta}}
+ \bar{\theta} \, \epsilon ^{\theta}
+ \bar{\theta} \theta \, k
\ \ .
\end{eqnarray}
Here, the bosonic fields $\mu$ and $\bar v$ are the ordinary
Beltrami coefficient and the $U(1)$ vector while $\alpha$
and $\bar{\alpha}$ represent their fermionic partners,
the Beltraminos. These variables transform under general
coordinate, local supersymmetry and local $U(1)$-transformations
parametrized, respectively, by $c, \epsilon,
\bar{\epsilon}$ and $k$.
The basic variables of the $\bar z$-sector are $H_{\theta}^{\ \bar z}$
and $C^{\bar z}$. To discuss their field content, we choose
the {\em WZ-supergauge} in which the only non-vanishing component
fields are
\begin{equation}
\bar D H_{\theta}^{\ \bar z} \! \mid \; = \, \bar{\mu} _z ^{\ \bar z}
\qquad {\rm and} \qquad
C^{\bar z} \! \mid \; = \, \bar{c} ^{\bar z} \quad , \quad
\bar D D C^{\bar z} \! \mid \; = \, \partial \bar{c} ^{\bar z}
\ \ .
\end{equation}
As expected for the (2,0)-supersymmetric theory, the $\bar z$-sector
only involves the complex conjugates of $\mu$ and $c$.
In the remainder of this section, we present the component
field results in the WZ-gauge.
For the matter sector, we consider a single superfield
${\cal X}$ (and its complex conjugate $\bar{\cal X}$) and a flat
target space metric ($K_j = \delta_{j \bar{\imath} } \, \bar{{\cal X}}
^{\bar \imath}$).
Hence, the only component fields are one complex scalar and two
spinor fields:
\begin{eqnarray}
{\cal X} \! \mid & \equiv & X \qquad , \qquad
D{\cal X} \! \mid \ \equiv \ \lambda_{\theta}
\nonumber \\
\bar{{\cal X}} \! \mid & \equiv & \bar X \qquad , \qquad
\bar D \bar{{\cal X}} \! \mid \ \equiv \ \bar{\lambda} _{\bar{\theta}}
\label{xcomp}
\ \ .
\end{eqnarray}
For these fields,
the invariant action (\ref{57}) reduces to the following
functional on the Riemann surface $\bf{{\Sigma}}$:
\begin{eqnarray}
i \, S_{inv} &= &
\int_{\bf{\Sigma}} d^2z \ \left\{
\frac{1}{1 - \mu \bar{\mu} } \ \left[
( \bar{\partial} - \mu \partial ) X \,
(\partial - \bar{\mu} \bar{\partial} ) \bar X
\right. \right.
\\
& & \qquad \qquad \qquad \left.
- \alpha \lambda
(\partial - \bar{\mu} \bar{\partial} ) \bar X -
\bar{\alpha} \bar{\lambda}
(\partial - \bar{\mu} \bar{\partial} ) X - \bar{\mu}
(\alpha \lambda)
(\bar{\alpha} \bar{\lambda}) \right]
\nonumber
\\
& & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left.
- \bar{\lambda}
( \bar{\partial} - \mu \partial - {1 \over 2} \partial \mu - \bar v ) \lambda \right\}
\ \ .
\nonumber
\end{eqnarray}
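As a simple consistency check, switching off the Beltrami fields and their
partners ($\mu = \bar{\mu} = \alpha = \bar{\alpha} = \bar v = 0$) reduces
this functional to the standard free field action on ${\bf \Sigma}$,
\[
i \, S_{inv} \, \Big\vert _{\rm flat} \; = \;
\int_{\bf{\Sigma}} d^2z \ \left[ \,
\bar{\partial} X \, \partial \bar X
\, - \, \bar{\lambda} \, \bar{\partial} \lambda \, \right]
\ \ ,
\]
describing a free complex scalar and free fermions.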
The $s$-variations of the matter superfields,
$s{\cal X} = (\Xi \cdot \partial ) {\cal X} , \,
s\bar{{\cal X}} = (\Xi \cdot \partial ) \bar{{\cal X}}$ can be projected
to space-time in a straightforward manner: from the definitions
$\Xi^z \! \mid \, \equiv \, \xi , \,
\Xi^{\bar z} \! \mid \, \equiv \, \bar{\xi} , \,
\Xi^{\theta} \! \mid \, \equiv \, \xi^{\theta} , \,
\Xi^{\bar{\theta}} \! \mid \, \equiv \, \xi^{\bar{\theta}}$ and
(\ref{cf})-(\ref{xcomp}),
it follows that
\begin{eqnarray}
sX \! & \! = \! & \! (\xi \cdot \partial ) X + \xi^{\theta} \lambda
\quad , \quad
s\lambda \, = \, (\xi \cdot \partial ) \lambda
+{1 \over 2} (\partial \xi + \mu \partial \bar{\xi} ) \lambda
+ \hat k \lambda + \xi^{\bar{\theta}} {\cal D} X \quad \ \
\\
s\bar X \! &\! = \! &\! (\xi \cdot \partial ) \bar X + \xi^{\bar{\theta}} \bar{\lambda}
\quad , \quad
s\bar{\lambda} \; = \; (\xi \cdot \partial ) \bar{\lambda}
+{1 \over 2} (\partial \xi + \mu \partial \bar{\xi} ) \bar{\lambda}
- \hat k \bar{\lambda} + \xi^{\theta} {\cal D} \bar X
\ , \quad \ \
\nonumber
\end{eqnarray}
where we introduced the notation
$\xi \cdot \partial \equiv \xi \partial + \bar{\xi} \bar{\partial} , \,
\hat k \equiv k - \bar{\xi} \bar v$ and the supercovariant
derivatives
\begin{equation}
{\cal D} X =
\frac{1}{1 - \mu \bar{\mu} } \ \left[
(\partial - \bar{\mu} \bar{\partial} ) X + \bar{\mu} \alpha \lambda \right]
\quad , \quad
{\cal D} \bar X =
\frac{1}{1 - \mu \bar{\mu} } \ \left[
(\partial - \bar{\mu} \bar{\partial} ) \bar X + \bar{\mu} \bar{\alpha} \bar{\lambda}
\right]
\ .
\end{equation}
\section{Anomalies and effective actions}
For the discussion of the
chirally split form of the superdiffeomorphism anomaly
and of its compensating action, we again
consider the restricted geometry defined in section 3.6.
We follow the procedure developed in reference \cite{dg1}
for the bosonic and $N=1$ supersymmetric cases and we expect that
the results can be extended to the {\em un}restricted geometry at the
expense of technical complications
as in the $N=1$ case.
We will mainly
work on the superplane
${\bf SC}$, but we will also comment on the
generalization to generic compact SRS's.
The results for the $\bar z$-sector
are to be discussed in the next section.
The
{\em holomorphically split form of the superdiffeomorphism anomaly} on
the superplane is given in the $z$-sector by
\begin{eqnarray}
\label{an}
{\cal A}^{(z)} [C^z ; H_{\zb} ^{\ z} ] & = &
\int_{\bf SC} d^4z \ C^z \, \partial [ D, \bar D ] \, H_{\zb} ^{\ z}
\\
& = &
{1 \over 2 } \
\int_{\bf C} d^2z \ \left\{ c \partial^3 \mu
+ 2 \epsilon \partial^2 \bar{\alpha}
+ 2 \bar{\epsilon} \partial^2 \alpha
+ 4 k \partial \bar v \right\}
\ \ .
\nonumber
\end{eqnarray}
It satisfies the Wess-Zumino (WZ) consistency condition
$s{\cal A} = 0$.
An expression which is well defined on a generic compact SRS
is obtained by replacing the operator
$\partial [D, \bar D ]$ by the superconformally covariant operator
\begin{equation}
{\cal L}_2= \partial [ D, \bar D ] + {\cal R} \partial
- (D{\cal R} ) \bar D
- (\bar D {\cal R} ) D
+ (\partial {\cal R} )
\label{bol}
\end{equation}
depending on a superprojective connection ${\cal R}$
\cite{ip}; from
$s{\cal R} =0$, it follows that the so-obtained functional
still satisfies the WZ consistency condition.
We note that
our superspace expression for ${\cal A}$ was previously found in
Polyakov's light-cone gauge \cite{xu} and that the corresponding
component field expression coincides with the
result found in
reference \cite{ot}
by differential geometric methods.
If written in terms
of the tilde coordinates,
the {\em Wess-Zumino-Polyakov (WZP) action}
associated to the chirally split superdiffeomorphism
anomaly on ${\bf SC}$
has the form of a free
scalar field action for the integrating factor
\cite{dg1}. Thus, in the present case, it reads
\begin{equation}
\label{wzp}
S^{(z)} _{WZP} [ H_{\zb} ^{\ z} ] =
\int_{{\bf SC}} d^4 \tilde z \ {\rm ln} \, \bar{\LA} \, (\tilde{\pab} \, {\rm ln} \, \Lambda )
\ \ ,
\end{equation}
where the variables
${\rm ln} \, \Lambda$ and
${\rm ln} \, \bar{\LA}$ represent (anti-) chiral superfields with respect to
the tilde coordinates: $\tilde D \, {\rm ln} \, \Lambda =0 =
\tilde{\bar D} \, {\rm ln} \, \bar{\LA}$. By
rewriting the action in terms of the coordinates $(z, \bar z, \theta, \bar{\theta} )$
and applying the $s$-operation, one reproduces the anomaly (\ref{an}):
\begin{eqnarray}
\label{wzpa}
S^{(z)} _{WZP} [ H_{\zb} ^{\ z} ] & = &
- \int_{{\bf SC}} d^4z \ H_{\zb} ^{\ z} (\partial \, {\rm ln} \, \bar{\LA} )
\\
s S^{(z)} _{WZP} [ H_{\zb} ^{\ z} ] & = &
- {\cal A}^{(z)} [C^z ; H_{\zb} ^{\ z} ]
\ \ .
\nonumber
\end{eqnarray}
The response of the WZP-functional to an infinitesimal variation
of the complex structure ($H_{\bar z}^{\ z} \to
H_{\bar z}^{\ z} + \delta H_{\bar z}^{\ z}$) is given by the super
Schwarzian derivative,
\begin{equation}
\frac{\delta S_{WZP}^{(z)} }{\delta H_{\bar z} ^{\ z} }
= {\cal S} (Z , \Theta ; z , \theta )
\ \ ,
\end{equation}
the latter being defined by \cite{jc, ar, ip}
\begin{equation}
{\cal S} (Z , \Theta ; z , \theta )
= [ D , \bar D ] Q - (DQ )(\bar D Q) \qquad {\rm with} \quad
Q = {\rm ln} \, D\Theta \, + \, {\rm ln} \, \bar D \bar{\Theta}
\ \ .
\end{equation}
The proof of this result proceeds along the lines of reference
\cite{dg1}: it makes use of the IFEQ's for $\Lambda = D\Theta, \,
\bar{\LA} = \bar D \bar{\Theta}$ and of the fact that the functional
(\ref{wzp}) can be rewritten as
\begin{eqnarray}
S^{(z)} _{WZP} [ H_{\zb} ^{\ z} ] & = & {1 \over 2}
\int_{{\bf SC}} d^4 \tilde z \ \left[
\, {\rm ln} \, \bar{\LA} \; \tilde{\pab} \, {\rm ln} \, \Lambda
- {\rm ln} \, \Lambda \; \tilde{\pab} \, {\rm ln} \, \bar{\LA} \, \right]
\nonumber \\
& = & {1 \over 2}
\int_{{\bf SC}} d^4z \ \left[ \, {\rm ln} \, \bar{\LA} \ D\bar D H_{\zb} ^{\ z} -
{\rm ln} \, \Lambda \ \bar D D H_{\zb} ^{\ z} \, \right]
\ \ .
\end{eqnarray}
Within the framework of (2,0) supergravity (i.e. the metric approach),
the effective action $S_{WZP}^{(z)}$
represents a chiral gauge expression (see \cite{dg1} and
references therein): in this approach, it rather takes the form
\begin{equation}
S^{(z)} _{WZP} =
- \int_{{\bf SC}} d^4z \ {\partial \bar{\Theta} \over \bar D \bar{\Theta}} \; \bar D H_{\zb} ^{\ z}
\ \ ,
\end{equation}
which follows from (\ref{wzpa}) by substitution of
$\bar{\LA} = \bar D \bar{\Theta}$.
We note that the
extension of the WZP-action from ${\bf SC}$
to generic super Riemann surfaces has been
discussed for the $N=0$ and $N=1$ cases in references \cite{ls, rz}
and \cite{ak}, respectively.
The {\em anomalous Ward identity} on the superplane reads
\begin{equation}
- \int_{{\bf SC}} d^4z \ (s H_{\zb} ^{\ z} )
\frac{\delta Z_c}{\delta H_{\zb} ^{\ z} } \, = \, k \,
{\cal A}^{(z)} [C^z ; H_{\zb} ^{\ z} ]
\ \ ,
\end{equation}
where $Z_c$ denotes the vertex functional and $k$ a constant.
By substituting the explicit expression for $s H_{\zb} ^{\ z} $ and introducing
the super stress tensor ${\cal T}_{\theta \bar{\theta}} =
\delta Z_c \, / \, \delta H_{\zb} ^{\ z} $, the last equation takes
the local form
\begin{equation}
\left[ \bar{\partial} - H_{\zb} ^{\ z} \partial - (\bar D H_{\zb} ^{\ z} ) D - (D H_{\zb} ^{\ z} ) \bar D - (\partial H_{\zb} ^{\ z} )
\right]
{\cal T}_{\theta \bar{\theta}} \, = \, - k \, \partial [D, \bar D ] H_{\zb} ^{\ z}
\ \ .
\end{equation}
This relation
has previously been derived and discussed in the light-cone gauge
\cite{xu}.
For $k\neq 0$, the redefinition ${\cal T} \to -k {\cal T}$
yields
\[
{\cal L}_2 H_{\zb} ^{\ z} = \bar{\partial} {\cal T}_{\theta \bar{\theta}}
\ \ ,
\]
where ${\cal L}_2$ represents the covariant operator (\ref{bol})
with ${\cal R} = {\cal T}$.
\section{The $\bar z$-sector revisited}
Since the hat derivatives $\hat D$ and $\hat{\bar D}$ are nilpotent,
the constraint equations (\ref{soa}), i.e.
$\hat D H_{\theta} ^{\ \bar z} = 0 =
\hat{\bar D} H_{\bar{\theta}} ^{\ \bar z}$,
can be solved in terms of superfields
$H^{\bar z}$ and $\check H ^{\bar z}$:
\begin{eqnarray}
H_{\theta} ^{\ \bar z} & = &
\hat D H^{\bar z} \, = \, (D -
H_{\theta} ^{\ \bar z} \bar{\partial} ) H^{\bar z} \, = \,
\sum_{n=0}^{\infty} ( - \bar{\partial} H^{\bar z} )^n
\ D H^{\bar z}
\\
\nonumber
H_{\bar{\theta}} ^{\ \bar z} & = &
\hat{\bar D} \check H ^{\bar z} \, = \, (\bar D -
H_{\bar{\theta}} ^{\ \bar z} \bar{\partial} ) \check H ^{\bar z} \, = \,
\sum_{n=0}^{\infty}
(- \bar{\partial} \check H ^{\bar z} )^n
\ \bar D \check H ^{\bar z}
\ \ .
\end{eqnarray}
The last expression on the r.h.s. of these equations
follows by iteration
of the corresponding equation.
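In fact, since $\bar{\partial} H^{\bar z}$ is Grassmann-even, the first
of these equations can be rearranged as
\[
H_{\theta} ^{\ \bar z} \, ( \, 1 + \bar{\partial} H^{\bar z} \, )
\; = \; D H^{\bar z}
\ \ ,
\]
and formal inversion of the factor $1 + \bar{\partial} H^{\bar z}$ as a
geometric series yields
$H_{\theta} ^{\ \bar z} = \sum_{n=0}^{\infty}
( - \bar{\partial} H^{\bar z} )^n \, D H^{\bar z}$;
the equation for $H_{\bar{\theta}} ^{\ \bar z}$ is treated analogously.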
The new variable $H^{\bar z}$ ($\check H ^{\bar z}$) still allows
for the addition of a superfield
$G^{\bar z}$ ($\check G ^{\bar z}$) satisfying
$\hat{D} G^{\bar z} =0$
($\hat{\bar D} \check G ^{\bar z} =0$).
The infinitesimal transformation laws of $H^{\bar z}$ and
$\check{H} ^{\bar z}$ read
\begin{eqnarray}
s H^{\bar z} & = & C^{\bar z} ( 1 + \bar{\partial} H^{\bar z} ) + B^{\bar z}
\quad , \quad
s B^{\bar z} = - C^{\bar z} \bar{\partial} B^{\bar z}
\qquad {\rm with} \quad
\hat D B^{\bar z} =0 \quad
\nonumber
\\
s \check H ^{\bar z} & = & C^{\bar z} ( 1 + \bar{\partial} \check H ^{\bar z} ) +
\check B ^{\bar z}
\quad , \quad
s \check B^{\bar z} = - C^{\bar z} \bar{\partial} \check B^{\bar z}
\qquad {\rm with} \quad
\hat{\bar D} \check B ^{\bar z} =0 \quad
\end{eqnarray}
and induce the transformation laws (\ref{35b}) of
$H_{\theta}^{\ \bar z}$ and
$H_{\bar{\theta}}^{\ \bar z}$.
We note that the introduction and transformation laws
of $H^{\bar z}$ and $\check H ^{\bar z}$ are very reminiscent
of the prepotential $V$ occurring in 4-dimensional supersymmetric
Yang-Mills theories: in the abelian case, the latter transforms
according to $sV = i (\Lambda - \bar{\LA})$ where
$\Lambda$ ($\bar{\LA}$) represents a chiral (anti-chiral) superfield.
For the restricted geometry, we have $\check H ^{\bar z} = 0$ and,
in the WZ-gauge, the non-vanishing component fields
of $H^{\bar z}$ and $B ^{\bar z}$ are
\[
[ D, \bar D ] H^{\bar z} \vert \, = -2 \bar{\mu}
\qquad {\rm and} \qquad
B^{\bar z} \vert \, = - \bar c \quad , \quad
[D, \bar D ] B^{\bar z} \vert \, = - (\partial - 2 \bar{\mu} \bar{\partial} ) \bar c
\ .
\]
In this gauge,
the {\em superdiffeomorphism anomaly in the $\bar z$-sector} takes
the form
\begin{equation}
\label{azb}
{\cal A}^{(\bar z)} [ C^{\bar z} ; H^{\bar z} ] \; = \;
\int_{{\bf SC}} d^4 z \, C^{\bar z} \bar{\partial} ^3 H^{\bar z} \; = \;
- \int_{{\bf C}} d^2 z \, \bar c \, \bar{\partial} ^3 \bar{\mu}
\ \ .
\end{equation}
\section{Super Beltrami equations}
Substitution of the expressions (\ref{13a}) into the
definitions (\ref{15}) yields the {\em super Beltrami
equations}, e.g. the one involving the basic variable
$ H_{\zb} ^{\ z} $:
\begin{equation}
0= ( \bar{\partial} Z + {1 \over 2} \bar{\Theta} \bar{\partial} \Theta + {1 \over 2} \Theta \bar{\partial} \bar{\Theta} )
- H_{\zb} ^{\ z}
( \partial Z + {1 \over 2} \bar{\Theta} \partial \Theta + {1 \over 2} \Theta \partial \bar{\Theta} )
\ \ .
\end{equation}
These equations can be used to define quasi-superconformal
mappings \cite{ta,jc}; they occur in the supergravity approach
\cite{ar} and have been studied from the mathematical
point of view for the $N=1$ case in reference \cite{cr}.
\chapter{(2,2) Theory}
\section{Introduction}
We now summarize the main results
of the (2,2) theory.
As expected,
most expressions in the $z$-sector are the same as those of the
(2,0) theory, while those
in the $\bar{z}$-sector are simply obtained by complex conjugation.
Therefore, our presentation closely follows the lines of chapter 3
and the new features are pointed
out whenever they appear.
The
general framework for (2,2) SRS's and superconformal transformations
is the one described in chapter 2.
\section{Beltrami superfields}
Starting from a reference complex structure
given by local coordinates
$(z,\theta,\bar{\theta};\bar z,\theta^-,\bar{\theta}^-)$ on a (2,2) SRS, we pass
over to a generic complex structure corresponding to local coordinates
$(Z,\Theta,\bar{\Theta};\bar Z,\Theta^-,\bar{\Theta}^-)$
by a smooth change of coordinates.
The induced transformation law of the
canonical 1-forms has the form
\begin{equation}
(e^Z,e^{\bar Z},e^{\Theta},e^{\bar{\Theta}},e^{\Theta ^-},e^{\bar{\Theta} ^-})\ =\
(e^z,e^{\bar z},e^{\theta},e^{\bar{\theta}},e^{\theta ^-},e^{\bar{\theta} ^-})\ \cdot M \cdot Q
\ \ ,
\end{equation}
where the
matrices $M$ and $Q$ contain the Beltrami superfields and
integrating factors, respectively. More explicitly, $MQ$ reads
\begin{equation}
\left( \begin{array}{llllll}
1 & H_{z}^{\ \zb} & 0 & 0 & H_{z}^{\ \th ^-} & H_{z}^{\ \tb ^-} \\
H_{\zb}^{\ z} & 1 & H_{\zb}^{\ \th} & H_{\zb}^{\ \tb} & 0 & 0 \\
H_{\th}^{\ z} & H_{\th}^{\ \zb} & H_{\th}^{\ \th} & H_{\th}^{\ \tb} & H_{\th}^{\ \th ^-} & H_{\th}^{\ \tb ^-} \\
H_{\tb}^{\ z } & H_{\tb}^{\ \zb } & H_{\tb}^{\ \th} & H_{\tb}^{\ \tb} & H_{\tb}^{\ \th ^-} & H_{\tb}^{\ \tb ^-} \\
H_{\th ^-}^{\ z} & H_{\th ^-}^{\ \zb} & H_{\th ^-}^{\ \th} & H_{\th ^-}^{\ \tb} & H_{\th ^-}^{\ \th ^-} & H_{\th ^-}^{\ \tb ^-} \\
H_{\tb ^-}^{\ z } & H_{\tb ^-}^{\ \zb } & H_{\tb ^-}^{\ \th} & H_{\tb ^-}^{\ \tb} & H_{\tb ^-}^{\ \th ^-} & H_{\tb ^-}^{\ \tb ^-} \\
\end{array} \right)
\left( \begin{array}{ccllll}
\Lambda \bar{\LA} & 0 & \tau & \bar{\tau} & 0 & 0 \\
0 & \LA ^- \LB ^- & 0 & 0 & \tau ^- & \taub ^- \\
0 & 0 & \Lambda & 0 & 0 & 0 \\
0 & 0 & 0 & \bar{\LA} & 0 & 0 \\
0 & 0 & 0 & 0 & \LA ^- & 0 \\
0 & 0 & 0 & 0 & 0 & \LB ^- \\
\end{array} \right)
,
\end{equation}
where the indices $z, \theta, \bar{\theta}$ and $\bar z, \theta^- , \bar{\theta}^-$ are
related by complex conjugation, e.g.
\begin{eqnarray*}
\Lambda^{\ast} & = & \Lambda^-
\quad , \quad
\tau^{\ast} \; = \; \tau^-
\quad , \quad
( H_{\zb}^{\ z} )^{\ast} \; = \; H_z^{\ \bar z}
\quad , \quad
(H_{\bar{\theta}}^{\ \theta} )^{\ast} \; = \;
H_{\bar{\theta}^-}^{\ \theta^-}
\\
\bar{\LA}^{\ast} & = & \bar{\LA}^-
\quad , \quad
\bar{\tau} ^{\ast} \; = \; \bar{\tau} ^-
\quad , \quad
(H_{\theta}^{\ z})^{\ast} \; = \; H_{\theta^-} ^{\ \bar z}
\quad , \qquad \qquad \ ...
\end{eqnarray*}
The
`$H$' are invariant under superconformal transformations of the capital
coordinates while the integrating factors change under the latter
according to
\begin{equation}
\begin{array}{cclcccl}
\Lambda ^{\prime} & = & {\rm e} ^{-W} \ \Lambda
& , &
\bar{\LA} ^{\prime} & = &\ {\rm e} ^{-\bar{W}} \ \bar{\LA}\\
\tau^{\prime} & = & {\rm e} ^{-W} \ [ \, \tau \ - \
\Lambda \, \bar{\LA} \, (D_{\bar{\Theta}} W ) \,]
& , & \,
\bar{\tau}^{\prime} & = & {\rm e} ^{-\bar{W}} \ [ \, \bar{\tau} \ - \
\Lambda \, \bar{\LA} \, (D_{\Theta} \bar{W} ) \,]
\ \ ,
\end{array}
\end{equation}
where
${\rm e}^{- W} \equiv D_{\Theta} \Theta^{\prime}$ and
${\rm e}^{- \bar{W}} \equiv D_{\bar{\Theta}} \bar{\Theta}^{\prime}$.
The transformation laws of $\Lambda ^-,\bar{\LA} ^-, \tau ^-,\bar{\tau} ^-$
are obtained by complex conjugation and involve
$W^{\ast} = W^- , \bar{W} ^{\ast} = \bar W ^-$.
The $U(1)$ symmetry (with parameter $K$)
of the (2,0) theory becomes a $U(1)
\otimes U(1)$-symmetry parametrized by $K$ and $K^- = K^{\ast}$
under which the fields transform according to
\begin{eqnarray}
\Lambda ^{\prime} & = & {\rm e} ^K \ \Lambda
\ \ \ \ \ \ \ \ \ , \ \ \ \ \ \ \ \ \ \ \ \
\bar{\LA} ^{\prime} \ = \ {\rm e} ^{-K} \ \bar{\LA}
\\
(H_a ^{\ \theta} )^{\prime} & = &
{\rm e}^{-K} \
H_a ^{\ \theta}
\ \ \ \ \ , \ \ \ \ \ \
(H_a ^{\ \bar{\theta}} )^{\prime} \ = \
{\rm e}^{K} \
H_a ^{\ \bar{\theta}}
\ \ \ \ \ {\rm for} \ \ a \, \neq \, z
\nonumber
\end{eqnarray}
and the c.c. equations.
Due to the structure relations (\ref{10}),
the `$H$'
satisfy the following set of equations (and their c.c.):
\begin{eqnarray}
H_{\th} ^{\ \th} H_{\bar{\theta}} ^{\ \bar{\theta}} \, + \, H_{\bar{\theta}} ^{\ \theta} H_{\theta} ^{\ \bar{\theta}}
& = & 1\, - \,
( \bar D - {H_{\tb}}^z \partial ) {H_{\th}}^z \, - \,
( D - {H_{\th}}^z \partial ) {H_{\tb}}^z
\nonumber \\
H_{\th ^-}^{\ \th} H_{\bar{\theta} ^-} ^{\ \bar{\theta}} \, + \,
H_{\bar{\theta} ^-} ^{\ \theta} H_{\theta ^-} ^{\ \bar{\theta}}
& = & H_{\zb}^{\ z} \, - \,
( \bar D _- - H_{\tb ^-}^{\ z } \partial ) H_{\th ^-}^{\ z} \, - \,
( D_- - H_{\th ^-}^{\ z} \partial ) H_{\tb ^-}^{\ z }
\label{relH1}\\
H_a ^{\ \theta} H_a ^{\ \bar{\theta}} & = & - \,
(D_a - H_a ^{\ z} \partial ) H_a ^{\ z}
\qquad \qquad \qquad
\qquad {\rm for} \ \; a = \theta , \bar{\theta} , \theta^- , \bar{\theta}^-
\nonumber \\
H_{\zb}^{\ \th} H_a ^{\ \bar{\theta}} \, + \, H_{\zb}^{\ \tb} H_a ^{\ \theta}
& = &
( D_a - H_a ^{\ z} \partial ) H_{\zb} ^{\ z} \, - \,
( \bar{\partial} - H_{\zb} ^{\ z} \partial ) H_a ^{\ z}
\quad
{\rm for} \ \, a = \theta , \bar{\theta} , \theta ^- , \bar{\theta} ^-
\nonumber \\
H_a ^{\ \theta} H_b ^{\ \bar{\theta}} +
H_b ^{\ \theta} H_a ^{\ \bar{\theta}} & = & - \,
(D_a - H_a ^{\ z} \partial ) H_b ^{\ z}
\, - \,
(D_b - H_b ^{\ z} \partial ) H_a ^{\ z}
\nonumber \\
& & \qquad \qquad\quad
{\rm for} \ \, (a,b) =
(\theta , \theta^-) , \,
(\theta , \bar{\theta}^-) , \,
(\bar{\theta} , \theta^-) , \,
(\bar{\theta} , \bar{\theta}^-)
\ .
\nonumber
\end{eqnarray}
By linearizing the variables
($H_{\theta}^{\ \theta} = 1 + h_{\theta} ^{\ \theta}, \,
H_{\bar{\theta}}^{\ \bar{\theta}} = 1 + h_{\bar{\theta}} ^{\ \bar{\theta}}$ and
$H_a ^{\ b} = h_a ^{\ b}$ otherwise), we find that the independent
linearized fields are
$h_{\theta}^{\ z}, \,
h_{\bar{\theta}}^{\ z}, \,
h_{\theta}^{\ \theta} - h_{\bar{\theta}}^{\ \bar{\theta}}, \,
h_{\theta^-}^{\ z}, \,
h_{\bar{\theta}^-}^{\ z}$ where the latter two satisfy (anti-) chirality
conditions
($D_- h_{\theta^-}^{\ z} = 0 = \bar D_-
h_{\bar{\theta}^-}^{\ z}$). Thus, there are 5 independent Beltrami superfields,
$H_{\theta}^{\ z}, \,
H_{\bar{\theta}}^{\ z}, \,
H_{\theta^-}^{\ z}, \,
H_{\bar{\theta}^-}^{\ z}$ and
$H_{\theta}^{\ \theta} / H_{\bar{\theta}}^{\ \bar{\theta}}$, but
$H_{\theta^-}^{\ z}$ and
$H_{\bar{\theta}^-}^{\ z}$ satisfy chirality-type conditions
which reduce the number of their independent component fields
by half.
In section 4.8, these constraints will be explicitly solved in a
special case in terms of an unconstrained superfield $H^z$.
The factors $\tau , \, \bar{\tau}$ are differential polynomials
in the Beltrami coefficients and the
integrating factors $\Lambda , \bar{\LA}$:
\begin{eqnarray}
\tau & = &
( H_{\th} ^{\ \th} H_{\bar{\theta}} ^{\ \bar{\theta}} + H_{\bar{\theta}} ^{\ \theta} H_{\theta} ^{\ \bar{\theta}}
)^{-1} \left[ ( \bar D - {H_{\tb}}^z \partial )
( H_{\th} ^{\ \th} \Lambda ) +
( D - {H_{\th}}^z \partial )
( H_{\bar{\theta}} ^{\ \theta} \Lambda ) \right] \ \\
\bar{\tau} & = &
( H_{\th} ^{\ \th} H_{\bar{\theta}} ^{\ \bar{\theta}} + H_{\bar{\theta}} ^{\ \theta} H_{\theta} ^{\ \bar{\theta}}
)^{-1} \left[ ( D - {H_{\th}}^z \partial )
( H_{\bar{\theta}} ^{\ \bar{\theta}} \bar{\LA} ) +
( \bar D - {H_{\tb}}^z \partial )
( H_{\theta} ^{\ \bar{\theta}} \bar{\LA} ) \right]
\ .
\nonumber
\end{eqnarray}
As for the factors
$\Lambda , \bar{\LA}$ themselves, they satisfy the IFEQ's
\begin{eqnarray}
0 & = &
(\, D_a - H_a ^{\ z} \partial - \frac{1}{2} \, \partial H_a ^{\ z}
- V_a ) \, \Lambda
\, - \, H_a ^{\ \bar{\theta}} \, \tau
\\
0 & = & (\, D_a - H_a ^{\ z}
\partial - \frac{1}{2} \, \partial H_a ^{\ z}
+ V_a ) \, \bar{\LA}
\, - \, H_a ^{\ \theta} \, \bar{\tau}
\ \ ,
\nonumber
\end{eqnarray}
where it is understood
that $H_z^{\ z} =1$ and $H_z^{\ \theta} = 0 = H_z^{\ \bar{\theta}}$.
The c.c. variables $\Lambda^- , \bar{\Lambda} ^- , \tau^- , \bar{\tau} ^-$
satisfy the c.c. equations and
the $U(1) \otimes U(1)$ connections $V_a$ and $V^- _a$
which appear in the previous set of equations
are given by
\begin{eqnarray}
V_z & = & 0 \nonumber \\
V_{\bar z} & = & \frac{1}{ H_{\th}^{\ \th} } \{[ D - H_{\th}^{\ z} \partial + \frac{1}{2}
(\partial H_{\th}^{\ z} ) + V_{\theta}] \, H_{\bar z} ^{\ \theta}
\, - \, [ \bar{\partial} - H_{\zb}^{\ z} \partial + \frac{1}{2}
(\partial H_{\zb}^{\ z} )] \, H_{\th}^{\ \th} \} \nonumber \\
V_{\theta} & = & - \frac{1}{ H_{\th}^{\ \th} }\ [ D - H_{\th}^{\ z} \partial + \frac{1}{2}
(\partial H_{\th}^{\ z} ) ] \, H_{\th}^{\ \th} \\
V_{\bar{\theta}} & = & \frac{1}{ H_{\tb}^{\ \tb} }\ [ \bar{D} - H_{\tb}^{\ z } \partial + \frac{1}{2}
(\partial H_{\tb}^{\ z } ) ] \, H_{\tb}^{\ \tb} \nonumber \\
V_a & = & -\frac{1}{ H_{\th}^{\ \th} } \{ [ D_a - H_a ^{\ z} \partial + \frac{1}{2}
(\partial H_a ^{\ z}) ] \, H_{\th}^{\ \th} \, +\,
[ D - H_{\th}^{\ z} \partial + \frac{1}{2} (\partial H_{\th}^{\ z} ) + V_{\theta}] \, H_a ^{\ \theta} \}
\nonumber \\
&& \qquad \qquad \qquad \qquad \qquad \qquad
\qquad \qquad \qquad \qquad \qquad
\qquad {\rm for} \ \, a = \theta ^- , \bar{\theta} ^- \ . \nonumber
\end{eqnarray}
We note that
the last equations can also be written in the form
\begin{eqnarray}
H_a ^{\ \theta} V_a & = & \ -[ D_a - H_a ^{\ z} \partial + \frac{1}{2} (\partial H_a
^{\ z}) ]
\, H_a ^{\ \theta} \ \, \ \ \ {\rm for} \ \, a= \bar{\theta} , \theta ^- , \bar{\theta} ^-
\nonumber \\
H_a ^{\ \bar{\theta}} V_a & = & \ [ D_a - H_a ^{\ z} \partial + \frac{1}{2} (\partial H_a
^{\ z}) ] \, H_a ^{\ \bar{\theta}}
\qquad \ {\rm for} \ \, a = \theta , \theta ^- , \bar{\theta} ^- \
.
\end{eqnarray}
\section{Symmetry transformations}
In order to obtain the
transformation laws of the fields
under infinitesimal superdiffeomorphisms
and $U(1) \otimes U(1)$ transformations,
we introduce the ghost vector field
\[
\Xi \cdot \partial \ \equiv \
\Xi^{z} \, \partial \ + \
\Xi^{\bar z} \, \bar{\partial} \ + \
\Xi^{\theta} \, D \ + \
\Xi^{\bar{\theta}} \, \bar D \ + \
\Xi^{\theta ^-} \, D_- \ + \
\Xi^{\bar{\theta} ^-} \, \bar D _-
\ \ ,
\]
(with $\Xi^a =\, \Xi^a (z , \theta , \bar{\theta} \, ; \bar z , \theta ^- , \bar{\theta} ^- )$)
which generates an infinitesimal change of the coordinates
$(z , \theta , \bar{\theta} \, ; \bar z , \theta ^- , \bar{\theta} ^-)$.
The $U(1) \otimes U(1)$ transformations again appear
in a natural way in
the trans\-formation laws of the integrating factors
and are parametrized by
ghost superfields
$K$ and $K ^-$ .
In terms of the reparametrized ghosts
\begin{equation}
\left( \,
C^z \, ,\, C^{\bar z} \, ,\,
C^{\theta} \, ,\, C^{\bar{\theta}}\, ,\, C^{\theta ^-} \, ,\, C^{\bar{\theta} ^-}
\right) \ = \
\left( \, \Xi^z \, , \, \Xi^{\bar z} \, , \,
\Xi ^{\theta} \, ,\, \Xi ^{\bar{\theta}} \, , \,
\Xi ^{\theta ^-} \, ,\, \Xi ^{\bar{\theta} ^-}
\, \right) \cdot M
\ \ ,
\end{equation}
the BRS variations read
\begin{eqnarray}
s \Lambda & = & C^z \, \partial \Lambda \ + \ \frac{1}{2} \ (\partial C^z) \, \Lambda
\ + \ C^{\bar{\theta}} \, \tau \ + \ K\, \Lambda
\nonumber \\
s \bar{\LA} & = & C^z \, \partial \bar{\LA} \ + \ \frac{1}{2} \ (\partial C^z) \, \bar{\LA}
\ + \ C^{\theta} \, \bar{\tau} \ - \ K\, \bar{\LA}
\nonumber \\
s \tau & = & \partial \, ( \, C^z \tau + C^{\theta}
\Lambda \, )
\\
s \bar{\tau} & = & \partial \, ( \, C^z \bar{\tau} + C^{\bar{\theta}}
\bar{\LA} \, )
\ \ ,
\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{ul}
sH_a ^{\ z} & = &
(\, D_a - H_a ^{\ z} \partial
+ \partial H_a ^{\ z} \, )\, C^z - H_a ^{\ \theta}
C^{\bar{\theta}} - H_a ^{\ \bar{\theta}}
C^{\theta}
\\
s H_a ^{\ \theta} & = & ( \, D_a - H_a ^{\ z} \partial
+ \frac{1}{2} \, \partial H_a ^{\ z} + V_a \, ) \, C^{\theta}
+ C^z \partial H_a ^{\ \theta} -
\frac{1}{2} \,
H_a ^{\ \theta} ( \partial C^z ) - H_a ^{\ \theta} K
\nonumber \\
s H_a ^{\ \bar{\theta}} & = & ( \, D_a - H_a ^{\ z} \partial
+ \frac{1}{2} \, \partial H_a ^{\ z} - V_a \, ) \, C^{\bar{\theta}}
+ C^z \partial H_a ^{\ \bar{\theta}} -
\frac{1}{2} \,
H_a ^{\ \bar{\theta}} ( \partial C^z ) + H_a ^{\ \bar{\theta}} K
\nonumber
\\
sV_a & = & C^z \partial V_a + \frac{1}{2} H_a ^{\ \theta}
\partial C^{\bar{\theta}}
- \frac{1}{2} (\partial H_a ^{\ \theta}) C^{\bar{\theta}}
- \frac{1}{2} H_a ^{\ \bar{\theta}}
\partial C^{\theta}
+ \frac{1}{2} (\partial H_a ^{\ \bar{\theta}} ) C^{\theta}
\nonumber \\
& & \qquad \qquad \qquad \qquad \qquad
\qquad \qquad \qquad \qquad \qquad
\quad
+ ( D_a - H_a ^{\ z} \partial ) K
\nonumber
\end{eqnarray}
\begin{eqnarray}
s C^z & = & - \, [ \, C^z \partial C^z + C^{\theta}
C^{\bar{\theta}} \, ]
\nonumber
\\
s C^{\theta} & = & - \, [ \, C^z \partial C^{\theta} + \frac{1}{2}
\, C^{\theta} (\partial C^z ) - K C^{\theta} \, ]
\nonumber
\\
\label{ult}
s C^{\bar{\theta}} & = & - \, [ \, C^z \partial C^{\bar{\theta}} + \frac{1}{2}
\, C^{\bar{\theta}} (\partial C^z ) + K C^{\bar{\theta}} \, ]
\\
s K & = & - \, [ \, C^z \partial K - \frac{1}{2} \,
C^{\theta} (\partial C^{\bar{\theta}} ) + \frac{1}{2} \, C^{\bar{\theta}}
(\partial C^{\theta} ) \, ]
\ \ .
\nonumber
\end{eqnarray}
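As a consistency check (not spelled out explicitly here), the nilpotency
$s^2 = 0$ may be verified directly on these variations; for instance, for
$C^z$ one finds, using the anticommutativity of the ghosts,
\begin{eqnarray*}
s^2 C^z & = & - \, \left[ \, (sC^z)\, \partial C^z \, - \, C^z \, \partial (sC^z)
\, + \, (sC^{\theta})\, C^{\bar{\theta}} \, - \, C^{\theta} \, (sC^{\bar{\theta}}) \, \right]
\\
& = & - \, \left[ \, - \, C^{\theta} C^{\bar{\theta}} \, \partial C^z
\, + \, C^z (\partial C^{\theta}) C^{\bar{\theta}}
\, + \, C^z C^{\theta} \, \partial C^{\bar{\theta}}
\right.
\\
& & \qquad \left.
\, - \, C^z (\partial C^{\theta}) C^{\bar{\theta}}
\, - \, C^z C^{\theta} \, \partial C^{\bar{\theta}}
\, + \, C^{\theta} C^{\bar{\theta}} \, \partial C^z \, \right] \ = \ 0
\ \ ,
\end{eqnarray*}
where the $K$-dependent terms have cancelled among themselves and
$(\partial C^z)^2 = 0 = C^z C^z$ has been used.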
The variations of the c.c. fields are simply obtained
by complex conjugation; hence,
the holomorphic factorization is manifestly
realized for the chosen parametrization.
Furthermore, the number of independent Beltrami fields and
the number of
symmetry parameters coincide. By
projecting to space-time fields according to eqs.(\ref{39})(\ref{40}),
one obtains the transformation laws
(\ref{41}).
The variations (\ref{ul})(\ref{ult}) of $H_a^{\ b}, \, V_a ,\, C^a$
and $K$ coincide with
those found in the metric approach in reference \cite{ot}.
\section{Scalar superfields}
We consider complex
superfields
${\cal X}^i$ and $\bar{{\cal X}} ^{\bar{\imath}} =
({\cal X}^i)^{\ast}$
satisfying the (twisted) chirality conditions \cite{ghr}
\begin{equation}
\begin{array}{rcl}
D_{\bar{\Theta}} {\cal X}^i & = & 0 \ = \
D_{\Theta^-} {\cal X}^i
\\
D_{\Theta} \bar{{\cal X}} ^{\bar{\imath}} & = & 0 \ = \
D_{\bar{\Theta} ^-} \bar{{\cal X}} ^{\bar{\imath}}
\ \ .
\end{array}
\end{equation}
Other multiplets have been introduced and discussed in references
\cite{ghr} and \cite{ggw}.
The sigma-model action
describing the coupling of these fields to a superconformal class
of metrics on the SRS ${\bf {S\Sigma}}$ is given by
\cite{bz,ghr}
\begin{equation}
S_{inv} [ {\cal X} , \bar{{\cal X}} ] \, =\int_{\bf {S\Sigma}}d^6 Z
\ K({\cal X} , \bar{{\cal X}})
\ \ ,
\label{action22}
\end{equation}
where
$K$ is a real function of the fields ${\cal X}\, , \, \bar{{\cal X}}$
and
$d^6 Z = dZ \, d\bar{Z} \, d\Theta \, d\bar{\Theta}
\, d\Theta ^- \, d\bar{\Theta} ^- $ is the
superconformally invariant measure.
For a
flat target space metric, the functional (\ref{action22}) reduces to
\cite{ade}
\begin{equation}
S_{inv} [ {\cal X} , \bar{{\cal X}} ] \, =\int_{\bf {S\Sigma}}d^6 Z
\ {\cal X} \bar{{\cal X}}
\ \ .
\label{action22flat}
\end{equation}
\section{Restriction of the geometry}
The restriction of the geometry
is achieved by imposing the following conditions:
\begin{equation}
\label{impo}
H_{\th}^{\ z} \, = \, H_{\tb}^{\ z } \, = \,
H_{\tb ^-}^{\ z } \, = \, 0
\qquad {\rm and} \qquad
H_{\th}^{\ \th} / H_{\tb}^{\ \tb} \, = \, 1
\ \ .
\end{equation}
The addition of the
c.c. equations is tacitly understood throughout this section.
Equations (\ref{relH1}) then imply that
all Beltrami coefficients depend on $ H_{\th ^-}^{\ z} $ by virtue of the relations
\begin{eqnarray}
H_{\zb}^{\ z} & = & \bar{D}_- H_{\th ^-}^{\ z}
\quad , \quad
H_{\zb}^{\ \th} \, = \, \bar{D} H_{\zb}^{\ z}
\quad , \quad
H_{\th ^-}^{\ \th} \, = \, - \bar{D} H_{\th ^-}^{\ z}
\nonumber
\\
& & \qquad \qquad \quad\ \
H_{\zb}^{\ \tb} \, = \, D H_{\zb}^{\ z}
\quad , \quad
H_{\th ^-}^{\ \tb} \, = \, - D H_{\th ^-}^{\ z}
\\
H_{\theta}^{\ \bar{\theta}} & = &
H_{\bar{\theta}}^{\ \theta} \, = \,
H_{\tb ^-}^{\ \th} \, = \, H_{\tb ^-}^{\ \tb} \, = \, 0
\ \ \quad , \quad
H_{\th}^{\ \th} \, = \, 1 \, = \, H_{\tb}^{\ \tb}
\nonumber
\end{eqnarray}
and that $ H_{\th ^-}^{\ z} $ itself satisfies the covariant chirality condition
\begin{equation}
\label{grig}
(D_- - H_{\th ^-}^{\ z} \partial
+ D H_{\th ^-}^{\ z} \, \bar D ) \, H_{\th ^-}^{\ z} \, = \, 0
\ \ .
\end{equation}
The relations satisfied by the other variables become
\begin{eqnarray}
\tau & = & \bar{D} \Lambda
\quad , \quad
D \Lambda \; = \; 0
\quad , \quad
\bar{D}_- \Lambda \; = \; 0
\quad , \quad
D_- \Lambda \,=\, D\bar{D}( H_{\th ^-}^{\ z} \Lambda )
\nonumber
\\
\bar{\tau} & = & D \bar{\LA}
\quad , \quad
\bar{D} \bar{\Lambda} \; = \; 0
\quad , \quad
\bar{D}_- \bar{\Lambda} \; = \; 0
\quad , \quad
D_- \bar{\LA} \,=\, \bar{D}D( H_{\th ^-}^{\ z} \bar{\LA} )
\nonumber
\\
V_{\theta} & = & 0
\quad \quad , \quad \
V_{\theta^-} \;=\; \frac{1}{2} \, [D,\bar{D}] H_{\th ^-}^{\ z}
\quad \qquad , \qquad
V_{\bar z} \;=\; \bar{D}_- V_{\theta^-}
\\
V_{\bar{\theta}} & = & 0
\ \ \ \quad , \ \quad
V_{{\bar{\theta}}^-}\;=\; 0
\quad ,
\nonumber
\end{eqnarray}
and eqs.(\ref{13a})(\ref{ana}) yield the local expressions
\begin{equation}
\Lambda \,=\,D\Theta \ \ \ ,\ \ \ \bar{\LA} \,=\,\bar{D} \bar{\Theta} \ \ .
\end{equation}
The $s$-invariance of conditions (\ref{impo}) implies that the
symmetry parameters $C^{\theta}, \, C^{\bar{\theta}}$ and $K$ depend on $C^z$
according to
\begin{equation}
\begin{array}{lclcl}
C^{\bar{\theta}} \,=\, DC^z &,& C^{\theta} \,=\, \bar{D} C^z &,&
K\,=\,\displaystyle{1 \over 2} \, [ D , \bar{D} ] C^z
\end{array}
\end{equation}
and that $C^z$ itself satisfies the chirality condition
\begin{equation}
\bar{D}_- C^z \,=\, 0 \ \ .
\label{contC}
\end{equation}
Thus, the $s$-variations of the basic variables read
\begin{eqnarray}
s H_{\th ^-}^{\ z} & = & [ D_- - H_{\th ^-}^{\ z} \partial + (\bar D H_{\th ^-}^{\ z} ) D + (D H_{\th ^-}^{\ z} ) \bar D
+ (\partial H_{\th ^-}^{\ z} ) ] \, C^z
\nonumber \\
sC^z & = & - \, [ C^z \partial C^z + (D C^z ) (\bar D C^z ) ]
\ \ .
\end{eqnarray}
\section{Intermediate coordinates}
The intermediate coordinates which are relevant for us are those
obtained by going over from $z$ and $\bar{\theta}$ to capital coordinates
without modifying the other coordinates:
\begin{equation}
(z , \theta , \bar{\theta} \, ; \bar z , \theta^- , \bar{\theta}^- )
\ \stackrel{M_1 Q_1}{\longrightarrow} \
(\tilde z, \tilde{\theta} , \tilde{\bar{\theta}} \, ;
\tilde{\bar z} , \tilde{\theta}^- , \tilde{\bar{\theta}} ^- )
\equiv
(Z, \theta , \bar{\Theta} \, ; \bar z , \theta^- , \bar{\theta}^- )
\ \ .
\end{equation}
For the restricted geometry, we then get the explicit expression
\begin{equation}
\tilde D _- = D_- - H_{\th ^-}^{\ z} \partial + (D H_{\th ^-}^{\ z} ) \bar D
\end{equation}
and by construction we have $(\tilde D _-)^2 =0$. Thus, the covariant
chirality condition (\ref{grig}) for $ H_{\th ^-}^{\ z} $ reads
$\tilde D _- H_{\th ^-}^{\ z} =0$ and may be solved by virtue of the nilpotency
of the operator
$\tilde D _-$ (see section 4.8).
\section{Component field expressions}
To write the action
(\ref{action22}) in terms of the reference coordinates
$(z ,\theta ,\bar{\theta} \, ; \bar z,\theta^- ,{\bar{\theta}}^- )$,
we introduce the following superfields
(as in the $(2,0)$ case):
\begin{equation}
\begin{array}{lllclll}
\nonumber
h_a^{\ z} &=& \Delta ^{-1} (H_a^{\ z}- H_{\zb}^{\ z} H_a^{\ \bar z}) & , &
h_a^{\ \bar z} &=& \Delta ^{-1} (H_a^{\ \bar z}- H_{z}^{\ \zb} H_a^{\ z})\\
h_a^{\ \theta} &=& H_a^{\ \theta}-h_a^{\ \bar z} H_{\zb}^{\ \th} & , &
h_a^{\ \theta^-} &=& H_a^{\ \theta^-}-h_a^{\ z} H_{z}^{\ \th ^-} \\
\nonumber
h_a^{\ \bar{\theta}} &=& H_a^{\ \bar{\theta}}-h_a^{\ \bar z} H_{\zb}^{\ \tb} & , &
h_a^{\ {\bar{\theta}}^-} &=& H_a^{\ {\bar{\theta}}^-}-h_a^{\ z} H_{z}^{\ \tb ^-}
\end{array}
\end{equation}
for $a=\theta,\theta^-,\bar{\theta},{\bar{\theta}}^- $ .
In the
remainder of this section, we will consider the restricted geometry,
for which the Berezinian takes the form
\begin{equation}
\left|
\frac{\partial (Z,\Theta ,\bar{\Theta} \,; \bar Z \, ,\Theta ^- ,\bar{\Theta} ^- )}{\partial (z ,\theta ,\bar{\theta} \, ;
\bar z , \theta^- , {\bar{\theta}}^- )}
\right|
\ =\ \Delta / h
\end{equation}
with $\Delta = 1- H_{\zb}^{\ z} H_{z}^{\ \zb} $ and $h= h_{\th}^{\ \th} h_{\thm}^{\ \thm} - h_{\thm}^{\ \th} h_{\th}^{\ \thm} $ .
The chirality
conditions for the matter superfields read $\bar D {\cal X} =0$ and
\begin{equation}
h_{\th}^{\ \th} (D_- - h_{\thm}^{\ \zb} \bar{\partial} - h_{\thm}^{\ z} \partial - h_{\thm}^{\ \tbm} \bar D_- ) {\cal X} =
h_{\thm}^{\ \th} (D - h_{\th}^{\ \zb} \bar{\partial} - h_{\th}^{\ z} \partial - h_{\th}^{\ \tbm} \bar D_-) {\cal X}
\end{equation}
and c.c.
We now choose a {\em WZ-gauge} in which
the basic superfields have the $\theta$-expansions
\begin{eqnarray}
\label{expa}
H_{\th ^-}^{\ z} &=& {\bar{\theta}}^- (\mu + \bar{\theta} \alpha + \theta \bar{\alpha} + \bar{\theta} \theta \bar{v})
\qquad \qquad , \quad
C^z = c + \bar{\theta} \epsilon + \theta \bar{\epsilon} + \bar{\theta} \theta k \\
H_{\th}^{\ \zb}
&=& \bar{\theta} ( \bar{\mu} + {\bar{\theta}}^- \alpha^- + \theta^- \bar{\alpha}^- + {\bar{\theta}}^- \theta^- \bar{v}^-
) \ , \quad
C^{\bar z}
= \bar{c} + {\bar{\theta}}^- \epsilon^- + \theta^- \bar{\epsilon}^- + {\bar{\theta}}^- \theta^- k^-
,
\nonumber
\end{eqnarray}
whose form and physical interpretation are similar to those of
expressions
(\ref{cf}) of the (2,0) theory.
In fact, we have
$ H_{\th ^-}^{\ z} = {\bar{\theta}}^- {\cal H}_{\bar z}^{\ z}$ where
${\cal H}_{\bar z}^{\ z}$ denotes the basic Beltrami superfield
of the (2,0) theory: a similar relationship holds in the
WZ-gauge between the basic Beltrami superfields
of the (1,1) and (1,0) theories \cite{dg}.
The (twisted chiral)
matter superfields ${\cal X}$ and $\bar{{\cal X}}$
contain one complex scalar, four spinors
and one complex auxiliary field as component fields \cite{ghr,ggw},
\begin{equation}
\begin{array}{lllllll}
X={\cal X} \! \mid &,& \ \lambda_{\theta}=D{\cal X} \! \mid &,&
\bar{\lambda}^-_{{\bar{\theta}}^-}
=\bar D_-{\cal X} \! \mid &,&F_{\theta {\bar{\theta}}^-}=D\bar D_-{\cal X} \!
\mid
\\
& & & & & &
\\
\bar X
=\bar{{\cal X}} \! \mid &,&\lambda^-_{\theta^-}=D_- \bar{{\cal X}} \! \mid &,&
\ \bar{\lambda}_{\bar{\theta}}
=\bar D \bar{{\cal X}} \! \mid &,&\bar{F}_{\theta^- \bar{\theta}}=D_-\bar D
\bar{{\cal X}} \! \mid
\end{array}
\end{equation}
in terms of which the
action (\ref{action22flat}) reduces to the following functional
on the Riemann surface ${\bf \Sigma}$:
\begin{eqnarray}
\label{n2a}
S_{inv} & = & \int_{\bf \Sigma} d^2z \left\{
\displaystyle \frac{1}{1-\mu \bar{\mu} } \ [\
(\partial - \bar{\mu} \bar{\partial} ) \bar{X} \, ( \bar{\partial} -\mu \partial)X
\ - \ \alpha \lambda (\partial - \bar{\mu} \bar{\partial} )\bar{X} \right.
\\
&& \qquad \qquad - \ \alpha^-\lambda^-( \bar{\partial} -\mu \partial)X
\ - \ \bar{\alpha} \bar{\la} (\partial - \bar{\mu} \bar{\partial} )X
\ - \ \bar{\alpha}^- \bar{\la}^- ( \bar{\partial} -\mu \partial)
\bar X
\nonumber
\\
&& \qquad \qquad + \ (\alpha\lambda)(\alpha^-\lambda^-- \bar{\mu} \bar{\alpha} \bar{\la})
\ + \ (\bar{\alpha}^- \bar{\la}^-)(\bar{\alpha} \bar{\la}-\mu\alpha^-\lambda^-)\ ]
\nonumber
\\
&& \qquad \qquad -\ \bar{\la} ( \bar{\partial} -\mu \partial -\frac{1}{2}\partial\mu -\bar{v})\lambda
\ - \ \bar{\la}^- (\partial - \bar{\mu} \bar{\partial} -\frac{1}{2} \bar{\partial} \bar{\mu} -\bar{v}^-)\lambda^-
\nonumber
\\
&& \qquad \qquad \left. - \ (1-\mu \bar{\mu} )\bar{F} F \right\} \ .
\nonumber
\end{eqnarray}
In terms of
$\xi^a = \Xi^a \! \mid$ and the short-hand notation
\begin{eqnarray*}
\xi & \equiv & \xi^z \quad , \quad
\hat{k} \equiv k- \bar{\xi} \bar{v} \quad , \quad
\xi \cdot \partial \equiv \xi \partial + \bar{\xi}
\bar{\partial}
\\
\bar{\xi} & \equiv & \xi^{\bar z} \quad , \quad
\hat{k}^-\equiv k^--\xi \bar{v}^-
\ \ ,
\end{eqnarray*}
the $s$-variations of the matter fields read
\begin{eqnarray}
\nonumber
sX &=&(\xi \cdot \partial )X+\xi^{\theta} \lambda + \xi^{{\bar{\theta}}^-} \bar{\lambda}^- \\
\nonumber
s \lambda
&=& [(\xi \cdot \partial )+\frac{1}{2} (\partial \xi + \mu \partial \bar{\xi} ) +\hat{k}
] \, \lambda \ + \
\xi^{\bar{\theta}}{\cal D}X\ -\
\xi^{{\bar{\theta}}^-} F \\
\label{sba}
s\bar{\lambda}^-
&=& [(\xi \cdot \partial )+\frac{1}{2} ( \bar{\partial} \bar{\xi} + \bar{\mu} \bar{\partial} \xi
) -\hat{k}^- ]
\, \bar{\lambda}^- \ +\
\xi^{\theta^-}\bar{{\cal D}}X\ +\
\xi^{\theta} F \\
\nonumber
sF &=& [(\xi \cdot \partial )+\frac{1}{2} (\partial \xi + \mu \partial \bar{\xi} ) +
\frac{1}{2} ( \bar{\partial} \bar{\xi} + \bar{\mu} \bar{\partial} \xi ) +\hat{k}-\hat{k}^- ] \, F
\\
&& \qquad \qquad \qquad \qquad
\ +\ \xi^{\bar{\theta}} {\cal D}\bar{\lambda}^- \ -\
\xi^{\theta^-}\bar{{\cal D}}\lambda \ ,
\nonumber
\end{eqnarray}
where we have introduced
the supercovariant derivatives
\begin{eqnarray}
\nonumber
{\cal D}X &=&
\frac{1}{1-\mu \bar{\mu} } \ [(\partial - \bar{\mu} \bar{\partial} )X+ \bar{\mu} \alpha \lambda
-\bar{\alpha}^-\bar{\lambda}^- ] \\
\label{sgra}
\bar{{\cal D}}X &=&
\frac{1}{1-\mu \bar{\mu} }
\ [( \bar{\partial} -\mu \partial )X+\mu \bar{\alpha}^- \bar{\lambda}^- -\alpha
\lambda ] \\
{\cal D}\bar{\lambda}^- &=&
\frac{1}{1-\mu \bar{\mu} }
\ [(\partial - \bar{\mu} \bar{\partial} -\frac{1}{2} \bar{\partial} \bar{\mu} +\bar{v}^-)\bar{\lambda}^-
+ \bar{\mu} \alpha F -\alpha^- \bar{{\cal D}}X ]
\nonumber \\
\bar{{\cal D}}\lambda &=&
\frac{1}{1-\mu \bar{\mu} }
\ [( \bar{\partial} -\mu \partial -\frac{1}{2}\partial \mu -\bar{v})\lambda -\mu
\bar{\alpha}^- F-\bar{\alpha} {\cal D}X ]\ .
\nonumber
\end{eqnarray}
A generic expression for the variations of the matter fields
and for the supercovariant derivatives
can be given in the supergravity framework where the component
fields are defined by covariant projection \cite{ot}.
We leave it as an exercise to check that the action (\ref{n2a})
describing the superconformally invariant coupling of a twisted chiral
multiplet to supergravity coincides with the usual component
field expression \cite{ggw} by virtue of the Beltrami parametrization
of the space-time gauge fields (i.e. the zweibein, gravitino
and $U(1)$ gauge field) - see \cite{bbgc, gg} for the $N=1$ theory.
Component field results for a chiral multiplet can be
directly obtained from our results for
the twisted chiral multiplet by application of the
mirror map \cite{ggw}.
\section{Anomaly}
As pointed out in section 4.6, the constraint satisfied by $ H_{\th ^-}^{\ z} $
in the restricted geometry, i.e.
$\tilde D _- H_{\th ^-}^{\ z} =0$, can be solved by virtue of the nilpotency
of the operator
$\tilde D _-$:
\begin{equation}
H_{\th ^-}^{\ z} = \tilde D _- H^z =
[D_- - H_{\th ^-}^{\ z} \partial + (D H_{\th ^-}^{\ z} ) \bar D] \, H^z
\ \ .
\end{equation}
Here, the new variable
$H^z$ is determined up to a superfield $G^z$ satisfying
$\tilde D_- G^z =0$ and it transforms according to
\begin{eqnarray}
sH^z & = & C^z \, (1 + \partial H^z ) + (DC^z)(\bar D H^z) + B^z
\qquad {\rm with} \quad
\tilde{D} _- B^z = 0
\nonumber \\
sB^z & = & - \, [ C^z \partial B^z + (DC^z)(\bar D B^z) ]
\ \ .
\label{true}
\end{eqnarray}
In the WZ-gauge, we have $H^z = \theta^- H_{\theta^-}^{\ z}$
with $H_{\theta^-}^{\ z}$ given by (\ref{expa}). In this case, the
{\em holomorphically split form of the superdiffeomorphism anomaly}
on the superplane reads
\begin{eqnarray}
\label{438}
{\cal A} [C^z ; H^z ] \ + \ {\rm c.c.} & = &
\int_{\bf SC} d^6z \ C^z \, \partial [ D, \bar D ] \, H^z
\ + \ {\rm c.c.}
\\
& = &
- {1 \over 2 } \
\int_{\bf C} d^2z \ \left\{ c \partial^3 \mu
+ 2 \epsilon \partial^2 \bar{\alpha}
+ 2 \bar{\epsilon} \partial^2 \alpha
+ 4 k \partial \bar v \right\}
\ + \ {\rm c.c.}
\ \ .
\nonumber
\end{eqnarray}
It satisfies the consistency condition
$s{\cal A} = 0$ and can be generalized
to a generic compact SRS
by replacing the operator
$\partial [D, \bar D ]$ by the superconformally covariant operator (\ref{bol}).
The component field expression (\ref{438}) coincides with the one found
for the $z$-sector of the (2,0) theory, eq.(\ref{an}), and
with the one of references \cite{y} and \cite{ot} where
other arguments have been invoked.
At the linearized level, the transformation law
(\ref{true}) of $H^z$ reads
\[
\delta H^z = C^z + B^z
\qquad {\rm with} \quad
\bar{D} _- C^z = 0=
D_- B^z
\ \ .
\]
By solving the given constraints on $C^z$ and $B^z$ in terms of
spinorial superfields $L^{\theta}$ and $L^{\prime \bar{\theta}}$,
one finds
\begin{equation}
\delta H^z = \bar{D} _- L^{\theta} + D_- L^{\prime \bar{\theta}}
\ \ ,
\end{equation}
a result which has the same form as the one found in the second
of references \cite{gw}, see eq.(3.19).
\chapter{Conclusion}
In the course of the completion of our
manuscript\footnote{A
preliminary version of the present paper has been part of the
habilitation thesis of F.G. (Universit\'e de Chamb\'ery, December
1994).},
the work \cite{l} concerning the
(2,0) theory appeared which also discusses the generalization
of our previous $N=1$ results
\cite{dg, dg1}. However, the author of reference \cite{l}
fails to take the U(1) symmetry,
connection and transformation laws properly into account, which leads to incorrect results
and conclusions. Furthermore, the super Beltrami coefficients
(2.34) of \cite{l} are not inert under superconformal transformations
of the capital coordinates, eqs.(2.33), and therefore do not
parametrize superconformal structures as they are supposed to.
Finally, various aspects of the (2,0) theory that we treat here
(e.g. superconformal models and
component field expressions)
are not addressed in reference \cite{l}.
In a supergravity approach \cite{ggrs}, some gauge choices
are usually made when an explicit
solution of the constraints is determined. Therefore, the question
arises whether the final solution represents
a complete solution of the problem, i.e.
a complete set of prepotentials (and compensators).
Obviously, such a solution has been obtained if
there are as many independent variables as there are independent
symmetry parameters in the theory. If there is
a smaller number of prepotentials, then it
is clear that some basic symmetry parameters have been used to
eliminate fields from the theory (a `gauge choice' or
`restriction of the geometry' has been made).
From these facts, we conclude that
the solution of constraints discussed in references \cite{eo,
kl, l} and \cite{gw} is not complete.
As for
reference \cite{ot}, it has not been investigated which variables
are the independent ones.
Possible
further developments or applications of our formalism include
the derivation of operator product expansions
and the proof of
holomorphic factorization of partition functions
along the lines of the work on the $N=1$ theory
\cite{dg1, agn}. (The latter reference also involves the
supersymmetric generalization of the Verlinde functional
which occurs in conformal field theories and in the
theory of $W$-algebras.)
Another extension of the present study consists of the determination
of $N=2$ superconformally covariant differential operators
and of their application to super $W$-algebras.
This development will be reported on elsewhere \cite{ip}.
\nopagebreak
\section{Introduction}
First-principles calculations based on density functional
theory~\cite{hohenberg:kohn,kohn:sham} (DFT)
and the pseudo\-potential method are widely used for studying
the energetics, structure and dynamics of solids and
liquids~\cite{gillan,galli:pasquarello,payne}. In the
standard approach, the occupied Kohn-Sham orbitals are expanded
in terms of plane waves, and the ground state is found by minimizing
the total energy with respect to the plane-wave
coefficients~\cite{car:parrinello}.
Calculations on systems of over a hundred atoms with this approach
are now quite common. However, it has proved difficult to go to very
much larger systems, because the computational effort in this approach
depends on the number of atoms $N$ at least as $N^2$, and asymptotically
as $N^3$. Because of this limitation, there has been a vigorous effort
in the past few years to develop linear-scaling
methods~\cite{yang1,yang2,yang3,yang4,yang5,baroni,galli,mauri1,mauri2,mauri3,ordejon1,ordejon2,li,nunes,hierse,hernandez:gillan,hernandez:gillan:goringe,ordejon:artacho:soler}
-- methods
in which the effort depends only linearly on the number of atoms.
We have recently described a general theoretical framework for developing
linear-scaling self-consistent DFT
schemes~\cite{hernandez:gillan,hernandez:gillan:goringe}.
We presented one practical way of implementing
such a scheme, and investigated its performance for crystalline silicon.
Closely related ideas have been reported by other
authors~\cite{hierse,ordejon:artacho:soler} -- an overview
of work on linear-scaling methods was given in the Introduction of
our previous paper~\cite{hernandez:gillan:goringe}.
The practical feasibility of linear-scaling DFT techniques is thus
well established. However, there are still technical problems to be solved
before the techniques can be routinely applied. Our aim here is to
study the problem of representing the localized orbitals that appear
in linear-scaling methods (support functions in our terminology) --
in other words, the problem of basis functions.
To put this in context, we recall briefly the main
ideas of our linear-scaling DFT method.
Standard DFT can be expressed in terms of the Kohn-Sham density matrix
$\rho ( {\bf r}, {\bf r}^\prime )$. The total-energy functional
can be written in terms of $\rho$, and the ground state is obtained
by minimization with respect to $\rho$ subject to two constraints:
$\rho$ is idempotent (it is a projector, so that its eigenvalues are
0 or 1), and its trace is equal to half the number of electrons.
Linear-scaling behavior is obtained by imposing a limitation on the
spatial range of $\rho$:
\begin{equation}
\rho ( {\bf r}, {\bf r}^\prime ) = 0 \; , \; \; \; | {\bf r} -
{\bf r}^\prime | > R_c \; .
\end{equation}
By the variational principle, we then get an upper bound $E ( R_c )$
to the true ground-state energy $E_0$. Since the true ground-state
density matrix decays to zero as $| {\bf r} - {\bf r}^\prime |
\rightarrow \infty$, we expect that $E ( R_c \rightarrow \infty ) =
E_0$. To make the scheme practicable, we introduced the further
condition that $\rho$ be separable:
\begin{equation}
\rho ( {\bf r}, {\bf r}^\prime ) = \sum_{\alpha \beta}
\phi_\alpha ( {\bf r} ) \, K_{\alpha
\beta} \, \phi_\beta ( {\bf r}^\prime ) \; ,
\end{equation}
where the number of support functions $\phi_\alpha ( {\bf r} )$ is finite.
The limitation on the spatial range of $\rho$ is imposed by requiring
that the $\phi_\alpha ( {\bf r} )$ are non-zero only in localized
regions (``support regions'') and that the spatial range of
$K_{\alpha \beta}$ is limited. In our method, the support regions are
centered on the atoms and move with them.
We have shown~\cite{hernandez:gillan,hernandez:gillan:goringe}
that the condition on the eigenvalues of $\rho$ can be
satisfied by the method of Li, Nunes and Vanderbilt~\cite{li}
(LNV): instead
of directly varying $\rho$, we express it as:
\begin{equation}
\rho = 3 \sigma * \sigma - 2 \sigma * \sigma * \sigma \; ,
\end{equation}
where the asterisk indicates the continuum analog of matrix multiplication.
As shown by LNV, this representation of $\rho$ not only ensures
that its eigenvalues lie in the range $[ 0 , 1 ]$, but it drives
them towards the values 0 and 1. In our scheme, the auxiliary
matrix $\sigma ( {\bf r}, {\bf r}^\prime )$ has the same type of
separability as $\rho$:
\begin{equation}
\sigma ( {\bf r}, {\bf r}^\prime ) = \sum_{\alpha \beta}
\phi_\alpha ( {\bf r} ) \, L_{\alpha \beta} \,
\phi_\beta ( {\bf r}^\prime ) \; .
\end{equation}
This means that $K$ is given by the matrix equation:
\begin{equation}
K = 3 L S L - 2 L S L S L \; ,
\end{equation}
where $S_{\alpha \beta}$ is the overlap matrix of support functions:
\begin{equation}
S_{\alpha \beta} = \int \! \! \mbox{d} {\bf r} \, \phi_\alpha \phi_\beta \; .
\label{eq:overlap}
\end{equation}
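As a simple numerical illustration (ours, not part of the original formalism, and using an arbitrary symmetric matrix $L$ rather than one obtained by minimization), one may check that the eigenvalues of $KS$, i.e.\ the occupation numbers, follow from those of $LS$ by the purification map $x \mapsto 3x^2 - 2x^3$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Overlap matrix S of six non-orthogonal support functions
# (symmetric positive definite by construction).
B = rng.normal(size=(6, 6))
S = B.T @ B

# Symmetric auxiliary matrix, playing the role of L_{alpha beta}.
A = rng.normal(size=(6, 6))
L = 0.5 * (A + A.T)

# K = 3 L S L - 2 L S L S L
K = 3 * L @ S @ L - 2 * L @ S @ L @ S @ L

# The occupation numbers (eigenvalues of K S) follow from those of L S
# via the purification map x -> 3 x^2 - 2 x^3, whose attractive fixed
# points are 0 and 1.
x = np.sort(np.linalg.eigvals(L @ S).real)
occ = np.sort(np.linalg.eigvals(K @ S).real)
print(np.allclose(occ, np.sort(3 * x**2 - 2 * x**3)))  # True
```

Since the map has attractive fixed points at 0 and 1, minimization with this representation drives the occupation numbers towards idempotency, as stated above.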
We can therefore summarize the overall scheme as follows. The total
energy is expressed in terms of $\rho$, which depends on
the separable quantity $\sigma$. The ground-state energy is obtained
by minimization with respect to the support functions
$\phi_\alpha ( {\bf r} )$ and the matrix elements $L_{\alpha \beta}$,
with the $\phi_\alpha$ confined to localized regions
centered on the atoms, and the
$L_{\alpha \beta}$ subject to a spatial cut-off.
The $\phi_\alpha ( {\bf r} )$ must be allowed to vary freely
in the minimization process, just like the Kohn-Sham orbitals
in conventional DFT, and we must consider how to represent them. As always,
there is a choice: we can represent them either by their values on
a grid~\cite{chelikowsky1,chelikowsky2,chelikowsky3,gygi1,gygi2,gygi3,seitsonen,hamann1,hamann2,bernholc},
or in terms of some set of basis functions.
In our previous work~\cite{hernandez:gillan,hernandez:gillan:goringe},
we used a grid representation. This was satisfactory for
discussing the feasibility of linear-scaling schemes, but seems to us to
suffer from significant drawbacks. The support regions
are centered on the ions in our method, so that when the
ions move, the boundaries of the regions will
cross the grid points. In any simple grid-based scheme, this will
cause troublesome discontinuities. In addition, the finite-difference
representation of the kinetic-energy operator in a grid
representation causes problems at the boundaries of the regions.
A further point is that in a purely grid-based method we are almost certainly
using more variables than are really necessary.
These problems have led us to consider
basis-function methods.
We describe in this paper a practical basis-function scheme for
linear-scaling DFT, and we study its performance in numerical
calculations. The basis consists of an array of localized functions
-- we call them ``blip functions''.
There is an array of blip functions for each support region, and the
array moves with the region. The use of such arrays of localized
functions as a basis for quantum calculations is not
new~\cite{cho:arias:joannopoulos:lam,chen:chang:hsue,modisette,wei:chou}.
However, to our knowledge it
has not been discussed before in the context of linear-scaling calculations.
The plan of the paper is as follows. In Sec.~2, we emphasize
the importance of considering the relation between blip-function
and plane-wave basis sets, and we use this relation to analyze
how the calculated ground-state energy will depend on the width
and spacing of the blip functions.
We note some advantages of using
B-splines as blip functions, and we then present some practical
tests which illustrate the convergence of the ground-state energy
with respect to blip width and spacing.
We then go on (Sec.~3) to discuss the
technical problems of using blip-function basis sets in linear-scaling
DFT. We report the results of
practical tests, which show explicitly how the ground-state
energy in linear-scaling DFT converges to the value
obtained in a standard plane-wave calculation. Section~4 gives
a discussion of the results, and presents our conclusions. Some
mathematical derivations are given in an appendix.
\section{Blip functions and plane waves}
\subsection{General considerations}
Before we focus on linear-scaling problems, we need to set down some
elementary ideas about basis functions. To start with, we therefore
ignore the linear-scaling aspects, and we discuss the general problem
of solving Schr\"{o}dinger's equation using basis functions. It is
enough to discuss this in one dimension, and we assume a periodically
repeating system, so that the potential $V(x)$ acting on the
electrons is periodic: $V(x+t) = V(x)$, where $t$ is any
translation vector. Self-consistency questions are irrelevant at this
stage, so that $V(x)$ is given. The generalization to
three-dimensional self-consistent calculations will be straightforward.
In a plane-wave basis, the wavefunctions $\psi_i (x)$ are expanded as
\begin{equation}
\psi_i (x) = L^{- 1/2} \sum_G c_{i G} \,
\exp ( i G x ) \; ,
\end{equation}
where the reciprocal lattice vectors of the repeating geometry
are given by $G = 2 \pi n / L$ ($n$ is an integer), and we include
all $G$ up to some cut-off $G_{\rm max}$.
We obtain the ground-state energy
$E( G_{\rm max} )$ in the given basis by minimization with respect to
the $c_{i G}$, subject to the constraints of orthonormality.
For the usual variational reasons, $E ( G_{\rm max} )$ is a monotonically
decreasing function of $G_{\rm max}$ which tends to the exact value $E_0$ as
$G_{\rm max} \rightarrow \infty$.
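The monotonic decrease of $E ( G_{\rm max} )$ is easily illustrated numerically; in the following sketch (our illustration, with the arbitrary choice $V(x) = 2 \cos ( 2 \pi x / L )$, whose only Fourier components $V_{\pm 1} = 1$ couple plane waves differing by one reciprocal lattice vector), the plane-wave Hamiltonian is diagonalized for increasing cut-offs:

```python
import numpy as np

def ground_state_energy(nmax, L=1.0):
    # Plane-wave basis G = 2*pi*n/L with |n| <= nmax, for the Hamiltonian
    # H = -(1/2) d^2/dx^2 + 2 cos(2*pi*x/L).
    ns = np.arange(-nmax, nmax + 1)
    G = 2 * np.pi * ns / L
    H = np.diag(0.5 * G**2)
    for i in range(len(ns) - 1):
        H[i, i + 1] = H[i + 1, i] = 1.0   # <G|V|G'> for |n - n'| = 1
    return np.linalg.eigvalsh(H)[0]

# Enlarging the cut-off only adds basis functions, so the variational
# ground-state energy decreases monotonically.
E = [ground_state_energy(n) for n in (1, 2, 4, 8)]
print(all(E[k + 1] <= E[k] + 1e-12 for k in range(3)))  # True
```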
Now instead of plane waves we want to use an array of spatially localized
basis functions (blip functions).
Let $f_0 (x)$ be some localized function, and denote by
$f_\ell (x)$ the translated
function $f_0 (x - \ell a )$, where $\ell$ is an integer
and $a$ is a spacing, which is chosen so that $L$ is an exact multiple
of $a$: $L= M a$. We use the array of blip functions $f_\ell (x)$ as a basis.
Equivalently, we can use any independent linear combinations of the
$f_\ell (x)$. In considering the relation between blip functions and
plane waves, it is particularly convenient to work with
``blip waves'', $\chi_G (x)$, defined as:
\begin{equation}
\chi_G (x) = A_G \sum_{\ell = 0}^{M - 1} f_\ell (x) \exp ( i G R_\ell ) \; ,
\end{equation}
where $R_\ell = \ell a$, and $A_G$ is some normalization constant.
The relation between blip waves and plane waves can be analyzed
by considering the Fourier representation of $\chi_G (x)$. It is
straightforward to show that $\chi_G (x)$ has Fourier components only
at wavevectors $G + \Gamma$, where $\Gamma$ is a reciprocal
lattice vector of the blip grid: $\Gamma = 2 \pi m / a$ ($m$ is an integer).
In fact:
\begin{equation}
\chi_G (x) = ( A_G / a) \sum_{\Gamma} \hat{f} (G + \Gamma )
\exp \left( i ( G + \Gamma ) x \right) \; ,
\label{eq:blipwave}
\end{equation}
where $\hat{f} (q)$ is the Fourier transform of $f_0 (x)$:
\begin{equation}
\hat{f} (q) = \int_{- \infty}^{\infty} \mbox{d}x \, f_0 (x) \, e^{i q x} \; .
\label{eq:fourier}
\end{equation}
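These statements can be checked numerically. In the following sketch (our illustration; the Gaussian blip and all grid parameters are arbitrary choices), a blip wave is assembled on a periodic grid and its discrete Fourier transform is verified to have non-negligible components only at the wavevectors $G + \Gamma$:

```python
import numpy as np

M, P = 8, 16                 # M blips per period, P grid points per spacing a
N = M * P                    # L = M a; x is measured in units of a
x = np.arange(N) / P
f0 = np.exp(-(np.minimum(x, M - x) / 0.5) ** 2)   # periodic Gaussian blip

n = 3                        # G = 2*pi*n / L
chi = sum(np.roll(f0, l * P) * np.exp(2j * np.pi * n * l / M)
          for l in range(M))                       # blip wave chi_G

c = np.fft.fft(chi) / N
big = np.flatnonzero(np.abs(c) > 1e-12)
# Non-negligible Fourier components occur only at G + Gamma, i.e. at
# FFT indices congruent to n modulo M (Gamma = 2*pi*m / a).
print(np.all(big % M == n % M))  # True
```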
At this point, it is useful to note that for some
choices of $f_0 (x)$ the blip-function basis set is exactly equivalent
to a plane-wave basis set. For this to happen, $\hat{f} (q)$
must be exactly zero beyond some cut-off wavevector $q_{\rm cut}$.
Then provided $q_{\rm cut} \geq G_{\rm max}$ and provided
$q_{\rm cut} + G_{\rm max} < 2 \pi / a$, all the $\Gamma \neq 0$
terms in Eq.~(\ref{eq:blipwave}) will vanish and all
blip waves for $- G_{\rm max}
\leq G \leq G_{\rm max}$ will be identical to plane-waves. (Of course, we
must also require that $\hat{f} (q) \neq 0$ for $| q | \leq G_{\rm max}$.)
Our main aim in this Section is to determine how the total energy
converges to the exact value as the width and spacing of the
blip functions are varied. The spacing is controlled by varying $a$,
and the width is controlled by scaling each blip function: $f_0 (x)
\rightarrow f_0 (sx)$, where $s$ is a scaling factor. In the case
of blip functions for which $\hat{f} (q)$ cuts off in the way just
described, the convergence of the total energy is easy to describe.
Suppose we take a fixed blip width, and hence a fixed wavevector
cut-off $q_{\rm cut}$. If the blip spacing $a$ is small enough so that
$q_{\rm cut} < \pi / a$, then it follows from what we have said
that the blip basis set is exactly equivalent to a plane-wave basis set
having $G_{\rm max} = q_{\rm cut}$. This means that the total
energy is equal to $E( q_{\rm cut} )$ and is completely independent
of $a$ when the latter falls below the threshold value
$a_{\rm th} = \pi / q_{\rm cut}$. This is connected with the fact
that the blip basis set becomes over-complete when $a < a_{\rm th}$: there
are linear dependences between the $M$ blip functions
$f_\ell (x)$.
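The over-completeness for $a < a_{\rm th}$ can also be made explicit numerically. In the following sketch (our illustration, using the periodic band-limited Dirichlet kernel as blip function), the rank of the set of $M$ translates saturates at the number $2 n_c + 1$ of plane waves below the cut-off, however small the spacing $a = L / M$ is made:

```python
import numpy as np

nc = 4                           # f-hat cuts off at q_cut = 2*pi*nc/L (L = 1)
N = 256
x = np.arange(N) / N             # one period of the box, L = 1

def blip(y):
    # Periodic band-limited "blip": Dirichlet kernel with 2*nc + 1
    # non-zero Fourier components.
    return sum(np.cos(2 * np.pi * n * y) for n in range(-nc, nc + 1))

ranks = {}
for M in (4, 8, 9, 16, 32):      # M blips, spacing a = L/M
    F = np.array([blip(x - l / M) for l in range(M)])
    ranks[M] = np.linalg.matrix_rank(F)

print(ranks)  # rank saturates at 2*nc + 1 = 9 once M >= 9
```

For $M > 2 n_c + 1$ the translates are linearly dependent, so the variational energy can no longer change as $a$ is reduced further.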
It follows from this that the behavior of the total energy as a function
of blip spacing and blip width is as shown schematically in Fig.~1.
As the width is reduced, the cut-off $q_{\rm cut}$ increases in
proportion to the scaling factor $s$, so that the threshold spacing
$a_{\rm th}$ is proportional to the width. The energy
value $E( q_{\rm cut} )$ obtained for $a < a_{\rm th}$ decreases
monotonically with the width, as follows from the monotonic decrease
of $E( G_{\rm max} )$ with $G_{\rm max}$ for a plane-wave basis set.
Note that in Fig.~1 we have shown $E$ at fixed width as decreasing
monotonically with $a$ for $a > a_{\rm th}$. In fact, this may not always
happen. Decrease of $a$ does not correspond simply to addition of
basis functions and hence to increase of variational freedom: it also involves
relocation of the basis functions. However, what is true is that
$E$ for $a > a_{\rm th}$ is always greater than $E$ for $a < a_{\rm th}$,
as can be proved from the over-completeness of the blip basis set
for $a < a_{\rm th}$. At large spacings, the large-width blip basis is
expected to give the lower energy, since in this region the poorer
representation of long waves should be the dominant source of error.
\begin{figure}[tbh]
\begin{center}
\leavevmode
\epsfxsize=8cm
\epsfbox{idealised_blips.eps}
\end{center}
\caption{Expected schematic form for the total ground-state energy
as a function of the blip-grid spacing for two different
blip widths, in the case where the Fourier components of the
blip functions vanish beyond some cut-off. The horizontal dotted
line shows the exact ground-state energy $E_0$. The vertical dashed
lines mark the threshold values $a_{\rm th}$ of the blip-grid spacing
(see text).}
\end{figure}
Up to now, we have considered only the rather artificial case where
the Fourier components of the blip function are strictly zero beyond
a cut-off. This means that the blip function must extend to infinity
in real space, and this is clearly no use if we wish to do all
calculations in real space. We actually want $f_0 (x)$ to be strictly
zero beyond some real-space cut-off $b_0$: $f_0 (x) = 0$ if
$| x | > b_0$. This means that $\hat{f} (q)$ will extend to
infinity in reciprocal space. However, we can expect that with a
judicious choice for the form of $f_0 (x)$ the Fourier components
$\hat{f} (q)$ will still fall off very rapidly, so that the
behavior of the total energy is still essentially as shown in Fig.~1.
If the choice is not judicious, we shall need a considerably
greater effort to bring $E$ within a specified tolerance of the
exact value than if we were using a plane-wave basis set. With
a plane-wave basis, a certain cut-off $G_{\rm max}$ is needed in
order to achieve a specified tolerance in $E$. With blip functions
whose Fourier components cut off at $G_{\rm max}$, we should need
a blip spacing of $\pi / G_{\rm max}$ to achieve the same tolerance. Our
requirement on the actual choice of blip function is that the spacing
needed to achieve the given tolerance should be not much less than
$\pi / G_{\rm max}$.
\subsection{B-splines as blip functions}
Given that the blip function cuts off in real space at some
distance $b_0$, it is helpful if the function and some of its
derivatives go smoothly to zero at this distance. If $f_0 (x)$
and all its derivatives up to and including the $n$th vanish
at $| x | = b_0$, then $\hat{f} (q)$ falls off asymptotically
as $1 / | q |^{n+2}$ as $| q | \rightarrow \infty$. One way of
making a given set of derivatives vanish is to build $f_0 (x)$
piecewise out of suitable polynomials. As an example, we examine here
the choice of $f_0 (x)$ as a B-spline.
B-splines are localized polynomial basis functions that are equivalent to
a representation of functions in terms of cubic splines. A single
B-spline $B (x)$ centered at the origin and covering the region
$| x | \leq 2$ is built out of third-degree polynomials in the four
intervals $-2 \leq x \leq -1$, $-1 \leq x \leq 0$, $0 \leq x \leq 1$
and $1 \leq x \leq 2$, and is defined as:
\begin{eqnarray}
B (x) = \left\{
\begin{array}{ccr}
1 - \frac{3}{2} x^2 + \frac{3}{4} | x |^3 \; & \mbox{ if } &
| x | \leq 1 \\
\frac{1}{4} ( 2 - | x | )^3 \; & \mbox{ if } &
1 < | x | \leq 2 \\
0 \; & \mbox{ if } & 2 < | x |
\end{array}
\right.
\end{eqnarray}
The function and its first two derivatives are continuous everywhere.
The Fourier transform of $B (x)$, defined as in Eq.~(\ref{eq:fourier}),
is:
\begin{equation}
\hat{B} (q) = \frac{1}{q^4} ( 3 - 4 \cos q + \cos 2q ) \; ,
\end{equation}
which falls off asymptotically as $1 / q^4$, as expected. Our choice of blip
function is thus $f_0 (x) = B (2x / b_0 )$, so that
$\hat{f} (q) = \hat{B} ( \frac{1}{2} b_0 q )$.
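These properties of $\hat{B} (q)$ are easy to verify numerically. The
short sketch below is an illustrative check of our own, not part of the
calculations reported here; since the overall normalization of the
transform depends on the convention adopted in Eq.~(\ref{eq:fourier}),
only the proportionality to the closed form and the zeros at
$q_n = 2 \pi n$ are tested.

```python
import math

def B(x):
    """Cubic B-spline of the text: B(0) = 1, support |x| <= 2."""
    ax = abs(x)
    if ax <= 1.0:
        return 1.0 - 1.5 * ax**2 + 0.75 * ax**3
    if ax <= 2.0:
        return 0.25 * (2.0 - ax)**3
    return 0.0

def ft_numeric(q, n=4000):
    """Simpson-rule value of int_{-2}^{2} B(x) cos(qx) dx (B is even)."""
    h = 4.0 / n
    total = 0.0
    for i in range(n + 1):
        x = -2.0 + i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w * B(x) * math.cos(q * x)
    return total * h / 3.0

def ft_formula(q):
    """The closed form (3 - 4 cos q + cos 2q) / q^4, up to normalization."""
    return (3.0 - 4.0 * math.cos(q) + math.cos(2.0 * q)) / q**4
```

The numerical transform is proportional to the closed form for all $q$,
vanishes at $q = 2 \pi n$, and its envelope decays as $1 / q^4$.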
The transform $\hat{B} (q)$ falls rapidly to small values in the
region $| q | \simeq \pi$. It is exactly zero at the set of wavevectors
$q_n = 2 \pi n$ ($n$ is a non-zero integer), and is very small in a
rather broad region around each $q_n$, because the lowest
non-vanishing term in a polynomial expansion of $3 - 4 \cos q + \cos 2q$
is of degree $q^4$. This suggests that this choice of blip function
will behave rather similarly to one having a Fourier cut-off
$q_{\rm cut} = 2 \pi / b_0$. In other words, if we keep
$b_0$ fixed and reduce the blip spacing $a$, the energy should
approach the value obtained in a plane-wave calculation having
$G_{\rm max} = q_{\rm cut}$ when $a \simeq \frac{1}{2} b_0$. The
practical tests that now follow will confirm this. (We note that
B-splines are usually employed with a blip spacing equal to
$\frac{1}{2} b_0$;
here, however, we are allowing the spacing to vary).
\subsection{Practical tests}
Up to now, it was convenient to work in one dimension, but for practical
tests we clearly want to go to real three-dimensional systems.
To do this, we simply take
the blip function $f_0 ( {\bf r} )$ to be the product of factors
depending on the three Cartesian components of ${\bf r}$:
\begin{equation}
f_0 ( {\bf r} ) = p_0 ( x )\, p_0 ( y )\, p_0 ( z ) \; .
\label{eq:factor}
\end{equation}
All the considerations outlined above for a blip basis in one
dimension apply unchanged to the individual factors $p_0 ( x )$ etc,
which are taken here to be B-splines.
Corresponding to the blip grid of spacing $a$ in one dimension, the
blip functions $f_0 ( {\bf r} )$ now sit on the points of a
three-dimensional grid which we assume here to be simple cubic. The
statements made above about the properties of the B-spline
basis are expected to remain true in this three-dimensional form.
We present here some tests on the performance of a B-spline basis
for crystalline Si. At first
sight, it might appear necessary to write a new code in order
to perform such tests. However, it turns out that rather minor
modifications to an existing plane-wave code allow one to produce
results that are identical to those that would be obtained
with a B-spline basis. For the purpose of the present tests, this
is sufficient. The notion behind this device is that blip functions
can be expanded in terms of plane waves, so that the function-space
spanned by a blip basis is contained within the space spanned by a
plane-wave basis, provided the latter has a large enough $G_{\rm max}$.
Then all we have to do to get the blip basis is to project
from the large plane-wave space into the blip space. Mathematical
details of how to do this projection in practice are given in
the Appendix. The practical tests have been done with the
CASTEP code~\cite{payne},
which we have modified to perform the necessary projections.
Our tests have been done on the diamond-structure Si crystal, using the
Appelbaum-Hamann~\cite{appelbaum:hamann}
local pseudopotential; this is an empirical
pseudopotential, but suffices to illustrate the points of
principle at issue here.
The choice of
$k$-point sampling is not expected to make much difference
to the performance of blip functions, and we have done the calculations
with a $k$-point set corresponding to the lowest-order 4 $k$-point
Monkhorst-Pack~\cite{monkhorst:pack} sampling for an 8-atom cubic cell.
If we
go to the next-order set of 32 $k$-points, the total energy per
atom changes by less than 0.1~eV. For reference purposes, we
have first used CASTEP in its normal unmodified plane-wave form
to examine the convergence of total energy with respect to plane-wave
cut-off for the Appelbaum-Hamann potential. We find that for
plane-wave cut-off energies $E_{\rm pw} = \hbar^2 G_{\rm max}^2 / 2 m$
equal to 150, 250 and 350~eV, the total energies per atom are $-115.52$,
$-115.64$ and $-115.65$~eV. This means that to obtain an accuracy of
0.1~eV/atom, a cut-off of 150~eV (corresponding to $G_{\rm max} =
6.31$~\AA$^{-1}$) is adequate. According to the discussion of Sec.~2.2,
the properties of this plane-wave basis should be quite well
reproduced by a blip-function basis of B-splines having half-width
$b_0 = 2 \pi / G_{\rm max} = 1.0$~\AA, and
the total energy calculated with this basis should converge
rapidly to the plane-wave result when the blip
spacing falls below $a \simeq \frac{1}{2} b_0 = 0.5$~\AA.
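The arithmetic of this paragraph can be packaged as a small helper (a
sketch of our own; the constant $\hbar^2 / 2m \simeq 3.81$~eV~\AA$^2$ is
the standard free-electron value, and small rounding differences from the
quoted $G_{\rm max} = 6.31$~\AA$^{-1}$ are to be expected).

```python
import math

HBAR2_OVER_2M = 3.8100  # hbar^2 / (2 m_e) in eV * Angstrom^2 (approximate)

def gmax_from_cutoff(e_pw_ev):
    """Invert E_pw = hbar^2 G_max^2 / 2m; returns G_max in 1/Angstrom."""
    return math.sqrt(e_pw_ev / HBAR2_OVER_2M)

def blip_parameters(e_pw_ev):
    """Suggested B-spline half-width b0 = 2 pi / G_max and the
    threshold blip spacing a_th ~ b0 / 2 for a given cutoff energy."""
    gmax = gmax_from_cutoff(e_pw_ev)
    b0 = 2.0 * math.pi / gmax
    return gmax, b0, 0.5 * b0
```

For $E_{\rm pw} = 150$~eV this gives $G_{\rm max} \simeq 6.3$~\AA$^{-1}$,
$b_0 \simeq 1.0$~\AA\ and a threshold spacing of $\simeq 0.5$~\AA.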
\begin{figure}[tbh]
\begin{center}
\leavevmode
\epsfxsize=8cm
\epsfbox{blips.eps}
\end{center}
\caption{Convergence of the total energy for two different blip
half-widths
as a function of the blip grid spacing. The calculations were
performed with
the Appelbaum-Hamann pseudopotential. $E(plw)$ is
the plane-wave result, obtained with a cutoff of 250~eV.}
\end{figure}
We have done the tests with two widths of B-splines: $b_0 =$
1.25 and 1.0~\AA, and in each case we have calculated $E$
as a function of the blip spacing $a$. In all cases, we have used a
plane-wave cut-off large enough to ensure that errors in
representing the blip-function basis are negligible compared
with the errors attributable to the basis itself. Our results for
$E$ as a function of $a$ (Fig.~2) fully confirm our expectations.
First, they have the general form indicated in Fig.~1. The difference
is that since we have no sharp cut-off in reciprocal space, $E$ does
not become constant when $a$ falls below a threshold, but instead continues to
decrease towards the exact value. Second, for $b_0 = 1.0$~\AA, $E$ does
indeed converge rapidly to the plane-wave result when $a$ falls
below $\sim 0.5$~\AA. Third, the larger blip width gives the lower
energy at larger spacings.
\section{The blip-function basis in linear-scaling calculations}
\subsection{Technical matters}
In our linear-scaling scheme, the support functions $\phi_\alpha ({\bf r})$
must be varied within support regions, which are centered on the
atoms. These regions, taken to be spherical with radius $R_{\rm reg}$
in our previous work, move with the atoms.
In the present work, the $\phi_\alpha$
are represented in terms of blip functions. Each atom has a blip grid attached
to it, and this grid moves rigidly with the atom. The blip functions
sit on the points of this moving grid. To make the region localized,
the set of blip-grid points is restricted to those for which the
associated blip function is wholly contained within the region of
radius $R_{\rm reg}$.
If we denote by $f_{\alpha \ell} ({\bf r})$
the $\ell$th blip function in the region supporting
$\phi_\alpha$, then the representation is:
\begin{equation}
\phi_\alpha ({\bf r}) = \sum_\ell b_{\alpha \ell} \,
f_{\alpha \ell} ({\bf r}) \; ,
\end{equation}
and the blip coefficients $b_{\alpha \ell}$ have to be varied
to minimize the total energy.
The $\phi_\alpha$ enter the calculation through their overlap matrix elements
and the matrix elements of kinetic and potential energies. The overlap
matrix $S_{\alpha \beta}$ [see Eq.~(\ref{eq:overlap})] can be expressed
analytically
in terms of the blip coefficients:
\begin{equation}
S_{\alpha \beta} = \sum_{\ell \ell^\prime} b_{\alpha \ell} \,
b_{\beta \ell^\prime}\, s_{\alpha \ell , \beta \ell^\prime} \; ,
\end{equation}
where $s_{\alpha \ell , \beta \ell^\prime}$ is the overlap matrix between
blip functions:
\begin{equation}
s_{\alpha \ell , \beta \ell^\prime} = \int \! \! \mbox{d} {\bf r} \,
f_{\alpha \ell} \, f_{\beta \ell^\prime} \; ,
\end{equation}
which is known analytically. Similarly, the kinetic energy matrix elements:
\begin{equation}
T_{\alpha \beta} = - \frac{\hbar^2}{2 m} \int \! \! \mbox{d} {\bf r} \,
\phi_\alpha \nabla^2 \phi_\beta
\end{equation}
can be calculated analytically by writing:
\begin{equation}
T_{\alpha \beta} = \sum_{\ell \ell^\prime} b_{\alpha \ell} \,
b_{\beta \ell^\prime} \, t_{\alpha \ell , \beta \ell^\prime} \; ,
\end{equation}
where:
\begin{equation}
t_{\alpha \ell , \beta \ell^\prime} = - \frac{\hbar^2}{2 m}
\int \! \! \mbox{d} {\bf r} \, f_{\alpha \ell}
\nabla^2 f_{\beta \ell^\prime} \; .
\end{equation}
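Both kinds of integral are easily checked numerically in one dimension.
The sketch below (illustrative only, with the blip taken as the bare
B-spline $B(x)$ rather than the scaled $f_0$) evaluates the overlap by
quadrature and verifies that the Laplacian and gradient forms of the
kinetic integral agree, since $B$ and $B^\prime$ vanish at the support
boundary.

```python
import math

def B(x):
    """Cubic B-spline blip: B(0) = 1, support |x| <= 2."""
    ax = abs(x)
    if ax <= 1.0:
        return 1.0 - 1.5 * ax**2 + 0.75 * ax**3
    if ax <= 2.0:
        return 0.25 * (2.0 - ax)**3
    return 0.0

def dB(x):
    """First derivative of B (continuous everywhere)."""
    ax, sgn = abs(x), (1.0 if x >= 0.0 else -1.0)
    if ax <= 1.0:
        return sgn * (-3.0 * ax + 2.25 * ax**2)
    if ax <= 2.0:
        return sgn * (-0.75 * (2.0 - ax)**2)
    return 0.0

def d2B(x):
    """Second derivative of B (continuous everywhere)."""
    ax = abs(x)
    if ax <= 1.0:
        return -3.0 + 4.5 * ax
    if ax <= 2.0:
        return 1.5 * (2.0 - ax)
    return 0.0

def simpson(f, lo, hi, n=4000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(lo + i * h)
    return s * h / 3.0

def s_overlap(d):
    """Overlap s(d) = int B(x) B(x - d) dx of two blips separated by d."""
    return simpson(lambda x: B(x) * B(x - d), -2.0, 2.0)

def t_lap(d):
    """Kinetic integral in Laplacian form, - int B(x) B''(x - d) dx."""
    return -simpson(lambda x: B(x) * d2B(x - d), -2.0, 2.0)

def t_grad(d):
    """Kinetic integral in gradient form, int B'(x) B'(x - d) dx."""
    return simpson(lambda x: dB(x) * dB(x - d), -2.0, 2.0)
```

The overlap is symmetric in the separation $d$ and vanishes identically
once the supports no longer intersect ($|d| \geq 4$ in these units).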
However, matrix elements of the potential energy cannot be
treated analytically, and their integrations must be approximated
by summation on a grid. This `integration grid' is, of course,
completely distinct from the blip grids. It does not move with
the atoms, but is a single grid fixed in space. If the position
of the $m$th point on the integration grid is called ${\bf r}_m$,
then the matrix elements of the local potential (pseudopotential plus
Hartree and exchange-correlation potentials) are approximated by:
\begin{equation}
V_{\alpha \beta} = \int \! \! \mbox{d} {\bf r} \,
\phi_\alpha V \phi_\beta \simeq
\delta \omega_{\rm int} \sum_m \phi_\alpha ( {\bf r}_m )
V( {\bf r}_m ) \phi_\beta ( {\bf r}_m ) \; ,
\end{equation}
where $\delta \omega_{\rm int}$ is the volume per grid point. For a non-local
pseudopotential, we assume the real-space version of the
Kleinman-Bylander~\cite{kleinman:bylander}
representation, and the terms in this are also calculated as a sum over
points of the integration grid. We note that the approximate equivalence
of the B-spline and plane-wave bases discussed above gives us an
expectation for the required integration-grid spacing {\em h\/}. In a
plane-wave calculation, {\em h\/} should in principle be less than
$\pi / (2 G_{\rm max})$. But the blip-grid spacing is approximately
$a = \pi / G_{\rm max}$. We therefore expect to need
$h \approx \frac{1}{2} a$.
In order to calculate $V_{\alpha \beta}$ like this, we have to know all the
values $\phi_\alpha ( {\bf r}_m )$ on the integration grid:
\begin{equation}
\phi_\alpha ( {\bf r}_m ) = \sum_\ell b_{\alpha \ell} \,
f_{\alpha \ell} ( {\bf r}_m ) \; .
\label{eq:support}
\end{equation}
At first sight, it would seem that each point ${\bf r}_m$ would be
within range of a large number of blip functions, so that many
terms would have to be summed over for each ${\bf r}_m$
in Eq.~(\ref{eq:support}). In fact, this is
not so, provided the blip functions factorize into Cartesian components
in the way shown in Eq.~(\ref{eq:factor}).
To see this, assume that the blip grid and integration grid are cubic,
let the blip-grid index $\ell$ correspond to the triplet
$( \ell_x , \ell_y , \ell_z )$, and let the factorization of
$f_\ell ( {\bf r} )$ be written as:
\begin{equation}
f_\ell ( {\bf r}_m ) = p_{\ell_x} ( x_m ) \, p_{\ell_y} ( y_m ) \,
p_{\ell_z} (z_m ) \; ,
\end{equation}
where $x_m$, $y_m$ and $z_m$ are the Cartesian components of ${\bf r}_m$
(we suppress the index $\alpha$ for brevity). The sum
over $\ell$ in Eq.~(\ref{eq:support}) can then be performed as a
sequence of three summations,
the first of which is:
\begin{equation}
\theta_{\ell_y \ell_z} ( x_m ) = \sum_{\ell_x}
b_{\ell_x \ell_y \ell_z} \, p_{\ell_x} ( x_m ) \; .
\end{equation}
The number of operations needed to calculate all these quantities
$\theta_{\ell_y \ell_z} ( x_m )$ is just the number of points
$( \ell_x , \ell_y , \ell_z )$ on the blip grid times the number
$\nu_{\rm int}$ of points $x_m$ for which $p_{\ell_x} ( x_m )$ is
non-zero for a given $\ell_x$. This number $\nu_{\rm int}$ will generally
be rather moderate, but the crucial point is that the number of
operations involved is proportional only to $\nu_{\rm int}$ and not to
$\nu_{\rm int}^3$. Similar considerations will apply to the sums
over $\ell_y$ and $\ell_z$.
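The factorized summation is easily demonstrated. In the toy sketch below
(illustrative only; for clarity the one-dimensional values
$p_\ell (x_m)$ are stored densely, whereas in practice only the few
points where $p_\ell$ is non-zero would be kept) a direct triple sum over
blip indices is compared with the three sequential one-dimensional
contractions.

```python
import random

n, m = 3, 4                      # blip points and grid points per dimension
random.seed(0)
b = [[[random.random() for _ in range(n)] for _ in range(n)]
     for _ in range(n)]          # blip coefficients b[lx][ly][lz]
p = [[random.random() for _ in range(m)]
     for _ in range(n)]          # p[l][i] = p_l(x_i), here dense

# Direct evaluation: the full triple sum at each integration-grid point.
def phi_direct(ix, iy, iz):
    return sum(b[lx][ly][lz] * p[lx][ix] * p[ly][iy] * p[lz][iz]
               for lx in range(n) for ly in range(n) for lz in range(n))

# Factorized evaluation: three sequential one-dimensional contractions.
theta = [[[sum(b[lx][ly][lz] * p[lx][ix] for lx in range(n))
           for ix in range(m)] for lz in range(n)] for ly in range(n)]
chi = [[[sum(theta[ly][lz][ix] * p[ly][iy] for ly in range(n))
         for iy in range(m)] for ix in range(m)] for lz in range(n)]
phi = [[[sum(chi[lz][ix][iy] * p[lz][iz] for lz in range(n))
         for iz in range(m)] for iy in range(m)] for ix in range(m)]
```

The two evaluations agree to machine precision, while the sequential form
replaces the triple sum over $( \ell_x , \ell_y , \ell_z )$ at each point
by three single sums.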
It is worth remarking that since we have to calculate
$\phi_\alpha ( {\bf r}_m )$ anyway, we have the option of calculating
$S_{\alpha \beta}$ by direct summation on the grid as well. In fact,
$T_{\alpha \beta}$ can also be treated this way, though here
one must be more careful, since it is essential that its symmetry
($T_{\alpha \beta} = T_{\beta \alpha}$) be preserved by whatever scheme
we use. This can be achieved by, for example, calculating the gradient
$\nabla \phi_\alpha ( {\bf r}_m )$ analytically on the integration
grid and then using integration by parts to express $T_{\alpha \beta}$
as an integral over $\nabla \phi_\alpha \cdot \nabla \phi_\beta$.
In the present work, we use the analytic forms for
$s_{\alpha \ell , \beta \ell^{\prime}}$ and
$t_{\alpha \ell , \beta \ell^{\prime}}$.
A full linear-scaling calculation requires minimization of the
total energy with respect to the quantities $\phi_{\alpha}$ and
$L_{\alpha \beta}$. However, at present we are concerned solely
with the representation of $\phi_{\alpha}$, and the cut-off
applied to $L_{\alpha \beta}$ is irrelevant. For our practical
tests of the blip-function basis, we have therefore taken the
$L_{\alpha \beta}$ cut-off to infinity, which is equivalent to exact
diagonalization of the Kohn-Sham equation. Apart from this, the
procedure we use for determining the ground state, i.e. minimizing
$E$ with respect to the $\phi_{\alpha}$ functions, is essentially
the same as in our previous
work~\cite{hernandez:gillan,hernandez:gillan:goringe}. We use
conjugate-gradients~\cite{numerical:recipes} minimization
with respect to the blip-function coefficients $b_{\alpha \ell}$.
Expressions for the required derivatives are straightforward to derive
using the methods outlined earlier~\cite{hernandez:gillan:goringe}.
\subsection{Practical tests}
We present here numerical tests both for the
Appelbaum-Hamann~\cite{appelbaum:hamann} local
pseudopotential for Si used in Sec.~2 and for a standard
Kerker~\cite{kerker}
non-local pseudopotential for Si. The aims of the tests are: first,
to show that the B-spline basis gives the accuracy in support-function
calculations to be expected from our plane-wave calculations; and second
to examine the convergence of $E$ towards the exact plane-wave
results as the region radius $R_{\rm reg}$ is increased. For present
purposes, it is not particularly relevant to perform the tests on
large systems. The tests have been done on perfect-crystal Si at
the equilibrium lattice parameter, as in Sec.~2.3.
\begin{table}[tbh]
\begin{center}
\begin{tabular}{cccc}
& $h$ (\AA) & $E$ (eV/atom) & \\
\hline
& 0.4525 & -0.25565 & \\
& 0.3394 & 0.04880 & \\
& 0.2715 & 0.00818 & \\
& 0.2263 & -0.01485 & \\
& 0.1940 & 0.00002 & \\
& 0.1697 & -0.00270 & \\
& 0.1508 & 0.00000 & \\
\end{tabular}
\end{center}
\caption{Total energy $E$ as a function of integration grid spacing
$h$ using
a region radius $R_{\rm reg} = 2.715$~\AA. The blip half-width
$b_0$ was set to
0.905~\AA\ and the blip grid spacing used was 0.4525~\AA. The zero of
energy is set equal to the result obtained with the finest grid.}
\end{table}
We have shown in Sec.~2.3 that a basis of B-splines having a
half-width $b_0 = 1.0$~\AA\ gives an error of $\sim 0.1$~eV/atom
if the blip spacing is $\sim 0.45$~\AA.
For the present tests we have used the
similar values $b_0 = 0.905$~\AA\ and $a = 0.4525$~\AA, in the
expectation of getting this level of agreement with CASTEP plane-wave
calculations in the limit $R_{\rm reg} \rightarrow \infty$.
To check the influence of integration grid spacing $h$, we have made
a set of
calculations at different $h$ using $R_{\rm reg} = 2.715$~\AA, which is
large enough to be representative (see below). The results (Table 1)
show that $E$ converges rapidly with decreasing $h$, and
ceases to vary for present purposes when $h = 0.194$~\AA.
This confirms our expectation that $h \approx \frac{1}{2} a$. We have then
used this grid spacing to study the variation of $E$ with $R_{\rm reg}$,
the results for which are given in Table~2, where we also
compare with the plane-wave results. The extremely rapid convergence
of $E$ when $R_{\rm reg}$ exceeds $\sim 3.2$~\AA\ is very striking,
and our results show that $R_{\rm reg}$ values yielding an accuracy
of $10^{-3}$~eV/atom are easily attainable. The close agreement with
the plane-wave result fully confirms the effectiveness of the blip-function
basis. As expected from the variational principle, $E$ from the blip-function
calculations in the $R_{\rm reg} \rightarrow \infty$ limit lies
slightly above the plane-wave value, and the discrepancy of $\sim 0.1$~eV
is of the size expected from the tests of Sec.~2.3 for (nearly) the
present blip-function width and spacing. (We also remark in
parenthesis that the absolute agreement between results obtained
with two entirely different codes is useful
evidence for the technical correctness of our codes.)
\begin{table}[tbh]
\begin{center}
\begin{tabular}{ccc}
$R_{\rm reg}$ (\AA) & \multicolumn{2}{c}{$E$ (eV/atom)} \\
& local pseudopotential & non-local pseudopotential \\
\hline
2.2625 & 1.8659 & 1.9653 \\
2.7150 & 0.1554 & 0.1507 \\
3.1675 & 0.0559 & 0.0396 \\
3.6200 & 0.0558 & 0.0396 \\
4.0725 & 0.0558 & 0.0396 \\
\end{tabular}
\end{center}
\caption{Convergence of the total energy $E$ as a function of the
region radius $R_{\rm reg}$ for silicon with a local and a non-local
pseudopotential. The calculations were performed with a blip grid
spacing of 0.4525~\AA\ and a blip half-width of 0.905~\AA\ in both
cases.
The zero of energy was taken to be the plane wave result obtained with
each pseudopotential, with plane wave cutoffs
of 250 and 200 eV respectively.}
\end{table}
The results obtained in our very similar tests using the
Kleinman-Bylander form of the Kerker pseudopotential for Si are also
shown in Table~2. In plane-wave calculations, the plane-wave cut-off
needed for the Kerker potential to obtain a given accuracy is very similar
to that needed in the Appelbaum-Hamann potential, and we have therefore
used the same B-spline parameters. Tests on the integration-grid spacing
show that we can use the value $h = 0.226$~\AA, which is close to what
we have used with the local pseudopotential.
The total
energy converges in the same rapid manner for $R_{\rm reg} > 3.2$~\AA,
and the agreement of the converged result with the CASTEP value
is also similar to what we saw with the Appelbaum-Hamann pseudopotential.
\section{Discussion}
In exploring the question of basis sets for linear-scaling
calculations, we have laid great stress on the relation with
plane-wave basis sets. One reason for doing this is that the plane-wave
technique is the canonical method for pseudopotential calculations,
and provides the easiest way of generating definitive results by
going to basis-set convergence. We have shown that within the linear-scaling
approach the total energy can be taken to convergence by systematically
reducing the width and spacing of a blip-function basis set, just as
it can be taken to convergence by increasing the plane-wave cut-off in
the canonical method. By analyzing the relation between the plane-wave and
blip-function bases, we have also given simple formulas for estimating
the blip width and spacing needed to achieve the same accuracy as
a given plane-wave cut-off. In addition, we have shown that the density
of integration-grid points relates to the number of blip functions
in the same way as it relates to the number of plane waves. Finally,
we have seen that the blip-function basis provides a practical way of
representing support functions in linear-scaling calculations, and that
the total energy converges to the plane-wave result as the region
radius is increased.
These results give useful insight into what can be expected of linear-scaling
DFT calculations. For large systems, the plane-wave method requires
a massive redundancy of information: it describes the space of occupied
states using a number of variables of order $N \times M$ ($N$ the number of
occupied orbitals, $M$ the number of plane waves), whereas the number of
variables in a linear-scaling method is only of order $N \times m$
($m$ the number of basis functions for each support function). This means
that the linear-scaling method needs fewer variables than the plane-wave
method by a factor $m / M$. But we have demonstrated that to achieve
a given accuracy the number of blip functions per unit volume is not
much greater than the number of plane waves per unit volume.
Then the factor $m / M$ is
roughly the ratio between the volume of a support region and the
volume of the entire system. The support volume must clearly depend on
the nature of the system. But for the Si system, we have seen that
convergence is extremely rapid once the region radius
exceeds $\sim 3.2$~\AA, corresponding to a region volume of 137~\AA$^3$,
which is about 7 times greater than the volume per atom of 20~\AA$^3$.
In this example, then, the plane-wave method needs more variables
than the linear-scaling method when the number of atoms $N_{\rm atom}$
is greater than $\sim 7$, and for larger systems it needs more
variables by a `redundancy factor' of $\sim N_{\rm atom} / 7$. (For
a system of 700 atoms, e.g., the plane-wave redundancy factor would
be $\sim 100$.) In this sense, plane-wave calculations on large systems
are grossly inefficient. However, one should be aware that there are
other factors in the situation, like the number of iterations needed
to reach the ground state in the two methods. We are not yet in a position
to say anything useful about this, but we plan to return to it.
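The redundancy estimate quoted above amounts to a one-line calculation (a
schematic sketch of our own, using the region radius of 3.2~\AA\ and the
20~\AA$^3$ volume per atom appropriate to silicon):

```python
import math

def redundancy_factor(n_atom, r_reg=3.2, vol_per_atom=20.0):
    """Schematic ratio of plane-wave to support-function variable counts.

    m / M is roughly (support-region volume) / (system volume); the
    redundancy factor is its inverse, ~ N_atom / 7 for this Si example."""
    v_region = (4.0 / 3.0) * math.pi * r_reg**3   # ~ 137 Angstrom^3
    m_over_M = v_region / (n_atom * vol_per_atom)
    return 1.0 / m_over_M
```

For 700 atoms this gives a redundancy factor of $\sim 100$, and the
factor passes through unity at $N_{\rm atom} \sim 7$, as stated in the
text.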
Finally, we note an interesting question. The impressive rate of convergence
of ground-state energy with increase of region radius shown in Table~2
raises the question of what governs this convergence rate, and whether it
will be found in other systems, including metals. We remark that
this is not the same as the well-known question about the rate of
decay of the density matrix $\rho ( {\bf r} , {\bf r}^{\prime} )$
as $| {\bf r} - {\bf r}^{\prime} | \rightarrow \infty$, because in our
formulation both $\phi_\alpha ( {\bf r} )$ and $L_{\alpha \beta}$
play a role in the decay. Our intuition is that this decay is controlled
by $L_{\alpha \beta}$. We hope soon to report results on the support
functions for different materials, which will shed light on this.
\section*{Acknowledgements}
This project was performed in the framework of the U.K. Car-Parrinello
Consortium, and the work of CMG is funded by the
High Performance Computing Initiative (HPCI) through grant GR/K41649.
The work of EH is supported by EPSRC grant GR/J01967.
The use of B-splines
in the work arose from discussions with James Annett.
\section{Introduction}
Research activity on inflationary cosmologies has continued steadily
since the concept of inflationary cosmology was first proposed in
1981 \cite{guth}.
It was recognized that in order to merge an inflationary scenario
with standard
Big Bang cosmology a mechanism to reheat the universe was needed. Such
a mechanism must
be present in any inflationary model to raise the temperature of the
Universe at the end of inflation,
thus the problem of reheating acquired further importance, deserving
more careful investigation.
The original version of reheating envisaged that during the last
stages of inflation when the
accelerated universe
expansion slows down, the energy stored in the oscillations of the
inflaton zero mode transforms into particles via single particle
decay. Such particle production
reheats the universe, whose temperature had dropped enormously due to the
inflationary expansion \cite{newI}.
It was realized recently \cite{frw,stb,kls,jap} that, in fact,
the elementary theory of reheating \cite{newI} does not describe accurately
the quantum dynamics of the fields.
\bigskip
Our programme on non-equilibrium dynamics of quantum field theory, started in
1992\cite{nos1}, is naturally poised to provide a framework to study these
problems. The larger goal of the program is to study the dynamics of
non-equilibrium
processes from a fundamental field-theoretical description, by solving
the dynamical
equations of motion of the underlying
four dimensional quantum field theory for physically relevant problems:
phase transitions out of equilibrium, particle production out of equilibrium,
symmetry breaking and dissipative processes.
The focus of our work is to describe the quantum field dynamics when
the energy density is {\bf high}. That is, a large number of particles per
volume $ m^{-3} $, where $ m $ is the typical mass scale in the
theory. Usual S-matrix calculations apply in the opposite limit of low
energy density and since they only provide information on {\em in}
$\rightarrow$
{\em out} matrix elements, are unsuitable for calculations of
expectation values.
Our methods were naturally applied to different physical
problems like pion condensates \cite{dcc,nos3}, supercooled phase
transitions \cite{nos1,nos2} and inflationary cosmology
\cite{frw,nos2,big,fut,fut2}.
An analogous program has been pursued by the Los Alamos group, whose
research focuses on
non-linear effects in scalar QED linked to
the pair production in strong electric fields\cite{laCF},
the Schwinger-Keldysh non-equilibrium formalism in the large $ N $
expansion\cite{laNG},
and the dynamics of chiral condensates in such framework\cite{dccla}.
\section{Preheating in inflationary universes}
As usual in inflationary cosmology, matter is described in an
effective way by a self-coupled scalar field $ \Phi(x) $ called the inflaton.
The spacetime geometry is a cosmological spacetime with metric $ ds^2
= (dt)^2 - a(t)^2 \; (d{\vec x})^2 $, where $ a(t) $ is the scale factor.
The evolution equations for the $k$-modes of the inflaton field as
considered by different groups can be summarized as follows,
\begin{equation}\label{modos}
{\ddot \chi}_k + 3 H(t) \; {\dot \chi}_k + \left( {{k^2} \over {a^2(t)}} +
M^2[\Phi(.)] \right) \chi_k(t) = 0
\end{equation}
where $ H(t) = {\dot a}(t)/a(t) , \; $ and $ M^2[\Phi(.)] $
is the effective mass felt by the modes. The
expression considered depends on the model (here the $\lambda \;
\Phi^4 $ model) and the approximations made. The value of $\lambda$
is scenario-dependent but it is usually very small.
$ M^2[\Phi(.)] $
depends on the scale factor and on the physical state. Therefore, it
depends on the modes $ \chi_k(t) $ themselves in a complicated
way. One is definitely faced with a complicated non-linear problem. We
call `back-reaction' the effect of the modes $ \chi_k(t) $ back on
themselves through $ M^2[\Phi(.)] $.
In the initial stage, all the energy is assumed in the zero mode of
the field $ \phi(t) $
\cite{nos1,nos2,nos3,big,stb,kls,kof,jap}. That is, the field expectation value
$ \phi(t) \equiv <\Phi(x)> $, where
$ <\cdots > $ stays for the expectation value in the translational
invariant but non-equilibrium quantum
state. For very weakly coupled theories, and early times, such that the
back-reaction effects of the non-equilibrium quantum fluctuations can
be neglected, one can approximate,
\begin{equation} \label{prehM}
M^2[\Phi(.)] \simeq m^2 + { {\lambda} \over 2} \phi(t)^2
\end{equation}
At this moment, the scale factor is set to be a
constant in ref.\cite{stb,jap}. Refs.\cite{nos1,nos2,nos3,big} consider
Minkowski spacetime from the start. In ref.\cite{kls}, the scale
factor is set to be a constant for a model without
$ \lambda \Phi^4 $ inflaton self-coupling. That is, for the classical
potential\cite{kls}
\begin{equation}\label{Vines}
V = \frac12 m^2 \Phi^2 + g \sigma^2 \Phi^2 \; ,
\end{equation}
one can consider as classical solution $ \Phi(t) = \Phi_0 \;
\cos(mt) , \; \sigma = 0 $. In such case, the Mathieu equation
approximation is exact in Minkowski spacetime.
However, the potential (\ref{Vines}) is
unstable under renormalization (a $ \Phi^4 $ counterterm is needed from
the one-loop level). Hence, the $ \lambda = 0 $ choice is a fine-tuning not
protected by any symmetry.
In a second case a massless selfcoupled $ \lambda \Phi^4 $ field in a
radiation dominated universe $ a(t) \propto \sqrt{t} $ is considered
in conformal time \cite{kls}. In this specific case the classical
field equations take the Minkowski form for $ \sqrt{t} \, \Phi$.
One way or another, using
the classical oscillating behaviour of $ \phi(t) $, one is
led by eq.(\ref{modos}) to an effective mass that oscillates in time. In
this approximation (which, indeed, may be very good for small
coupling, see \cite{big}),
eq.(\ref{modos}) exhibits {\bf parametric resonance}, as noticed first
in ref.\cite{stb}. Namely, there are allowed and forbidden bands in $ k^2 $. The
modes within the forbidden bands grow exponentially whereas those
in the allowed bands stay with bounded modulus. The growth of the
modes in the forbidden bands translates into profuse particle production, the particles being created
with these particular unstable momenta. The rate of particle production is determined by the
imaginary part of the Floquet index in these unstable bands.
Notice that the approximation (\ref{prehM}) breaks down as
soon as many particles are produced. Namely, when the energy of the
produced particles becomes of the order of the zero-mode energy,
eq.(\ref{prehM}) is no longer valid.
Now, in order to compute quantitatively the particles produced one
needs the form of $ \phi(t) $. In ref.\cite{stb,kls,jap} $ \phi(t) $
is approximated by a cosine function in the calculations.
The mode equations become a Mathieu equation (the scale factor is set
to be a constant). In ref.\cite{stb} the Bogoliubov-Krylov approximation
is used to compute estimates. In ref.\cite{kls,kof}, estimates are
obtained using asymptotic formulas for the Mathieu equation. In
ref.\cite{big} the exact classical solution is used (a cn Jacobi
function) to compute estimates.
Let us now compare the results from the exact mode solutions obtained
in ref.\cite{big} with the Mathieu equation approximation to it.
In units where $ m^2 = 1 $ and setting $ \eta(t) \equiv \sqrt{
{\lambda} \over 2} \; \phi(t) $, one finds
\begin{eqnarray}\label{etac}
\eta(t) &=& \eta_0\; \mbox{cn}\left(t\sqrt{1+\eta_0^2}, {\bar k}\right)
\cr \cr
{\bar k} &=& {{\eta_0}\over{\sqrt{2( 1 + \eta_0^2)}}}\; ,
\end{eqnarray}
where cn stands for the Jacobi cosine and we choose as initial
conditions $ \eta(0) = \eta_0\; , \; {\dot \eta}(0) = 0 $.
Here $ \eta(t) $ has period $
4 \omega \equiv {{ 4 \, K( {\bar k})}\slash {\sqrt{1+\eta_0^2}}} $,
where $ K( {\bar k}) $ is
the complete elliptic integral of first kind, and $ \eta(t)^2 $
has period $ 2 \omega$.
Inserting this form for
$\eta(t)$ in eqs.(\ref{prehM}) and (\ref{modos}) yields
\begin{equation}\label{modsn}
\left[\;\frac{d^2}{dt^2}+k^2+1+ \eta_0^2\;
\mbox{cn}^2\left(t\sqrt{1+\eta_0^2}, {\bar k}\right) \;\right]
\chi_k(t) =0 \; . \label{nobackreaction}
\end{equation}
This is the Lam\'e equation for a particular value of the coefficients that
make it solvable in terms of Jacobi functions \cite{herm}.
As shown in ref.\cite{big}, this equation has only one forbidden band
for positive $ k^2 $ going from $ k^2 = 0 $ to $ k^2 =
{{\eta_0^2}\over 2} $. One can choose Floquet solutions of
eq.(\ref{nobackreaction}) that fulfil the relation
\begin{equation}\label{floq}
U_k(t + 2 \omega) = e^{i F(k)} \; U_k(t),
\end{equation}
where the Floquet indices $ F(k) $ are independent of $t$. In the
forbidden band the $ F(k) $ possess an imaginary part.
The production rate is determined by the imaginary
part of the Floquet index.
The exact form of $ F(k) $ reads \cite{big},
$$
F(k) = -2 i K( {\bar k}) \; Z(2 K( {\bar k}) \,v) + \pi
$$
where $ Z(u) $ is the Jacobi zeta function \cite{erd} and
$ v $ is a function of $ k $ in the forbidden band defined by
\begin{equation}\label{qprohi}
k = {{\eta_0}\over {\sqrt2}}\, \mbox{cn}(2 K( {\bar k})\,v, {\bar k}) \; ,
\; 0 \leq v \leq \frac{1}{2}.
\end{equation}
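The Floquet index defined above can also be checked numerically (this is our illustration, not part of the original analysis): integrating eq.(\ref{nobackreaction}) over one period $ 2 \omega $ of $ \eta(t)^2 $ gives the monodromy matrix, whose growing eigenvalue $ \lambda $ yields $ {\rm Im}\, F = \ln|\lambda| $ inside the forbidden band. A Python sketch using SciPy's elliptic functions:

```python
import numpy as np
from scipy.special import ellipk, ellipj
from scipy.integrate import solve_ivp

eta0 = 1.0                                  # initial zero-mode amplitude
kbar2 = eta0**2 / (2.0 * (1.0 + eta0**2))   # elliptic modulus squared, kbar^2
w = np.sqrt(1.0 + eta0**2)
period = 2.0 * ellipk(kbar2) / w            # period 2*omega of cn^2

def rhs(t, y, k):
    # chi'' + [k^2 + 1 + eta0^2 cn^2(t sqrt(1+eta0^2), kbar)] chi = 0
    cn = ellipj(w * t, kbar2)[1]
    return [y[1], -(k**2 + 1.0 + eta0**2 * cn**2) * y[0]]

def im_floquet(k):
    # monodromy matrix from two independent solutions over one period
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, period), y0, args=(k,),
                        rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    tr = cols[0][0] + cols[1][1]   # det = 1 (Wronskian); |tr| > 2 in the band
    if abs(tr) <= 2.0:
        return 0.0
    lam = (abs(tr) + np.sqrt(tr**2 - 4.0)) / 2.0
    return np.log(lam)

# scan the forbidden band 0 < k^2 < eta0^2/2 for the maximal growth rate
ks = np.linspace(1e-3, eta0 / np.sqrt(2.0), 200)
F_max = max(im_floquet(k) for k in ks)
print(F_max)   # close to the exact Lame value 0.2258... for eta0 = 1
```

For $ \eta_0 = 1 $ the scan reproduces the exact value $ {\cal F} = 0.2258\ldots $ quoted in Table I.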
All these elliptic functions possess rapidly convergent expansions in
powers of the elliptic nome
$$
q \equiv e^{-\pi K'( {\bar k})/ K( {\bar k})} \; .
$$
Since $ 0 \leq {\bar k} \leq 1/\sqrt2 $ [see eq.(\ref{etac})], we have
\begin{equation}\label{qsomb}
0 \leq q \leq e^{-\pi} = 0.0432139\ldots \; .
\end{equation}
Then,
\begin{equation}
F(k) = 4i\, \pi \; q \; \sin(2\pi v)\;\left[ 1 + 2 \, q
\; \cos2\pi v + O( q^2)\right] + \pi \; .\label{easyfloquet}
\end{equation}
The imaginary part of this function has a maximum at $ k = k_1 =
\frac12 \; \eta_0 \; (1 - q ) + O( q^2) $ where \cite{big}
\begin{equation}\label{Flame}
{\cal F} \equiv Im F(k_1) = 4\, \pi \; q + O(q^3) \; .
\end{equation}
This simple formula gives the maximum of the imaginary part of the
Floquet index in the forbidden band with a
precision better than $ 8 \times 10^{-5} $. $ q $ can be expressed in
terms of $ \eta_0 $ as follows \cite{big}
$$
q = \frac12 \; {{ (1+\eta_0^2)^{1/4} - (1+\eta_0^2/2)^{1/4}}
\over { (1+\eta_0^2)^{1/4} + (1+\eta_0^2/2)^{1/4}}} \; .
$$
with an error smaller than $\sim 10^{-7} $.
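These closed formulas are straightforward to evaluate. The following sketch (ours, for illustration) reproduces the Lam\'e column of Table I from $ q(\eta_0) $ and $ {\cal F} \simeq 4\pi q $, and checks the limit $ q(\eta_0 \to \infty) \to e^{-\pi} $:

```python
import numpy as np

def nome(eta0):
    # elliptic nome q as a function of eta0 (closed formula, error ~ 1e-7)
    a = (1.0 + eta0**2) ** 0.25
    b = (1.0 + eta0**2 / 2.0) ** 0.25
    return 0.5 * (a - b) / (a + b)

def F_max(eta0):
    # maximum of Im F in the forbidden band: F = 4 pi q + O(q^3)
    return 4.0 * np.pi * nome(eta0)

print(F_max(1.0))                      # ~ 0.2258  (Table I)
print(F_max(4.0))                      # ~ 0.4985  (Table I)
print(4.0 * np.pi * np.exp(-np.pi))    # eta0 -> infinity: 0.5430...
```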
Let us now proceed to the Mathieu equation analysis of this
problem. The cn Jacobi function can be expanded as \cite{gr}
$$
{\rm cn} (z, {\bar k}) = (1-q) \cos[(1-4q)z] + q \cos 3z + O(q^2) \; .
$$
To zeroth order in $ q $ we have
$$
\eta(t)^2 = {{\eta_0^2}\over 2} \left[ 1 + \cos(2t\sqrt{1 + \eta_0^2})
\right] + O(q) \; .
$$
and $ 2 \omega = \pi/\sqrt{1 + \eta_0^2} + O(q) $.
Under such approximations eq.(\ref{modsn}) becomes the Mathieu
equation \cite{abr}
\begin{equation}\label{mathieu}
{{d^2 y}\over {dz^2}} + \left( a - 2 {\bar q} \cos2z \right)y(z) = 0
\; ,
\end{equation}
where
$$
a = 1 + {{k^2 - {{\eta_0^2}\over 2} }\over {\eta_0^2 + 1 }} \; , \;
{\bar q} = {{\eta_0^2}\over{ 4(\eta_0^2 + 1)}}
$$
and $ z = \sqrt{ \eta_0^2 + 1 } \; t $. Notice that $ 0 \leq {\bar q}
\leq 1/4 $ in the present case. Eq.(\ref{mathieu}) possesses an infinite
number of forbidden bands for $ k^2 > 0 $. The lower and upper edges of the first band are
respectively\cite{abr}
$$
k^2_{inf} = {{\eta_0^2}\over 4}\left[ 1 - {{\eta_0^2}\over{
2^5(\eta_0^2 + 1)}} + {{\eta_0^4}\over{ 2^{10}(\eta_0^2 + 1)}} + \ldots
\right] \; ,
$$
and
$$
k^2_{sup} = {{\eta_0^2}\over 4}\left[ 3 - {{\eta_0^2}\over{
2^5(\eta_0^2 + 1)}} - {{\eta_0^4}\over{ 2^{10}(\eta_0^2 + 1)}} + \ldots
\right] \; .
$$
These values must be compared with the exact result for the Lam\'e
equation (\ref{nobackreaction}) : $ k^2_{inf} = 0 \; , \; k^2_{sup} =
{{\eta_0^2}\over 2} $. The width of the band is well approximated but
not its absolute position. The numerical values of the maximum of the
imaginary part of the Floquet index are given in Table I
and compared with the exact values from eq.(\ref{Flame}).
We see that the Mathieu approximation {\bf underestimates} the exact result
by a fraction ranging from $13$\% to $ 39$\%. The second forbidden
band in the Mathieu equation yields $ {\cal F}_{ Mathieu} =
0.086\ldots $ for $ \eta_0 \to \infty $. This must be compared with $
{\cal F}_{Lame} = 0 $ (no further forbidden bands).
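For a concrete comparison (an illustrative evaluation, not in the original text), the truncated band-edge series can be computed at $ \eta_0 = 4 $, where the exact Lam\'e values are $ k^2_{inf} = 0 $ and $ k^2_{sup} = \eta_0^2/2 = 8 $:

```python
import numpy as np

def mathieu_band_edges(eta0):
    # truncated series for the edges of the first Mathieu forbidden band
    c1 = eta0**2 / (2**5 * (eta0**2 + 1.0))
    c2 = eta0**4 / (2**10 * (eta0**2 + 1.0))
    k2_inf = eta0**2 / 4.0 * (1.0 - c1 + c2)
    k2_sup = eta0**2 / 4.0 * (3.0 - c1 - c2)
    return k2_inf, k2_sup

eta0 = 4.0
k2_inf, k2_sup = mathieu_band_edges(eta0)
width_mathieu = k2_sup - k2_inf
# exact Lame band: [0, eta0^2/2] = [0, 8]; the width is well approximated
# (~7.88 vs 8) but the band sits at the wrong absolute position (inf ~ 3.94)
print(k2_inf, k2_sup, width_mathieu)
```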
In ref.\cite{kof}, an even larger
discrepancy between the Lam\'e and Mathieu
Floquet indices has been reported within a different approximation scheme.
It must be noticed that since the Floquet indices enter in exponents,
errors propagate dangerously. For example, the number of
particles produced during reheating is of the order of the exponential
of $ 2 {\cal F} $ times the reheating time in units of $ \pi/\sqrt{ 1 +
{\eta_0^2}} $. An error of $25$\% in $ {\cal F} $ means an error of
$25$\% in the exponent. (For instance, one would find $10^9$ instead
of $10^{12}$).
The mode equations (\ref{modos}) apply to the self-coupled $ \lambda \;
\Phi^4 $ scalar field. Models for reheating usually contain at least
two fields: the inflaton and a lighter field $ \sigma(x) $ into which
the inflaton decays. For a $ g \sigma^2 \Phi^2 $ coupling, the mode
equations for the $ \sigma $ field take the form \cite{nos3,kls,jap}
\begin{equation}\label{modsi}
{\ddot V}_k + 3 H(t) {\dot V}_k + \left( {{k^2} \over {a^2(t)}} +
m_{\sigma}^2 + {g\over {\lambda}} F[\Phi(.),\sigma(.)] \right) V_k(t) = 0
\end{equation}
A new dimensionless parameter $ { g \over {\lambda}} $ appears
here. Neglecting the $ \sigma $ and $ \Phi $ backreaction, we have
\begin{equation}\label{preH}
F[\Phi(.),\sigma(.)] \simeq \eta^2(t) \; .
\end{equation}
In ref.\cite{nos2,big}, it is shown that abundant particle
production (appropriate for reheating) shows up even for $ g =
\lambda$.
\bigskip
\begin{table} \centering
\begin{tabular}{|l|l|l|l|}\hline
$ \eta_0 $ & $ {\cal F}_{Lame} $ & $ {\cal F}_{ Mathieu} $ &
$ \% error $ \\ \hline
$ $& $ $ & $ $ & $ $ \\
1 & $ 0.2258 \ldots $ & $ 0.20 \ldots $ & $ 13$\% \\
$ $& $ $ & $ $ & $ $ \\ \hline
$ $& $ $ & $ $ & $ $ \\
$ 4 $ & $ 0.4985\ldots $ & $ 0.37\ldots $ & $ 35$\% \\
$ $& $ $ & $ $ & $ $ \\ \hline
$ $& $ $ & $ $ & $ $ \\
$ \eta_0 \to \infty $ & $ 4\pi e^{-\pi} = 0.5430\ldots $ & $ 0.39\ldots
$ & $ 39$\% \\ $ $& $ $ & $ $ & $ $ \\ \hline
\end{tabular}
\bigskip
\label{table1}
\caption{ The maximum of the imaginary part of the Floquet index ${\cal
F}$ for the Lam\'e equation and for its Mathieu approximation.}
\end{table}
Eqs.(\ref{modsi})-(\ref{preH}) become a Lam\'e equation when $
\eta(t) $ is approximated by the classical solution in Minkowski
spacetime given by (\ref{etac}). This Lam\'e equation is solvable in closed
form when the couplings $ g $ and $ \lambda $ are related as
follows \cite{big}
$$
{{2 g}\over {\lambda}} = n(n+1) \; , \; n=1,2,3,\ldots
$$
In those cases there are $ n $ forbidden bands for $ k^2 \geq 0
$. The Lam\'e equation exhibits an infinite number of
forbidden bands for generic values of $ {g\over {\lambda}} $.
The Mathieu and WKB approximations have been applied
in the non-exactly solvable cases \cite{kls,kof,jap}. However, as the
above analysis shows (see Table I), such estimations
cannot be trusted quantitatively. The only available precise method
consists in accurate numerical calculations such as those of
ref.\cite{nos2,nos3,big} (where the precision is at least $ 10^{-6} $).
Estimates in the cosine approximation for FRW-de Sitter backgrounds
and open universes
using the Bogoliubov-Krylov approximation are given in
ref.\cite{kaiser}. In ref.\cite{son} the Bogoliubov-Krylov
approximation is applied to the large $ N $ equations.
Applications of preheating to various relevant aspects of the early
cosmology are considered in ref.\cite{apli}.
\bigskip
As soon as the quantum fluctuations grow and cease to be negligible
compared with the classical piece (\ref{preH}), all the approximations
discussed so far (Lam\'e, Mathieu, etc.) break down. This time is the
so-called preheating time $ t_{reh} $ \cite{big}.
One can estimate $ t_{reh} $ by equating the zero mode energy
(\ref{preH}) with the estimation of the quantum fluctuations derived
from the unstable Floquet modes \cite{kls,big}. When the Lam\'e
Floquet indices are used, such estimation yields
\cite{big},
\begin{equation}
t_{reh} \approx {1 \over B} \, \log{{N(1+\eta^2_0/2) \over { g \sqrt
B}}}\; , \label{maxtime}
\end{equation}
where
\begin{eqnarray}\label{ByN}
B &=& \displaystyle{
8\, \sqrt{1+\eta_0^2}\; { q } \; (1 - 4 { q }) + O(
{ q }^3) }\; , \cr \cr
N &=& {4 \over {\sqrt{ \pi}}} \; \sqrt{ q }\;
{{ ( 4 + 3 \, \eta_0^2) \, \sqrt{ 4 + 5 \, \eta_0^2}}\over{
\eta_0^3 \, (1+\eta_0^2)^{3/4}}} \left[ 1 + O ( q )\right]\; \; .
\end{eqnarray}
Here $ B $ is determined by the maximum of the imaginary part of the
Floquet index in the forbidden
band. It is now clear that a few percent correction to the Floquet
indices will result in a large error in the estimate of the preheating
time scale.
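Evaluating eqs.(\ref{maxtime})-(\ref{ByN}) is immediate; the sketch below (ours, for illustration) uses the closed formula for $ q(\eta_0) $ quoted earlier and makes the error propagation explicit, since $ B $ enters both the prefactor and the logarithm:

```python
import numpy as np

def nome(eta0):
    # elliptic nome q(eta0) from the closed formula quoted above
    a = (1.0 + eta0**2) ** 0.25
    b = (1.0 + eta0**2 / 2.0) ** 0.25
    return 0.5 * (a - b) / (a + b)

def t_reh(eta0, g):
    # preheating time estimate from eqs. (maxtime)-(ByN), leading order in q
    q = nome(eta0)
    B = 8.0 * np.sqrt(1.0 + eta0**2) * q * (1.0 - 4.0 * q)
    N = (4.0 / np.sqrt(np.pi)) * np.sqrt(q) \
        * (4.0 + 3.0 * eta0**2) * np.sqrt(4.0 + 5.0 * eta0**2) \
        / (eta0**3 * (1.0 + eta0**2) ** 0.75)
    return np.log(N * (1.0 + eta0**2 / 2.0) / (g * np.sqrt(B))) / B

print(t_reh(4.0, 1e-12))   # ~ 26 in units of the inflaton mass
```

For $ \eta_0 = 4 $, $ g = 10^{-12} $ this gives $ t_{reh} \approx 26 $; since $ B $ divides the whole logarithm, a few percent shift of the Floquet index moves the estimate by several units of time.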
\section{The end of preheating and beyond: self-consistent large $N$
calculations}
In order to compute physical magnitudes beyond $ t_{reh} $, one {\bf
must} solve self-consistently the field equations including the back reaction.
In ref.\cite{nos2,big} this is done for the $ N \to \infty $
limit and in ref.\cite{nos3} to one-loop order.
In the large $ N $ limit, the renormalized equations for the zero mode
and the $k$-modes take the form,
\begin{eqnarray}
& & \ddot{\eta}+ \eta+
\eta^3+ g \;\eta(t)\, \Sigma(t) = 0 \label{modo0} \; , \\
& & \left[\;\frac{d^2}{dt^2}+k^2+1+
\;\eta(t)^2 + g\; \Sigma(t)\;\right]
\varphi_k(t) =0 \; , \label{modok}
\end{eqnarray}
where $g \Sigma(t)$ is given by
\begin{eqnarray}
g \Sigma(t) & = & g \int_0^{\infty} k^2
dk \left\{
\mid \varphi_k(t) \mid^2 - \frac{1}{\Omega_k}
\right. \nonumber \\
& & \left.
+ \frac{\theta(k-\kappa)}{2k^3}\left[
-\eta^2_0 + \eta^2(t) + g \; \Sigma(t) \right] \right\} \; ,\cr \cr
{\Omega_k}&=& \sqrt{k^2+1 + \eta^2_0} \; .
\label{sigmafin}
\end{eqnarray}
We choose the initial state such that at $t=0$ the quantum
fluctuations are in the ground state of the oscillators.
That is,
$$
\varphi_k(0) = {1 \over {\sqrt{ \Omega_k}}} \quad , \quad
{\dot \varphi}_k(0) = - i \; \sqrt{ \Omega_k} \; ,
$$
$$
\eta(0) = \eta_0 \quad , \quad {\dot\eta}(0) = 0\; .
$$
In the one-loop approximation the term $ g \; \Sigma(t) $ is absent
in eq.(\ref{modok}).
Eqs.(\ref{modok})-(\ref{sigmafin})
were generalized to cosmological spacetimes (including
the renormalization aspects) in ref.\cite{frw}.
Eqs.(\ref{modok}) form an infinite set of coupled non-linear
differential equations in the $k$-modes $ \varphi_k(t) $ and in the
zero mode $ \eta(t) $. We have numerically solved eqs.(\ref{modok})
for a variety of couplings and initial conditions
\cite{nos2,nos3,big}.
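A heavily simplified numerical sketch of eqs.(\ref{modo0})-(\ref{sigmafin}) is given below (ours, for illustration only): the $ k $-integral is replaced by a coarse grid with a hard cutoff, the renormalization counterterm is dropped, and the coupling is taken large ($ g = 10^{-3} $) to shorten the run, so the numbers are not those of refs.\cite{nos2,big}; the shut-off of particle production by the back-reaction is nevertheless visible.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, eta0 = 1e-3, 4.0
k = np.linspace(0.02, 3.0, 40)        # covers the band [0, eta0/sqrt(2)]
Om = np.sqrt(k**2 + 1.0 + eta0**2)    # initial mode frequencies
dk = k[1] - k[0]
n = k.size

def sigma(phi):
    # g*Sigma with only the 1/Omega_k vacuum subtraction (unrenormalized toy)
    return g * np.sum(k**2 * (np.abs(phi)**2 - 1.0 / Om)) * dk

def rhs(t, y):
    eta, deta = y[0], y[1]
    phi = y[2:2 + n] + 1j * y[2 + n:2 + 2 * n]
    dphi = y[2 + 2 * n:2 + 3 * n] + 1j * y[2 + 3 * n:]
    gS = sigma(phi)
    ddeta = -(eta + eta**3 + gS * eta)                 # zero-mode equation
    ddphi = -(k**2 + 1.0 + eta**2 + gS) * phi          # mode equations
    return np.concatenate(([deta, ddeta], dphi.real, dphi.imag,
                           ddphi.real, ddphi.imag))

phi0, dphi0 = 1.0 / np.sqrt(Om), -1j * np.sqrt(Om)     # ground-state modes
y0 = np.concatenate(([eta0, 0.0], phi0.real, phi0.imag,
                     dphi0.real, dphi0.imag))
sol = solve_ivp(rhs, (0.0, 15.0), y0, rtol=1e-8, atol=1e-10, max_step=0.05)
phiT = sol.y[2:2 + n, -1] + 1j * sol.y[2 + n:2 + 2 * n, -1]
print(sigma(phiT), sol.y[0, -1])   # g*Sigma grows to O(1); |eta| damped below eta0
```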
In figs. 1-3 we display $ \eta(t), \; \Sigma(t)$
and the total number of created particles $ N(t) $ as a function of $
t $ for $ \eta_0 = 4 $ and $ g = 10^{-12} $ \cite{big}. One sees that
the quantum fluctuations are indeed very small till we approach the
reheating time $ t_{reh} $. As one can see from the figures,
eq.(\ref{maxtime}) gives a very good approximation for $ t_{reh} $ and
for the behaviour of the quantum fluctuations for $ t \leq t_{reh} $.
For times earlier than $ t_{reh} $, the Lam\'e estimates for the
envelopes of $ \Sigma(t) $ and $ N(t) $ take the form \cite{big},
\begin{eqnarray}\label{polenta}
\Sigma_{est-env}(t) &=& { 1 \over { N \, \sqrt{t}}}\; e^{B\,t}\;
, \cr\cr
N_{est-env}(t) &=& {1 \over {8 \pi^2}} \; {{4 + \frac92 \eta_0^2}
\over { \sqrt{4 + 5 \eta_0^2}}} \; \Sigma_{est-env}(t)\; .
\end{eqnarray}
where $ B $ and $ N $ are given by eq.(\ref{ByN}).
This analysis presents a very clear physical picture of the properties of the ``gas'' of
particles created during the preheating stage. This ``gas'' turns out to be {\bf out} of thermodynamic equilibrium, as
shown by the energy spectrum obtained, but it still fulfills a
radiation-like equation of state $ p \simeq e/3 $ \cite{big}.
In approximations that do not include the backreaction effects there
is infinite particle production,
since they do not maintain energy conservation\cite{stb,jap}.
The full backreaction problem and the approximation
used in our work conserve the energy exactly.
As a result, particle production shuts off when
the back-reaction becomes important, ending the preheating
stage.
In ref. \cite{dort} results of ref.\cite{nos2} are rederived with a
different renormalization scheme.
\section{Broken symmetry and its quantum restoration}
Up to now we discussed the unbroken symmetry case $ m^2 = 1 > 0 $. In the
spontaneously broken symmetry case $ m^2 = -1 < 0 $, the classical
potential takes the form
\begin{equation}\label{potcla}
V(\eta) = \frac14 (\eta^2 - 1 )^2
\end{equation}
In refs.\cite{nos2,big} the field evolution is numerically solved
in the large $ N $ limit for $ m^2 < 0 $. Analytic estimations are
given in ref.\cite{big} for early times.
In ref.\cite{nos2} it is shown that for small couplings and small
$ \eta_0 $, the order
parameter damps very quickly to a constant value close to the
origin. That is, the order parameter ends close to the classical false
vacuum and far from the classical minimum $\eta = 1$. The fast
damping is explained, in this spontaneously broken symmetry case,
by the presence of Goldstone bosons. The massless Goldstone
bosons dissipate the energy from the zero mode very efficiently. We
plot in figs. 4 and 5, $ \eta(t) $ and $ \Sigma(t) $ for $ \eta_0 =
10^{-5} $ and $ g = 10^{-12} $ \cite{big}. The profuse particle
production here is due to spinodal instabilities.
In ref.\cite{kls2}, it is claimed that quantum fluctuations have
restored the symmetry in the situations considered in ref.\cite{nos2}.
However, the presence of massless Goldstone bosons and the non-zero
limiting value of the order parameter clearly show that the symmetry {\bf is}
always broken in the cases considered in ref.\cite{nos2,big}. These
results rule out the claim in ref.\cite{kls2}. In particular, the asymptotic value of the
order parameter results from a very detailed dynamical evolution that conserves energy.
In the situation of `chaotic initial conditions' but with a broken
symmetry tree level potential, the
issue of symmetry breaking is more subtle. In this case the zero mode
is initially displaced with a
large amplitude and very high in the potential hill. The total energy
{\em density} is non-perturbatively
large. Classically the zero mode will undergo oscillatory behavior
between the two classical turning
points, of very large amplitude and the dynamics will probe both
broken symmetry states. Even
at the classical level the symmetry is respected by the dynamics in
the sense that the time evolution
of the zero mode samples equally both vacua. This is not the situation
that is envisaged in
usual symmetry breaking scenarios.
For broken symmetry situations there are no finite energy field
configurations that can
sample both vacua. In the case under consideration with the zero mode
of the scalar field with very
large amplitude and with an energy density much larger than the top of
the potential hill, there
is enough energy in the system to sample both vacua. (The energy is
proportional to the spatial
volume). Parametric amplification transfers energy from the zero mode
to the quantum fluctuations.
Even when only a fraction of the energy of the zero mode is
transferred, thus creating a
non-perturbatively large number of particles, the energy in the
fluctuations is very large, and the
equal time two-point correlation function is non-perturbatively large
and the field fluctuations are
large enough to sample both vacua. The evolution of the zero mode is
damped because of this
transfer of energy, but in most generic situations it does not reach
an asymptotic time-independent
value, but oscillates around zero, sampling the tree level minima with
equal probability.
This situation is reminiscent of finite temperature in which case the
energy density is finite and above
a critical temperature the ensemble averages sample both tree level
vacua with equal probability
thus restoring the symmetry. In the dynamical case, the ``symmetry
restoration'' is just a consequence
of the fact that there is a very large energy density in the initial
state, much larger than the top of the
tree level potential, thus under the dynamical evolution the system
samples both vacua equally.
This statement is simply the dynamical equivalent of the equilibrium
finite temperature statement that
the energy in the quantum fluctuations is large enough that the
fluctuations can actually sample both
vacua with equal probability.
Thus the criterion for symmetry restoration when the
tree level potential allows
for broken symmetry states is that the energy density in the initial
state be larger than the top of the
tree level potential. That is, when the amplitude of the zero mode is
such that $ V(\eta_0) > V(0) $.
In this case the dynamics will be very similar to the unbroken
symmetry case, the amplitude of the
zero mode will damp out, transferring energy to the quantum
fluctuations via parametric amplification,
but asymptotically oscillating around zero with a fairly large amplitude.
To illustrate this point clearly, we plot in figs. 6 and 7, $
\eta(t) $ and $ \Sigma(t) $ for $
\eta_0 = 1.6 > \sqrt2 $ [and hence $ V(\eta_0) > V(0) $, see
eq.(\ref{potcla})] and $ g = 10^{-3} $. We find the typical behaviour
of unbroken symmetry. Notice again that the effective or tree level
potential is an irrelevant
quantity for the dynamics: the asymptotic amplitude of oscillation of
the zero mode is $\eta \approx
0.5$, which is smaller than the minimum of the tree level potential
$\eta=1$ but the oscillations are
symmetric around $\eta=0$.
Since the dynamical evolution sampled both vacua symmetrically from the
beginning, there never was a symmetry breaking in
the first place, and ``symmetry restoration'' is just the statement
that the initial state has enough
energy such that the {\em dynamics} probes both vacua symmetrically
despite the fact that the
tree level potential allows for broken symmetry ground states.
In their comment (hep-ph/9608341), KLS seem to agree with our
conclusion that in the situations studied in our articles\cite{nos2,nos3},
when the expectation value is
released below the potential hill, symmetry restoration does not occur.
We will present a deeper analytical and numerical study of the subtle
aspects of the dynamics in
this case in a forthcoming article\cite{fut}.
\section{Linear vs. nonlinear dissipation (through particle creation)}
As already stressed, the field theory dynamics is unavoidably nonlinear for
processes like preheating and reheating. It is however interesting to
study such processes in the amplitude expansion. This is done in
detail in refs.\cite{nos2,nos3}. To dominant order, the amplitude
expansion means to linearize the zero mode evolution equations. This
approach permits an analytic resolution of the evolution in closed
form by Laplace transform. Explicit integral representations for $
\eta(t) $ follow as functions of the initial data
\cite{nos2,nos3}. Moreover, the results can be clearly described in
terms of S-matrix concepts (particle poles, production thresholds,
resonances, etc.).
Let us consider the simplest model where the inflaton $\Phi$ couples
to another scalar $\sigma$ and to a fermion field $\psi$, with
potential\cite{nos3}
\begin{eqnarray}\label{mod2c}
V &=& {1 \over 2} \left[ m^2_{\Phi}\Phi^2 + m_{\sigma}^2 \sigma^2
+ g \; \sigma^2 \Phi^2 \right] \cr \cr
&+&{ \lambda_{\Phi} \over 4!} \, \Phi^4
+{ \lambda_{\sigma} \over 4!} \, \sigma^4+
{\bar{\psi}} (m_{\psi} + y \Phi ) \psi \;. \nonumber
\end{eqnarray}
In the unbroken symmetry case $ (m^2_{\Phi} > 0) $ the inflaton is
always stable and we found for the order parameter (expectation value
of $ \Phi$) evolution in the amplitude expansion \cite{nos3},
\begin{eqnarray}
\eta(t)&=&\frac{\eta_i}
{1-\frac{\partial\Sigma(i m_{\Phi})}{\partial m_{\Phi}^2}}
\cos [m_{\Phi} t]\cr \cr
&+&{{2\, \eta_i }\over{\pi}} \int_{ m_{\Phi}+ 2m_{\sigma}}^{\infty}
{{\omega\Sigma_I(\omega) \cos\omega
t\;d\omega}\over{[\omega^2- m_{\Phi}^2-
\Sigma_R(\omega)]^2+ \Sigma_I(\omega)^2}}\;. \label{stable}
\end{eqnarray}
where $\Sigma_{\rm physical}(i\omega\pm 0^+)=\Sigma_R(\omega)\pm
i\Sigma_I(\omega)$ is the inflaton self-energy in the physical sheet,
$ {\eta_i} = \eta(0) $ and $ {\dot \eta}(0) = 0 $. The first term is
the contribution of the one-particle pole (at the physical inflaton
mass). This terms oscillates forever with constant amplitude. The
second term is the cut contribution $ \eta(t)_{cut} $
corresponding to $ \Phi \to \Phi + 2 \sigma $.
In general, when
$$
\Sigma_I(\omega\to\omega_{\rm{threshold}}) \buildrel
{\omega\to\omega_{\rm{threshold}} } \over = B \;
(\omega-\omega_{\rm{threshold}})^\alpha \; ,
$$
the cut contribution behaves for late times as
\begin{eqnarray}
\eta(t)_{cut} &\simeq & {{2\, \eta_i }\over{\pi}}
{{ B \; \omega_{\rm{threshold}} \; \Gamma(1+\alpha)}\over
{[\omega^2- m_{\Phi}^2- \Sigma_R(\omega_{\rm{threshold}})]^2}}\cr \cr
&&
t^{-1-\alpha} \cos\left[\omega_{\rm{threshold}} t +
\frac{\pi}2 (1+\alpha) \right] \; .
\end{eqnarray}
Here, $ \omega_{\rm{threshold}} = m_{\Phi} +2 M_\sigma $ and
$ \alpha = 2 $ since, to two loops,\cite{nos3}
$$
\Sigma_I(\omega)\buildrel{\omega \to {m_{\Phi} + 2 M_{\sigma}}}\over=
\frac{2 g^2 \pi^2}{(4\pi)^4}
\frac{M_\sigma \sqrt{ m_{\Phi}}}{( m_{\Phi}+2 M_\sigma)^{7/2}}
[\omega^2-( m_{\Phi}+2 M_\sigma)^2]^2\;.
$$
In the broken symmetry case $ (m^2_{\Phi} < 0) $ we may have either $
M < 2 m_{\sigma}$ or $ M > 2 m_{\sigma} $, where $ M $ is the
physical inflaton mass ($ M = | m_{\Phi}| \sqrt2 $ at tree
level). In the first case the inflaton is stable and eq.(\ref{stable})
holds. However, the self-energy now starts at one loop and vanishes at
threshold with a power $ \alpha = 1/2 $. For $ M > 2 m_{\sigma} $
the inflaton becomes a resonance (an unstable particle) with width
(inverse lifetime)
$$
\Gamma = {g^2\Phi^2_0\over{8\pi M}}\sqrt{1-{{4
m^2_{\sigma}}\over{M^2}}} \; .
$$
This pole dominates $ \eta(t) $ for non-asymptotic times
\begin{equation}\label{BW}
\delta(t)\simeq\delta_i\, A\; e^{-{\Gamma t/2}}\;
\cos(Mt+\gamma) \; ,
\end{equation}
where
\begin{equation}
A=1+ {{\partial\Sigma_R(M)}\over{\partial M^2}}
\;,\quad\quad
\gamma= -{{\partial\Sigma_I(M)}\over{\partial M^2}}\;.\nonumber
\end{equation}
In summary, eq.(\ref{BW}) holds provided: a) the inflaton is a resonance
and b) $ t \leq\Gamma^{-1}\ln(\Gamma / M_\sigma)$.
For later times the fall off is with a power law $t^{-3/2}$
determined by the spectral density at threshold as before\cite{nos3}.
In ref.\cite{nos3} the self-consistent nonlinear evolution is computed
to one-loop level for the model (\ref{mod2c}). In fig. 8 $ \eta(t) $
is plotted as a function of time for $ \lambda = g = 1.6 \pi^2 , \; y =
0, \; m_{\sigma} = 0.2 m_{\Phi} , \; \eta(0) = 1 $ and $ {\dot
\eta}(0) = 0 $.
Figure 8 shows a very rapid, non-exponential damping within a few
oscillations of the expectation value, and a saturation effect when the
amplitude of the oscillation is rather small (about 0.1 in this case): the
amplitude remains almost constant at the latest times tested. Figures 8 and
9 clearly show that the time scale for dissipation (from fig. 8) is that
for which the particle production mechanism is more efficient
(fig. 9). Notice that the total number of particles produced rises on the
same time scale as that of damping in fig. 8; eventually, when the
expectation value oscillates with (almost) constant amplitude, the average
number of particles produced remains constant. This behaviour is a
close analog to the self-coupled inflaton for unbroken symmetry (fig.1).
The amplitude expansion predictions are in qualitative
agreement with both results.
These figures clearly show that
damping is a consequence of particle production. At times larger than about 40
$m_{\Phi}^{-1}$ (for the initial values and couplings chosen) there is no
appreciable damping. The amplitude is rather small and particle production has
practically shut off. If we had used the {\it classical} evolution of the
expectation value in the mode equations, particle production would not shut off
(parametric resonant amplification), and thus we clearly see the dramatic
effects of the inclusion of the back reaction.
In ref.\cite{nos3} the broken symmetry case $ m^2_{\Phi} < 0 $ is then
studied. Figures 11-13 show $\eta(\tau)$ vs $\tau$,
${\cal{N}}_{\sigma}(\tau)$ vs
$\tau$ and ${\cal{N}}_{q,\sigma}(\tau=200)$ vs $q$ respectively, for
$\lambda / 8\pi^2 =
0.2;~~~ g / \lambda = 0.05;~~~ m_{\sigma}= 0.2\, |m_{\Phi}|; ~~
\eta(0)=0.6;~~~\dot{\eta}(0)=0$. Notice that the mass for the linearized
perturbations of the $\Phi$ field at the broken symmetry ground state is
$\sqrt{2}\,|m_{\Phi}| > 2 m_{\sigma}$. Therefore, for the values used in the
numerical analysis, the two-particle decay channel is open.
For these values of the parameters, linear relaxation predicts
exponential decay with a time scale $\tau_{rel} \approx 300$ (in the units
used). Figure 11 shows very rapid non-exponential damping on time scales
about {\em six times shorter} than that predicted by linear relaxation. The
expectation value reaches very rapidly a small amplitude regime, once this
happens its amplitude relaxes very slowly.
In the non-linear regime relaxation is clearly {\em
not} exponential but extremely fast. The amplitude at long times
seems to relax to the expected value, shifted slightly from the
minimum of the tree level potential at $\eta = 1$. This is as expected
from the fact that there are quantum corrections.
Figure 12 shows that particle production occurs during the
time scale for which dissipation is most effective, giving direct proof that
dissipation is a consequence of particle production. Asymptotically, when the
amplitude of the expectation value is small, particle production shuts off. We
point out again that this is a consequence of the back-reaction in the
evolution equations. Without this back-reaction, as argued above, particle
production would continue indefinitely. Figure 13 shows that the
distribution of produced particles is very far from thermal and concentrated at
low momentum modes $k \leq |m_{\Phi}|$. This distribution is qualitatively
similar to that in the unbroken symmetry case, and points out that the excited
state obtained asymptotically is far from thermal.
In ref.\cite{nos3} the case where the inflaton is only coupled to
fermions is studied ($g=0,\; y\neq 0$). The damping of the zero mode
is very inefficient in that case due to Pauli blocking. Namely, the
Pauli exclusion principle forbids the creation of more than $ 2 $
fermions per momentum state. Pauli
blocking shuts off particle production and dissipation very early on.
\section{Future perspectives}
The preheating and reheating theory in inflationary cosmology is
currently a very active area of
research in fast development, with the potential for dramatically
modifying the picture of the
late stages of inflationary phase transitions.
As remarked before, estimates and field theory calculations have
been done mostly assuming Minkowski spacetime. Results in
de Sitter\cite{fut2} and FRW backgrounds\cite{ult} are just beginning
to emerge.
A further important step will be to consider the background
dynamics. Namely, the coupled gravitational and matter field
dynamics. The matter state equations obtained in Minkowski\cite{big} and de
Sitter backgrounds\cite{fut} give an indication, through the
Einstein-Friedmann equation, of the scale factor behaviour.
\subsection{Detection Statistics}
Table 2 lists the 150 H$_2$O maser sites
observed at IRAM in the CS J = 5\to4, 3\to2, and 2\to1 transitions.
Tables 1 and 2 of Paper I list the positions of the masers and
the CS J=7\to6 line parameters or upper limits.
Table 2 of the present paper lists the source names in order of
increasing galactic longitude,
the radiation temperature (T$_R^*$), integrated
intensity ($\int{T_R^*dv}$), velocity centroid (V$_{LSR}$), and
full width at half maximum (FWHM) for the three transitions.
For CS data obtained in 1990 June, we list the line parameters at the
position in the nine-point map with the strongest emission in the
J=5\to4 line. This choice is based on the results of \S 3.2, where we
find that the J=5\to4 emission almost always peaks at the maser position.
While the line parameters for 1990 June are useful in detection statistics and
as a guide for follow-up work, we have found that the position correction
was inadequate for them to be used together with the J=7\to6 data to
determine densities; therefore we do not use the 1990 June data in \S 4.
For undetected lines, the upper limits to T$_R^*$ are 3$\sigma$.
For CS J = 3\to2 and 2\to1,
we have tabulated only the data with the highest
spectral resolution. We also observed the C$^{34}$S lines in 49 of
the strongest CS emitters.
The results for C$^{34}$S are presented in Table 3. Transitions
listed with dashes (--) instead of values or
upper limits to T$_R^*$ were not observed.
Table 4 has the results for J=10\to9 and 14\to13.
Usually, we obtained the line parameters from Gaussian
fits to the lines but
some sources listed in Table 2 had spectra with more
than one peak.
To determine the line parameters in these cases,
we took the following approach.
First, if the profiles of the higher J (i.e., 7\to6 or 5\to4) lines or
C$^{34}$S
lines (where available)
matched one or more of the peaks seen in the lower
J transitions, we assumed that
the source was composed of distinct cloud
components (e.g., Figure 1a); and we derived
the line parameters by performing a multiple
Gaussian fit to the whole profile.
Each Gaussian component is listed individually in Table 2.
Three sources have 2 velocity components and one has 3 components; these
are identified in Tables 2 and 3 by the notation ``C\#'' (where
\# is the component number). With the inclusion
of all the separate components, Table 2 displays results for 155
cloud components.
Second, if comparison of CS data with
C$^{34}$S data indicated that the
CS line was self-absorbed (Figure 1b shows an example of this situation),
we calculated the line parameters ($\int T_R^* dV$, V$_{LSR}$, and FWHM)
from moment integrals over the profile. Then $T_R^*$ was calculated from
$\int T_R^* dV$ /FWHM (values given in parentheses in Table 2).
Only 18 of the 150 spectra
were obviously self-absorbed in CS 2\to1, with smaller numbers showing obvious
self-absorption in the higher-J lines.
Of course, self-absorption may exist at a less obvious level in other sources.
Figure 2 illustrates the detection rate for the observed CS transitions.
The distribution counts as detected
only those sources with observed T$_R^* \geq 0.5$ K.
Because the sensitivity achieved for
the CS J = 7\to6 line (Paper I) was similar to that for the lower
J transitions, the drop in the detection rate towards higher rotational
levels reflects a real drop in the number of
sources exhibiting emission at the same level in the higher J lines.
\subsection{Extent of the Dense Gas: CS J = 5\to4 Maps}
To determine the effect that very dense gas has upon star formation,
we need to know the extent of the gas and its location within the star-forming
regions. We have observed 21 of our sources in the
CS 5\to4 line with the CSO. For each source, we made a cross-scan in R.A.
and Dec., typically consisting of 9 points. For most of the sources, the
separation of the observed points was 30\arcsec. For a few of the smaller
sources, we made the observations at 15\arcsec\ intervals.
In addition, we have assembled from the literature
data taken with the same equipment for four
other sources from our survey.
Table 5 lists the mapping results
for all 25 sources. The integrated intensities
listed in Table 5 are for the interpolated maximum along each cross scan.
From the maps we derived diameters and beam correction factors,
$F_c = (\Omega_{source}+\Omega_{beam})/\Omega_{beam}$.
The beam correction factors were calculated assuming that
a Gaussian was a good representation of both the beam shape and the source
intensity distribution. Using the integrated intensity, the $F_c$, and
the distances, $d$(kpc), we calculated the luminosity in the CS J=5\to4
line from
\begin{equation}
L({\rm CS \ 5-4}) = 1.05 \times 10^{-5} L_{\sun} d^2 F_c \int{T_R^*dv}.
\end{equation}
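For scale, an illustrative evaluation (the numbers here are representative round values, not taken from Table 5): a source at $d = 2$ kpc with $F_c = 2$ and $\int{T_R^*dv} = 20$ K km s$^{-1}$ would have
\[
L({\rm CS \ 5-4}) = 1.05 \times 10^{-5} \times (2)^2 \times 2 \times 20 \ L_{\sun}
\approx 1.7 \times 10^{-3} \ L_{\sun}.
\]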
Table 5 also lists the offsets of the CS 5\to4 peaks from the maser
positions in arcseconds.
With the exception of a few of the larger sources, differences
in the peak position of
the CS 5\to4 distribution and the H$_2$O maser position are smaller than
the combined pointing uncertainties and maser positional uncertainties
($\pm$3\arcsec\ and $\leq\pm$8\arcsec, respectively).
Jenness et al. (1995) have also found a very good correlation between the
peak of the submillimeter emission and the maser position.
The mean diameter of the sources listed in Table 5 is 1.0 pc.
The dispersion about this mean, however, is large (0.7 pc). If one
examines sources at $d\leq 3.0$ kpc, the mean diameter is 0.5 pc with
a dispersion of 0.4 pc. This difference, while significant, probably
does not arise from observational biases in the CS data. Most
of the more distant sources are well resolved and bright. It is more
likely that the differences arise from selection biases in the original
samples used to search for H$_2$O masers.
Complete mapping of the CS 5\to4 line in several sources gives similar
sizes. The emission in NGC2024 has a diameter of 0.4 pc, while S140 has
a diameter of 0.8 pc (Snell et al. 1984). The emission in M17 is more
extensive:
2.3 pc in 5\to4 (Snell et al.); 2.1 pc in 7\to6 (Wang et al. 1993).
\section{Analysis}
With the addition of the lower J transitions in the present study to the
CS J = 7\to6 data from Paper I, we can
determine densities in a large sample of star-forming regions.
In \S 4.1, we discuss the calculations and examine the effects
of opacity and uncertainties in kinetic temperature
on density and column density determinations.
In \S 4.2, we consider the effects of density inhomogeneities, and
we compute masses in \S 4.3.
\subsection{Densities and Column Densities}
To determine densities and column densities,
we used a large velocity gradient (LVG) code to solve
the coupled equations of statistical equilibrium and radiative transfer,
including the first 20 rotational levels of CS in the calculation.
We assume that the gas has a constant density and temperature and that it
uniformly fills all the beams used in this study.
We calculated a 20$\times$20 grid of radiation temperatures
in column density per velocity interval -- density space
for a kinetic temperature of 50 K.
The CS densities in the LVG model grid ran from
$10^4$ to $10^8$ cm$^{-3}$, and the column densities per velocity interval
(N/$\Delta$v) ranged from $10^{11}$ to $10^{16}$
cm$^{-2}$/km s$^{-1}$. These ranges span the parameter space of
all solutions which fit our data. All the models converged to a solution.
Using a $\chi^2$ minimization routine, we fit the LVG models to the
observed CS line intensities.
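A minimal sketch of this kind of grid search follows. It is schematic only: the stand-in function \texttt{model\_tr} and all numerical values are invented for illustration, not output of the actual LVG code, and the real calculation uses the 20$\times$20 grid of LVG radiation temperatures described above.

```python
import numpy as np

# Schematic chi^2 grid fit: pick the (density, column density) grid
# point whose predicted line intensities best match the observations.
# model_tr is an invented stand-in for LVG radiation temperatures.

log_n = np.linspace(4.0, 8.0, 17)    # log10 density (cm^-3)
log_N = np.linspace(11.0, 16.0, 21)  # log10 N/dv (cm^-2 per km s^-1)

def model_tr(ln, lN):
    """Toy T_R* of the J=2-1, 3-2, 5-4, 7-6 lines (illustrative only)."""
    base = np.array([1.0, 1.2, 0.9, 0.5])
    return base * (lN - 11.0) * np.exp(-(ln - 6.0) ** 2 / 2.0)

obs   = np.array([4.0, 4.8, 3.6, 2.0])  # "observed" T_R* (K), invented
sigma = 0.2 * obs                        # ~20% calibration uncertainty

chi2 = np.empty((log_n.size, log_N.size))
for a, ln in enumerate(log_n):
    for b, lN in enumerate(log_N):
        chi2[a, b] = np.sum(((obs - model_tr(ln, lN)) / sigma) ** 2)

i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"best fit: log n = {log_n[i]:.2f}, log N/dv = {log_N[j]:.2f}")
```

The best-fit point minimizes $\chi^2$ over the grid; with real LVG output in place of \texttt{model\_tr}, the minimum value itself measures how well any grid point represents the data, as used below.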
Table 6 lists the densities for 71 sources.
We have not included fits for the CS data obtained in 1990 June, for reasons
discussed below. We have listed the log of the density and column density,
along with the value of $\chi^2$ and a key to which transitions were used
and whether the lines were self-absorbed. The values of density and column
density apply to the region selected by the typical beam used for
the observations (about 20\arcsec ). The $\chi^2$ values allow us
to assess whether the models (any of the points in the LVG grid) are
a good representation of the data.
The distribution of $\chi^2$ values for sources with 4 transitions
(40 sources) is similar to
what is expected theoretically if the model is a reasonable fit to the data,
as is the distribution for sources with only three transitions (31 sources).
These facts suggest that our estimates of the calibration uncertainties
are reasonable. We originally included the 1990 June data in the fits, but
they had a very high percentage of bad fits, leading us to conclude that
the uncertain pointing made them unsuitable for combining with the
CSO J=7\to6 data. The 8 self-absorbed sources with fits in Table 6
(marked by a flag) do not have $\chi^2$\ significantly worse than the other
sources. One source with 3 transitions (212.25-1.10) produced a very uncertain
density, and we have excluded it from the statistics that follow.
The mean logarithmic density for sources with detected emission from
all 4 CS transitions is $\langle$log(n)$\rangle$ $= 5.93 \pm 0.23$, where 0.23 represents
the standard deviation of the distribution.
The mean logarithmic column density is $\langle$log(N)$\rangle$ $= 14.42 \pm 0.49$.
The results for the sources undetected in J=7\to6 are
$\langle$log(n)$\rangle$ $= 5.59 \pm 0.39$; $\langle$log(N)$\rangle$ $ = 13.57 \pm 0.35$.
Figure 3 shows histograms of the densities and column densities.
The solid line plots the densities determined from
all 4 CS transitions and the dashed line is the density distribution
for sources without J= 7\to6 detections.
These results show that the difference between a CS 7\to6 detection
and non-detection is more related to column density than to volume density.
Therefore, the detectability of lines of
high critical density is more affected by the quantity of dense gas
present than by its density.
To check whether the difference was solely a result of having a J=7\to6 line
to fit, we re-fit the sources with 7\to6 detections,
forcing the $\chi ^2$ fitting routine
to ignore the CS 7\to6 line and to fit only the 3 lower transitions.
The resulting $\langle$log(n)$\rangle$\ is $5.71 \pm 0.19$, and $\langle$log(N)$\rangle$\ is $14.36 \pm 0.49$.
This result confirms our conclusion that the most significant difference
between
a J=7\to6 detection and a non-detection is the column density.
What effect would high opacity in the CS lines have on the
derived densities and column densities?
Eighteen of the sources in this survey have noticeable self-absorption in
at least one transition.
In addition, an LVG model run for the mean density, column density, and
linewidth results in CS line opacities that are roughly unity.
Thus, self-absorption may affect the fits, even if it is not apparent
in the line profiles.
Since the C$^{34}$S transitions will usually be optically thin,
we independently fit the C$^{34}$S transitions to an LVG model grid,
with a range of parameters identical to those used in the original CS grid.
Table 6 lists the densities, column densities, and $\chi^2$ derived from
fits to the C$^{34}$S data.
Problems with the receivers during the C$^{34}$S observations meant that
we have various combinations of lines to fit, as indicated by
the key in Table 6.
There are few sources with both adequate CS and acceptable C$^{34}$S data.
The fits to the sources with three transitions of C$^{34}$S give
$\langle$log(n)$\rangle$ $= 5.95 \pm 0.20$,
essentially identical to the $\langle$log(n)$\rangle$\ derived from 4 transitions of CS.
The mean difference between CS and C$^{34}$S in log(n) is $0.07\pm0.24$,
indicating no significant difference in the derived densities.
It is unlikely that the densities calculated for sources in our survey from the
CS lines alone are seriously affected by CS optical depth.
The average isotope ratio, $N(CS)/ N(C^{34}S)$, is $5.1\pm 2.2$,
clearly less than the terrestrial ratio, and
lower than the isotope ratios of 9--17 found by Mundy et al. (1986)
and 13 (Wang et al. 1993). Chin et al. (1996) have recently found
evidence for low values of this ratio in the inner Galaxy, but our
values are lower still. It is likely that our procedure has underestimated
$N(CS)$ to some extent. For this reason, and also because
these ratios are not very well determined
for individual sources, we have adopted an isotopic abundance
ratio of 10 in what follows.
By increasing the number of transitions,
simultaneous fitting of the CS and C$^{34}$S data should,
in principle, allow us to determine the
densities and column densities more accurately.
Using the LVG model grid for CS and constraining the isotope ratio to be
10, we fit CS and C$^{34}$S transitions simultaneously.
The results are listed in Table 6.
While neither the densities nor the column densities
are significantly different from those determined from
fits to the CS data alone, $\chi ^2$ is considerably larger.
The poor fits probably result from assuming a fixed isotopic
abundance ratio for all sources.
It is likely that many of the regions of massive star formation
contained within this study have temperatures in excess of 50 K.
At the densities
implied by the CS observations, the gas kinetic temperature will be coupled
to the dust temperature. For grains with opacity decreasing linearly
with wavelength, one can write
\begin{equation}
T_D = C [{{L}\over{\theta^2 d^2}}]^{0.2},
\end{equation}
where $L$ is the luminosity in solar units, $d$ is the distance in kpc,
and $\theta$ is the angular separation from the heating source in arcseconds.
Using these units, $C = 15$ to 40 (Makinen et al. 1985, Butner et al. 1990).
We can estimate the range of temperatures in our sources from the
luminosities in Table 7 and distances in Table 5; $\langle (L/d^2)^{0.2}
\rangle
= 7.5\pm 1.6$. At a radius of 10$\arcsec$, characteristic of the beams in
Table 1 and the beam of the J = 7\to6 observations, $T_D = 50$ to 100 K.
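Written out, with $\langle (L/d^2)^{0.2}\rangle = 7.5$ and $\theta = 10\arcsec$, the estimate is
\[
T_D = C\,\frac{(L/d^2)^{0.2}}{\theta^{0.4}}
\approx C \times \frac{7.5}{10^{0.4}} \approx 3C,
\]
which gives $T_D \approx 45$ to 120 K for $C = 15$ to 40, consistent with the range quoted above.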
To assess the effects of temperature uncertainties on the derived
source properties, we also fit the sources with 4 transitions to a grid
of models run for a temperature of 100 K. The value of $\langle$log(n)$\rangle$\ decreased by
0.3 and the value of $\langle$log(N)$\rangle$\ was essentially unchanged. Regardless of the
assumed temperature, our data imply a thermal pressure, $nT \sim 4 \times 10^7$
K cm$^{-3}$, which is much higher than found in regions not forming
massive stars.
Within the limitations of a single-density model, we conclude that
the effects of opacity and temperature on the determinations of density
are not severe (at about the factor-of-two level). Typical densities in regions
detected in the J=7\to6 survey are $10^6$ cm$^{-3}$. Toward water masers not
detected in the J=7\to6 survey, the densities are about a factor of 2 less,
but the column densities of CS are about a factor of 7 less, on average,
than the values found for regions detected in the J=7\to6 line.
The densities for both groups of sources are considerably
less than the critical density of the
CS J=7\to6 line ($2 \times 10^7$ cm$^{-3}$), reminding us that detection
of emission from a hard-to-excite line does not imply the existence of gas at
the critical density. Molecules can emit significantly in high-J transitions
with critical densities considerably above the actual density because
of trapping and multilevel effects (see also Evans 1989).
For example, levels with $J \gg 0$ have many possible routes for excitation
by collisions, but only one radiative decay path.
The high densities found in this survey of regions forming massive stars
are similar to those obtained from other, more detailed, studies
of individual, luminous, star-forming regions (see ref. in \S 1).
Consequently, the results found from studies of a few clouds can be applied,
in a statistical sense, to the broader sample of massive star-forming
regions.
\subsection{ Multiple Density Models}
Our LVG analysis assumes that the density is uniform and that
the emitting gas fills the beam.
How good are these assumptions? Figure 4 gives examples of LVG
model fits to several of the sources: three with good fits and three
with bad fits, as measured by the $\chi^2$\ value.
While the LVG models generally fit the data within the uncertainties,
a closer look reveals that the discrepancies between model and observation
are very consistent, even for the good fits.
Almost all fits overpredict the 3\to2 and 5\to4 lines
and underpredict the 2\to1 and 7\to6 lines.
Thus, the data have, on average, a smaller variation of intensity with
J than do the best-fit
LVG models, as would be expected for a source with
a mixture of gas at different densities.
In this section, we examine models with varying densities to
see how well they explain the intensity versus J distribution.
\markcite{Snell et al. (1984) } and Wang et al. (1993) have discussed
the effects of fitting a single density to the CS emission from
a mixture of gas at about $10^6$ cm$^{-3}$\ and gas lower in density by about a
factor of 10. They showed that, until the filling factor of the high
density gas becomes very small (i.e., $f< 0.2$), the density derived
from fitting a single density model matches that of the high density
component to within a factor of two.
The CS transitions we have observed should behave
in a similar way in that they are biased toward measuring gas with
densities close to $10^6$ cm$^{-3}$.
We now ask a more radical question. Could the apparent density near
$10^6$ cm$^{-3}$\ be an artifact of fitting to a single density a mixture of
ultra-dense gas (n = $10^8$ cm$^{-3}$) and gas at a much lower (n = $10^4$ cm$^{-3}$)
density?
In this picture, the histogram of densities (Figure 3) would be produced by
varying the filling factor of the dense component.
We chose a value of $10^8$ cm$^{-3}$\ for the
density of the ultra-dense gas because the 7\to6 transition
becomes completely thermalized at that density. Thus, the component
with n$= 10^8$ cm$^{-3}$\ represents any gas with n$\geq 10^8$ cm$^{-3}$.
We synthesized clouds from a mixture of these two components
at 20 values of N/$\Delta$v between
$10^{12}$ and 10$^{16}$ cm$^{-2}$/km s$^{-1}$.
For each density and column density, we used the LVG code to calculate the
expected emission. We then
varied the filling factor of the ultra-dense gas ($f$) and
the low-density gas ($1-f$), with $0 \leq f \leq 1$ in steps of 0.05,
and summed the contributions to each transition for each possible combination
of $f$, column density of the gas at n$= 10^4$ cm$^{-3}$\ (N$_{\rm low}$), and
column density of the gas at n$= 10^8$ cm$^{-3}$\ (N$_{\rm high}$).
These results then formed
a grid of models which could be fitted to the data, just as the single-density
models had been fitted. We found that the $\chi^2$\ value worsened,
despite the extra free parameter, for sources where the single-density
fit had been good ($\chi^2$ $\leq 1$). On the other hand, the sources
which were poorly fitted ($\chi^2$ $> 1$) with the single-density model
were better fitted with the two-density model. The two-density fits
typically required very high column densities ($\langle$log(N)$\rangle$ $= 16.16$) of
the low-density gas compared to those of the ultra-dense gas
($\langle$log(N)$\rangle$ $= 13.85$).
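The two-component decomposition can be sketched schematically as follows. All line intensities below are invented stand-ins for LVG model output, not values from this paper; only the structure of the search (summing $f$ times the ultra-dense intensities and $1-f$ times the low-density intensities, with $f$ stepped by 0.05) follows the text.

```python
import numpy as np

# Schematic two-component fit: total emission modeled as
# f * (ultra-dense gas) + (1 - f) * (low-density gas).
# Intensities are toy numbers, not LVG output from the paper.

t_low  = np.array([3.0, 2.0, 0.8, 0.1])  # T_R* at n = 1e4 cm^-3 (toy)
t_high = np.array([5.0, 6.0, 6.5, 6.0])  # T_R* at n = 1e8 cm^-3 (toy)

f_true = 0.30                            # filling factor used to fake "data"
obs    = f_true * t_high + (1.0 - f_true) * t_low
sigma  = 0.15 * obs + 0.05

f_grid = np.linspace(0.0, 1.0, 21)       # f = 0.00, 0.05, ..., 1.00
chi2 = np.array([np.sum(((obs - (f * t_high + (1.0 - f) * t_low)) / sigma) ** 2)
                 for f in f_grid])
f_best = f_grid[np.argmin(chi2)]
print(f"recovered filling factor f = {f_best:.2f}")
```

In the real problem the column densities of both components are additional free parameters, so the fit is far less constrained than this toy example suggests.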
To see if we could constrain the amount of ultra-dense gas in the sources
with poor single-density fits, we followed a similar, but
less restrictive, procedure.
We started by assuming that the CS J = 2\to1 and 3\to2
transitions effectively probe the low density gas in the
beam, and we used them to fit the density (n$_{\rm low}$)
and column density (N$_{\rm low}$) of the
low-density component. We then used the LVG code
to obtain the expected emission from each rotational
transition for a gas at this density and column density at a
temperature of 50K. These intensities, multiplied by ($1-f$),
were used to represent the
lower density component. We then searched a parameter space of $f$ and
log(N/$\Delta$v) for the best values for the ultra-dense component
(density once again fixed at 10$^8$ cm$^{-3}$). We summed ($1-f$) times
the lower density intensities and $f$ times the ultra-dense gas intensities
and compared this sum to the observations.
This method has a large number of free parameters: $f$, n$_{\rm low}$,
N$_{\rm low}/ \Delta$v, and N$_{\rm high}/ \Delta$v, which are constrained
by only 4 transitions. Furthermore, it
does not correct the properties of the lower
density component for the contributions of the high
density gas to the J = 2\to1 and 3\to2 emission. We use it for
illustrative purposes only. We show the two-density fits as dashed
lines in Figure 4, but we do not tabulate the results. The mean
properties of these solutions for the sources with single-density $\chi^2$ $> 1$
are as follows: $f = 0.22$, log(n$_{\rm low})
= 5.4 \pm 0.3$, log(N$_{\rm low}) =
14.39$, and log(N$_{\rm high}) = 14.39$ (equal column densities in the two
components). Thus,
in general, the filling factor of ultra-dense gas is small (less than
25\%), and the data still favor a large amount of gas at $n > 10^5$ cm$^{-3}$.
Another possible source model is a continuous density gradient, such
as a power law. Power-law density distributions have been proposed for
regions of low-mass star formation on theoretical grounds (Shu 1977)
and seem to fit the observations well in some cases (e.g., Zhou et al. 1991).
They have also been applied to some regions forming stars of higher mass
(e.g., Zhou et al. 1994; Carr et al. 1995). The latter reference is
particularly relevant here, as it included a more complete
analysis of GL2591 (called CRL2591 in this paper),
including data from this paper, but adding other data. While Table 6
indicates a good fit to the data for that source with a single-density model,
Carr et al. found that a single density cannot fit all the data,
when other data are included, particularly J = 5\to4
and 10\to9 data from the CSO.
They developed models with power-law density and temperature gradients
that fit all the data. We can use the example of CRL2591 to explore the meaning
of the densities in Table 6 if the actual density distribution is
a power law. If
$n(r) = n_1 r_{pc}^{-\alpha}$, with $n_1$ (the density at 1 pc) set
by matching the line profiles (Carr et al. 1995),
the density in Table 6 is reached at
radii of 18\arcsec\ to 7\arcsec\ for $1 \leq \alpha \leq 2$, corresponding to filling
factors of 0.3 to 0.6 in our largest beam. We conclude that, in this source,
the densities derived in this study characterize gas on scales somewhat
smaller than our beams, if the source has a density gradient. Similar studies
of other sources are needed to see if this conclusion can be generalized.
Further evidence for a range of densities is that J=10\to9 emission has
been seen in a number of sources (Hauschildt et al. 1993 and our Table 4).
The data do not warrant detailed source-by-source modeling, but we have
predicted the expected J=10\to9 emission from a source with the mean
properties found in \S 4.1: log(n) = 5.93 and log(N) = 14.42. We assumed
a linewidth of 5.0 km s$^{-1}$, about the mean for our sample, and T$_K$ = 50 K.
The predicted T$_R$ of the J=10\to9 line is 0.2 K for this average cloud,
weaker than any of the detections.
If we use the conditions for the cloud with properties at the high end of the
1 $\sigma$ spread, we can produce T$_R$ = 1.6 K, about the weakest detection.
Increasing T$_K$\ to 100 K raises the prediction to 7 K, similar to many of
the detections. Detection of a J=10\to9 line therefore implies a cloud with
higher density, column density, and/or temperature than the average cloud
in our sample of sources detected at J=7\to6.
\subsection{Masses }
Table 7 contains mass estimates for the regions for which
we have determined cloud sizes. We have computed three different
estimates. The first estimate assumes that the volume density
fills a spherical volume with the diameter of the J=5\to4 emission:
\begin{equation}
M_n = {{4}\over{3}}\pi{r^3}{n}{\mu},
\end{equation}
where r is the radius of the cloud
and $\mu=2.34m_H$ is the mean mass per particle.
The second estimate uses the CS column densities (N) and the formula:
\begin{equation}
M_N = \pi{r^2}{{N}\over{X}}{\mu},
\end{equation}
where X is the abundance of CS. We have used $X = 4 \times 10^{-10}$,
based on a more detailed analysis of one of the sources in this
study (Carr et al. 1995).
Finally, we estimated masses from the virial theorem:
\begin{equation}
M_{V} = {{5}\over{3}}{{R V^2_{rms}}\over{G}},
\end{equation}
for a spherical, non-rotating cloud. Assuming
that the velocity profile is Gaussian, $V_{rms}$
is related to the FWHM ($\Delta{v}$) of the line by
$V_{rms} = \sqrt{3} \Delta{v}/2.35$.
We used the average $\Delta{v}$ of the CS lines.
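As an arithmetic illustration (round numbers chosen for convenience, not a particular source), a cloud with $r = R = 0.5$ pc, $n = 10^6$ cm$^{-3}$, and $\Delta{v} = 5$ km s$^{-1}$ has
\[
M_n = {{4}\over{3}}\pi r^3 n \mu \approx 3 \times 10^4 \ {\rm M}_{\sun},
\qquad
M_V = {{5}\over{3}}{{R V^2_{rms}}\over{G}} \approx 2.6 \times 10^3 \ {\rm M}_{\sun},
\]
so $M_V/M_n \approx 0.1$ in this example.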
The value of $M_n$ for GL490 is probably underestimated
substantially because the maser position is quite far from
the peak of a very compact source. Zhou et al. (1996) have
analyzed this source in more detail and found considerably higher
densities from spectra on the peak. Consequently, we ignore this
source in the following discussion.
The average ratio of $M_N/M_n$ is $0.84\pm 0.73$.
The agreement is gratifying, but the poorly known abundance of CS makes $M_N$
quite uncertain. In contrast, the agreement between $M_n$ and $M_V$ is worse,
with $M_n$ almost always considerably larger than $M_V$.
A likely explanation is that the gas is distributed inhomogeneously
within the beam, whereas the calculation of
$M_n$ assumes that the density is uniformly distributed.
We have used the ratio of $M_V$ to $M_n$ to estimate the volume
filling factor ($f_v$) of the gas, also listed in Table 7.
The filling factors have a large range (0.02 to 2.3) and a mean
value of $0.33\pm 0.59$.
The virial mass estimate is susceptible to
error because the linewidth may be affected by unbound
motions, such as outflows, and it ignores effects of external
pressure. Least certain is $M_n$, which depends on the cube
of the size (and hence distance). Each mass estimate depends on
a different power of the size, making their ratio strongly dependent
on uncertainties in the distance.
In view of the problems inherent in each of the
different mass calculations, the masses agree reasonably well.
Because the virial mass estimates have the fewest potential
problems, we will use them in what follows.
The average $M_V = 3800$ M$_{\sun}$.
\section{Implications }
\subsection{Comparison to Other Star-Formation Regions }
Are the high densities seen in this survey
peculiar to regions of massive star formation or are they
a feature of star formation in general?
Lada, Evans, \& Falgarone (1996) have found that
the density in the most active star-forming cores in L1630
is about log(n) = 5.8, very similar to what we find.
We also compared the results of our
study with surveys of regions forming low-mass stars.
\markcite{Zhou et al. (1989)}
observed a sample of low-mass cores in CS transitions up to J=5\to4
and derived densities of $\langle$log(n)$\rangle$ $ = 5.3\pm 1.1$.
These densities are about a factor of 4 lower than the densities
we find in this study (and in other studies of regions of massive
star formation). Since Zhou et al. (1989) did not have J=7\to6 data,
it may be more appropriate to compare with our fits to sources without
J=7\to6 detections; in that case, our densities are larger by
a factor of about 2. The net result is that regions forming massive
stars do seem to have larger densities when similar techniques are
used, but the difference is not an order of magnitude.
The ability to form low-mass stars in regions of massive star formation
may depend on whether the Jeans mass remains low as the cloud is heated.
We can calculate the Jeans mass from
\begin{equation}
M_J({\rm M}_{\sun}) = 18\,T^{3/2}\,n^{-1/2}.
\end{equation}
Using the mean logarithmic densities and the assumed temperatures
(10 K for the low-mass cores, 50 K for our sample),
we compute $\langle M_J\rangle = 1.3$ M$_{\sun}$
for the clouds forming low-mass stars
and $\langle M_J\rangle = 7 $M$_{\sun}$ for clouds in this
study with J=7\to6 emission.
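Explicitly, the substitutions are
\[
M_J = 18 \times 10^{3/2} \times (10^{5.3})^{-1/2} \approx 1.3 \ {\rm M}_{\sun}
\quad {\rm and} \quad
M_J = 18 \times 50^{3/2} \times (10^{5.93})^{-1/2} \approx 7 \ {\rm M}_{\sun}
\]
for the low-mass cores and for our J=7\to6 sample, respectively.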
The assumed temperatures make $M_J$ higher in regions forming massive stars
even though they are denser. However, the strong dependence of $M_J$
on temperature means that statements about average properties
should not be taken too literally until the temperatures are
known better. In addition, the
fragmentation spectrum may have been established early in the evolution
of the core, before the temperatures were raised by the formation
of massive stars.
\subsection {Do Larson's Laws Apply to Massive Cores?}
Most studies of the global properties of molecular clouds deal with
the usual linewidth--size--density relations, as proposed by Larson (1981)
and confirmed by others (e.g., Fuller \& Myers 1992; Solomon et al. 1987;
Caselli \& Myers 1995). These relations were generally found by comparing
properties of whole clouds; similar relations were found within single
clouds by comparing map sizes in transitions of different molecules.
A recent paper by Caselli \& Myers (1995) includes
information on both low mass cores and more massive cores within
the Orion molecular cloud. They fit the non-thermal linewidth (the
observed linewidth after correction for the thermal contribution) and cloud
radius for these types of regions separately to this relation:
\begin{equation}
\log \Delta v\ ({\rm km\ s^{-1}}) = b + q \log R\ ({\rm pc}).
\end{equation}
They found a strong relationship (correlation coefficient, $r = 0.81$)
in low-mass cores with $b= 0.18 \pm 0.06$ and $q= 0.53 \pm 0.07$.
The relation was considerably weaker ($r = 0.56$) and
flatter ($q = 0.21 \pm 0.03$) in the massive cores.
In Figure 5, we plot log($\Delta v$) versus log$R$ for the sources
in Table 5, which are generally denser and more massive than the
cores studied by Caselli \& Myers. No relationship is apparent
(the correlation coefficient is only $r = 0.26$),
despite the fact that our sample covers a range of 30 in source size.
Nevertheless, we fitted the data to equation 7 using least squares
and considering uncertainties in both variables (we assumed 20\%
uncertainties in size and used the standard deviation of the linewidths
of the different lines for the uncertainty in $\Delta v$).
The result was $b = 0.92 \pm 0.02$ and $q = 0.35 \pm 0.03$, but
the goodness-of-fit parameter was $Q = 2.8 \times 10^{-8}$,
whereas a decent fit should have $Q > 0.001$. Alternatively, we minimized
the mean absolute deviation (robust estimation; see Press et al. 1992).
The result was $b = 0.80$ and $q= 0.08$, indicating essentially no
size--linewidth relation.
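For a concrete comparison at a representative radius of $R = 0.5$ pc, the least-squares fit gives $\Delta v = 10^{0.92 + 0.35\log 0.5} \approx 6.5$ km s$^{-1}$, while the Caselli \& Myers low-mass relation gives $10^{0.18 + 0.53\log 0.5} \approx 1.0$ km s$^{-1}$: at the same size, our cores have linewidths roughly six times larger.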
Thus our data confirm
the trend discernible in the analysis of Caselli \& Myers: the
$\Delta v - R$ relation tends to break down in more massive cores.
We have plotted the Caselli \& Myers relations in Figure 5, along
with Larson's original relation.
It is clear that our sources have systematically higher linewidths
at a given radius than sources in other studies. For the radii we
are probing, most other studies were considering linewidths from CO or
its isotopes and may thus have included a larger contribution from low-density
envelopes. The usual relations would predict larger $\Delta v$ in these
regions,
which would make the discrepancy worse. However, our sources are regions of
massive star formation,
and Larson (1981) noted that such regions (Orion and M17 in his study) tended
to have larger $\Delta v$ for given size and not to show a size--linewidth
correlation.
Most previous studies have found an inverse relation between $mean$ density
and size, corresponding to a constant column density. However, Scalo (1990)
and Kegel (1989) have
noted that selection effects and limited dynamic range may have produced this
effect, and Leisawitz (1990) found no relationship between density and size
in his study of clouds around open clusters. In previous studies,
the $mean$ densities were found by dividing a column density by a size,
which might be expected to introduce an inverse correlation if the column
density tracer has a limited dynamic range. Since our
densities were derived from an excitation analysis, it may be interesting to
see if any correlation exists in our data.
We plot log(n) versus log$R$ in Figure 5. Again, no correlation
is evident ($r = -0.25$), and our densities all lie well above (factors of
100!)
predictions from previous relations (e.g., Myers 1985). Again, Larson
(1981) noted a similar, though much less dramatic, tendency for regions
of massive star formation in his analysis. For a recent theoretical
discussion of these relations, see V\'azquez-Semadeni, Ballesteros-Paredes,
\& Rodr\'iguez (1997).
To use data on sources without
size information, we plot (in the bottom panel of Figure 5)
log($\Delta v$) versus log(n). The previous relations would
predict a negative slope (typically $-0.5$) in this relation.
In contrast to the predictions, our data show a positive, but small,
correlation coefficient ($r= 0.40$). The slope from a least squares fit
is quite steep ($1.3\pm 0.2$), but robust estimation gives a slope of
only 0.39. In addition, the linewidths are much larger than would have
been predicted for these densities from previous relations.
These results suggest that an uncritical application of scaling
relations based on {\it mean} densities to actual densities, especially in
regions of massive star formation, is likely to lead to errors.
The fact that Larson's laws are not apparent in our data indicates
that conditions in these very dense cores with massive star formation
are very different from those in more local regions of less massive
star formation. The linewidths may
have been affected by star formation (outflows, expanding HII regions,
etc.); the higher densities are probably caused by gravitational
contraction, which will also increase the linewidths.
While the regions in this study may not be typical of most molecular
gas, they are typical of regions forming most of the massive stars
in the Galaxy. These conditions (denser, more turbulent than usually
assumed) may be the ones relevant for considerations of initial mass
functions.
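The gap noted above between the least-squares slope ($1.3 \pm 0.2$) and the robust slope (0.39) is typical when a few extreme points dominate an ordinary fit. A minimal sketch of that behaviour on synthetic data (illustrative values only, not the survey measurements; the robust estimator here is a simple Theil-Sen median of pairwise slopes):

```python
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
            sum((a - mx) ** 2 for a in x))

def theil_sen_slope(x, y):
    """Median of all pairwise slopes -- robust against outliers."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    slopes.sort()
    m = len(slopes)
    return slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])

random.seed(1)
x = [i / 10 for i in range(30)]
y = [0.4 * xi + random.gauss(0, 0.1) for xi in x]
y[-1] += 5.0   # one extreme point drags the least-squares slope upward
print(ols_slope(x, y), theil_sen_slope(x, y))  # OLS slope well above robust slope
```

The same qualitative pattern — a steep least-squares slope, a much shallower robust slope — is what the text reports for the linewidth-density relation.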
\subsection{Luminosity, Star Formation Efficiency, and Gas Depletion Time}
We have collected data from the literature (or our own
unpublished data) on the luminosity of the sources in Table 7.
The ratio of the luminosity to the virial mass
($L/M$), roughly proportional to the star formation rate per
unit mass, ranges from 24 to 490 in solar units (see Table 7)
with a mean of $190 \pm 43$, where 43 is the standard deviation of the mean
(all other uncertainties quoted in the text are standard
deviations of the distribution). Previous studies, using masses
determined from CO luminosity, have
found much lower average values of $L/M$: 4.0 for the inner galaxy
(Mooney \& Solomon 1988); 1.7 for the outer galaxy (Mead,
Kutner, \& Evans 1990). In fact, the maximum values in those
samples were 18 and 5, respectively, smaller than any of our
values. The enormous difference is caused by
the fact that we are calculating the mass of the dense gas,
which is much less than the mass computed from the CO luminosity.
While we have also tried to use luminosities measured with
small beams, the main difference is in the mass. One
way to interpret this result is that the star formation rate
per unit mass
rises dramatically (a factor of 50) in the part of the cloud with dense gas.
The star formation rate per unit mass of very dense gas may be more relevant
since stars do not seem to
form randomly throughout molecular clouds (Lada et al. 1991).
Instead, the 4 most massive CS cores in L1630,
which cover only 18\% of the surveyed area,
contain 58\% to 98\% of all the forming stars, depending on
background correction. Li et al. (1996) have found that there
is little evidence for any recent star formation outside the clusters,
suggesting that the 98\% number is closer to correct.
The star formation efficiency in the clusters can be
quite high (e.g., 40\%) compared to that of the cloud
as a whole (4\%) (Lada et al. 1991).
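As a cross-check, the mean $L/M$ quoted above can be recomputed from the $L/M_V$ column of Table 7; a minimal sketch in Python (the exact uncertainty depends on the variance convention used, so the result differs slightly from the quoted 43):

```python
import math

# L/M_V values (solar units) read from Table 7 for sources with luminosities
lm = [67, 310, 170, 340, 160, 63, 460, 35, 130, 42, 490, 24]

n = len(lm)
mean = sum(lm) / n
var = sum((x - mean) ** 2 for x in lm) / (n - 1)   # sample variance
sdom = math.sqrt(var / n)                          # standard deviation of the mean
print(f"mean L/M = {mean:.0f} +/- {sdom:.0f}")     # ~191 +/- 48; text quotes 190 +/- 43
```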
The gas depletion time ($\tau$) is the time to turn all the molecular gas into
stars. Considering only stars of $M > 2$ M$_{\sun}$, the star
formation rate can be written as
$dM/dt$ (M$_{\sun}$ yr$^{-1}$) = $4 \times 10^{-10} L$
(Gallagher \& Hunter 1986; Hunter et al. 1986). The coefficient differs
by only 20\% if the lower mass cutoff is 10 M$_{\sun}$.
The gas depletion time can then be written as
$\tau\ = 2.5 \times 10^{9} (M/L)$ yr. Using our value of average $L/M = 190$,
$\tau = 1.3 \times 10^7$ yr. This time is comparable to that for
dispersal of clouds surrounding open clusters; clusters with ages
in excess of $1.0\times 10^7$ yr do not have associated molecular clouds
with masses as large as $10^3 M_{\sun}$ (Leisawitz, Bash, \& Thaddeus
1989).
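The depletion-time arithmetic above is easily verified; a minimal sketch using the coefficients quoted in the text:

```python
# Star formation rate dM/dt = 4e-10 L (M_sun/yr, L in L_sun; Gallagher &
# Hunter 1986), so the gas depletion time is tau = M / (dM/dt)
# = 2.5e9 (M/L) yr.
sfr_per_lsun = 4e-10      # M_sun yr^-1 per L_sun
mean_L_over_M = 190.0     # average from Table 7, solar units

tau_coeff = 1.0 / sfr_per_lsun            # = 2.5e9 yr for M/L in solar units
tau = tau_coeff / mean_L_over_M           # yr
print(f"{tau:.1e}")                       # ~1.3e7 yr, as quoted
```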
\subsection{Luminosity of the Galaxy in CS J = 5\to4}
CS J = 5\to4 emission has been seen toward the centers of
NGC 253, M82, IC 342, Maffei 2, and NGC 6946
(\markcite{Mauersberger \& Henkel 1989};
\markcite{Mauersberger et al. 1989}).
For comparison to studies of other galaxies, we will estimate the luminosity
of the Milky Way in CS 5\to4 [$L_G ({\rm CS \ 5-4})$] from the mean $L ({\rm CS \ 5-4})$\
per cloud in Table 5
and an estimate of the number of such clouds ($n_{cl}$) in the Galaxy.
{}From Table 5
we find $\langle L({\rm CS \ 5-4}) \rangle = 4 \times 10^{-2}$ L$_{\sun}$ and
$\langle\int{T_R^*dv}\rangle = 34$ K km s$^{-1}$, whereas
$\langle\int{T_R^*dv}\rangle = 42$ K km s$^{-1}$ for the whole sample in
Table 2. If we correct for the fact that the mean integrated intensity
of the sources in Table 5 is less than the mean of the whole sample,
we would get $5 \times 10^{-2}$ L$_{\sun}$ for the typical core.
We do not have
a direct measurement of $n_{cl}$ because our survey is incomplete.
The most recent update to the H$_2$O\ maser catalog (Brand et al. 1994)
brings the total
number of masers with IRAS colors characteristic of star formation regions
(see Palagi et al. 1993) to 414. If we assume
that our CS 5\to4 detection rate of 75\% applies equally to the other
sources, we would expect 311 regions of CS J = 5\to4 emission
in a region which covers two thirds of the galaxy. If we correct for the
unsurveyed third of the galaxy, we would estimate the total number
of cloud cores emitting CS J =5\to4 to be 466.
Consequently, we will assume $n_{cl} = 311 - 466$,
with the larger values probably being more likely.
Using these numbers, we calculate $L_G ({\rm CS \ 5-4}) = 15 - 23$ L$_{\sun}$.
Even though we have made some completeness corrections, we expect these
estimates to be underestimates because of our limited sensitivity and the
likelihood of CS emission from dense regions without H$_2$O\ masers.
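The bookkeeping behind the $15 - 23$ L$_{\sun}$ range can be laid out explicitly; a minimal sketch using the numbers quoted above:

```python
# Galactic CS 5-4 luminosity = per-core luminosity times number of cores.
n_masers = 414      # H2O masers with star-formation IRAS colors (Brand et al. 1994)
det_rate = 0.75     # CS 5-4 detection rate, assumed to hold for all sources
L_core = 5e-2       # L_sun per typical core, after the intensity correction

n_low = 311                    # ~ n_masers * det_rate, rounded as in the text
n_high = int(n_low * 3 / 2)    # correct for the unsurveyed third of the Galaxy
L_G_low, L_G_high = n_low * L_core, n_high * L_core
print(L_G_low, L_G_high)       # i.e. the quoted range of 15 - 23 L_sun
```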
These values can be compared with the luminosities of other galaxies
in Table 8. However, our estimate applies to the entire Galaxy excluding
the inner 400 pc,
while the $L ({\rm CS \ 5-4})$\ for other galaxies are derived from
a single beam, centered on the nucleus, with a radius given in the
Table. The inner 100 pc of M82 and NGC 253 emit more CS J = 5\to4
than does our entire Galaxy, excluding the inner 400 pc.
We can also compare our Galaxy to others in terms of its star formation rate
per unit mass. In \S 5.3, we used $L/M$, with $M$ as the virial mass, to
measure this quantity. Because linewidths in galaxy observations are likely to
reflect the total mass, rather than the gaseous mass, we will use
$L$/ $L ({\rm CS \ 5-4})$\ as a stand-in for the star formation rate per unit mass of dense
gas.
We have tabulated the far-infrared luminosity of the galaxies in Table 8,
using the data with the smallest available beam, to provide the best
match to the CS J = 5\to4 observations, which were mostly done with the
IRAM 30-m telescope ($11\arcsec$ beam).
The resulting $L$/$L ({\rm CS \ 5-4})$\ values range from $5.0 \times 10^7$ (NGC 253) to
$1.7 \times 10^9$ (M82). These numbers apply to regions typically 100 pc in
radius.
For our Galaxy, we have only the total $L ({\rm CS \ 5-4})$, so we compare to the total
$L = 1.8 \times 10^{10}$ L$_{\sun}$ (Wright et al. 1991). The result is
$(8-13) \times 10^8$, nominally similar to M82; however, much of the
far-infrared emission of our Galaxy is likely to result from heating by
older stars.
Probably a more useful comparison is to the values of $L$/$L ({\rm CS \ 5-4})$\ in individual
clouds (Table 7). No individual cloud approaches the value in M82.
The highest value in Table 7 is about twice that of NGC 253 and half that
of IC 342.
\section{Summary}
\begin{enumerate}
\item Very dense gas is common in regions of massive star formation.
The gas density for the regions selected by having a water maser
is $\langle$log(n)$\rangle = 5.93$ and the CS column density is $\langle$log(N)$\rangle = 14.42$.
For regions without CS J = 7\to6 emission
the mean density is half as large and the mean column
density is about 7 times smaller.
These results are relatively insensitive to both CS optical
depth and to changes in the kinetic temperature of the region.
The mean density is an order of magnitude less than the critical
density of the J = 7\to6 line because of trapping and multilevel excitation
effects.
\item
In many regions forming massive stars, the
CS emission is well modeled by a single density gas component, but
many sources also show evidence for a range of densities.
{}From simulations of emission from gas composed of two different
densities ($10^4$ and $10^8$ cm$^{-3}$), we conclude that there are
few clouds with filling factors of ultra-dense gas (n$ = 10^8$ cm$^{-3}$)
exceeding 0.25.
\item
The densities calculated for the sources in this survey are comparable
to the densities seen from detailed studies of a few individual
regions forming massive stars.
Therefore, it is likely that very dense gas is
a general property of such regions.
The average density of regions forming massive stars
is at least twice the average in regions forming only low-mass
stars.
\item
Using a subsample of sources whose CS 5\to4 emission was
mapped at the CSO, the average cloud diameter is 1.0 pc and the average
virial mass is 3800 M$_{\sun}$.
\item
We see no evidence for a correlation between linewidth and size or density
and size in our sample.
Our linewidths and densities are systematically larger at a given size
than those predicted by previous relations.
There is, however, a positive correlation between linewidth and density,
the opposite of predictions based on the usual arguments.
\item
The ratio $L/M$, which is a measure of star formation
rate per unit mass for the dense gas probed by CS J=5\to4 emission,
ranges from 24 to 490, with an average value of 190.
\item
The dense gas depletion time, $\tau \sim 1.3 \times 10^7$ yr, is
comparable to the dispersal time of gas around
clusters and OB associations.
\item
The estimated Galactic luminosity in the CS J = 5\to4 line is
$15-23$ L$_{\sun}$. This range of values is considerably less than
what is seen in the inner 100 pc of starburst galaxies. In addition,
those galaxies have a higher ratio of far-infrared luminosity to
CS J = 5\to4 luminosity than any cloud in our sample.
\end{enumerate}
\acknowledgements
We are grateful to the staff of the IRAM 30-m telescope for assistance
with the observations. We also thank T. Xie, C. M. Walmsley, and J. Scalo
for helpful discussion.
This research was supported in part by NSF Grant AST-9317567 to the
University of Texas at Austin.
\clearpage
\begin{table}[h]
\caption{Observing Parameters}
\vspace {3mm}
\begin{tabular}{l c c c c c c c }
\tableline
\tableline
Line & $\nu$ & Telescope & $\eta_{mb}^a$ & $\theta_b^a$ & $\langle
T_{sys}\rangle^b$ & $\delta v$ & $\delta v$ \cr
& (GHz) & & & ($\arcsec$) &(K) & (km s$^{-1}$) &(km s$^{-1}$) \\ \tableline
CS 2\to1 & 97.980968 & IRAM & 0.60 & 25\arcsec & 675 & 0.31$^c$
& 3.06$^d$ \cr
CS 3\to2 & 146.969049 & IRAM & 0.60 & 17\arcsec & 990 & 0.32$^e$ &
2.04$^d$ \cr
CS 5\to4 & 244.935606 & IRAM & 0.45 & 10\arcsec & 2500 & 1.22$^f$
& \nodata \cr
C$^{34}$S 2\to1 & 96.412982 & IRAM & 0.60 & 25\arcsec & 620 &
0.31$^{c,g}$ & 3.11$^d$ \cr
C$^{34}$S 3\to2 & 144.617147 & IRAM & 0.60 & 17\arcsec & 835 &
0.32$^{e,h}$ & 2.07$^d$ \cr
C$^{34}$S 5\to4 & 241.016176 & IRAM & 0.45 & 10\arcsec & 2700 & 1.24$^f$
& \nodata \cr
CS 5\to4 & 244.935606 & CSO & 0.71 & 30\arcsec & 445 & 0.17$^i$
& 1.2$^j$ \cr
C$^{34}$S 7\to6 & 337.396602 & CSO & 0.55 & 20\arcsec & 1000 & 0.12$^i$
& 0.89$^j$ \cr
CS 10\to9 & 489.75104 & CSO & 0.39 & 14\arcsec & 4300 & 0.09$^i$
& 0.61$^j$ \cr
CS 14\to13 & 685.434764 & CSO & 0.31 & 11\arcsec & 2050 & 0.06$^i$
& 0.44$^j$ \cr
\end{tabular}
\tablecomments{(a) Efficiency and beam size; (b) average $T_{sys}$ during
observing;
(c) 100 kHz filterbank; (d) Split 1 MHz filterbank; (e) Autocorrelator; (f) 1
MHz filterbank;
(g) $\Delta{V} = 0.486$ km s$^{-1}$\ for C$^{34}$S 2-1 in autocorrelator;
(h) $\Delta{V} = 0.207$ km s$^{-1}$\ for C$^{34}$S 3-2 in 100 kHz filterbank; (i) 50
MHz AOS;
(j) 500 MHz AOS.}
\end{table}
\clearpage
\begin{table}[h]
\caption{Standin for table 2 ps file. Discard this page.}
\vspace {3mm}
\begin{tabular}{l r r r r r }
\tableline
\tableline
Source & $\int$T$_{R^*}$dV & $V_{LSR}$ & FWHM & $T_R^*(10-9)$ & $T_R^*$(14-13)
\cr
& K km s$^{-1}$ & km s$^{-1}$ & km s$^{-1}$ & K & K \\ \tableline
\end{tabular}
\tablerefs{
(a) Carr et al. (1995)}
\end{table}
\clearpage
\begin{table}[h]
\caption{Standin for table 3 ps file. Discard this page.}
\vspace {3mm}
\begin{tabular}{l r r r r r }
\tableline
\tableline
Source & $\int$T$_{R^*}$dV & $V_{LSR}$ & FWHM & $T_R^*(10-9)$ & $T_R^*$(14-13)
\cr
& K km s$^{-1}$ & km s$^{-1}$ & km s$^{-1}$ & K & K \\ \tableline
\end{tabular}
\tablerefs{
(a) Carr et al. (1995)}
\end{table}
\clearpage
\begin{table}[h]
\caption{Results for CS $J = 10\rightarrow 9$ and $J= 14\rightarrow 13$ Lines}
\vspace {3mm}
\begin{tabular}{l c c c c c }
\tableline
\tableline
Source & $\int$T$_{R^*}$dV & $V_{LSR}$ & FWHM & $T_R^*(10-9)$ & $T_R^*$(14-13)
\cr
& (K km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) & (K) & (K) \\ \tableline
GL2591$^a$ & 2.7 & -5.3 & 1.6 & 1.6 & \nodata \cr
S158A & 22.6 & -57.2 & 2.9 & 7.2 & \nodata \cr
W3(2) & 6.4 & -38.49 & 2.28 & 2.6 & \nodata \cr
W3(OH) & \nodata & \nodata &\nodata & \nodata & $<1.6$ \cr
S255 & 10.3 & 8.2 & 2.3 & 4.4 & $<0.7$ \cr
\end{tabular}
\tablerefs{
(a) Carr et al. (1995)}
\end{table}
\clearpage
\begin{table}[h]
\caption{Diameters, Offsets and Luminosities from CS J = 5$\rightarrow$4 Maps}
\vspace {3mm}
\begin{tabular}{l c c c c c c c }
\tableline
\tableline
Source & ref & Dist. & $\int$T$_{R^*}$dV & $L ({\rm CS \ 5-4})$\ & Beam Corr. & Diameter &
Offset \cr
& & (kpc) & (K km s$^{-1}$) & (10$^{-2}$ L$_{\sun}$) & & (pc) & (arcsec) \\
\tableline
W43S & & 8.5 & 52.8 & 6.1 & 1.5 & 0.9 & (0,5) \cr
W43Main1 & & 7.5 & 22.1 & 5.2 & 4.0& 1.9 & (20,-36) \cr
W43Main3 & & 6.8 & 32.4 & 4.6 & 2.9 & 1.4 & (-8,2) \cr
31.25-0.11 & & 13 & 9.0 & 5.7 & 3.6 & 3.0 & (-12,-15) \cr
31.44-0.26 & & 9.4 & 23.0 & 8.6 & 4.0 & 2.4 & (-2,-4) \cr
32.8+0.2A & & 15 & 64.1 & 15 & 1.0 & $<$1.1 & (-5,-4) \cr
W44 & & 3.7 & 87.9 & 3.1 & 2.5 & 0.7 & (-3,0) \cr
W51W & & 7 & 12.0 & 1.6 & 2.6 & 1.3 & (0,-7) \cr
W51N & & 7 & 79.3 & 17 & 4.2 & 1.8 & (0,-5) \cr
W51M & & 7 & 152 & 19 & 2.4 & 1.2 & (-3,-2) \cr
ON1 & & 6 & 24.4 & 1.6 & 1.7 & 0.7 & (0,0) \cr
K3-50 & & 9 & 11.3 & 1.9 & 2.0 & 1.3 & (-5,5) \cr
ON3 & & 9 & 11.0 & 1.8 & 2.0 & 1.3 & (0,-4) \cr
ON2S & & 5.5 & 22.3 & 1.5 & 2.2 & 0.9 & (-6,0) \cr
ON2N & & 5.5 & 15.4 & 1.0 & 2.1 & 0.8 & (6,5) \cr
S106 & & 0.6 & 5.4 & 0.004 & 2.2 & 0.1 & (20,0) \cr
CRL 2591 & 1& 1.0 & 7.9 & 0.024 & 3.3 & 0.22 & (0,0) \cr
DR21 S & & 3 & 44.8 & 1.0 & 2.3 & 0.5 & (-6,5) \cr
W75(OH) & & 3 & 47.6 & 1.1 & 2.4 & 0.5 & (-6,-5) \cr
W75S1 & & 3 & 9.4 & 0.9 & 9.7 & 1.3 & (-3,7) \cr
W75S3 & & 3 & 6.8 & 0.2 & 3.2 & 0.7 & (0,0) \cr
W75N & & 3 & 35.2 & 0.8 & 2.5 & 0.5 & (-5,6) \cr
CepA & 2 & 0.73 & 30.0 & 0.1 & 5.5 & 0.2 & (10,12) \cr
W3(2) & 2 & 2.3 & 26.3 & 0.8 & 5.5 & 0.7 & (0,12) \cr
GL 490 & 2 & 0.9 & 7.5 & 0.01 & 1.8 & 0.12 & (-14,-12) \cr
\end{tabular}
\tablerefs{
(1) Carr et al. (1995); (2) Zhou et al. (1996)}
\end{table}
\clearpage
\begin{table}[h]
\caption{Standin for table 6 ps file. Discard this page.}
\vspace {3mm}
\begin{tabular}{l r r r r r }
\tableline
\tableline
Source & $\int$T$_{R^*}$dV & $V_{LSR}$ & FWHM & $T_R^*(10-9)$ & $T_R^*$(14-13)
\cr
& K km s$^{-1}$ & km s$^{-1}$ & km s$^{-1}$ & K & K \\ \tableline
\end{tabular}
\tablerefs{
(a) Carr et al. (1995)}
\end{table}
\clearpage
\begin{table}[h]
\caption{Masses and Luminosities}
\vspace {3mm}
\begin{tabular}{l c c c c c c c c c}
\tableline
\tableline
Source & Flag & $M_n$ & $M_N$ & $M_V$ & $f_v$ & $L$ & Ref. & $L/M_V$ & $L$/
$L ({\rm CS \ 5-4})$\ \cr
& & ($M_{\sun}$) & ($M_{\sun}$) & ($M_{\sun}$) & & ($L_{\sun}$) &
&($L_{\sun}/M_{\sun}$) &($10^7$) \\ \tableline
W43 S& C$^{34}$S& $2.3\times10^4$& $2.8\times10^4$& $1.8\times10^3$& 0.08&
\nodata & \nodata & \nodata & \nodata\cr
31.44$-$0.26& C$^{34}$S& $3.9\times10^5$& $1.2\times10^5$& $6.3\times10^3$&
0.02& \nodata & \nodata & \nodata & \nodata \cr
32.8+0.20 A& C$^{34}$S& $5.6\times10^4$& $1.3\times10^4$& $7.0\times10^3$&
0.13& \nodata & \nodata & \nodata & \nodata \cr
W44 & C$^{34}$S& $1.6\times10^4$& $3.9\times10^4$& $4.5\times10^3$& 0.27&
$3.0\times10^5$ & 4 & 67 & 1.0 \cr
W51 W& C$^{34}$S& $5.9\times10^4$& $1.4\times10^4$& $1.5\times10^3$& 0.03&
\nodata & \nodata & \nodata & \nodata \cr
W51 N C2& C$^{34}$S& $6.2\times10^5$& $1.2\times10^5$& $1.3\times10^4$& 0.02&
$4.0\times10^6$ & 4 & 310 & 2.4 \cr
W51M& CS & $7.2\times10^4$& $8.8\times10^4$& $1.6\times10^4$& 0.23&
$2.8\times10^6$& 3 & 170 & 1.5 \cr
K3$-$50& C$^{34}$S& $5.9\times10^4$& $9.4\times10^3$& $6.1\times10^3$& 0.10&
$2.1\times10^6$& 5 & 340 & 11 \cr
ON3& C$^{34}$S& $1.7\times10^4$& $7.3\times10^3$& $2.3\times10^3$& 0.13&
$3.7\times10^5$& 5 & 160 & 2.1 \cr
ON2S&C$^{34}$S& $3.3\times10^4$& $8.0\times10^3$& $9.1\times10^2$& 0.03 &
\nodata & \nodata & \nodata & \nodata \cr
CRL2591& CS& $3.0\times10^2$& $5.0\times10^2$& $3.2\times10^2$& 1.1&
$2.0\times10^4$& 2& 63 & 8.3 \cr
DR21 S& C$^{34}$S& $3.6\times10^3$& $3.5\times10^3$& $1.1\times10^3$& 0.31&
$5.0\times10^5$& 6 & 460 & 5.0 \cr
W75(OH)& C$^{34}$S& $5.6\times10^3$& $9.6\times10^3$& $1.6\times10^3$& 0.27&
$5.4\times10^4$& 8 & 35 & 0.5 \cr
W75 N& C$^{34}$S& $6.6\times10^3$& $3.8\times10^3$& $1.4\times10^3$& 0.22&
$1.8\times10^5$& 3 & 130 & 2.3 \cr
Cep A& CS& $2.5\times10^2$& $4.3\times10^2$& $5.9\times10^2$& 2.3&
$2.5\times10^4$& 1 &42 & 2.5 \cr
W3(2)& C$^{34}$S& $1.9\times10^4$& $2.6\times 10^3$& $6.1\times10^2$& 0.03&
$3.0\times10^5$& 7 & 490 & 3.8 \cr
GL490& CS & $6.2$& $2.8\times10^1$& $9.1\times10^1$& 15& $2.2\times10^3$& 2& 24
& 2.2 \cr
\end{tabular}
\tablerefs{
(1) Evans et al. (1981); (2) Mozurkewich et al. (1986); (3) Jaffe, unpublished;
(4) Jaffe et al. (1985);
(5) Thronson \& Harper (1979); (6) Colom\'e et al. (1995); (7) Campbell et al.
(1995)}
\end{table}
\clearpage
\begin{table}[h]
\caption{Comparison to Other Galaxies}
\vspace {3mm}
\begin{tabular}{l c c c c c c c c }
\tableline
\tableline
Source & Distance & Radius & $\int{T^*_Rdv}$ & $L ({\rm CS \ 5-4})$\ & Ref. & $L$ & Ref. &
$L$/ $L ({\rm CS \ 5-4})$\ \cr
& (Mpc) & (pc) & (K km s$^{-1}$) & (L$_{\sun}$) & & ($10^9$ L$_{\sun}$) & &
($10^7$) \\ \tableline
NGC 253 & 2.5$^a$ & 67 & 23.5 & 154 & 1 & 8 & 3 & 5 \cr
Maffei 2 & 5 & 133 & $<$2 & $<$53 & 2 & 9.5 & 2 & $>18$ \cr
IC 342 & 1.8$^b$ & 48 & 0.76 & 3 & 1 & 0.64 & 4 & 21 \cr
M82 & 3.2$^c$ & 85 & 2.6 & 28 & 1 & 47 & 3 & 170 \cr
NGC 6946 & 5 & 133 & $<$2.8 & $<$74 & 2 & 1.2 & 3 & $>1.7$ \cr
\end{tabular}
\tablerefs{ (a) de Vaucouleurs (1978); (b) McCall (1987); (c) Tammann \&
Sandage (1968);
(1) Mauersberger \& Henkel (1989); (2) Mauersberger et al. (1989);
(3) Smith \& Harvey 1996;
(4) Becklin et al. (1980) for flux, McCall (1987) for distance. }
\end{table}
\clearpage
\section{Introduction}
The physics of diffractive scattering processes
has emerged as one of the most interesting
topics of study in the early running of HERA. Up to now,
the cross section for events
in which the virtual photon diffractively dissociates
into a vector meson or a generic state $X$
has been measured in the H1 and ZEUS experiments either by requiring a
``large rapidity gap" between the proton beam direction and
the most forward energy deposit
recorded in the detector or by subtracting the non-diffractive
background in a statistical way~\cite{hera_diffractive}-\cite{H1_rhopsi_hiq2}.
Here we present the first cross section measurement at
HERA in which diffraction is tagged by the detection of
a high energy scattered proton, thereby eliminating contamination by events with
dissociation of the proton.
The measurement of the proton was performed using the ZEUS
Leading Proton Spectrometer (LPS), which detects protons scattered at
very small angles ($\raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$} 1$~mrad).
In this spectrometer, silicon micro-strip detectors are
used in conjunction with the proton beam line elements to measure
the momentum of the scattered proton. The detectors are positioned
as close to the circulating proton beam as its $10\sigma$ envelope
allows (typically a few mm) by using the ``Roman pot" technique~\cite{romanpot}.
In the configuration used to
collect the data presented here, the LPS consisted of a total of about
22,000 channels.
This paper concentrates on the exclusive process $\gamma p \rightarrow
\rho^0 p$ in $ep$ interactions at small photon virtualities
($Q^2 \approx 0$, the
``photoproduction" region). This reaction
is often called ``elastic", in reference to the vector meson dominance
model (VDM).
Elastic photoproduction of
$\rho^0$ mesons has been
investigated in fixed target experiments at photon-proton
centre-of-mass energies $W \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$} 20$~GeV~\cite{bauer}-\cite{omega} as well
as at HERA energies,
$W \approx 100$-200~GeV~\cite{rho93,rhoh1}.
The process has the characteristic features of
soft diffractive interactions: the dependence of the cross section on $W$
is weak, the dependence on $t$ is approximately exponential, and the
vector meson is observed to retain the helicity of the photon
($s$-channel helicity conservation). Here $t$ is the squared
four-momentum exchanged at the proton vertex.
The data presented in this paper cover the kinematic range
$50<W<100$~GeV, $Q^2 < 1$~GeV$^2$ and $0.073<|t|<0.40$~GeV$^2$.
Elastic events were selected by requiring that the scattered proton
carry more than 98\% of the incoming proton beam energy.
The scattered positron was not detected;
however, $Q^2$ was estimated using transverse momentum balance.
\section{Experimental set-up}
\label{setup}
\subsection{HERA}
The data discussed here were collected in 1994 at
HERA which operated with 820~GeV protons and 27.5~GeV positrons (indicated in
the following with the symbol $e$). The
proton and positron beams each contained 153 colliding bunches, together with
17 additional unpaired proton and 15 unpaired positron bunches. These
additional bunches were used for background studies. The time between
bunch crossings was 96~ns. The typical instantaneous luminosity was
$1.5 \times 10^{30}$~cm$^{-2}$s$^{-1}$ and the integrated luminosity for
this study is $898\pm 14$~nb$^{-1}$.
\subsection{The ZEUS detector}
A detailed description of the ZEUS detector can be found
elsewhere~\cite{detector_a}.
A brief outline of the components in the central ZEUS
detector~\cite{detector_b} which are
most relevant for this analysis is given below, followed by a description
of the Leading Proton Spectrometer.
\subsubsection{Central components and luminosity measurement}
Charged particles are
tracked by the inner tracking detectors which operate in a
magnetic field of 1.43 T provided by a thin superconducting coil.
Immediately surrounding the beam pipe is the vertex detector (VXD),
a drift chamber which consists of 120 radial cells, each with 12
sense wires~\cite{vxd}.
It is surrounded by the central tracking detector (CTD), which consists
of 72 cylindrical drift chamber layers, organised into 9
superlayers covering the polar angle region
$15^\circ < \theta < 164^\circ$\footnote{The
coordinate system used in this paper has the
$Z$ axis pointing in
the proton beam direction, hereafter referred to as ``forward'',
the $X$ axis pointing horizontally towards the centre of HERA and
the $Y$ axis pointing upwards. The polar angle
$\theta$ is defined with respect to the $Z$ direction.}~\cite{ctd}.
The high resolution uranium-scintillator calorimeter (CAL) \cite{CAL}
consists of three parts: the
forward (FCAL), the rear (RCAL) and the barrel calorimeter (BCAL).
Each part is subdivided transversely into towers and
longitudinally into one electromagnetic section (EMC) and one (in RCAL)
or two (in BCAL and FCAL) hadronic sections (HAC). A section of a tower
is called a cell; each cell is viewed by two photomultiplier tubes.
The CAL energy resolution, as measured under test beam conditions,
is $\sigma_E/E=0.18/\sqrt{E}$ for electrons and $\sigma_E/E=0.35/\sqrt{E}$
for hadrons ($E$ in GeV).
The Veto Wall, the C5 counter and the small angle rear tracking detector (SRTD)
all consist of scintillation counters and
are located at $Z = -730$~cm, $Z = -315$~cm and $Z = -150$~cm,
respectively.
Particles which are generated by proton beam-gas interactions upstream
of the nominal $ep$ interaction point hit the RCAL, the Veto Wall, the
SRTD and C5 at different times
than particles originating from the nominal $ep$ interaction point.
Proton beam-gas events are thus rejected by timing measurements
in these detectors.
The luminosity is determined from the rate of the Bethe-Heitler
process, $ep \rightarrow e \gamma p$, where the photon is measured with a
calorimeter (LUMI) located in the HERA tunnel downstream of the
interaction point in the direction of the outgoing positrons~\cite{lumi}.
\subsubsection{The Leading Proton Spectrometer}
The Leading Proton Spectrometer~\cite{detector_a} (LPS) detects
charged particles scattered at small angles and carrying a substantial fraction,
$x_L$, of the incoming proton momentum; these particles remain in the beam pipe
and their trajectory is measured by a system of position sensitive silicon
micro-strip detectors very close to the proton beam.
The track deflection induced by the magnets in the
proton beam line is used for the momentum analysis of the scattered proton.
The layout of the LPS is shown in Fig.~\ref{lps_detailed}; it
consists of six detector stations,
S1 to S6, placed along the beam line in the direction of the outgoing
protons, at $Z=23.8$~m, 40.3~m, 44.5~m,
63.0~m, 81.2~m and 90.0~m from the interaction point, respectively.
Each of the stations S1, S2 and S3 is equipped with an assembly of
six planes of silicon micro-strip detectors parallel to each other
and mounted on a mobile arm, which allows them to be positioned near the proton
beam.
Stations S4, S5 and S6 each consist of two halves,
each half containing an assembly of six planes similar to those of
S1, S2, S3, also mounted on mobile arms, as shown in Fig.~\ref{pots}.
Each assembly has two planes with strips parallel
to the direction of motion of the arm,
two planes with strips at $+ 45^{\circ}$ and two at $- 45^{\circ}$ with respect
to it; this makes it possible to measure the particle
trajectory in three different projections in each assembly.
The dimensions of the detector planes vary from station to station
but are approximately $4 \times 6$~cm$^2$.
The pitch is 115~$\mu$m for the planes with vertical or horizontal
strips and
$115/\sqrt{2}=81~\mu$m for the planes with $\pm 45^{\circ}$ strips.
The distance along $Z$ between neighbouring planes in an assembly is $\approx 7$~mm.
The detector planes are mounted in each assembly with a precision of
about $30$~$\mu$m.
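Each of the three strip orientations measures one projection of the track's transverse position, so the coordinate pair $(x, y)$ is over-determined and can be extracted by a small least-squares fit. A minimal sketch of that reconstruction step (hypothetical geometry and units, not the actual LPS alignment code):

```python
import math

def fit_point(angles, u):
    """Least-squares (x, y) from strip projections u_k = x*cos(a_k) + y*sin(a_k).

    Solves the 2x2 normal equations A^T A p = A^T u directly.
    """
    sxx = sum(math.cos(a) ** 2 for a in angles)
    syy = sum(math.sin(a) ** 2 for a in angles)
    sxy = sum(math.sin(a) * math.cos(a) for a in angles)
    bx = sum(ui * math.cos(a) for ui, a in zip(u, angles))
    by = sum(ui * math.sin(a) for ui, a in zip(u, angles))
    det = sxx * syy - sxy ** 2
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

# Three projections: strips along one axis plus strips at +/- 45 degrees.
angles = [0.0, math.pi / 4, -math.pi / 4]
x_true, y_true = 1.25, -0.40                     # mm, hypothetical track position
u = [x_true * math.cos(a) + y_true * math.sin(a) for a in angles]
print(fit_point(angles, u))                      # -> approximately (1.25, -0.40)
```

With noisy strip measurements the same normal-equation solve gives the best-fit point, and the residuals provide a handle on the coordinate resolution.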
The detector planes are inserted
in the beam pipe by means of re-entrant ``Roman pots" which allow the
planes to operate at atmospheric pressure.
A pot consists of a stainless steel cylinder with an open end away from the
beam; the other end is closed. The silicon detector planes are inserted
from the open end and are moved in until they are at about 0.5~mm from the
closed end. The whole cylinder can be inserted transversely into the beam
pipe. Figure~\ref{pots} illustrates the principle
of operation. The walls of the pots are 3~mm thick, except
in front of and behind the detector planes, where they are
400~$\mu$m thick; the thickness of the pot bottom walls facing the beam is
also 400~$\mu$m. The vacuum seal to the proton beam pipe
is provided by steel bellows.
The pots and the detector planes are positioned by
remotely controlled motors and are retracted during the
filling operations of the collider to increase the aperture of the
vacuum chamber; this also minimises the radiation damage to the
detectors and the front-end electronics.
In stations S1, S2, S3 the detector planes
are inserted into the beam pipe horizontally from the outside of the HERA
ring towards the centre. In stations S4, S5, S6, the detector planes
in the two halves of each station independently approach the beam from above
and from below. In the operating position the upper and lower
halves partially overlap (cf. Fig.~\ref{pots}). The offset
along the beam direction between the centres of the upper and lower pots
is $\approx 10$~cm. Stations S5 and S6 were used in an earlier
experiment at CERN and were adapted to the HERA beam line~\cite{ua4}.
Each detector plane has an elliptical cutout which
follows the profile of the
10$\sigma$ envelope of the beam, where~$\sigma$ is the standard deviation
of the spatial distribution of the beam in the transverse plane.
Since the 10$\sigma$ profile differs from station to station,
the shape of the cutout varies from station to station;
in data taking conditions the distance of each detector from the beam
centre is also different and ranges from 3 to 20~mm.
Small variations of the detector positions
from fill to fill are necessary during operation in order to follow the
changes of the beam position
and adapt to the background conditions.
The detector planes are read out by two types of VLSI chips mounted on the
detector support:
a bipolar amplifier-comparator~\cite{tekz} followed by a radiation hard
CMOS digital pipeline~\cite{dtsc}, which operates with a clock
frequency of 10.4~MHz, synchronous with the HERA bunch crossing. Each
chip has 64 channels reading out 64 adjacent strips.
The chips are radiation hard up to doses of several
Mrad.
\bigskip
A simplified
diagram of the spectrometer optics is shown in Fig.~\ref{lps},
in which the beam line elements
have been combined to show the main
optical functions. Together with the HERA proton beam magnets,
the six LPS stations form two spectrometers:
\begin{enumerate}
\item Stations S1, S2, S3 use the combined horizontal bending power of a
septum magnet and three magnetic septum half-quadrupoles.
S1, S2, S3 were not operational in 1994 and are not discussed further here.
\item Stations S4, S5, S6 exploit in addition the vertical bending provided by
three main dipole magnets (BU). These stations were used for the present
measurement.
\end{enumerate}
The insertion of the detectors into the operating positions
typically begins as soon as the beams are brought into collision. Among
the conditions required prior to beginning the insertion are the
following: (i) proton beam position as measured with the HERA beam position
monitor next to S4 within 1~mm of the nominal position;
(ii) background levels as measured in counters downstream of the main proton
beam collimators, in the C5 counter and in the trigger counters of the
Forward Neutron Calorimeter~\cite{FNC} (located downstream of
S6 at $Z \approx 109$~m) stable and below
given thresholds. About fifty minutes were necessary in 1994
to insert the detector planes. This and the fact that the beam
conditions did not always allow safe insertion of the detectors
results in the reduced value of the integrated luminosity available
for this analysis with respect to other analyses of the ZEUS 1994 data.
The strip occupancy during data taking, i.e. the average number of
strips firing per trigger divided by the total number of strips,
depended on the beam conditions but was
typically less than 0.1\%, with small contributions from noise
and synchrotron radiation.
The fraction of noisy and malfunctioning
channels in 1994 was less than 2\%; they were due to bad detector strips and
dead or noisy front-end channels. The efficiency of the detector planes,
after excluding these channels, was better than 99.8\%.
The LPS accepts
scattered protons carrying a fraction of the beam momentum, $x_L$,
in the range $x_L \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$>$} 0.4$ and with $0 \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$} p_T \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$} 1$~GeV,
where $p_T$ is the transverse momentum of the proton with respect to
the incoming beam direction.
With the configuration installed in 1994 (S4, S5, S6), the
resolution in $x_L$ is better than
0.4\% at 820~GeV and the $p_T$ resolution is about 5~MeV. The latter is
less than the intrinsic transverse momentum spread in the
proton beam at the interaction point
(with rms of about 40~MeV horizontally and about $90$~MeV
vertically) due to the beam divergence of $\approx 50$~$\mu$rad
in the horizontal and $\approx 110$~$\mu$rad in the vertical plane.
The LPS resolution is further discussed in section~\ref{offline}.
\subsubsection{Reconstruction of an LPS track}
\label{reconstruction}
Tracks are reconstructed in stages, proceeding from individual hits
to full tracks~\cite{tesi_roberto}.
Noisy and malfunctioning channels are masked out and clusters of
adjacent hit strips are searched for in each detector plane.
Most clusters are one strip wide only (typically $\approx 25\%$ of the
clusters have more than 1 strip).
Track segments are then found independently in each detector assembly
of six planes.
As a first step, matching clusters in the two planes with
the same strip orientation are combined.
Candidate local track segments are then found by combining pairs of clusters
belonging to different projections; when a pair of such
clusters intersects within the region covered by the sensitive area of
the detectors, a corresponding cluster in the remaining projection is
searched for.
In order to reduce the number of candidates, local track segments that
traverse the overlap region of the detectors in the upper and the
lower halves of the station are treated
as one candidate. Finally, all hits belonging to a candidate (up to twelve
for tracks crossing the two halves, up to six otherwise)
are used in a fit to find the transverse coordinates
of the track at the value of $Z$ corresponding to the centre of the
station. The spatial resolution on these coordinates is about
30~$\mu$m.
Figure~\ref{tesi3_8} shows the position of the
reconstructed coordinates in the stations S4, S5 and S6 for a
typical run. The regions with a high density of reconstructed hits
in the overlap zone between the upper and the lower detectors
correspond to tracks with $x_L$ close to unity. Lower $x_L$ tracks
are deflected upwards and focussed horizontally onto a vertical line.
For $x_L$ close to unity, this focus line is downstream of S6;
it approaches S6 as $x_L$ decreases and reaches S6 for $x_L\approx 0.7$.
This explains the fact that for low $x_L$ tracks, the impact points
in S5 and S6 tend to lie in a region which becomes narrower as the
vertical coordinate increases.
We distinguish two classes of events: those which are detected in all three of
the stations and those which are detected in only two stations.
In the latter, the interaction vertex
position is required as a third point to measure the momentum. Tracks
detected in three stations can be extrapolated backwards to
$Z=0$ to also measure the transverse position of the interaction vertex.
In both cases, coordinates reconstructed in pairs of different stations are
first combined into track candidates and the track momentum is determined using
the average $ep$ interaction
vertex with coordinates $(X_0,Y_0)$, found on a run-by-run basis with
the sample of three-station tracks.
Linear matrix equations relate the horizontal and vertical coordinates
of the positions ($h_k,v_k$) and slopes ($h^{\prime}_k=dh_k/dl$,
$v^{\prime}_k=dv_k/dl$) of the track at each station to the position
($X_0,Y_0$) and slope ($X_0^{\prime},Y_0^{\prime}$) of the track at the
interaction point. The coordinate along the
beam trajectory is $l$. The positions $(h_k,v_k)$ and slopes
($h_k^{\prime},v_k^{\prime}$) are relative to the nominal beam position and
direction at that value of $l$.
The nominal beam crosses the interaction
point ($Z=0$) at $X=Y=0$. For the horizontal direction one has:
\begin{eqnarray}
\left( \begin{array}{c}
h_k \\
h^{\prime}_k
\end{array}\right)=
\left( \begin{array}{cc}
m_0 & m_1 \\
m_2 & m_3
\end{array}\right)
\left( \begin{array}{c}
X_0 \\
X^{\prime}_0
\end{array}\right)+
\left( \begin{array}{c}
b_0 \\
b_1
\end{array}\right).
\label{matrix}
\end{eqnarray}
\noindent
An independent equation of the same form
can be written for the vertical direction.
The matrix elements
$m_{i}$ are known functions of $x_L$
which describe the beam optics including the effects
of quadrupoles and drift
lengths. The quantities $(b_0,b_1)$, also functions of
$x_L$, describe the deflection induced by the dipoles and by the
quadrupoles in which the beam is off axis; since the beam is taken as reference,
they vanish as $x_L \rightarrow 1$.
Equation~(\ref{matrix}) and the corresponding one for the vertical
direction can be written for a pair of stations $(a,b)$;
upon eliminating the unknowns $X^{\prime}_0$ and $Y^{\prime}_0$, one finds
\begin{eqnarray}
h_b &= &M^{ab}_h(x_L) h_a + C^{ab}_h(x_L,X_0),\label{matrix01}\\
v_b &= &M^{ab}_v(x_L) v_a + C^{ab}_v(x_L,Y_0),
\label{matrix02}
\end{eqnarray}
\noindent
where $M^{ab}$ and $C^{ab}$ are functions of the matrix elements
$m_i$ and $b_i$.
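The elimination can be sketched explicitly; the station superscripts on the $m_i$ and $b_i$ below are introduced here for illustration only. Solving the first row of equation~(\ref{matrix}) at station $a$ for $X^{\prime}_0$ and substituting into the corresponding row at station $b$ gives

```latex
% Sketch of the elimination; station superscripts on m_i and b_i are
% introduced here for illustration only.
\begin{eqnarray*}
X^{\prime}_0 &=& \frac{h_a - m^a_0 X_0 - b^a_0}{m^a_1}, \\
h_b &=& \frac{m^b_1}{m^a_1}\, h_a
      + \left( m^b_0 - \frac{m^b_1}{m^a_1}\, m^a_0 \right) X_0
      + b^b_0 - \frac{m^b_1}{m^a_1}\, b^a_0 ,
\end{eqnarray*}
```

so that $M^{ab}_h = m^b_1/m^a_1$, while $C^{ab}_h$ collects the term proportional to $X_0$ and the dipole offsets; the $x_L$ dependence enters through the $m_i$ and $b_i$.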
These two equations are independent, apart from the common
dependence on $x_L$, and can be used to obtain two
independent estimates of $x_L$.
If the values obtained are compatible, the pair of coordinates
is retained as a candidate two-station track.
As a final step
of the pattern recognition, three-station track candidates are
searched for using pairs of two-station candidates, e.g. one in S4, S5 and
another in S5, S6. If a pair uses the same hits in the station
common to the two tracks,
if the projections of the two tracks on the horizontal (non-bending) plane
coincide and
if the momenta assigned to each track are compatible, then the
two candidates are merged in a three-station track candidate.
Two-station and three-station candidates are then
passed to a conventional track-fitting stage.
A track $\chi^2$ is
defined as
\begin{equation}
\chi^2 = \left[ \sum_{i}
{{(s_i - S_i(\psi))^2} \over {\sigma_i^2}}\right]
+ {{(X_V - X_0)^2} \over {\sigma_{X_V}^2}}
+ {{(Y_V - Y_0)^2} \over {\sigma_{Y_V}^2}},
\label{eq:chisquared}
\end{equation}
\noindent
where the sum runs over all clusters in all planes assigned to a track. Here
$s_i$ is the cluster position, $\sigma_i$ the uncertainty associated with it
(which includes the effects of multiple scattering and the
contribution of the cluster width; typical values range from 50 to
100~$\mu$m),
$(X_V,Y_V)$ the interaction vertex coordinates in the $X,Y$ plane,
$\sigma_{X_V}$ and $\sigma_{Y_V}$ the nominal widths of the
vertex distribution;
$S_i$, a function of the five track parameters
$\psi=(X_V,Y_V,X_V^\prime,Y_V^\prime,x_L)$, is the predicted cluster
position calculated from equation~(\ref{matrix}) and the corresponding one
in the vertical direction; the quantities $X_V^\prime$, $Y_V^\prime$
indicate the track slopes at the interaction vertex.
The last two terms in eq.~(\ref{eq:chisquared}) constrain the track
to the interaction vertex.
This $\chi^2$ is minimised with respect to the five track parameters,
and the best track parameters, together with the error matrix, are determined.
In the present analysis, for $x_L$ close to unity,
the average value of $\chi^2/ndf$ is
$\approx 1$, where $ndf$ is on average 7.3 for two-station tracks and 17.3
for three-station tracks. Three-station tracks are 60\% of the total.
\subsubsection{Alignment of the LPS}
\label{alignment}
The alignment of the LPS relies on survey
information for locating the detector planes in $l$ and on high-energy proton
tracks for locating them in $h$ and $v$.
The individual detector planes are first aligned within one station,
then the relative alignment of the stations
is determined, and finally
the three stations S4, S5, S6 are aligned relative to the ZEUS
detector. Typical accuracies in $h$ and $v$ are better than 20~$\mu$m.
The actual path of the proton beam is also determined. These steps are
described below.
Tracks traversing the region in which the active areas of the detector
planes in the upper and lower halves of a station overlap are used
to align the detector planes within each half as well as to determine
the position of the upper with respect to the lower half. With this
procedure each plane is aligned independently; rotations of the
detectors around the $l$ axis are also determined.
The relative alignment between the S4, S5, S6 stations in $h$
is then found by exploiting the
fact that tracks are straight lines in this projection.
The only magnetic elements between S4 and S6 are the
dipoles between S4 and S5 which deflect particles vertically.
A sample of tracks with known deflection (i.e. known momentum)
is thus necessary to align the stations relative to each other in $v$.
This can be obtained independently of the LPS using the ZEUS calorimeter:
\begin{eqnarray}
x_L^{CAL}=1-\sum_i(E_i+p_{Zi})/(2E_p),
\label{align_x_l}
\end{eqnarray}
\noindent
where the sum runs over all calorimeter cells, $E$ is the energy measured
in each cell and $p_Z=E\cos{\theta}$, with $\theta$ the polar angle of
each cell. The symbol $E_p$ denotes the incoming proton energy.
Equation~(\ref{align_x_l}) follows from energy and momentum conservation:
$\sum(E+p_Z)_{IN}=\sum(E+p_Z)_{OUT}$, where the sums run
over the initial and final state particles, respectively, and
$\sum(E+p_Z)_{IN}=2E_p$.
Events are selected with $x_L^{CAL}>0.99$; these events have a clear peak in the
$x_L$ spectrum as measured by the LPS, with very little background underneath.
The relative positions of the stations are adjusted so that the peak appears
at $x_L$ of unity.
For the 1994 data, about 20,000 events were used.
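As an illustration, equation~(\ref{align_x_l}) reduces to a simple sum over calorimeter cells; the sketch below uses hypothetical cell energies and angles, with $E_p=820$~GeV as in the text:

```python
import math

E_P = 820.0  # proton beam energy in GeV

def x_l_cal(cells, e_p=E_P):
    """x_L^CAL = 1 - sum_i E_i (1 + cos(theta_i)) / (2 E_p) over calorimeter cells.

    cells: list of (energy_GeV, polar_angle_rad) tuples.
    """
    return 1.0 - sum(e * (1.0 + math.cos(theta)) for e, theta in cells) / (2.0 * e_p)

# Hypothetical event with a few central and backward deposits
cells = [(12.0, 3.0), (5.0, 1.6), (2.0, 2.4)]
print(x_l_cal(cells))
```

Backward cells ($\cos\theta \approx -1$) contribute little to the sum, so events with no energetic forward activity give $x_L^{CAL}$ close to unity.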
The vertical alignment is finally checked by using events with elastic
photoproduction of $\rho^0$ mesons -- those
discussed in the present paper -- exploiting the fact that these events
have scattered protons with $x_L$ very close to unity: $x_L$ can be written,
for elastic $\rho^0$ photoproduction, as
$x_L = 1 - (Q^2 + M_{\rho}^2 + |t|)/W^2$, where
$M_{\rho}$ is the $\rho^0$ meson mass;
for the sample used, the value of $x_L$ differs from unity by at most 0.2\%.
In order to align S4, S5 and S6 with respect to the proton beam line, tracks
traversing all three stations are extrapolated to $Z=0$, taking into account
the fields of all traversed magnetic
elements (mostly quadrupoles, as shown in Fig.~\ref{lps_detailed}). The detectors are
aligned with respect to the quadrupole axes by requiring that,
independent of $x_L$,
the average position of the extrapolated vertex be
the same as that measured by the central tracking detectors.
At this point the detectors are aligned
relative to the proton beam line and to the HERA quadrupole axes, and
hence to the other components of ZEUS. About 40,000 three-station tracks
were used for this procedure.
Finally, the average angle of the proton beam with respect
to the nominal beam direction is determined by using events of elastic
photoproduction of $\rho^0$ mesons.
For such events the transverse components of the scattered proton
momentum balance on average those of the $\rho^0$ meson.
The mean value of the sum of $p_X^{LPS}$ and $p_X^{CTD}$, and similarly
for $p_Y$,
is set to zero by adding a constant offset to the fitted angle of the
LPS tracks at the
interaction vertex for all events. Here $p_X^{LPS}$ and $p_X^{CTD}$ indicate
the $X$ component of the proton momentum as measured by the LPS and
of the $\rho^0$ momentum as measured by the CTD, respectively.
This procedure defines the direction of the $Z$ axis.
Typical values of the beam
offset are $-15$~$\mu$rad and $-100$~$\mu$rad in the horizontal and
vertical directions, respectively, with respect to the nominal
beam direction. The 1994 running period was split
into three parts during which the beam tilt was relatively constant and
the offset was determined for each part.
Fig.~\ref{ctdlps} shows, separately for the $X$ and $Y$
projections, the sum of the proton and the $\rho^0$ transverse momenta after
this correction, which is determined by requiring that both histograms be
centred on zero. The width of the distributions is dominated by the
intrinsic
spread of the transverse momentum in the beam. The other (minor)
contributions are the LPS and CTD resolutions and the fact
that the transverse momentum of the scattered positron is not
identically zero since $Q^2$ is not zero. Note that the effect of
non-zero $Q^2$ is just to widen the
distributions of Fig.~\ref{ctdlps}, not to shift them,
since the $X$ and $Y$ components
of the scattered positron momentum are centred on zero. In addition
events with $Q^2\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$>$}} 0.01$~GeV$^2$ contribute to the non-Gaussian tails.
The $x_L$ scale is not affected
by this tilt correction. The sensitivity of the determination of
$t$ to the value of the tilt is weak, as discussed in section~\ref{results}.
The effect of this correction is negligible for all quantities
measured in the central ZEUS apparatus.
As mentioned earlier, the detectors are in the retracted position
between HERA runs;
the positions of the pots (and hence of the detector planes)
vary from one proton fill to the next by up to a few millimetres
in $Y$ (rarely in $X$) depending on the beam position and on
the background conditions. Coordinate reconstruction thus cannot
be more accurate than the reproducibility of the detector
positioning system folded with the alignment accuracy.
This is monitored by the run-to-run dependence of the difference
between the coordinates of the track
impact point as measured by the
detector planes in the upper and lower halves of a station
for tracks in the overlap region. Note that this can be done since the
alignment procedure described above is carried out using data from the
whole running period, i.e. not on a run-by-run basis.
The rms value of this difference is $\approx 25~\mu$m
and is consistent
with the specifications of the mechanics and commensurate with the detector
resolution.
\section{Analysis}
\label{analysis}
\subsection{Event selection}
\label{event_selection}
\subsubsection{Trigger}
ZEUS uses a three-level trigger system~\cite{detector_a,detector_b}.
For the present data, the trigger
selected events with photoproduction of a vector meson decaying
into two charged particles with no requirement that either
the scattered positron or the scattered proton be detected.
The first-level trigger required an energy deposit of at least 464~MeV
in the electromagnetic section of RCAL (excluding the towers immediately
around the beam pipe) and at least one track candidate in the CTD. Events
with an energy deposit larger than
1250~MeV in the FCAL towers surrounding
the beam pipe were rejected in order to
suppress proton beam-gas events along with a large fraction of other
photoproduction events. No requirements were made on the LPS information.
At the second-level trigger, the background was reduced by using the
measured times of the energy deposits and the summed energies from the
calorimeter.
The full event information was available at the third-level trigger;
however, only a simplified reconstruction procedure was used.
Tighter timing cuts as well as algorithms to remove
cosmic muons were applied.
One reconstructed vertex
was demanded, with a $Z$ coordinate within $\pm 66$~cm of the nominal
interaction point. Furthermore, the events were required to satisfy at
least one of the following conditions:
\begin{enumerate}
\item fewer than four reconstructed tracks and at least one pair with
invariant mass less than 1.5~GeV (assuming they are pions);
\item fewer than six reconstructed tracks and no pair with invariant mass
larger than 5~GeV (again assuming pions).
\end{enumerate}
\noindent
Both sets of third-level triggers were prescaled by a factor of six.
Approximately $3 \times 10^5$ events were selected in this way,
from an integrated luminosity of $898 \pm 14$~nb$^{-1}$
(the luminosity corresponding to no prescale).
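The pair-mass conditions of the third-level trigger rely on the standard invariant-mass calculation under the pion hypothesis; a minimal sketch, with hypothetical track momenta, is:

```python
import math

M_PI = 0.1396  # charged pion mass in GeV

def pair_mass(p1, p2):
    """Invariant mass of a two-track pair under the pion hypothesis.

    p1, p2: track momenta as (px, py, pz) tuples in GeV.
    """
    e = sum(math.sqrt(px * px + py * py + pz * pz + M_PI ** 2)
            for px, py, pz in (p1, p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(e * e - px * px - py * py - pz * pz)

# Hypothetical pair of opposite-charge tracks (momenta in GeV)
m = pair_mass((0.3, 0.0, 0.2), (-0.25, 0.1, 0.3))
print(m)  # well below the 1.5 GeV threshold of condition 1
```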
\subsubsection{Offline requirements}
\label{offline}
After performing the full reconstruction of the events,
the following offline requirements were imposed to select elastic
$\rho^0 \rightarrow \pi^+\pi^-$ candidates with a high momentum scattered
proton:
\begin{itemize}
\item Exactly two tracks in the CTD from particles of opposite charge, both
associated with the reconstructed vertex.
\item The $Z$ coordinate of the vertex within $\pm30$~cm
and the radial distance within 1.5~cm of the nominal interaction point.
\item In BCAL and RCAL, not more than 200 MeV in any EMC (HAC)
calorimeter cell which is more than 30~cm (50~cm) away from the
extrapolated impact position of either of the two tracks. This cut
rejects events with additional particles, along with events with the
scattered positron in RCAL.
\item One track in the LPS with $0.98<x_L<1.02$.
This corresponds to a $\pm 5\sigma$ window around $x_L=1$,
for an $x_L$ resolution of $0.4\%$.
As stated in section~\ref{alignment}, elastic photoproduction of $\rho^0$ mesons
peaks at values
of $x_L$ which differ from unity by less than $2 \times 10^{-3}$.
This requirement is used to tag elastic events.
\item Protons whose reconstructed trajectories come closer than 0.5~mm
to the wall of the beam pipe, at any point between the vertex and the last
station hit, were rejected. This eliminates events where the proton could have
hit the beam pipe wall and showered. In addition, it removes any
sensitivity of the acceptance to possible misalignments of the HERA beam
pipe elements.
\item The value of the $\chi^2/ndf$ of the fit to the proton track
(cf. section~\ref{reconstruction}) less than 6.
\end{itemize}
\noindent
The pion mass was assigned to each CTD track and the analysis was
restricted to events reconstructed in the kinematic region defined by:
\begin{eqnarray}
0.55 < & M_{\pi\pi} & < 1.2 ~\mbox{GeV}, \nonumber \\
0.27 < & p_T & < 0.63 ~\mbox{GeV}, \nonumber \\
50 < & W & < 100 ~\mbox{GeV}, \label{kin}\\
& Q^2&< 1 ~\mbox{GeV}^2. \nonumber
\end{eqnarray}
\noindent
The restricted range in the two-pion invariant mass $M_{\pi\pi}$
reduces the contamination from
reactions involving other vector mesons, in particular from elastic
$\phi$ and $\omega$ production. The limits on $p_T$, which is measured
with the LPS, remove regions in
which the acceptance of the LPS changes rapidly (cf. section~\ref{montecarlo}).
The photon-proton centre-of-mass energy $W$ and the mass $M_{\pi\pi}$ were determined from
the momenta of the two pions~\cite{rho93}.
Energy and momentum conservation relate the photon energy,
$E_{\gamma}$, to the two-pion system energy $E_{\pi \pi}$ and
longitudinal momentum $p_{Z\pi \pi}$ by
$2E_{\gamma} \approx (E_{\pi \pi} - p_{Z\pi \pi})$,
under the assumption that the positron emits the virtual photon with zero
transverse momentum. Therefore $ W^2 \approx 4 E_\gamma E_p \approx
2 (E_{\pi \pi} - p_{Z\pi \pi}) E_p.$
From the Monte Carlo study discussed in the next section, the resolution on
$W$ has been found to be about 2~GeV; that on $M_{\pi\pi}$ is about $30$~MeV.
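The relation $W^2 \approx 2(E_{\pi\pi}-p_{Z\pi\pi})E_p$ can be evaluated in a few lines; the pion momenta below are hypothetical, with $E_p=820$~GeV:

```python
import math

M_PI = 0.1396  # charged pion mass in GeV
E_P = 820.0    # proton beam energy in GeV

def w_gamma_p(pions):
    """W = sqrt(2 (E_pipi - pZ_pipi) E_p) from the two pion momenta."""
    e_tot = sum(math.sqrt(px * px + py * py + pz * pz + M_PI ** 2)
                for px, py, pz in pions)
    pz_tot = sum(pz for _, _, pz in pions)
    return math.sqrt(2.0 * (e_tot - pz_tot) * E_P)

# Hypothetical pi+ pi- pair; negative pz points along the photon direction
pions = [(0.4, 0.1, -0.9), (-0.2, -0.3, -1.1)]
print(w_gamma_p(pions))
```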
The combination of the trigger requirements and the offline cuts, excluding
that on $Q^2$, limits $Q^2$ to be less than
$\approx 4$~GeV$^2$.
However, unlike in previous ZEUS analyses of untagged photoproduction
events, for the
present data $Q^2$ was determined event by event. By exploiting the
transverse momentum balance of the scattered positron, the $\pi^+ \pi^-$
pair and the scattered proton, one obtains
$p_{T}^e$, the transverse momentum of the scattered positron.
The variable $Q^2$ was then calculated from $Q^2=(p_{T}^e)^2/(1-y)$, where
$y$ is the fraction of the positron energy transferred
by the photon to the hadronic final state, in the proton rest frame; it
was evaluated as $y \approx W^2/s$, where $\sqrt{s}$
is the $ep$ centre-of-mass energy.
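A minimal sketch of this reconstruction follows; the transverse momenta are hypothetical, and the beam energies $E_e=27.5$~GeV and $E_p=820$~GeV are the nominal HERA values assumed here:

```python
E_E = 27.5   # positron beam energy in GeV (assumed nominal HERA value)
E_P = 820.0  # proton beam energy in GeV

def q2_reconstructed(px_rho, py_rho, px_p, py_p, w):
    """Q^2 = (pT^e)^2 / (1 - y), with pT^e obtained from the transverse
    momentum balance of the rho0 and the scattered proton, and y = W^2/s."""
    s = 4.0 * E_E * E_P  # square of the ep centre-of-mass energy
    pt_e_sq = (px_rho + px_p) ** 2 + (py_rho + py_p) ** 2
    y = w * w / s
    return pt_e_sq / (1.0 - y)

# Hypothetical transverse momenta (GeV) at W = 73 GeV
print(q2_reconstructed(0.21, -0.05, -0.15, 0.12, 73.0))
```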
Figure~\ref{figq2_resolution} shows a scatter plot of the reconstructed
and the generated value of $Q^2$ for the sample of Monte Carlo events
used to evaluate the acceptance (cf. section~\ref{montecarlo}); the line
shows
the expected average relationship between these two quantities assuming
a beam transverse momentum distribution with $\sigma_{p_X}=40$~MeV and
$\sigma_{p_Y}=90$~MeV.
At small values of $Q^2$, the resolution on $Q^2$ is dominated by the beam
transverse momentum spread; it is about
100\% at $3 \times 10^{-2}$~GeV$^2$, 40\% at 0.1~GeV$^2$ and
20\% at 1~GeV$^2$.
The final sample contains 1653 events.
Figure~\ref{dataMC} shows the
$M_{\pi\pi}$, $W$, $p_X$, $p_Y$, $p_T$ and $x_L$
distributions after the offline selections; the variables $p_X$ and $p_Y$
denote the transverse components of the outgoing proton momentum with respect to the
incoming beam axis and $p_T^2=p_X^2+p_Y^2$. The invariant mass plot
is dominated by the $\rho^0$ peak. The shape of the $p_X$ spectrum, with
two well separated peaks, is a
consequence of the fact, discussed earlier, that events with $x_L$ close
to unity only populate a narrow region of the detectors at $v\approx 0$.
As discussed earlier, elastic $\rho^0$ photoproduction peaks at $x_L=1$ to
within $< 2 \times 10^{-3}$; the width of the $x_L$ distribution in
Fig.~\ref{dataMC}f shows that the resolution of the LPS in $x_L$
is $\approx 0.4\%$. For the same events,
Fig.~\ref{pxvspy} shows the scatter plot of the reconstructed $X$ and
$Y$ components of the proton momentum.
For the present measurement,
only the $p_T$ region between the dashed
vertical lines in Fig.~\ref{dataMC}e was used.
The variable $t=(p-p^{\prime})^2$, where $p$ and
$p^{\prime}$ are the incoming and the scattered proton four-momenta,
respectively, can be evaluated as follows:
\begin{eqnarray}
t=(p-p^{\prime})^2 \approx -\frac{p_T^2}{x_L}
\left[1+(M_p^2/p_T^2)(x_L-1)^2\right],
\end{eqnarray}
\noindent
where $M_p$ is the proton mass and terms of order
$(M_p^2/\vec{p^{\prime}}^2)^2$ or higher are neglected.
For the present events, which have $x_L\approx 1$, the approximation
$t\approx -p_T^2/x_L\approx -p_T^2$ was used.
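The quality of the approximation $t\approx -p_T^2$ can be checked numerically; the values of $p_T$ and $x_L$ below are hypothetical but typical of this sample:

```python
M_P = 0.938  # proton mass in GeV

def t_from_lps(p_t, x_l):
    """t = -(pT^2/x_L) [1 + (M_p^2/pT^2)(x_L - 1)^2], as in the text."""
    return -(p_t ** 2 / x_l) * (1.0 + (M_P ** 2 / p_t ** 2) * (x_l - 1.0) ** 2)

# For x_L close to unity the correction term is negligible and
# t is well approximated by -pT^2
print(t_from_lps(0.4, 0.998), -0.4 ** 2)
```

For these values the exact and approximate expressions differ at the permille level.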
Finally, Fig.~\ref{q2} shows the distribution of the reconstructed
values of $Q^2$. As discussed above, at small values of $Q^2$ the
intrinsic spread of the beam transverse momentum dominates. The requirement
that $Q^2$ be less than 1 GeV$^2$ removes 7 events.
The median $Q^2$ of the data, estimated with the Monte Carlo study
discussed in the next section, is approximately $10^{-4}$~GeV$^2$.
\subsection{Monte Carlo generators and acceptance calculation}
\label{montecarlo}
The reaction $ep \rightarrow e\rho^0 p$ was modelled using the
DIPSI~\cite{dipsi} generator, which was shown to reproduce the ZEUS
$\rho^0$ photoproduction data~\cite{rho93}. The effective $W$
dependence of the $\gamma p$ cross section for the events generated
was of the type $\sigma \propto W^{0.2}$. The $t$ distribution was
approximately exponential with a slope parameter of 9.5~GeV$^{-2}$.
The two-pion invariant mass, $M_{\pi\pi}$, was generated so as to
reproduce, after reconstruction, the measured distribution. The
angular distribution of the decay pions was assumed to be that expected
on the basis of $s$-channel helicity conservation~\cite{shilling-wolf}.
The simulated events were passed through the same reconstruction and
analysis programs as the data. In Figures~\ref{dataMC} and~\ref{q2}
the distributions of the reconstructed data (not corrected for acceptance)
over $M_{\pi \pi}$, $W$, $p_X$, $p_Y$, $p_T$, $x_L$ and $Q^2$
are compared with those obtained for the reconstructed Monte Carlo events.
The Monte Carlo is in reasonable agreement with the data.
Figure~\ref{acceptance}a shows the overall acceptance as a function
of $t$, obtained using DIPSI. The acceptance includes the effects of
the geometric acceptance of the apparatus, its efficiency and
resolution and the trigger and reconstruction efficiencies.
Since the detector planes cannot be positioned in the beam,
the acceptance vanishes at small values of $t$.
Conversely, in the $p_X, p_Y$ region covered by the detectors,
the acceptance of the LPS is large, as shown in Fig.~\ref{acceptance}b,
which shows the geometric acceptance of the LPS alone, irrespective
of the acceptance of the rest of the ZEUS apparatus.
The region of LPS geometric acceptance larger
than 95\% for both $p_X>0$ and $p_X<0$ maps into that of
$0.25 \mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} p_T \mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} 0.65$~GeV;
as discussed in section~\ref{offline}, events outside this region
are not used in the present analysis.
For events with elastic $\rho^0$ photoproduction, the geometric
acceptance of the LPS, averaged over azimuth, is approximately
6\%.
As discussed in the next section, in order to estimate the
contamination from the reaction
$ep \rightarrow e \rho^0 X_N$, where $X_N$ is
a hadronic state of mass $M_N$ resulting from the diffractive
dissociation
of the proton, the PYTHIA generator~\cite{pythia} was used.
A cross section of the form $d^2\sigma /dt dM_N^2 \propto e^{-b|t|}/M_N^2$, with
$b=6$ GeV$^{-2}$, was assumed; the value of $M_N$ ranged between
$M_p+M_{\pi}$ (where $M_p$ is the proton and $M_{\pi}$ the pion mass)
and a maximum fixed by the condition $M_N^2/W^2 \le 0.1$~\cite{chapin}.
The $\rho^0$ decay
angular distributions were assumed to be
the same as those of the elastic events.
\subsection{Backgrounds}
\label{backgrounds}
After applying the selection criteria described in
section~\ref{event_selection},
the data contain small contributions from various background
processes to the reaction $ep \rightarrow e \pi^+ \pi^- p$:
\begin{itemize}
\item Beam-halo tracks observed in the LPS may overlap with events in which
a $\rho^0$ is seen by
the ZEUS central apparatus.
The term beam-halo refers to protons with
energy close to that of the beam originating from
interactions of beam protons with the residual gas in the
pipe or with the beam collimators.
Such tracks are completely uncorrelated with the activity
in the central ZEUS apparatus; therefore,
any sample of events selected without using the LPS information
contains the same fraction, $\epsilon_{halo}$, of random overlaps from
halo tracks within
the LPS acceptance. This fraction was found to be
$\epsilon_{halo}=0.25 \pm 0.03\%$ by analysing
events of the type $ep \rightarrow eX$ at $Q^2 > 4$~GeV$^2$ in
which the virtual photon diffractively dissociates into the state $X$.
For these events one can measure $X$ and the scattered positron
in the calorimeter; in addition a proton track is looked for in the LPS.
If one is found, the event is fully contained and its
kinematics is thus overconstrained: most beam-halo events
appear to violate energy-momentum conservation and can therefore
be identified.
The contamination of
the present sample (after the requirement of a good LPS track)
can be obtained as
$(\epsilon_{halo} N_{no LPS})/N_{LPS}=5.0\% \pm 0.6\%~(\mbox{stat.})$, where
$N_{no LPS}$ indicates the number of events
found by applying all cuts except for the requirement that a track be
detected in the LPS, and $N_{LPS}=1653$ is the number of events
after all cuts.
These events were not removed from the present
sample, but their effect on the measurement of the $t$ slope is small, as
discussed in section~\ref{results}.
\item
In the reaction $ep \rightarrow e \rho^0 X_N$,
the proton diffractively dissociates into a hadronic
state $X_N$ which may escape detection by the central detector.
The debris
of $X_N$ may contain a proton within the LPS acceptance and with
$0.98<x_L<1.02$: such events
are indistinguishable from elastic $\rho^0$ production.
In order to evaluate the contamination from such events, the cut on
$x_L$ was removed; Fig.~\ref{pdiffr} shows the $x_L$ spectrum thus
obtained, not corrected for acceptance.
The sum of the reconstructed $x_L$ distributions
from DIPSI and PYTHIA was fitted to this spectrum with
the normalisations of the simulated distributions as
free parameters of the fit.
The fit gives an acceptable description of the data, as shown in
Fig.~\ref{pdiffr}. The resulting
contamination of proton-dissociative events for $x_L>0.98$ is
$0.21\% \pm 0.15\%~(\mbox{stat.})$, a major improvement with
respect to $11\% \pm 1\%~(\mbox{stat.})
\pm 6\%~(\mbox{syst.})$ in the earlier ZEUS result~\cite{rho93} which did not
use the LPS.
\item
The contaminations from elastic production of $\omega$ and $\phi$ mesons
were estimated in~\cite{rho93} to be $(1.3\pm0.2)\%$ and
$(0.3 \pm 0.1)\%$, respectively.
\item
Contamination from positron beam-gas and proton beam-gas events was
studied by using the unpaired bunches event samples to which all the cuts
described above were applied. No event passed the cuts, indicating a
negligible contamination.
\end{itemize}
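The beam-halo overlap arithmetic in the first item above amounts to a simple scaling; note that $N_{no LPS}$ is not quoted in the text, so the value used below is back-solved purely to reproduce the quoted 5.0\%:

```python
def halo_contamination(eps_halo, n_no_lps, n_lps):
    """Fraction of random beam-halo overlaps in the final sample:
    eps_halo * N_noLPS / N_LPS."""
    return eps_halo * n_no_lps / n_lps

# N_LPS = 1653 and eps_halo = 0.25% are quoted in the text; the value
# of N_noLPS is NOT quoted and is chosen here to reproduce the 5.0%
print(halo_contamination(0.0025, 33060, 1653))
```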
\section{Results}
\label{results}
The differential cross section $d\sigma/dM_{\pi\pi}$ for the process
$\gamma p \rightarrow \pi^+ \pi^- p$ was evaluated
in the kinematic range
$0.55<M_{\pi\pi}<1.2$~GeV, $50<W<100$~GeV, $Q^2<1$~GeV$^2$ and
$0.073<|t|<0.40$~GeV$^2$.
In each bin the
cross section was obtained as
\begin{eqnarray}
\frac {N_{\pi^+\pi^-}}{a L \Phi}c_{halo},
\label{crosssection}
\end{eqnarray}
\noindent
where $N_{\pi^+\pi^-}$ is the number of observed events in the bin,
$L$ is the integrated luminosity,
$a$ is the overall acceptance in the bin,
and $\Phi=0.0574$ is the photon flux factor, i.e. the integral of
equation (5) in ref.~\cite{rho93} over the measured $W$ and $Q^2$ ranges
of this measurement. The factor $c_{halo}=0.950 \pm 0.006$~(stat.) corrects for
the beam-halo contamination discussed in section~\ref{backgrounds}.
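Equation~(\ref{crosssection}) amounts to the following per-bin arithmetic; the event count and acceptance below are hypothetical, while $\Phi$, $L$ and $c_{halo}$ are the values quoted in the text:

```python
def bin_cross_section(n_events, acceptance, lumi_nb,
                      flux=0.0574, c_halo=0.950):
    """sigma = N / (a * L * Phi) * c_halo, returned in nb for L in nb^-1."""
    return n_events / (acceptance * lumi_nb * flux) * c_halo

# Hypothetical bin: 120 events with an overall acceptance of 3%,
# for the integrated luminosity of 898 nb^-1 quoted in the text
print(bin_cross_section(120, 0.03, 898.0))
```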
The effects of positron initial and final state radiation
and that of vacuum polarisation loops were neglected; the effects
on the integrated cross section were estimated to be smaller
than 4\%~\cite{kurek}.
The effects on the shape of the $M_{\pi\pi}$ and $t$ distributions are
expected to be negligible. The small residual contaminations
from proton dissociative
$\rho^0$ production and elastic $\omega$ and $\phi$ production,
discussed in the previous section, were not corrected for.
Figure~\ref{rec_mass} shows the differential cross section
$d\sigma/dM_{\pi\pi}$ in the interval
$0.55< M_{\pi\pi} < 1.2$~GeV, $0.073<|t|<0.40$~GeV$^2$ for
$\langle W \rangle = 73$~GeV. The mass spectrum is skewed, as previously
observed
both at low energy and at HERA. This can be understood
in terms of resonant and
non-resonant $\pi^+\pi^-$ production~\cite{drell}, and their
interference~\cite{soeding}. The spectrum was fitted using expression (11)
of ref.~\cite{rho93}. The results for the total, resonant and interference
terms, as obtained in the fit, are indicated in the figure.
The fraction of the resonant to the total contribution in the measured range
was found to be $c_{res}=0.91 \pm 0.04$~(syst.).
The uncertainty was evaluated by repeating the fit with
the various functional forms discussed in~\cite{rho93}.
In~\cite{rho93} the contribution of the resonant term was found to
vary from $86\%$ for $|t|=0.01$~GeV$^2$ to $95\%$ for $|t|=0.5$ GeV$^2$.
No $t$ dependence of $c_{res}$ was assumed here, except in the
evaluation of the systematic uncertainty (see below).
The differential cross section $d\sigma/dt$ for the reaction
$\gamma p \rightarrow \rho^0 p$ was obtained similarly to
$d\sigma/dM_{\pi\pi}$, but in addition the correction factor $c_{res}$, just
discussed, was applied.
Figure~\ref{dndt} shows the result
in the interval $0.073<|t|<0.40$~GeV$^2$, $0.55< M_{\pi\pi} < 1.2$~GeV
for $\langle W \rangle = 73$~GeV.
The data were fitted with the function
\begin{eqnarray}
\frac{ d\sigma}{dt} = A \cdot e^{-b |t|};
\label{single}
\end{eqnarray}
\noindent
the result of the fit is shown as a straight line on Fig.~\ref{dndt}.
The fitted value of the slope parameter $b$ is
\begin{eqnarray}
b = 9.8\pm 0.8~(\mbox{stat.})
\pm 1.1~(\mbox{syst.})~\mbox{GeV}^{-2}.
\label{result}
\end{eqnarray}
\noindent
The result is consistent with
$b=9.9 \pm 1.2~(\mbox{stat.}) \pm 1.4~(\mbox{syst.})$~GeV$^{-2}$
obtained in~\cite{rho93}
for the range $60<W<80$~GeV, $Q^2<4$~GeV$^2$ and $|t|<0.5$~GeV$^2$ using
a fit of the type $A\exp{(-b|t|+ct^2)}$. For both the present data and
those of ref.~\cite{rho93}, $\langle W \rangle \approx 70$~GeV.
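The fit of equation~(\ref{single}) can be sketched as a log-linear least-squares fit; the points below are synthetic, generated with the fitted slope $b=9.8$~GeV$^{-2}$ and a hypothetical normalisation $A$, and a real analysis would weight the points by their uncertainties:

```python
import math

def fit_slope(t_values, dsdt_values):
    """Unweighted least-squares fit of ln(dsigma/dt) = ln A - b*|t|.

    Returns (A, b)."""
    n = len(t_values)
    xs = [abs(t) for t in t_values]
    ys = [math.log(v) for v in dsdt_values]
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), -slope

# Synthetic points generated from A exp(-b|t|) with a hypothetical
# normalisation A = 120 (in mu b / GeV^2) and slope b = 9.8 GeV^-2
ts = [0.09, 0.15, 0.21, 0.27, 0.33, 0.39]
ds = [120.0 * math.exp(-9.8 * t) for t in ts]
a_fit, b_fit = fit_slope(ts, ds)
print(a_fit, b_fit)
```

On exact exponential input the fit recovers the generated parameters.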
The measured differential cross section was integrated over the range
$0.073<|t|<0.40$~GeV$^{2}$, yielding
$\sigma = 5.8 \pm 0.3~(\mbox{stat.})\pm 0.7~(\mbox{syst.})~\mu\mbox{b}$,
again at $\langle W \rangle =73$~GeV and for
$0.55< M_{\pi\pi} < 1.2$~GeV. The result can be extrapolated to the mass range
$2M_{\pi}< M_{\pi\pi} < M_{\rho}+5 \Gamma_0$, as in~\cite{rho93},
using the fit to the mass spectrum described earlier (here $\Gamma_0$ is the
$\rho^0$ width); this yields
$\sigma = 6.3 \pm 0.3~(\mbox{stat.})\pm 0.8~(\mbox{syst.})~\mu\mbox{b}$,
where no uncertainty was assigned to the extrapolation. If our previous
result~\cite{rho93} is integrated in the $t$ range covered by the
present data, using the published results of the fit with the function
$A\exp{(-b|t|+ct^2)}$ (table 5 of ref.~\cite{rho93}, left column),
one finds
$\sigma = 6.7 \pm 1.1~(\mbox{syst.})~\mu\mbox{b}$, in good agreement
with the present result; only the systematic uncertainty is given since it is
dominant.
This uncertainty was
obtained by scaling the one published in~\cite{rho93} for the
cross section measured over the range $|t|<0.5$~GeV$^2$ by the ratio
of the cross sections for the present $t$ range and for $|t|<0.5$~GeV$^2$.
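As a rough numerical cross-check of the integration just described, the normalisation $A$ of eq.~(\ref{single}) can be reconstructed from the quoted values of $b$ and $\sigma$; the value of $A$ below is derived here purely for illustration and is not quoted in the text.

```python
import math

# Hedged cross-check: integrate dsigma/dt = A*exp(-b|t|) over the measured
# |t| range.  A is *derived* from the quoted sigma and b, not taken from the paper.
b = 9.8                 # fitted slope parameter, GeV^-2
t1, t2 = 0.073, 0.40    # |t| range, GeV^2
sigma = 5.8             # integrated cross section in this range, microbarn

# sigma = (A/b) * (exp(-b*t1) - exp(-b*t2))  =>  solve for A
A = sigma * b / (math.exp(-b * t1) - math.exp(-b * t2))
print(f"A = dsigma/dt extrapolated to t=0: {A:.0f} microbarn/GeV^2")

# Re-integrate numerically (midpoint rule) as a consistency check
n = 100000
dt = (t2 - t1) / n
sigma_check = sum(A * math.exp(-b * (t1 + (i + 0.5) * dt)) for i in range(n)) * dt
print(f"re-integrated sigma: {sigma_check:.2f} microbarn")
```

The derived $A\approx 120~\mu$b/GeV$^2$ is of the magnitude expected for elastic $\rho^0$ photoproduction at these energies.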
The major sources of systematic uncertainty on $b$ and $\sigma$
are the acceptance
determination and the background contamination, the former being
dominant. Table~\ref{systematics} lists the individual contributions.
In the following we discuss them in detail.
\begin{table}
\begin{center}
\begin{tabular}{lcc} \hline \hline
Contribution & $|\Delta b/b|$ & $|\Delta \sigma/\sigma|$ \\ \hline
Integrated luminosity & - & 1.5\% \\
Acceptance: trigger efficiency & - & 9\% \\
Acceptance for pion tracks & $<1\%$ & 1\% \\
Acceptance for proton track & 7\% & 6\% \\
Acceptance: sensitivity to binning in $t$& 2\% & - \\
Acceptance: unfolding of beam transverse
momentum spread & 7\% & - \\
Acceptance: sensitivity to $p$ beam angle& 3\% & 1\% \\
Background: beam-halo & 4\% & - \\
Procedure to extract the resonant
part of the cross section & 1.6\% & 4\% \\
Background due to elastic
$\omega$ and $\phi$ production & - & 1\% \\
Radiative corrections & - & 4\% \\
\hline
Total & 11\% & 12\% \\
\hline \hline
\end{tabular}
\end{center}
\caption{Contributions to the systematic uncertainty on $b$ and $\sigma$.
}
\label{systematics}
\end{table}
\begin{enumerate}
\item
In order to estimate the uncertainty due to the acceptance,
the analysis was repeated varying the requirements and
procedures as listed below.
\begin{enumerate}
\item For the pion tracks in the central detector:
\begin{itemize}
\item The pseudorapidity $\eta=-\ln{\tan{(\theta/2)}}$
of each of the two tracks was restricted
to the range $|\eta|~<~1.8$, thereby using only tracks which
have traversed at least three superlayers in the CTD.
\item The radial distance of the vertex from the beam axis was required
to be less than 1~cm.
\end{itemize}
In both cases the changes are small; by summing them in quadrature
one finds $|\Delta b/b|=0.2\%$ and $|\Delta \sigma/\sigma|=1\%$.
\item For the proton track in the LPS:
\begin{itemize}
\item The maximum allowed value of $\chi^2/ndf$ for the reconstructed proton
track was reduced from 6 to 2.
\item The minimum distance of approach of the proton
trajectory to the beam pipe was increased from 0.5~mm to
1.5~mm.
\item Events with $p_X>0$ and with $p_X<0$ were analysed separately,
as a check of possible relative rotations of the stations.
\item The data were divided into a ``large acceptance'' and a ``low
acceptance'' sample depending on the position of the LPS stations,
which, as discussed above, varied slightly from run to run.
\end{itemize}
By summing the individual contributions to $\Delta b/b$ in quadrature,
independently of their sign, $|\Delta b/b|=7\%$ is obtained. The corresponding
uncertainty on $\sigma$ is $|\Delta \sigma/\sigma|=6\%$.
\item The sensitivity of the result for $b$ to the binning in $t$ was studied by
reducing bin sizes by up to 20\%; the bin edges were moved by up to one
fourth of the bin size. The largest effect was $2\%$ for
$|\Delta b/b|$.
\item As discussed earlier, $t$ has been obtained as $-p_T^2$, with
$p_T^2=p_X^2+p_Y^2$ the transverse momentum of the scattered proton
with respect to the incoming beam axis. Since the incoming proton beam has an
intrinsic transverse momentum spread of $\sigma_{p_X} \approx 40$~MeV and
$\sigma_{p_Y} \approx 90$~MeV, which is much larger than the LPS resolution
in transverse momentum, the measured value of $t$ is smeared with
respect to the true $t$.
The Monte Carlo simulation takes into account the
proton beam transverse momentum spread.
The acceptance corrected $t$ distribution is thus corrected
also for this effect.
The following alternative approach to account for the effect of
the transverse momentum spread of the beam has also been followed.
Assuming that the true $t$ distribution has the form given
by equation~(\ref{single}),
the measured $p_T^2$ distribution can be expressed as a
convolution of equation~(\ref{single}) and a two-dimensional Gaussian
distribution
representing the beam transverse momentum distribution, with standard
deviations $\sigma_{p_X}$ and $\sigma_{p_Y}$.
Unfolding the contribution of the beam transverse
momentum spread from $d\sigma/dp_T^2$ provides an alternative evaluation
of $d\sigma/dt$. In this case one first measures the distribution of
$p_T^2$ without making any correction for
the effects of the beam intrinsic spread, thereby
exploiting the good resolution of the LPS on the transverse momentum.
In a second stage, the effect of the beam spread is unfolded.
If $\sigma_{p_X}=40$~MeV and $\sigma_{p_Y}=90$~MeV (as seen in the data,
cf. Fig.~\ref{ctdlps}),
then $|\Delta b/b|=7\%$, with
only a weak dependence on the values of $\sigma_{p_X}$ and $\sigma_{p_Y}$.
\item The sensitivity to the determination of the proton beam angle (cf.
section~\ref{alignment}) was evaluated by
systematically shifting $p_T$ by 10~MeV. This amount is
twice the $p_T$ resolution of the LPS and corresponds
to $>5$ times the uncertainty on the means of the distributions
of Fig.~\ref{ctdlps}. The corresponding variations
of $b$ and $\sigma$ were $|\Delta b/b|=3\%$ and
$|\Delta \sigma/\sigma|=1\%$.
\end{enumerate}
\noindent
The differences between the values of $b$ obtained in cases (a) to (e)
and that obtained with
the standard analysis were summed in quadrature, yielding
$|\Delta b/b|=10.5\%$ and $|\Delta \sigma/\sigma|=6\%$.
\item
Effect of background contamination.
\begin{enumerate}
\item As mentioned above, no correction was applied for a possible $t$
dependence of the background. The only significant background
is the halo. If the assumption is made that the halo contribution
($5.0\% \pm 0.6\%$) has
a distribution of the type $\exp{(-b_{halo}|t|)}$, then
$|\Delta b/b|<4\%$ when $b_{halo}$ is varied
between 5 and 15~GeV$^{-2}$; this range of variation is
consistent with estimates of $b_{halo}$ based on
the $ep \rightarrow eXp$ events at $Q^2>4$~GeV$^2$ discussed in
section~\ref{backgrounds}.
\item
If the $t$ dependence of $c_{res}$ evaluated in~\cite{rho93}
is assumed for the present data, the slope
changes by $\Delta b/b=-1.6\%$.
\end{enumerate}
\end{enumerate}
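The smearing effect addressed by the unfolding check in item 1(d) above can be illustrated with a toy Monte Carlo. This is a sketch only, not the analysis code: the slope is estimated naively as the inverse mean of $|t|$ over an unbounded range, and the beam-spread values are those quoted in the text.

```python
import math, random

# Toy MC: show how the beam transverse-momentum spread smears t = -p_T^2
# and shifts a naive exponential-slope estimate (illustrative only).
random.seed(1)
b_true = 9.8            # GeV^-2, assumed true slope
sx, sy = 0.040, 0.090   # GeV, beam spread in p_X and p_Y (from the text)
N = 100000

t_true, t_meas = [], []
for _ in range(N):
    t = random.expovariate(b_true)            # |t| drawn from exp(-b|t|)
    phi = random.uniform(0.0, 2.0 * math.pi)  # random azimuth
    px = math.sqrt(t) * math.cos(phi) + random.gauss(0.0, sx)
    py = math.sqrt(t) * math.sin(phi) + random.gauss(0.0, sy)
    t_true.append(t)
    t_meas.append(px * px + py * py)

# For an exponential on 0 < t < infinity the ML slope estimate is 1/<t>.
b_fit_true = 1.0 / (sum(t_true) / N)
b_fit_meas = 1.0 / (sum(t_meas) / N)
print(f"slope without smearing: {b_fit_true:.2f} GeV^-2")
print(f"slope with smearing:    {b_fit_meas:.2f} GeV^-2")
print(f"relative shift: {abs(b_fit_meas - b_fit_true) / b_fit_true:.1%}")
```

The toy model flattens the fitted slope by several percent, of the same order as the 7\% uncertainty assigned to the unfolding.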
\noindent
The latter two contributions were also added quadratically to the
systematic uncertainty, yielding a total systematic uncertainty
of 11\% on $b$, dominated by the LPS acceptance and the effect of the
beam transverse momentum spread.
The total systematic uncertainty on $\sigma$ is 12\%, which includes,
in addition to the contributions detailed above,
the uncertainty on the luminosity (1.5\%), that
on the trigger efficiency~\cite{rho93} (9\%) and that related to
the extraction of the resonant part of the cross section.
The estimated background due to elastic $\omega$ and $\phi$ production,
as well as the upper limit of the correction for radiative effects have
also been included. The systematic uncertainty on $\sigma$ is dominated
by contributions not related to the LPS (11\%); the uncertainty
on the LPS acceptance is 6\%, which has only a small effect on the total
uncertainty when summed in quadrature with the other contributions.
\section{Conclusions}
The Leading Proton Spectrometer of ZEUS is a large-scale system of
silicon micro-strip detectors which has been successfully operated
close to the HERA proton beam (typically a few mm away) by
means of the ``Roman pot''
technique. It measures precisely the momentum of high-energy scattered protons,
with accuracies of 0.4\% for the longitudinal and 5~MeV for the
transverse momentum.
As a first application, the cross section, the $M_{\pi\pi}$ and the
$t$ dependences of the reaction
$\gamma p \rightarrow \rho^0 p$
have been measured in the kinematic range $Q^2 <1$~GeV$^2$,
$50<W<100$~GeV, $0.55< M_{\pi\pi}<1.2$~GeV and $0.073<|t|<0.40$~GeV$^2$.
Elastic events were tagged by demanding that $x_L$ be larger than 0.98,
i.e. that the scattered proton carry at least 98\% of the incoming
proton momentum.
For the first time at these energies, $t$ was measured directly.
Compared to our previous analysis, the present technique
based on the use of the LPS eliminates
the contamination from events with diffractive dissociation
of the proton into low mass states.
In the range $0.073<|t|<0.40$~GeV$^2$, the differential cross section
$d\sigma/dt$ is described by an exponential distribution with a slope
parameter
$b = 9.8\pm 0.8~(\mbox{stat.}) \pm 1.1~(\mbox{syst.})~\mbox{GeV}^{-2}$.
The systematic uncertainty is dominated by the uncertainty on the
LPS acceptance and the
effect of the intrinsic transverse momentum spread of the beam.
In the measured $t$ and $M_{\pi\pi}$ intervals, the integrated
$\rho^0$ photoproduction cross section at
$\langle W \rangle =73$~GeV was found to be
$5.8\pm 0.3~(\mbox{stat.}) \pm 0.7~(\mbox{syst.})~\mu$b, consistent with our
previous measurement~\cite{rho93} obtained in a slightly different kinematic
range.
\section{Acknowledgements}
We thank the DESY directorate for their strong support and encouragement.
We are also very grateful to the HERA machine group: collaboration
with them was crucial to the successful installation and operation
of the LPS.
We also want to express our gratitude to all those who have participated
in the construction of the LPS, in particular to the very
many people from the University of Bologna and INFN Bologna~($B$), CERN~($C$),
the University of Calabria and INFN Cosenza~($Cs$), DESY~($D$),
LAA~($L$)~\cite{laa}, the University of Torino and INFN Torino~($T$), the
University of California at Santa Cruz~($S$),
who have so greatly contributed to the LPS project at various stages.
\noindent
For financial support we are grateful to the Italian
Istituto Nazionale di Fisica Nucleare (INFN), to the LAA project and
to the US Department of Energy.
\noindent
For the mechanical design and construction of the stations and their
interface with HERA: G.~Alfarone$^T$, F.~Call\`a$^T$,
J.~Dicke$^D$, G. Dughera$^T$, P. Ford$^L$,
H. Giesenberg$^D$, R.~Giesenberg$^D$, G. Giraudo$^T$,
M.~Hourican$^L$, A. Pieretti$^B$, P. Pietsch$^D$.
\noindent
For discussions on the optical design and on the mechanical interface with HERA
and for making some modifications to the machine layout to improve the LPS
acceptance:
W. Bialowons$^D$, R.~Brinkman$^D$,
D.~Degele$^D$, R.~Heller$^D$, B.~Holzer$^D$,
R. Kose$^D$, M. Leenen$^D$,
G. Nawrath$^D$, K.~Sinram$^D$, D. Trines$^D$, T.~Weiland$^D$ and F. Willeke$^D$.
\noindent
For advice on radiation doses and hardness of service electronics:
H. Dinter$^D$, B.~Lisowski$^B$ and H. Schoenbacher$^C$.
\noindent
For metrology and survey: C. Boudineau$^C$, R. Kus$^D$,
F.~Loeffler$^D$ and the DESY survey team.
\noindent
For special monitor information from HERA and the design of
the interlock system: P.~Duval$^D$, S.~Herb$^D$,
K.-H.~Mess$^D$, F. Peters$^D$, W. Schuette$^D$, M. Wendt$^D$.
\noindent
For installation and services: W. Beckhusen$^D$, H. Grabe-Celik$^D$,
G. Kessler$^D$,
G. Meyer$^D$, N.~Meyners$^D$, W. Radloff$^D$, U. Riemer$^D$
and F.R. Ullrich$^D$.
\noindent
For developing elliptical cutting of detectors: N. Mezin$^C$ and I. Sexton$^C$.
\noindent
For vacuum design, testing and urgent repairs: R. Hensler$^D$,
J. Kouptsidis$^D$, J. Roemer$^D$ and H-P. Wedekind$^D$.
\noindent
For front-end electronics design and front-end assembly: D.~Dorfan$^S$,
J. De Witt$^S$, W.A. Rowe$^S$, E. Spencer$^S$ and A. Webster$^S$.
\noindent
For the development of the special multi-layer boards which support the
detectors and the front-end electronics: A. Gandi$^C$,
L. Mastrostefano$^C$, C.~Millerin$^C$, A. Monfort$^C$, M. Sanchez$^C$ and
D.~Pitzl$^S$, a former member of ZEUS.
\noindent
For the loan of S5 and S6 and part of their modification as well as help
with urgent repairs: B.~Jeanneret$^C$,
R.~Jung$^C$, R.~Maleyran$^C$ and M.~Sillanoli$^C$.
\noindent
For service, control and readout electronics:
F. Benotto$^T$, M.~Ferrari$^B$, H.~Larsen$^L$, F. Pellegrino$^{Cs}$,
J. Schipper$^L$,
P.P. Trapani$^T$ and A. Zampieri$^T$.
\section{INTRODUCTION}
The study of the elementary excitations of ultrarelativistic plasmas,
such as the quark-gluon plasma, has received much
attention in the recent past.
(See \cite{BIO96,MLB96} for recent reviews and more references.)
The physical picture which emerges is that of a system with
two types of degrees of freedom:
{\it i}) the plasma quasiparticles,
whose energy is of the order of the temperature $T$;
{\it ii}) the collective excitations, whose typical energy
is $gT$, where $g$ is the gauge coupling,
assumed to be small: $g\ll 1$ (in QED, $g=e$ is the electric charge).
For this picture to make sense, however, it is important that the
lifetime of the excitations be large compared to the
typical period of the modes.
Information about the lifetime is obtained from the
retarded propagator. A usual expectation is that
$S_R(t,{\bf p})$ decays {\it exponentially} in time,
$S_R(t,{\bf p})\,\sim\,{\rm e}^{-i E(p)t} {\rm e}^{ -\gamma({p}) t}$,
where $E(p) \sim T$ or $gT$ is the average energy of the excitation,
and $\gamma(p)$ is the damping rate.
Therefore, $|S_R(t,{\bf p})|^2\,\sim\,{\rm e}^{ -\Gamma({p}) t}$
with $\Gamma(p)=2\gamma(p)$, which identifies the lifetime
of the single particle excitation as $\tau(p) = 1/\Gamma(p)$.
The exponential decay may then be associated to a pole
of the Fourier transform $S_R(\omega,{\bf p})$,
located at $\omega = E-i\gamma$.
The quasiparticles are well defined
if their lifetime $\tau$ is much larger than the period $\sim 1/E$
of the field oscillations, that is, if the damping rate
$\gamma$ is small compared to the energy $E$. If this is the case,
the respective damping rates
can be computed from the imaginary part of the on-shell
self-energy, $\Sigma(\omega=E(p), {\bf p})$.
Previous calculations \cite{Pisarski89} suggest that
$\gamma\sim g^2T$
for both the single-particle and the collective excitations.
In the weak coupling regime $g\ll 1$,
this is indeed small compared to the corresponding
energies (of order $T$ and $gT$, respectively),
suggesting that the quasiparticles are well defined, and the
collective modes are weakly damped. However, the computation of
$\gamma$ in perturbation theory
is plagued with infrared (IR) divergences, which cast doubt on the
validity of these statements [3--9].
The first attempts to calculate the damping rates
were made in the early 80's. It was then found that,
to one-loop order, the damping rate of the soft
collective excitations in the hot QCD plasma was gauge-dependent,
and could turn out negative in some gauges (see Ref. \cite{Pisarski91}
for a survey of this problem). Decisive progress on this
problem was made by Braaten and Pisarski \cite{Pisarski89}
who identified the resummation needed to obtain the
screening corrections in a gauge-invariant way
(the resummation of the so called ``hard thermal loops'' (HTL)).
Such screening corrections are sufficient to
make IR-finite the transport cross-sections \cite{Baym90,BThoma91},
and also the damping rates of excitations
with zero momentum \cite{Pisarski89,KKM}.
At the same time, however,
it has been remarked \cite{Pisarski89} that the HTL resummation
is not sufficient to render finite
the damping rates of excitations with non vanishing momenta.
The remaining infrared divergences are due to collisions involving the
exchange of long-wavelength, quasistatic, magnetic photons (or gluons),
which are not screened in the hard thermal loop approximation.
Such divergences affect the computation of the damping rates
of {\it charged} excitations (fermions and gluons),
in both Abelian and non-Abelian gauge theories.
Furthermore, the problem appears for both soft ($p \sim gT$) and hard
($p \sim T$) quasiparticles. In QCD this problem is generally
avoided by the {\it ad-hoc} introduction of an IR cut-off
(``magnetic screening mass'') $\sim g^2T$, which is
expected to appear dynamically from gluon
self-interactions \cite{MLB96}.
In QED, on the other hand, it is known that no magnetic
screening can occur \cite{Fradkin65},
so that the solution of the problem must lie somewhere else.
In order to make the damping rate $\gamma$ finite,
Lebedev and Smilga proposed
a self-consistent computation of the damping rate,
by including $\gamma$ also in internal propagators \cite{Lebedev90}.
However, the resulting self-energy
is not analytic near the complex mass-shell, and the logarithmic
divergence actually reappears when the discontinuity of the self-energy is evaluated
at $\omega= E - i\gamma$ \cite{Baier92,Pisarski93}. More
thorough investigations along the same lines led to the conclusion
that the full propagator has actually no quasiparticle
pole in the complex energy plane \cite{Pilon93}. These analyses left
unanswered, however, the question of the large time
behavior of the retarded propagator.
As we have shown recently for the case of QED \cite{prl}, the answer
to this question requires a non perturbative treatment,
since infrared divergences occur in {\it all}
orders of perturbation theory. We have identified the
{\it leading} IR divergences in all orders, and
solved exactly an effective theory which reproduces all
these leading divergences. The resulting fermion
propagator $S_R(\omega)$ turns out to be {\it analytic}
in the vicinity of the mass-shell.
Moreover, for large times $t\gg 1/gT$,
the Fourier transform $S_R(t)$ does not show the usual exponential decay alluded
to before, but the more complicated behavior
$S_R(t)\,\sim\,{\rm e}^{-iE t} {\rm exp}\{-\alpha T \, t
\ln\omega_pt \}$, where $\alpha=g^2/4\pi$ and $\omega_p\sim gT$ is the plasma frequency.
This corresponds to a typical lifetime $\tau^{-1}\sim g^2T\ln (1/g)$,
which is similar to the one provided by the perturbation
theory with an IR cut-off of the order $g^2T$.
\section{THE INFRARED PROBLEM}
Let me briefly recall how the infrared problem
occurs in the perturbative calculation of the damping rate $\gamma$.
For simplicity, I consider an Abelian plasma, as described by QED,
and compute the damping rate of a hard electron, with momentum $p\sim T$
and energy $E(p)=p$.
To leading order in $g$, and after the resummation of the screening
corrections, $\gamma$ is obtained from the imaginary part of the
effective one-loop self-energy in Fig.~\ref{effS}.
The blob on the photon line in this figure
denotes the effective photon propagator in the HTL approximation,
commonly denoted as ${}^*D_{\mu\nu}(q)$. In the Coulomb gauge, the only non-trivial
components of ${}^*D_{\mu\nu}(q)$
are the electric (or longitudinal)
one ${}^*D_{00}(q)\equiv {}^*\Delta_l(q)$, and the magnetic (or transverse) one
${}^*D_{ij}(q)=(\delta_{ij}-\hat q_i\hat q_j){}^*\Delta_t(q)$, with
\beq\label{effd}
{}^*\Delta_l(q_0,q)\,=\,\frac{- 1}{q^2- \Pi_l(q_0,q)},\qquad
{}^*\Delta_t(q_0,q)\,=\,\frac{-1}{q_0^2-q^2 -\Pi_t(q_0,q)},\eeq
where $\Pi_l$ and $\Pi_t$ are the respective pieces
of the photon polarisation tensor \cite{BIO96,MLB96}.
Physically, the on-shell
discontinuity of the diagram in Fig.~\ref{effS} accounts
for the scattering of the incoming electron (with four momentum
$p^\mu=(E(p), {\bf p})$) off a thermal fermion (electron or positron),
as mediated by a soft, dressed, virtual photon. (See Fig.~\ref{Born}.)
\begin{figure}
\protect \epsfxsize=8.cm{{\epsfbox{resummed1l.eps}}}
\caption{The resummed one-loop self-energy}
\label{effS}
\end{figure}
The interaction rate corresponding to Figs.~\ref{effS} or \ref{Born}
is dominated by soft momentum transfers $q\ll T$. It is
easily computed as \cite{Pisarski93,BThoma91}
\beq\label{G2L}
\gamma \simeq\, \frac{g^4 T^3}{12}\,
\int_{0}^{q^*}{\rm d}q \int_{-q}^q\frac{{\rm d}q_0}{2\pi}
\left\{|{}^*\Delta_l(q_0,q)|^2\,+\,\frac{1}{2}\left(1-\frac{q_0^2}{q^2}\right)^2
|{}^*\Delta_t(q_0,q)|^2\right\}\,,\eeq
where the upper cut-off $q^*$ distinguishes between
soft and hard momenta: $gT\ll q^* \ll T$.
Since the $q$-integral is dominated by IR momenta, its leading
order value is actually independent of $q^*$.
\begin{figure}
\protect \epsfxsize=6.cm{\centerline{\epsfbox{effcoll.eps}}}
\caption{Fermion-fermion elastic scattering in the Born approximation}
\label{Born}
\end{figure}
The two terms within the parentheses in eq.~(\ref{G2L})
correspond to the exchange of an electric and of
a magnetic photon respectively.
For a bare (i.e., unscreened) photon, we have $|\Delta_l(q_0,q)|^2= 1/q^4$ and
$|\Delta_t(q_0,q)|^2= 1/(q_0^2-q^2)^2$, so that
the $q$-integral in eq.~(\ref{G2L}) shows a quadratic IR divergence:
\beq\label{G2L0}
\gamma\simeq \frac{g^4T^3}{8\pi} \,
\int_{0}^{q^*}\frac{{\rm d}q}{q^3}\,.\eeq
This divergence reflects the singular behaviour
of the Rutherford cross-section for forward scattering.
As well known, however, the quadratic divergence is removed by the
screening corrections contained in the photon polarization tensor.
We shall see below that the leading IR contribution
comes from the domain $q_0\ll q \ll T$, where we can
use the approximate expressions \cite{BIO96,MLB96} (with $\omega_p=eT/3$)
\beq\label{pltstatic}
\Pi_l(q_0\ll q) \simeq 3{\omega_p^2}\,\equiv m_D^2,\qquad
\Pi_t(q_0\ll q) \simeq \,-i\,\frac{3\pi}{4}\,{\omega_p^2}\,\frac{q_0}{q}\,
.\eeq
We see that screening occurs in different ways
in the electric and the magnetic sectors.
In the electric sector, the familiar static Debye screening provides
an IR cut-off $m_{D}\sim gT$. Accordingly,
the electric contribution to $\gamma$ is finite,
and of the order $\gamma_l \sim g^4 T^3/m_{D}^2
\sim g^2 T$. Its exact value can be computed by numerical
integration \cite{Pisarski93}. In the magnetic sector,
screening occurs only for nonzero frequency $q_0$ \cite{Baym90}.
This comes from the imaginary part of the polarisation tensor,
and can be associated to the Landau damping \cite{PhysKin}
of space-like photons ($q_0^2<q^2$).
This ``dynamical screening'' is not sufficient to completely
remove the IR divergence of $\gamma_t\,$, which is only reduced to a logarithmic one:
\beq\label{G2LR}
\gamma_t &\simeq& \frac{g^4 T^3}{24}\,
\int_{0}^{q^*}{\rm d}q \int_{-q}^q\frac{{\rm d}q_0}{2\pi}
\,\frac{1}{q^4 + (3\pi \omega_p^2 q_0/4q)^2} \nonumber\\
&\simeq & \frac{g^2T}{4\pi}
\int_{\mu}^{\omega_p}\frac{{\rm d}q}{q}\,=\, \frac{g^2T}{4\pi}\,
\ln \frac{\omega_p}{\mu}.\eeq
The unphysical lower cut-off $\mu$ has been introduced by hand,
in order to regularize the IR divergence of the integral over $q$.
The upper cut-off $\omega_p\sim gT$
accounts approximately for the terms which have been
neglected when going from the first to the second line
of eq.~(\ref{G2LR}). As long as we are interested only in
the coefficient of the logarithm,
the precise value of this cut-off is unimportant. The scale $\omega_p$ however
is uniquely determined by the physical process responsible for the existence
of space like photons, i.e., the Landau damping. As we
shall see later, this is the scale which fixes
the long time behavior of the retarded propagator.
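The logarithmic sensitivity to the cut-off $\mu$ in eq.~(\ref{G2LR}) can be checked numerically. The sketch below uses an illustrative coupling $g=0.1$ (an assumption, not a physical input), performs the $q_0$ integral analytically, and compares the result with $(g^2T/4\pi)\ln(\omega_p/\mu)$.

```python
import math

# Numerical check of eq. (5): with dynamical screening only, gamma_t grows
# logarithmically with the IR cut-off mu.  Illustrative parameters.
g, T = 0.1, 1.0            # weak coupling and temperature (arbitrary units)
omega_p = g * T / 3.0      # plasma frequency omega_p = gT/3
mu = 1.0e-3 * omega_p      # hand-introduced IR cut-off

# q0 integral done analytically:
# int_{-q}^{q} dq0 / (q^4 + (3 pi omega_p^2 q0 / 4q)^2)
#   = (8 / (3 pi omega_p^2 q)) * atan(3 pi omega_p^2 / (4 q^2))
def integrand(q):
    x = 3.0 * math.pi * omega_p**2 / (4.0 * q * q)
    inner = (8.0 / (3.0 * math.pi * omega_p**2 * q)) * math.atan(x)
    return (g**4 * T**3 / 24.0) * inner / (2.0 * math.pi)

# trapezoidal rule on a log-spaced grid from mu to omega_p
n = 4000
qs = [mu * (omega_p / mu) ** (i / n) for i in range(n + 1)]
gamma_t = sum(0.5 * (integrand(qs[i]) + integrand(qs[i + 1])) * (qs[i + 1] - qs[i])
              for i in range(n))

gamma_log = (g * g * T / (4.0 * math.pi)) * math.log(omega_p / mu)
print(f"numeric gamma_t: {gamma_t:.3e}")
print(f"log formula:     {gamma_log:.3e}")
```

The two numbers agree at the few-percent level, the residual difference coming from the region $q\sim\omega_p$ where $\arctan$ deviates from $\pi/2$.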
The remaining IR divergence in eq.~(\ref{G2LR}) is due to collisions involving the
exchange of very soft ($q\to 0$),
{\it quasistatic} ($q_0\to 0$) magnetic photons,
which are not screened by plasma effects.
To see that, note that the IR contribution to
$\gamma_t$ comes from momenta $q\ll gT$,
where $|{}^*\Delta_t(q_0,q)|^2$ is almost a delta function of $q_0$:
\beq \label{singDT}
|{}^*\Delta_t(q_0,q)|^2\,\simeq\,
\frac{1} {q^4 + (3\pi \omega_p^2 q_0/4q)^2}\,
\longrightarrow_{q\to 0}\,\frac{4}{3 q \omega_p^2}\,\delta(q_0)\,.\eeq
This is so because,
as $q_0\to 0$, the imaginary part of the polarisation
tensor vanishes {\it linearly}
(see the second equation (\ref{pltstatic})),
a property which can be related to the behaviour of the
phase space for the Landau damping processes.
Since energy conservation requires $q_0=q\cos\theta$,
where $\theta$ is the angle
between the momentum of the virtual photon (${\bf q}$) and that
of the incoming fermion (${\bf p}$),
the magnetic photons which are responsible for the singularity
are emitted, or absorbed, at nearly 90 degrees.
\section{A NON PERTURBATIVE CALCULATION}
The IR divergence of the leading order calculation
invites to a more thorough investigation
of the higher orders contributions to $\gamma$. Such an
analysis \cite{prl} reveals strong, power-like, infrared
divergences, which signal the breakdown of the perturbation theory.
(A similar breakdown occurs in the computation of the
corrections to the non-Abelian Debye mass \cite{debye}.)
To a given order in the loop expansion, the most
singular contributions to $\gamma$ arise from self-energy diagrams
of the kind illustrated in Fig.~\ref{effN}. These diagrams have no
internal fermion loops (quenched QED), and all the internal photons
are of the magnetic type (the electric photons, being screened,
give no IR divergences). Furthermore, the {\it leading} divergences arise,
in all orders, from the same kinematical regime as in the
one loop calculation, namely from the regime where the
internal photons are soft ($q\to 0$) and quasistatic ($q_0\to 0$).
This is so because of the specific IR behaviour
of the magnetic photon propagator, as illustrated in
eq.~(\ref{singDT}). Physically, these divergences come
from multiple magnetic collisions.
\begin{figure}
\protect \epsfxsize=14.cm{\centerline{\epsfbox{Nloop.eps}}}
\caption{A generic $n$-loop diagram (here, $n=6$)
for the self-energy in quenched QED.}
\label{effN}
\end{figure}
This peculiar kinematical regime can be conveniently
exploited in the imaginary time formalism (see, e.g., \cite{MLB96}),
where the internal photon lines carry only discrete
(and purely imaginary) energies, of the form $q_0=i \omega_n=
i 2\pi n T$, with integer $n$ (the so-called Matsubara frequencies).
The non-static modes with $n\ne 0$ are well separated from
the static one $q_0=0$ by a gap of order $T$. I have argued before that
the leading IR divergences come from the kinematical limit $q_0\to 0$.
Correspondingly, it can be verified \cite{prl} that, in the Matsubara formalism,
all these divergences are concentrated in diagrams in which
the photon lines are static, i.e., they
carry zero Matsubara frequency. (To one-loop order,
this has also been noted in Refs.~\cite{Marini92}.)
In what follows, we shall restrict
ourselves to these diagrams, and try to compute their contribution
to the fermion propagator near the mass-shell, in a non perturbative way.
Note that, for these diagrams, all the loop integrations
are three-dimensional (they run over
the three-momenta of the internal photons), so that the associated IR divergences
are those of a three-dimensional gauge theory. This clearly
emphasizes the non perturbative character of the leading IR structure.
As we shall see now, this
``dimensional reduction'' brings in simplifications which allow one
to arrive at an explicit solution of the problem \cite{prl}.
The point is that three-dimensional quenched QED can be
``exactly'' solved in the Bloch-Nordsieck approximation \cite{Bogoliubov},
which is the relevant approximation for the infrared structure
of interest. Namely, since the incoming fermion is interacting
only with very soft ($q\to 0$) static ($q_0=0$) magnetic photons,
its trajectory is not significantly deviated by the
successive collisions, and its spin state does not change.
This is to say, we can ignore the spin degrees of freedom,
which play no dynamical role, and we can assume the fermion to
move along a straightline trajectory with constant velocity
${\bf v}$ (for the ultrarelativistic hard fermion,
$|{\bf v}|=1$; more generally, for the soft excitations,
${\bf v}(p) \equiv \del E(p)/\del {\bf p} = v(p) {\bf \hat p}$
is the corresponding group velocity, with $|v(p)|< 1$).
Under these assumptions, the fermion propagator can be easily
computed as \cite{prl}
\beq\label{SRT}
S_R(t,{\bf p})&=&i\,\theta(t) {\rm e}^{-iE(p)t}\,\Delta(t),\eeq
where
\beq\label{SR0}
\Delta(t)= {\rm exp}\left \{-g^2T
\int^{\omega_p} \frac{{\rm d}^3q}{(2\pi)^3}
\,\frac{1}{q^2}\,\frac
{1- {\rm cos}\,t ({\bf v}(p) \cdot {\bf q})}{
({\bf \hat p \cdot q})^2} \right\},\eeq
contains all the non-trivial time dependence.
The integral in eq.~(\ref{SR0}) is formally identical to that one would
get in the Bloch-Nordsieck model in 3 dimensions.
Note, however, the upper cut-off $\omega_p \sim gT$, which occurs for
the same reasons as in eq.~(\ref{G2LR}). Namely, it reflects the dynamical
cut-off at momenta $\sim gT$, as provided by the Landau damping.
The integral over $q$ has no infrared divergence, but one can verify
that the expansion of $\Delta(t)$ in powers of $g^2$ generates
the most singular pieces of the usual perturbative expansion
for the self-energy \cite{prl}.
Because our approximations preserve only
the leading infrared behavior of the perturbation theory,
eq.~(\ref{SR0}) describes only the leading {\it large-time} behavior
of $\Delta(t)$. Since the only energy scale in the momentum integral of
eq.~(\ref{SR0}) is the upper cut-off, of order $gT$,
the large-time regime is achieved for $t\gg 1/gT$.
Note that, strictly speaking, eq.~(\ref{SR0}) holds only in the Feynman gauge.
However, its leading large-time behaviour ---
which is all we can trust anyway! ---
is actually gauge independent \cite{prl} and of the form (we set here $\alpha
= g^2/4\pi$ and $v(p)=1$ to simplify writing)
\beq\label{DLT}
\Delta(\omega_pt\gg 1)\,\simeq \,{\rm exp}\Bigl( -\alpha Tt \ln \omega_p
t\Bigr).\eeq
A measure of the decay time $\tau$ is given by
\beq \frac{1}{\tau}=\alpha T\ln \omega_p \tau=
\alpha T\left(\ln \frac{\omega_p}{\alpha T} - \ln
\ln \frac{\omega_p}{\alpha T} + \,...\right).\eeq
Since $\alpha T \sim g\omega_p$, $\tau \sim
1/(g^2 T \ln (1/g))$. This corresponds to a damping rate
$\gamma\sim1/\tau\sim g^2 T\ln (1/g)$, similar to that obtained in a
one loop calculation with an IR cut-off
$\mu \sim g^2T$ (cf. eq.~(\ref{G2LR})).
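The implicit relation $1/\tau = \alpha T\ln(\omega_p\tau)$ can be solved by fixed-point iteration; the values of $g$ and $T$ in the sketch below are illustrative assumptions, not physical inputs.

```python
import math

# Sketch: solve 1/tau = alpha*T*ln(omega_p*tau) by fixed-point iteration
# for an illustrative weak coupling g = 0.1.
g, T = 0.1, 1.0
alpha = g * g / (4.0 * math.pi)
omega_p = g * T / 3.0

tau = 1.0 / (alpha * T)   # starting guess: 1/(alpha T), without the log
for _ in range(100):      # contraction factor ~ 1/ln(omega_p*tau) < 1
    tau = 1.0 / (alpha * T * math.log(omega_p * tau))

print(f"tau   = {tau:.1f}  (in units of 1/T)")
print(f"1/tau = {1.0 / tau:.3e}")
```

The converged rate is of order $\alpha T\ln(\omega_p/\alpha T)$, i.e. $g^2T\ln(1/g)$ up to the $1/4\pi$ in $\alpha$, as stated in the text.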
However, contrary to what perturbation theory predicts,
$\Delta(t)$ is decreasing faster than any exponential. It follows that
the Fourier transform
\beq\label{SRE}
S_R(\omega, {\bf p})\,=\,
\int_{-\infty}^{\infty} {\rm d}t \,{\rm e}^{-i\omega t}
S_R(t,{\bf p})\,=\,
i\int_0^{\infty}{\rm d}t
\,{\rm e}^{it(\omega- E(p)+i\eta)}\,\Delta(t),\eeq
exists
for {\it any} complex (and finite) $\omega$. Thus, the retarded propagator
$S_R(\omega)$ is an entire function, whose only singularity lies at
Im$\,\omega\to -\infty$. The associated spectral density
$\rho(\omega, p)$ (proportional to the imaginary part of
$S_R(\omega, {\bf p})$) retains the shape
of a {\it resonance} strongly peaked around the perturbative mass-shell
$\omega = E(p)$,
with a typical width of order $\sim g^2T \ln(1/g)$ \cite{prl}.
\section{CONCLUSIONS}
The previous analysis, and, in particular, the last conclusion
about the resonant shape of the spectral density,
confirm that the quasiparticles are well defined,
even if their mass-shell properties cannot be computed
in perturbation theory. The infrared divergences occur because
of the degeneracy between the mass shell of the charged
particle and the threshold for the emission or the absorption
of $n$ ($n \ge 1$) static transverse photons.
Note that the emitted photons are virtual, so,
strictly speaking, the physical
processes that we have in mind are the collisions between
the charged excitation and the thermal particles, with
the exchange of quasistatic magnetic photons.
The resummation of these multiple collisions to all orders
in $g$ modifies the analytic structure of the fermion
propagator and yields an unusual, non-exponential,
damping in time.
This result solves the IR problem of the damping rate
in the case of QED. Since a similar problem occurs in QCD
as well, it is natural to ask what is the relevance
of the present solution for the non-Abelian plasma.
It is generally argued --- and also supported by lattice
computations \cite{Tar} --- that the self-interactions
of the chromomagnetic gluons
may generate magnetic screening at the scale $g^2 T$
(see \cite{MLB96} and Refs. therein).
As a crude model, we may include a screening mass $\mu\sim g^2T$
in the magnetostatic propagator in the
QED calculation. This amounts to replacing $1/q^2 \to
1/(q^2 + \mu^2)$ for the photon propagator in eq.~(\ref{SR0}).
After this replacement, the latter equation provides,
at very large times $t\simge 1/g^2T$,
an exponential decay: $\Delta(t)
\sim \exp(-\gamma t)$, with $\gamma = \alpha T\ln(\omega_p/\mu)
= \alpha T\ln(1/g)$.
However, in the physically more interesting regime of intermediate
times $1/gT \ll t \ll 1/g^2 T$, the behavior is governed
uniquely by the plasma frequency, according to our result
(\ref{DLT}): $\Delta(t)\sim \exp ( -\alpha Tt \ln \omega_p
t)$. Thus, at least within this limited model,
which is QED with a ``magnetic mass'', the time behavior
in the physical regime remains controlled by the
Bloch-Nordsieck mechanism. But, of course, this result gives no
serious indication about the real situation in QCD, since
it is unknown whether, in the present problem,
the effects of the gluon self-interactions
can be simply summarized in terms of a magnetic mass.
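As an aside, the matching of the two regimes at the crossover time $t\sim 1/g^2T$ can be made explicit with a short numerical sketch (not part of the original analysis: we set $\omega_p = gT$ and $\mu = g^2T$ exactly, dropping all $O(1)$ factors, and the values of $g$ and $T$ are arbitrary):

```python
import math

# illustrative values only: coupling and temperature in arbitrary units
g, T = 0.1, 1.0
alpha = g**2 / (4 * math.pi)     # fine-structure constant for this coupling
omega_p, mu = g * T, g**2 * T    # plasma frequency and magnetic mass, O(1) factors dropped

gamma = alpha * T * math.log(omega_p / mu)   # exponential rate: gamma = alpha*T*ln(1/g)
t_star = 1.0 / (g**2 * T)                    # crossover time t ~ 1/g^2 T

# at t = t_star the Bloch-Nordsieck exponent alpha*T*t*ln(omega_p*t)
# coincides with gamma*t, since ln(omega_p*t_star) = ln(omega_p/mu) = ln(1/g)
bn_exponent = alpha * T * t_star * math.log(omega_p * t_star)
assert math.isclose(bn_exponent, gamma * t_star, rel_tol=1e-9)
```

The two exponents agree exactly at $t_\star$, so within this crude model the Bloch-Nordsieck law goes over smoothly into the exponential decay.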
To conclude, the results of Refs. \cite{prl,debye} suggest
that the infrared divergences of the ultrarelativistic
Abelian plasmas can be eliminated by soft photon resummations,
\`a la Bloch-Nordsieck. For non-Abelian plasmas, on the other hand,
much work remains to be done, and this requires, in particular,
the understanding of the non-perturbative sector of the
magnetostatic gluons.
\section{\@startsection {section}{1}{\z@}{-1cm plus-1ex
minus-.2ex}{1.5ex plus.2ex}{\reset@font\normalsize\bf}}
\def\subsection{\@startsection{subsection}{2}{\z@}{-0.7cm plus-1ex
minus-.2ex}{1.0ex plus.2ex}{\reset@font\normalsize\it}}
\def\subsubsection{\@startsection{subsubsection}{3}{\z@}{-0.7cm plus-1ex
minus-.2ex}{1.0ex plus.2ex}{\reset@font\normalsize}}
\def\paragraph{\@startsection
{paragraph}{4}{\z@}{3.25ex plus1ex minus.2ex}{-1em}{\reset@font
\normalsize\bf}}
\def\subparagraph{\@startsection
{subparagraph}{4}{\parindent}{3.25ex plus1ex minus
.2ex}{-1em}{\reset@font\normalsize\bf}}
\def\thesection{\arabic{section}.}
\def\thesubsection{\thesection\arabic{subsection}.}
\def\thesubsubsection{\thesubsection\arabic{subsubsection}.}
\def\theparagraph{\thesubsubsection\arabic{paragraph}.}
\def\thesubparagraph{\theparagraph\arabic{subparagraph}.}
\topmargin 0 pt
\ifcase \@ptsize
\textheight 58\baselineskip
\or
\textheight 48\baselineskip
\or
\textheight 45\baselineskip
\fi
\advance\textheight by \topskip
\oddsidemargin 0 cm
\evensidemargin 0 cm
\marginparwidth 2 cm
\textwidth 16 cm
\setlength{\headheight}{0pt}
\setlength{\headsep}{0pt}
\advance\footskip by 6mm
\newcount\dfttsuboption
\dfttsuboption=0
\def\dfttnum#1{\def\@dfttnum{#1}}
\dfttnum{??/xx}
\def\dfttdraft#1{\dfttsuboption=1\def\@dftthead{DRAFT #1\ }%
\def\baselinestretch{1.4}}
\def\dfttmemo#1{\dfttsuboption=2\def\@dftthead{INTERNAL MEMO #1\ }}
{ \newcount\hour \newcount\minutes \newcount\hourint%
\minutes=\time \hour=\time \divide\hour by 60%
\hourint=\hour \multiply\hourint by -60 \advance\minutes by \hourint%
\xdef\@time{\the\hour:\ifnum\minutes>9\else0\fi\the\minutes}%
}
\def\@maketitle{\newpage
\null\begingroup\def\baselinestretch{1}
\raggedleft\normalsize DFTT 50/96 \\
\raggedleft\normalsize MPI-PhT/96-80 \\
\raggedleft\normalsize LU TP 96-23 \\
\raggedleft\normalsize August 28th, 1996 \\
\vskip 0.8cm
\begin{center}%
{\Large \@title \par}%
\vskip 1.5em
{\normalsize
\lineskip .5em
\begin{tabular}[t]{c}\@author
\end{tabular}\par}%
\ifnum\dfttsuboption=0 \vskip 1em{\footnotesize \@date}\fi%
\end{center}%
\par\endgroup
\vskip 1cm}
\newenvironment{summary}{\begin{quote}\begin{center}\bf\abstractname\par%
\end{center}\vskip 0.25em}{\end{quote}\vskip 2em}
\flushbottom
\renewcommand{\textfraction}{.1}
\renewcommand{\floatpagefraction}{.7}
\def\eref#1{(\ref{#1})}
\makeatother
\dfttmemo{1}
\newcommand{\beq}{\begin{equation}}
\newcommand{\eeq}{\end{equation}}
\newcommand{\beqa}{\begin{eqnarray}}
\newcommand{\eeqa}{\end{eqnarray}}
\newcommand\pp{$\pm$}
\newcommand{\E}{\mathrm{e}}
\newcommand{\I}{\mathrm{i}}
\newcommand{\cp}{\cal{P}}
\newcommand{\inte}{\int\limits}
\newcommand{\GeV}{\mathrm{GeV}}
\newcommand{\nbar}{\bar n}
\newcommand{\Nbar}{\bar N}
\newcommand{\nbarc}{\bar n_c}
\newcommand{\avg}[1]{\langle #1 \rangle}
\newcommand{\Jetset}{{\sc Jetset}}
\newcommand{\Delphi}{{\sc Delphi}}
\newcommand{\Luclus}{{\sc Luclus}}
\newcommand{\Y}{\cal{Y}}
\newcommand{\NF}{\cal{N}_{\kern -1.9pt f}}
\newcommand{\NC}{\cal{N}_{\kern -1.7pt c}}
\newcommand{\as}{\alpha_s}
\newcommand{\roots}[1]{$\sqrt{s} = #1$ GeV}
\newcommand{\pT}{{p\kern -.2pt\lower 4pt\mbox{\tiny T}}}
\newcommand{\ycut}{y_{\mathrm{cut}}}
\newcommand{\Ycut}{Y_{\mathrm{cut}}}
\newcommand{\ptcut}{\pT_{\mathrm{cut}}}
\newcommand{\pL}{{p\kern -.2pt\lower 4pt\hbox{\tiny L}}}
\newcommand{\gmax}{g_{\mathrm{max}}}
\newcommand{\dy}{\Delta y}
\newcommand{\cN}{{\mathcal N}}
\newcommand{\ymin}{y_{\mathrm{min}}}
\title{ \bf The negative binomial
distribution in quark jets with fixed flavour
\thanks{\it Work supported in part by M.U.R.S.T. (Italy) under grant 1995.} }
\author{A.\ GIOVANNINI$^1$ \thanks{E-mail: giovannini@to.infn.it}\ , \
S.\ LUPIA$^2$ \thanks{E-mail: lupia@mppmu.mpg.de}\ , \
R.\ UGOCCIONI$^3$ \thanks{E-mail: roberto@thep.lu.se} \\ \\
\it $^1$ Dip. Fisica Teorica and I.N.F.N. -- Sezione di Torino, \\
\it via Giuria 1, I-10125 Torino, Italy \\ \\
\it $^2$ Max-Planck-Institut f\"ur Physik (Werner-Heisenberg-Institut), \\
\it F\"ohringer Ring 6, D-80805 M\"unchen, Germany \\ \\
\it $^3$ Dept. of Theoretical Physics, University of Lund, \\
\it S\"olvegatan 14 A, S 223 62, Lund, Sweden}
\begin{document}
\maketitle
\vspace{-1.0cm}
\begin{summary}
We show that
both the multiplicity distribution and the ratio of factorial cumulants
over factorial moments for 2-jet events
in $e^+e^-$ annihilation at the $Z^0$ peak can
be well reproduced by the weighted superposition of two negative
binomial distributions, associated to the contribution of $b\bar b$
and light flavoured events respectively.
The negative binomial distribution is then suggested to describe the
multiplicity distribution of 2-jet events with fixed flavour.
\end{summary}
\vspace{-0.5cm}
PACS: 13.65
\newpage
\section{Introduction}
Two different experimental effects in the
Multiplicity Distributions (MD's) of charged particles
in full phase space in $e^+e^-$ annihilation at the $Z^0$ peak,
i.e.,
the shoulder visible in the intermediate multiplicity
range\cite{delphi:2,opal,aleph} and the quasi-oscillatory behaviour of
the ratio of factorial cumulants over factorial
moments of the MD, $H_q$, when plotted as a function of its order $q$
\cite{sld,Gianini}, have been quantitatively reproduced in \cite{hq2}
in terms of a weighted superposition of two Negative Binomial Distributions
(NBD's), associated to two- and multi-jet events respectively.
A further test of this picture, in which
the simple NBD appears at a very elementary level of
investigation, is provided by the study of
samples of events with a fixed number of jets.
In \cite{delphi:3}, the {\sc Delphi}\ Collaboration has shown that a single NBD can
describe the MD's for events with a fixed number of jets for a range of
values of the jet resolution parameter $\ymin$.
This was indeed the starting point of the parametrization proposed in
\cite{hq2}.
In this letter, by extracting the ratio $H_q$ from
published MD's according to the procedure described in \cite{hq2}, we show
that the oscillations observed experimentally are larger than those
predicted by a single NBD, even after taking into account the truncation effect,
which was shown\cite{hq} to be important in the behaviour of $H_q$'s.
These results suggest that, while
hard gluon radiation plays a relevant role in the
explanation of the shoulder structure of MD's and of oscillations of the ratio
$H_q$, some other effects should be taken into account
for a detailed description of
experimental data of events with a fixed number of jets.
In this respect, it is worth recalling the interesting results obtained by the
OPAL Collaboration\cite{opalfb} on forward-backward correlations and on
the increase of transverse momentum of produced hadrons in the intermediate
multiplicity range: it has been found indeed that both effects are mainly due
to hard gluon radiation, i.e., to the superposition of events with a fixed
number of jets. However, a residual positive correlation has been found in
a sample of 2-jet events; via Monte Carlo simulations, this effect has been
associated to the combined action of superposition of events with different
quark flavours and, in the central region, to a residual effect due to
resonances' decays.
Let us recall, however, that
the presence of heavy flavours has been shown not to
affect the increase of the transverse momentum of produced hadrons in the
intermediate multiplicity range, thus suggesting that not all observables are
sensitive to the original quark flavour.
A theoretical study based on Monte Carlo simulations has first
suggested that
the study of MD's can indeed point out interesting features of particle
production in $b\bar b$ events\cite{GUV}.
Recently the {\sc Delphi}\ Collaboration has established experimentally
the sensitivity of MD's to the original quark flavour, by comparing the
MD for the full sample of events with
a sample enriched in $b\bar b$ events\cite{delphibb}.
We propose
in this letter to associate a NBD to the MD in 2-jet events of fixed flavour.
We show, after examining possible alternatives,
that the weighted superposition of two NBD's,
which we associate to $b\bar b$ and light flavoured events,
describes very well both the MD's and the ratio $H_q$ for 2-jet events; the two
NBD's have the same $k$ parameters and differ
in the average multiplicity only.
Some consequences of this fact are examined in the conclusions.
\section{MD's in $b$-jets}
The {\sc Delphi}\ Collaboration has studied the effect of quark
flavour composition on the MD in one hemisphere, by comparing the
MD for the full sample of events with
that for a sample enriched in $b\bar b$ events\cite{delphibb}:
the MD extracted from the $b\bar b$ sample
was found to be essentially identical in shape to the MD obtained for
the full sample, apart from a shift of one unit, which may be related to the
effect of weak decays of $B$-hadrons\cite{dias}.
To give a quantitative comparison of the MD's in
single hemisphere in the $b\bar b$ sample
and in the sample with a mixture of all flavours,
we have fitted both experimental MD's with a NBD and with a NBD shifted by
one or two units.
The results of the fit are shown in Table~\ref{singleb}.
A single NBD gives a poor description of the MD for the $b\bar b$ sample;
the description improves strongly if one introduces a shift by one unit, and
becomes even better with a shift by two units.
The reason is that with the shift the NBD is able to reproduce better
the head of the distribution; however the tail remains underestimated.
In this respect, one should
remember that the single NBD cannot give a good description
of the MD with all flavours in full phase space,
since it cannot reproduce the shoulder structure
due to the superposition of events with different number of
jets\cite{delphi:3}. The fact that this feature should be present for
the $b\bar b$ sample too should be verified experimentally.
\begin{table}
\caption[table:singleb]{Parameters and $\chi^2$/NDF of the
fit to experimental data\cite{delphibb} on single
hemisphere MD's for $b\bar b$ sample and for all flavours
with a single NBD and with a NBD shifted by one or two
units. }\label{singleb}
\begin{center}
\vspace{4mm}
\begin{tabular}{||c|c|c||}
\hline
& $b\bar b$ sample & all flavours \\ \hline
\multicolumn{3}{||l||}{NBD} \\ \hline
$\bar n$ & 11.67\pp 0.07 & 10.67\pp 0.02 \\
$k$ & 24\pp 2 & 14.5\pp 0.3 \\
$\chi^2$/NDF & 118/26 & 140/28 \\ \hline
\multicolumn{3}{||l||}{NBD (shift by 1 unit)} \\ \hline
$\bar n$ & 10.62\pp 0.07 & 9.63\pp 0.02 \\
$k$ & 15.8\pp 0.6 & 10.42\pp 0.2 \\
$\chi^2$/NDF & 60/26 & 64/28 \\ \hline
\multicolumn{3}{||l||}{NBD (shift by 2 units)} \\ \hline
$\bar n$ & 9.55\pp 0.07 & \\
$k$ & 10.4\pp 0.3 & \\
$\chi^2$/NDF & 20/26 & \\ \hline
\end{tabular}
\end{center}
\end{table}
In any case, interesting information on the properties of these MD's
can be extracted without using any parametrization at all. In what follows
we will consider only 2-jet events, selected with a suitable algorithm,
but the same reasoning can be carried out also for the full sample.
Let us call $p_n$, $p^b_n$ and $p^l_n$
the MD's in a single hemisphere for all events,
for $b\bar b$ events and for light flavoured (non $b \bar b$) events
respectively, and $g(z) \equiv \sum_{n=0}^{\infty} p_n z^n$, $g^b(z)$ and
$g^l(z)$ the associated generating functions.
With $\alpha$ the fraction of $b\bar b$ events, one has:
\begin{equation}
p_n = \alpha p^b_n + (1-\alpha) p^l_n
\end{equation}
i.e.,
\begin{equation}
g(z) = \alpha g^b(z) + (1-\alpha) g^l(z)
\label{peso}
\end{equation}
These relations are valid in general; {\sc Delphi}\ data and our analysis shown in
Table~\ref{singleb} suggest that $p^b_n$ is given by
$p_n$ with a shift of one unit:
\begin{equation}
p^b_n = p_{n-1} \qquad n > 0;
\qquad\qquad p^b_0 = 0; \label{pshift}
\end{equation}
i.e.,
\begin{equation}
g^b(z) = z g(z)
\label{shift}
\end{equation}
Substituting now eq.~\eref{shift} in eq.~\eref{peso}, one gets:
\begin{equation}
g(z) = \frac{1-\alpha}{1-\alpha z} g^l(z)
\end{equation}
i.e.,
\begin{equation}
g^b(z) = \Bigl[ \frac{z (1-\alpha)}{1-\alpha z} \Bigr] g^l(z)
\label{result}
\end{equation}
The MD in $b\bar b$ events
is the convolution of a shifted geometric distribution,
of average value $1/(1-\alpha)$, with the MD in light flavoured events.
The shifted geometric MD could be related to the MD of the decay products
of B hadrons in the framework of \cite{dias}.
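The algebra above can be checked numerically with a short Python sketch (an illustration only, not part of the analysis: the Poisson form chosen for $p^l_n$ and all parameter values are arbitrary). We build $p^b_n$ by convolving the shifted geometric distribution with $p^l_n$, form the mixture of eq.~\eref{peso}, and verify the one-unit shift of eq.~\eref{pshift}:

```python
import numpy as np
from math import exp, factorial

alpha = 0.22      # fraction of b-bbar events
lam = 10.0        # mean of an illustrative Poisson light-flavour MD
N = 60

# light-flavoured hemisphere MD p^l_n (an arbitrary Poisson, for illustration)
pl = np.array([exp(-lam) * lam**n / factorial(n) for n in range(N)])

# shifted geometric factor z(1-alpha)/(1-alpha z): pmf (1-alpha)*alpha^(m-1), m >= 1
geo = np.zeros(N)
geo[1:] = (1 - alpha) * alpha ** np.arange(N - 1)

# eq. (result): p^b is the convolution of the shifted geometric with p^l
pb = np.convolve(geo, pl)[:N]

# eq. (peso): total MD as a weighted superposition
p = alpha * pb + (1 - alpha) * pl

# eq. (pshift): p^b_n = p_{n-1}
assert np.allclose(pb[1:], p[:-1], atol=1e-12)
```

The check is exact up to the (negligible) truncation of the geometric tail.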
The connection of the MD in a single hemisphere to that
in full phase space is not entirely trivial, since one has to take into
account additional effects, like for instance charge conservation,
which requires that the final multiplicity be even.
Let us use the same notation for MD's as in the previous paragraph, but with
capital letters to denote the MD's in full phase space;
by taking the two hemispheres as independent (which, as suggested in
\cite{opalfb}, should be a good approximation at least for light flavours)
but applying charge conservation, we obtain:
\begin{eqnarray}
P^i(n_1,n_2) = \cases{ 2 p^i_{n_1} p^i_{n_2}
&if $n_1+n_2$ is even \cr
0 & \hbox{otherwise} \cr}
\end{eqnarray}
Here $P^i(n_1,n_2)$ is
the probability to produce $n_1$ particles in one hemisphere and
$n_2$ in the other hemisphere and $i$ denotes
either all 2-jet events (no label),
or $b\bar b$ events ($i=b$) or light flavoured
events ($i=l$). The factor 2 ensures normalization, assuming that the
$p_n^i$ do not favour the even or the odd component; in any case this
effect can be easily taken into account and results similar to those
given below are obtained.
The MD in full phase space is given by definition by:
\begin{equation}
P_n^i = \sum_{n_1=0}^n P^i(n_1, n-n_1)
\end{equation}
In terms of the generating functions, one obtains
\begin{eqnarray}
G^i(z) &=& \sum_{n=0}^{\infty} z^n P^i_n =
2 \sum_{\scriptstyle n=0 \atop\scriptstyle n \mathrm{~even}}^{\infty}
\sum_{n_1=0}^n z^{n_1} p^i(n_1) z^{n-n_1} p^i(n-n_1) \nonumber \\
&=& \bigl( [g^i(z)]^2 + [g^i(-z)]^2 \bigr)
\label{fps}
\end{eqnarray}
We can see now how the relations which we obtained in the
previous paragraph are modified going from a single hemisphere to full phase
space: by putting eq.~\eref{result} in eq.~\eref{fps}, one has
\begin{equation}
G^b(z) = z^2 \frac{ (1-\alpha)^2}{(1-\alpha z)^2} [g^l(z)]^2
+ z^2 \frac{ (1-\alpha)^2}{(1+\alpha z)^2} [g^l(-z)]^2
\label{fps2}
\end{equation}
For $\alpha$ = 0, one obtains that in full phase space
the MD in $b\bar b$ events coincides with the MD for light flavoured events
with a shift of two units, as we get $G^b(z) = z^2 G^l(z)$.
The MD for small values of $\alpha$ should not be too far from this limit, as
can be easily checked with numerical examples.
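For instance, the $\alpha = 0$ limit can be verified exactly in a few lines of Python (an illustrative sketch only; the Poisson hemisphere MD is an arbitrary choice):

```python
import numpy as np
from math import exp, factorial

N, lam = 50, 8.0
# single-hemisphere light-flavour MD (an arbitrary Poisson, for illustration)
pl = np.array([exp(-lam) * lam**n / factorial(n) for n in range(N)])

def fps(p):
    # even-odd construction: P_n = 2*sum_{n1} p_{n1} p_{n-n1} for n even, 0 otherwise
    P = 2.0 * np.convolve(p, p)
    P[1::2] = 0.0
    return P

# alpha = 0: each hemisphere of a b-bbar event is the light MD shifted by one unit
pb = np.zeros(N)
pb[1:] = pl[:-1]

Pl, Pb = fps(pl), fps(pb)
# hence in full phase space G^b(z) = z^2 G^l(z): a shift of two units
assert np.allclose(Pb[2:], Pl[:-2], atol=1e-12)
```

For small but nonzero $\alpha$ the two full-phase-space distributions remain close to this limit, as stated in the text.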
In conclusion,
going from single hemispheres to full phase space by taking into account only
charge conservation, the MD of $b\bar b$ events becomes close not to the total
MD but to the MD of light flavoured events.
The two MD's seem to have the same
characteristics, like average value and dispersion,
and the only difference should lie in
a shift of two units.
\section{MD's and $H_q$'s ratio in 2-jet events}
The analysis of MD's for events with a fixed number of jets, and in particular
2-jet events, has been performed in \cite{delphi:3},
where a single NBD has been shown to
reasonably describe the MD's for events with a fixed number of jets, for
several values of the jet resolution parameter $\ymin$
(the JADE jet-finding algorithm has been used in \cite{delphi:3}).
A comparison of {\sc Delphi}\ data with $\ymin$ = 0.02 with a single NBD
is shown in Figure~\ref{fit}a together with the residuals, i.e., the
normalized difference between data and theoretical predictions, which
point out the presence of substructures in experimental data.
In view of previous results, it is then interesting to investigate whether
these substructures can be explained in terms of
the different contribution of quark-jets with different flavours.
\begin{figure}
\begin{center}
\mbox{\begin{turn}{90}%
\epsfig{file=fig1finale.ps,bbllx=0.cm,bblly=2.cm,bburx=22cm,bbury=26.cm,width=15cm}
\end{turn}}
\end{center}
\caption[figure:fit]
{\bf a)}: charged particles' MD for two-jet events
in full phase space, $P_n$, at the $Z^0$ peak from
{\sc Delphi}\cite{delphi:3} with $\ymin$ = 0.02 are
compared with the fit with a single NBD as
performed by {\sc Delphi}\ Collaboration (solid lines);
the lower part of the figure shows the residuals, $R_n$, i.e.,
the difference between data and theoretical predictions,
expressed in units of standard deviations.
The even-odd effect has been taken into account (see eq.~\protect\eref{pnfps}).
{\bf b)}:
Same as in {\bf a)}, but the solid line here shows
the result of fitting eq.~\protect\eref{2par},
with parameters given in Table~\protect\ref{fits}.
{\bf c)}:
Same as in {\bf a)}, but the solid line here shows
the result of fitting eq.~\protect\eref{poisson},
with parameters given in Table~\protect\ref{fits}.}
\label{fit}
\end{figure}
\begin{table}
\caption[table:fits]{Parameters and $\chi^2$/NDF of
the fit to experimental data on 2-jet events
MD's from {\sc Delphi}\cite{delphi:3} with three different MD's:
the weighted superposition of a NBD and a
shifted NBD with the same parameters (eq.~\eref{2par}),
the weighted superposition of a Poisson plus a
shifted Poisson (eq.~\eref{poisson})
and the weighted superposition of two NBD's with the
same $k$ (eq.~\eref{3par}).
The weight used is the fraction of $b\bar b$ events.
The even-odd effect has been taken into account (see eq.~\protect\eref{pnfps}).
Results are shown for different values of the jet-finder parameter $\ymin$.}
\label{fits}
\begin{center}
\begin{tabular}{||c|c|c|c||}
\hline
& $\ymin$ = 0.01 & $\ymin$ = 0.02 & $\ymin$ = 0.04 \\ \hline
\multicolumn{4}{||l||}{NBD + shifted NBD (same $\bar n$ and $k$)
eq.~\eref{2par}} \\ \hline
$\bar n$ & 17.17\pp 0.05 & 18.01\pp 0.04 & 18.99\pp 0.05 \\
$k$ & 69\pp 5 & 57\pp 3 & 44\pp 1 \\
$\chi^2$/NDF & 18.2/17 & 27.9/17 & 53.9/21 \\ \hline
\multicolumn{4}{||l||}{Poisson + shifted Poisson eq.~\eref{poisson}} \\ \hline
$\bar n_1$ & 19.67\pp 0.13 & 21.10\pp 0.10 & 22.83\pp 0.09 \\
$\bar n_2$ & 16.37\pp 0.07 & 16.95\pp 0.06 & 17.51\pp 0.06 \\
$\chi^2$/NDF & 21.7/17 & 20.75/17 & 45.05/21 \\ \hline
\multicolumn{4}{||l||}{2 NBD (same $k$) eq.~\eref{3par}} \\ \hline
$\bar n_l$ & 16.81\pp 0.21 & 17.22\pp 0.15 & 17.98\pp 0.15 \\
$\bar n_b$ & 20.26\pp 1.71 & 21.96\pp 1.57 & 23.61\pp 1.64 \\
$k$ & 124\pp 51 & 145\pp 53 & 120\pp 33 \\
$\chi^2$/NDF & 17.4/16 & 12.6/16 & 27.5/20 \\
$\delta_{bl}$ & 3.44\pp 0.83 & 4.6\pp 0.5 & 5.6\pp 0.5 \\ \hline
\end{tabular}
\end{center}
\end{table}
We parametrize the
experimental data on MD's for 2-jet events
in full phase space in $e^+e^-$
annihilation at the $Z^0$ peak\cite{delphi:3}
in terms of the superposition of 2 NBD's,
associated to the contribution of
$b$- and light flavours (as a first approximation, we include charm
among the light flavours).
We fix therefore the weight parameter
to be equal to the fraction of $b \bar b$ events,
$\alpha$ = 0.22\cite{rb}.
Following the results of the previous section,
we ask that the NBD associated to the $b$ flavour be shifted by two units and
that both parameters of the two NBD's, $\bar n$ and $k$, be the same.
Formally, we perform then a fit with the following 2-parameter distribution:
\begin{equation}
P_n(\bar n, k) = \alpha
P_{n-2}^{\mathrm{NB}}(\bar n, k) + (1 - \alpha ) P_n^{\mathrm{NB}}(\bar n, k)
\label{2par}
\end{equation}
where $P_{n-2}^{\mathrm{NB}}(\bar n, k) = 0$ for $n < 2$.
$P_n^{\mathrm{NB}}(\bar n,k)$ is here the NBD,
expressed in terms of two parameters, the average multiplicity $\bar n$
and the parameter $k$, linked to the dispersion by
$D^2/\bar n^2 = 1/\bar n + 1/k$, as:
\begin{equation}
P_n^{\mathrm{NB}}(\bar n, k) = \frac{k(k+1)\dots (k+n-1)}{n!}
\left( \frac{k}{\bar n +k} \right)^k
\left( \frac{\bar n}{\bar n + k} \right)^n
\end{equation}
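For reference, the NBD and the quoted dispersion relation can be checked with a few lines of Python (an illustration; the parameter values are of the order of those found in the fits, but otherwise arbitrary):

```python
import numpy as np

def nbd(nbar, k, N=200):
    """P_n^NB(nbar, k) via the stable recurrence
    P_n = P_{n-1} * (k+n-1)/n * nbar/(nbar+k)."""
    P = np.empty(N)
    P[0] = (k / (nbar + k)) ** k
    for n in range(1, N):
        P[n] = P[n - 1] * (k + n - 1) / n * nbar / (nbar + k)
    return P

P = nbd(18.0, 57.0)
n = np.arange(len(P))
mean = (n * P).sum()
D2 = (n**2 * P).sum() - mean**2                        # dispersion squared
assert abs(mean - 18.0) < 1e-9                         # <n> = nbar
assert abs(D2 / mean**2 - (1/18.0 + 1/57.0)) < 1e-9    # D^2/nbar^2 = 1/nbar + 1/k
```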
As far as MD's in full phase space are concerned, one has also
to take care of the ``even-odd'' effect,
i.e., of the fact that the total number of final charged particles
must be even due to charge conservation;
accordingly, the actual form used in the fit procedure is given by:
\begin{equation}
P_n^{fps} = \cases{ A P_n
&if $n$ is even \cr
0 & \hbox{otherwise} \cr}
\label{pnfps}
\end{equation}
where $A$ is the normalization
parameter, so that $\sum_{n=0}^{\infty} P_n^{fps} = 1$.
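A sketch of the full fit distribution, eq.~\eref{2par} with the even-odd correction of eq.~\eref{pnfps}, in Python (for illustration only; the parameter values are of the order of the fitted ones, not the actual fit):

```python
import numpy as np

def nbd(nbar, k, N=80):
    # negative binomial via P_n = P_{n-1} * (k+n-1)/n * nbar/(nbar+k)
    P = np.empty(N)
    P[0] = (k / (nbar + k)) ** k
    for n in range(1, N):
        P[n] = P[n - 1] * (k + n - 1) / n * nbar / (nbar + k)
    return P

alpha = 0.22                   # fraction of b-bbar events
nbar, k = 18.0, 57.0           # illustrative values
Pnb = nbd(nbar, k)
Pb = np.zeros_like(Pnb)
Pb[2:] = Pnb[:-2]              # the same NBD shifted by two units

P = alpha * Pb + (1 - alpha) * Pnb          # eq. (2par)

# even-odd correction, eq. (pnfps): keep even multiplicities, renormalize
Pfps = np.where(np.arange(len(P)) % 2 == 0, P, 0.0)
Pfps /= Pfps.sum()

# before the even-odd correction, the mean of the mixture is nbar + 2*alpha
assert abs((np.arange(len(P)) * P).sum() - (nbar + 2 * alpha)) < 1e-6
```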
Table~\ref{fits} (first group)
shows the two parameters $\bar n$ and $k$ of eq.~\eref{2par}
for different values of the jet resolution parameter $\ymin$;
the proposed parametrization gives a rather good description with only two
parameters; the agreement is worse for $\ymin$ = 0.04,
which could be due to the contamination of 3-jet events.
However, as shown in
Figure~\ref{fit}b for $\ymin$ = 0.02,
the oscillatory structure in the residuals does not disappear
with this parametrization.
Let us recall that the values of $\chi^2$/NDF shown in the Table
should be considered just indicative,
since we did not know the full covariance matrix and we
could not then treat properly the
correlations between different channels of the MD.
This forbids also a direct comparison of the $\chi^2$/NDF of the present
parametrization with the values of $\chi^2$/NDF for a single NBD
obtained in \cite{delphi:3}, where correlations between bins
were taken into account.
Finally, let us mention that we also fitted eq.~\eref{fps} by assuming that
$p^l_n$ is a NBD, and found the same results.
It is interesting at this point to investigate
a minimal model where no physical correlations are present
at the level of events with fixed number of jets and fixed flavour: we have
performed a fit using a weighted sum of a shifted
Poisson plus a Poisson distribution
(with the correction for the even-odd effect according to eq.~\eref{pnfps}):
\begin{equation}
P_n(\bar n_1, \bar n_2) = \alpha
P_{n-2}^{\mathrm{P}}(\bar n_1) + (1 - \alpha ) P_n^{\mathrm{P}}(\bar n_2)
\label{poisson}
\end{equation}
where
\begin{equation}
P_n^{\mathrm{P}}(\bar n) = \frac{\bar n^n}{n!} e^{-\bar n}
\end{equation}
Also in this case, we have two free parameters.
Results of the fit are shown in Table~\ref{fits} (second group);
also in this case a
reasonable fit is achieved, even though the MD at $\ymin=0.04$
shows again some anomaly.
The MD with the parameters shown in Table~\ref{fits}
is compared to experimental data with $\ymin$ = 0.02 in
Figure~\ref{fit}c; it should be pointed out that the
structure in the residuals is present in this case too. From
this analysis, one would then conclude that physical
correlations visible in
$e^+e^-$ annihilation result trivially from the superposition of samples of
events with different quark flavours.
A more accurate analysis with full covariance matrix
is needed to see which parametrization is preferred by experimental data.
However, independent of the chosen parametrization,
two different components,
which can be associated to $b$- and light flavours contributions,
are visible in the MD of 2-jet events.
A more detailed analysis of the tail of the MD, which can help distinguish
different parametrizations, comes from the
study of the ratio of unnormalized factorial cumulants over
factorial moments
\begin{equation}
H_q = \frac{\tilde K_q}{\tilde F_q} \label{hmom}
\end{equation}
as a function of the order $q$.
The factorial moments, $\tilde F_q$, and
factorial cumulant moments, $\tilde K_q$, can be obtained from the MD,
$P_n$, through the relations:
\begin{equation}
\tilde F_q = \sum_{n=q}^{\infty} n(n-1)\dots(n-q+1) P_n , \label{facmom}
\end{equation}
and
\begin{equation}
\tilde K_q = \tilde F_q -
\sum_{i=1}^{q-1} {q-1 \choose i} \tilde K_{q-i} \tilde F_i .
\label{faccum}
\end{equation}
Since the $H_q$'s were shown to be sensitive to the truncation of the
tail due to the finite statistics of data samples\cite{hq}, moments have to be
actually extracted from a truncated MD defined as follows
(including again the correction for the even-odd effect as
in eq.~\eref{pnfps}):
\begin{equation}
P_n^{trunc} = \cases{ A' P_n
&if ($n_{min} \le n \le n_{max}$)
and \ $n$ is even \cr
0 & \hbox{otherwise} \cr}
\label{pntrunc}
\end{equation}
Here $n_{min}$ and $n_{max}$ are the minimum and the maximum observed
multiplicity and $A'$ is a new normalization parameter.
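The moment extraction can be sketched in Python using eqs.~\eref{facmom} and \eref{faccum} (an illustration, not the code used in the analysis; the Poisson check exploits the fact that all Poisson cumulants of order $q \ge 2$ vanish):

```python
import numpy as np
from math import comb, exp, factorial

def Hq(P, qmax=8):
    """H_q = K~_q / F~_q for q = 2..qmax, from an MD P_n."""
    n = np.arange(len(P), dtype=float)
    F = np.empty(qmax + 1)
    F[0] = P.sum()
    ff = np.ones(len(P))
    for q in range(1, qmax + 1):
        ff = ff * (n - (q - 1))        # falling factorial n(n-1)...(n-q+1)
        F[q] = (ff * P).sum()          # eq. (facmom)
    K = np.empty(qmax + 1)
    K[1] = F[1]
    for q in range(2, qmax + 1):       # eq. (faccum)
        K[q] = F[q] - sum(comb(q - 1, i) * K[q - i] * F[i] for i in range(1, q))
    return K[2:] / F[2:]

# sanity check: for an (essentially untruncated) Poisson, H_q = 0 for q >= 2
lam = 10.0
pois = np.array([exp(-lam) * lam**n / factorial(n) for n in range(80)])
assert np.all(np.abs(Hq(pois)) < 1e-8)
```

Applying the same function to a sharply truncated MD, as in eq.~\eref{pntrunc}, makes nonzero, oscillating $H_q$'s appear, which is the truncation effect discussed in \cite{hq}.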
In Figure~\ref{hqfit} the $H_q$'s extracted from the
experimental MD published by {\sc Delphi}\ Collaboration~\cite{delphi:3}
(here $\ymin$ = 0.02) with the procedure explained in~\cite{hq2}
are compared with the predictions of a single NBD as fitted by
{\sc Delphi}\ Collaboration~\cite{delphi:3}, and
of eq.s~\eref{2par} and \eref{poisson}.
It is clear that all three parametrizations fail to describe the experimental
behaviour of the ratios $H_q$, i.e., the description of the tail of the MD is
not accurate.
We then conclude that a single NBD cannot describe
accurately the MD in 2-jet events, as already suggested by the study of
residuals of MD's. The superposition of two NBD's
with the same parameters turns out also to be inadequate; one concludes
that the imposed constraints are too strong and that
some additional differences between $b\bar b$ and light flavoured
events should be allowed.
Finally, the observed deviation from the Poisson-like fit suggests
that there are indeed dynamical correlations beyond the purely
statistical ones.
\begin{figure}
\begin{center}
\mbox{\epsfig{file=fig2finale.ps,bbllx=4.cm,bblly=3.cm,bburx=16cm,bbury=27.cm,height=20cm}}
\end{center}
\vspace{-0.5cm}
\caption[figure:hqfit]{The ratio of factorial cumulant over factorial moments,
$H_q$, as a function of $q$.
{\bf a)}: Experimental data (diamonds) for 2-jet events with $\ymin$ = 0.02
are compared with the fit with a single NBD as
performed by {\sc Delphi}\ Collaboration (solid lines).
{\bf b)}:
Same as in {\bf a)}, but the solid line here shows
the result of fitting eq.~\protect\eref{2par},
with parameters given in Table~\protect\ref{fits}.
{\bf c)}:
Same as in {\bf a)}, but the solid line here shows
the result of fitting eq.~\protect\eref{poisson},
with parameters given in Table~\protect\ref{fits}.
In all three cases the even-odd and the truncation effects have
been taken into account (see eq.~\protect\eref{pntrunc}).}
\label{hqfit}
\end{figure}
\section{A new parametrization of MD's in 2-jet events}
In the previous
paragraph, the difference between the average multiplicity
in $b\bar b$ and light flavoured events in
the parametrization~\eref{2par} has been fixed
to 2; by using the parametrization~\eref{poisson}, where this difference is not a priori
constrained, larger values have been obtained. Let us also recall that
the experimental value of this observable at the $Z^0$ peak
is close to 2.8\cite{opalmult};
theoretical predictions in the framework of Modified Leading Log Approximation
plus Local Parton Hadron Duality\cite{MLLAmult} give even larger values.
It is therefore interesting to investigate whether one can reproduce not only
the shape of the MD, but also its tail and then the ratio $H_q$, by using a
superposition of two NBD's, but relaxing the constraint on the average
multiplicities. The only constraint we impose is
that the parameters $k$ of the two NBD's
be the same, while we allow a variation of the difference between
the average multiplicities. For the sake of simplicity and in order to be more
independent from any theoretical prejudice, we do not include any shift in the
MD for $b\bar b$ events.
Formally, we perform then a fit
with the following 3-parameter MD (plus the correction for the even-odd effect
in eq.~\eref{pnfps}):
\begin{equation}
P_n(\bar n_l, \bar n_b, k) = \alpha
P_n^{\mathrm{NB}}(\bar n_b, k) + (1 - \alpha ) P_n^{\mathrm{NB}}(\bar n_l, k)
\label{3par}
\end{equation}
The parameters of the fits and the corresponding $\chi^2$/NDF
are given in Table~\ref{fits} (third group)
for different values of the resolution parameter $\ymin$.
A really accurate description of experimental data is achieved.
Notice that the best-fit value for the difference between the average
multiplicities in the two samples, $\delta_{bl}$,
also given in Table~\ref{fits}, is quite large. This difference
grows with increasing $\ymin$, i.e., with increasing contamination
of 3-jet events.
\begin{figure}
\begin{center}
\mbox{\begin{turn}{90}%
\epsfig{file=fig3finale.ps,bbllx=0.cm,bblly=2.cm,bburx=22.cm,bbury=26.cm,width=15cm}
\end{turn}}
\end{center}
\caption[figure:fit3par]{
Charged particles' MD for two-jet events in full phase space, $P_n$, at the
$Z^0$ peak from
DELPHI\protect\cite{delphi:3} with different values of $\ymin$ are
compared with a fit with the sum of 2 NBD's with the same parameter $k$
as in eq.~(\ref{3par}). The even-odd effect has
been taken into account (see eq.~\protect\eref{pnfps}).
The lower part of the figure shows the residuals, $R_n$, i.e.,
the difference between data and theoretical predictions,
expressed in units of standard deviations.}
\label{fit3par}
\end{figure}
Figure~\ref{fit3par} compares the predictions of eq.~\eref{3par}
with the experimental MD's for two-jet events at
different values of the resolution parameter $\ymin$.
The residuals are also shown in units of standard deviations.
One concludes that
the proposed parametrization can reproduce the experimental data on MD's
very well; no structure is visible in the residuals.
As already discussed, the ratio $H_q$ gives a more stringent test of
theoretical parametrizations; it is then interesting to study the predictions
of eq.~\eref{3par} for this ratio.
In this case, one can obtain a closed expression for the factorial moments
in terms of the parameters $\delta_{bl}$, $\bar n_l$ and $k$.
Let us notice indeed that, since the two components are given by a NBD with the
same $k$, they have the same normalized factorial moments, which for a NBD are
given by:
\begin{equation}
F_q^{(l)} = F_q^{(b)} = \prod_{i=1}^{q-1} \bigl( 1 + \frac{i}{k} \bigr)
\end{equation}
From eq.~\eref{3par}, one obtains a similar relation for the generating
function:
\begin{equation}
G(z) = \alpha G^{(b)}(z) + (1-\alpha) G^{(l)}(z)
\end{equation}
By differentiating
the previous equation, one then gets the following expression for
the unnormalized factorial moments, $\tilde F_q$:
\begin{equation}
\bar n = \tilde F_1 = \bar n_l + \alpha \delta_{bl}
\end{equation}
\begin{eqnarray}
\tilde F_q &=& \alpha \tilde F_q^{(b)} + (1-\alpha) \tilde F_q^{(l)} \\
&=& \biggl[ \alpha (\bar n_l + \delta_{bl})^q + (1-\alpha)
{\bar n_l}^q \biggr] \,\,
\prod_{i=1}^{q-1} \left( 1 + \frac{i}{k} \right) \nonumber
\end{eqnarray}
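This closed expression is also easy to verify numerically against a direct summation over the mixed distribution (an editorial check; again, the weight, shift and $k$ used below are illustrative, not the fitted values).

```python
import math

def nbd_pmf(n, nbar, k):
    """Negative binomial probability P_n with mean nbar and parameter k."""
    return math.exp(
        math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
        + n * math.log(nbar / (nbar + k)) + k * math.log(k / (nbar + k))
    )

# Illustrative parameters (NOT fitted values from the paper):
alpha, k = 0.22, 7.0        # b-bbar event fraction and common NBD parameter
nbar_l, delta = 18.0, 3.0   # light-flavour mean and shift delta_bl
nbar_b = nbar_l + delta

def ftilde(q, nmax=3000):
    """Unnormalized factorial moment of the two-NBD mixture, by direct summation."""
    tot = 0.0
    for n in range(q, nmax):
        falling = math.prod(range(n - q + 1, n + 1))   # n(n-1)...(n-q+1)
        p = alpha * nbd_pmf(n, nbar_b, k) + (1 - alpha) * nbd_pmf(n, nbar_l, k)
        tot += falling * p
    return tot

for q in range(1, 5):
    closed = (alpha * nbar_b**q + (1 - alpha) * nbar_l**q) \
             * math.prod(1 + i / k for i in range(1, q))
    assert abs(ftilde(q) - closed) < 1e-6 * closed
```

Note that the $q=1$ case reduces to $\bar n = \bar n_l + \alpha\,\delta_{bl}$, as in the first moment equation above.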
\begin{figure}
\begin{center}
\mbox{\epsfig{file=fig4finale.ps,bbllx=4.cm,bblly=3.cm,bburx=16cm,bbury=27.cm,height=20cm}}
\end{center}
\caption[figure:hq2nbd]{The ratio of factorial cumulant
over factorial moments, $H_q$, as a function of $q$;
experimental data (diamonds) for 2-jet events
with different values of $\ymin$
are compared with equation~\protect\eref{3par} (solid lines).
The parameters used are shown in Table~\protect\ref{fits}.
The even-odd and the truncation effects have
been taken into account (see eq.~\protect\eref{pntrunc}).}
\label{hq2nb}
\end{figure}
Predictions of the ratio $H_q$ as a function of the order $q$ are obtained by
inserting eq.~\eref{3par} into eq.~\eref{pntrunc}. These predictions
with parameters fitted to reproduce the MD as given in Table~\ref{fits}
(third group) are compared in Figure~\ref{hq2nb}
with the $H_q$'s extracted from experimental data on MD's
for 2-jet events\cite{delphi:3} at different values of
$\ymin$ according to the procedure described in \cite{hq2}.
The new parametrization gives an accurate description of the shape
of MD's and also describes the ratio $H_q$ well.
Small deviations are still present for $\ymin=0.01$, where
2-jet events are more collimated. They might be due to subtler effects
that are not yet understood. This consideration notwithstanding,
the overall description of 2-jet MD's and $H_q$'s appears quite satisfactory.
This result gives therefore further support to
the parametrization of the MD in quark-jets with fixed flavour
in terms of a single NBD.
It is also remarkable that the average number of particles
only depends on flavour quantum numbers, whereas the NBD parameter $k$ is
flavour-independent.
As a further check, we also investigated the MD of 2-jet
events with fixed flavour in the
Monte Carlo program {\sc Jetset}\ 7.4 PS\cite{jetset}.
For each flavour, we generated a sample of 60000 events and of 60000 2-jet
events (selected using the JADE algorithm with $\ymin=0.02$)
by using the OPAL tuning\cite{opaltuning} and we fitted the MD's
in full phase space with a
single NBD, possibly with a finite shift.
In the all-events sample, the $\chi^2$/NDF is very poor, indicating that
a single NBD cannot describe the MD of events with fixed flavour.
With the 2-jet selection, the description improves markedly;
the MD's for light quarks are indeed well reproduced by a single NBD,
while $b\bar b$ events are better described by a shifted NBD.
\section{Conclusions}
It has been shown that a single NBD cannot reproduce the observed behavior
of the ratio $H_q$ for events with a fixed number of jets in full phase space in
$e^+e^-$ annihilation at the $Z^0$ peak.
A simple phenomenological parametrization of the MD in terms
of the weighted superposition of two NBD's
has been shown to describe
simultaneously both the MD's and the ratio $H_q$.
The weight of the first component was
taken equal to the fraction of $b \bar b$ events, i.e.,
the two components were identified with the $b$- and
the light-flavour contributions, respectively.
The simple NBD parametrization
is thus reestablished at the level of 2-jet events with fixed quark flavour
composition.
It is interesting to note that
this result is consistent with the results obtained in the context
of a thermodynamical model of hadronization\cite{bgl}.
It is remarkable that the two NBD's
associated with the $b$- and light-flavour contributions
have the same parameter $k$;
since $k^{-1}$ is the second order normalized factorial cumulant,
i.e., it is directly related to two-particle correlations, one concludes
that two-particle correlations are flavour-independent in this approach.
In addition, since both MD's are well described by a NBD, higher order
correlations show a hierarchical structure\cite{LPA}, which is
also flavour independent.
This result can also be interpreted
in the framework of clan structure analysis\cite{AGLVH:1}, where
$k^{-1}$ acquires the meaning of an aggregation coefficient, being
the ratio of the probability to have two particles in the
same clan over the probability to have the two particles in two
separate clans: in this language, one concludes that
the aggregation among particles produced into clans
in $b\bar b$ and light flavoured events turns out to be the same.
The flavour quantum numbers then affect only
the average multiplicity of the corresponding jets, but
not the structure of particle correlations.
It would be interesting to see, when appropriate samples of events
become available, whether this property established in
full phase space continues to be valid in restricted regions of it.
\section{Acknowledgements}
Useful discussions with W. Ochs are gratefully acknowledged.
\newpage
\section{Introduction}
Over the last twenty years our understanding of structure formation
has benefitted substantially from numerical N-body simulations. It is
now clear that such simulations provide a robust and efficient, though
sometimes computationally expensive tool to obtain approximate
solutions for the evolution of a collisionless self-gravitating
``fluid'' from cosmologically relevant initial conditions
(Efstathiou et al 1985). Although such cosmological N--body
simulations can now be performed using particle numbers in excess of
$10^7$, and can follow density contrasts over a range of $10^6$, this
still turns out to be barely sufficient to address questions related
to the formation of galaxies in a proper cosmological context. Current
observational data appear to favour models in which structure is built up
hierarchically, and so it is of particular interest to ask how
numerical resolution affects the mass hierarchy at the low mass end.
It appears that while the detailed inner structure of halos can only
be analyzed reliably if they contain at least several hundred particles,
the distribution of halos and their total masses are reasonably
represented for halos with ten or more particles (e.g. Efstathiou et
al 1988). This convergence is of major importance because it means
that simulations performed with $\sim10^7$ particles can reliably
resolve the halos of galaxies like the Milky Way while simultaneously
covering a large enough region (a few hundreds of Mpc) to allow a
statistical comparison with galaxy surveys. The most serious
limitation of such models is the fact that the formation of real galaxies
was clearly strongly affected by a wide variety of physical processes
which are not included in the simulations.
Within the last few years it has become possible to include a number
of additional
processes in such models by following, in addition to the dark matter,
a dissipative gaseous component. However, even in the simplest case of a
nonradiative, non--star--forming gas, the combined dark matter/gas
system exhibits much more complex behaviour than a pure N--body
system. In addition, such simulations are more CPU--intensive and so
typically have poorer resolution than the best N--body simulations.
Our current understanding of the effects of this limited resolution is
still quite rudimentary. Systematic analyses of the convergence of such
simulations are only just
beginning (Kang et al 1994; Frenk et al, in preparation).
In this paper we investigate how the finite dark
matter particle mass affects the dynamics and the cooling capabilities of the
gas. We show that discreteness effects
give rise to a steady energy flow from the dark matter to
the gas. This heating is strong enough to affect the structure of the
gas within any halo made up of fewer than 1000 dark matter particles.
The outline of our paper is the
following. In the next section we derive an analytic formula for the
rate at which discreteness effects transfer energy from the dark
matter to the gas. We compare this with the expected radiative cooling
rate, we discuss how these rates scale with physical and simulation
parameters, and we draw some first conclusions about how numerical
simulations should be designed. In section 3 we check this analytic
theory using a set of numerical simulations based on smoothed particle
hydrodynamics. Simulations with and without radiative cooling are
investigated separately. Section 4 discusses and summarizes our results.
\section{Analytic theory}
Consider a fluid element of mass $m_{\rm g}$ and density $\varrho_{\rm
g}$ which is at rest. This fluid element encounters a dark matter particle
of mass $M_{\rm DM}$ and relative velocity $v$ with a closest approach
distance $b$. In the impulse approximation (see, {e.g.,} \thinspace Binney \&
Tremaine 1987), the fluid element is accelerated to velocity
\begin{equation}
\Delta v = \frac{2\,G\,M_{\rm DM}}{b\,v}\, ,
\end{equation}
or to a corresponding kinetic energy
\begin{equation}
\Delta E = \frac{2\,G^2\,M^2_{\rm DM}\,m_{\rm g}}{b^2\,v^2}\, .
\end{equation}
This energy is dissipated as heat by shocks, by artificial
viscosity, or by an adiabatic expansion of the
gas to a new equilibrium state. Such encounters
occur with a rate $2\pi\,v\,b\,db\,n_{\rm DM}$, so the heating rate
can be written as
\begin{equation}
\left.\frac{dE}{dt}\right|_{\rm heat} =
\int\,d^3v\,f(v)\,\int_{b_{\rm min}}^{b_{\rm max}}\,2\,\pi\,db\,
\frac{2\,G^2\,M_{\rm
DM}\,\varrho_{\rm DM} m_{\rm g}}{b\,v}\, ,
\end{equation}
where $f(v)$ is the velocity distribution function for the dark matter
particles. Assuming this to be Maxwellian, we obtain after
the evaluation of the integrals
\begin{equation}
\left.\frac{dE}{dt}\right|_{\rm heat} =
\sqrt{\frac{32\,\pi}{3}}\,G^2\,\ln\Lambda\,\frac{M_{\rm DM}\,m_{\rm
g}\,\varrho_{\rm DM}}{\sigma_{\rm 1D}}\, ,
\end{equation}
$\sigma_{\rm 1D}$ being the 1D velocity dispersion of the dark matter
and $\ln\Lambda$ the Coulomb logarithm. For typical galaxy formation
experiments $\ln\Lambda$ is in the range 3 to 7.
In an equilibrium system the internal energy of the fluid element is
$E\sim 3 m_{\rm g}\sigma_{\rm 1D}^2 /2$, so we can define a
characteristic heating time by $t_h = E\big/ (dE/dt)$, or
\begin{equation}
t_h =
\sqrt{\frac{27}{128\,\pi}}\,\frac{\sigma_{\rm
1D}^3}{G^2\,\ln\Lambda\,M_{\rm DM}\,\varrho_{\rm DM}}\, .
\end{equation}
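As a quick consistency check of the numerical prefactors (an editorial aside), equations (4) and (5) are related by $t_h = E/(dE/dt)$ with $E \sim 3\,m_{\rm g}\,\sigma_{\rm 1D}^2/2$, which fixes the prefactor of equation (5) to be $3/2$ divided by that of equation (4):

```python
import math

heating_prefactor = math.sqrt(32.0 * math.pi / 3.0)   # numerical factor in eq. (4)
t_h_prefactor = math.sqrt(27.0 / (128.0 * math.pi))   # numerical factor in eq. (5)

# t_h = E / (dE/dt) with E = 3 m sigma^2 / 2 requires
# t_h_prefactor == 1.5 / heating_prefactor:
assert abs(t_h_prefactor - 1.5 / heating_prefactor) < 1e-12
```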
For an equilibrium dark matter dominated halo we can get a more
transparent formula by comparing the heating rate near the halo
half-mass radius $R_h$ to the circular orbit period at this radius
$t_c$. For a halo made up of $N$ dark matter particles in total, the
definitions and approximate relations, $N\,M_{\rm DM}/2\approx 2
R_h\,{\sigma_{\rm 1D}^2/G}$, $\varrho_{\rm DM}\approx N\,M_{\rm DM}/8\,\pi\,R_h^3$,
and $t_c = 2\,\pi\,R_h/\sqrt{2}\sigma_{\rm 1D}$, allow us to cast equation
(5) in the form
\begin{equation}
\frac{t_h}{t_c} =
\sqrt{\frac{3}{\pi}}\,\frac{3\,N}{32\,\ln\Lambda}.
\end{equation}
Thus for halos with around 50 dark matter particles the heating time
for gas at the half-mass radius is comparable to the orbital
period at that radius, and so to the halo formation time. The
gas distribution within such halos will clearly never be free from
substantial two-body heating effects. In a cosmological
simulation, even a halo of $10^3$ dark matter particles will typically
have existed for several nominal formation times,
and so will suffer 10 to 20\% effects near its half-mass radius.
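The numbers just quoted follow directly from equation (6); a short numerical check (editorial, with the Coulomb logarithm set to 5, a representative value within the range 3 to 7 quoted above) reproduces them:

```python
import math

def heating_to_orbit_ratio(N, lnLambda=5.0):
    """t_h / t_c of equation (6): sqrt(3/pi) * 3 N / (32 ln Lambda)."""
    return math.sqrt(3.0 / math.pi) * 3.0 * N / (32.0 * lnLambda)

# A ~50-particle halo: heating time comparable to the orbital period ...
assert 0.9 < heating_to_orbit_ratio(50) < 1.0
# ... while a 1000-particle halo survives ~18 orbital periods before being
# strongly heated, i.e. a 10-20% effect after a few formation times.
assert 18.0 < heating_to_orbit_ratio(1000) < 19.0
```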
Because of the strong $\varrho_{\rm DM}$ dependence of equation (5),
effects in the inner regions will be substantially stronger.
A worrying aspect of these results is that in hierarchical
clustering all massive objects build up through the
aggregation of smaller collapsed systems; two-body heating must be
important in the first generations of halos which form in any
simulation, and it is unclear how well such early artifacts will be
eliminated by later evolution.
Let us now compare the two-body heating rate with the radiative
cooling rates expected for the gas in a realistic simulation:
\begin{equation}
\left.\frac{dE}{dt}\right|_{\rm cool} = \varrho_{\rm g}\,m_{\rm g} (N_{\rm
A}\,X_{\rm H})^2 \Lambda(T)
\end{equation}
$N_{\rm A}$, $X_{\rm H}$, and $\Lambda(T)$ being Avogadro's constant,
the total mass fraction of hydrogen and the cooling function,
respectively. A comparison of equations (4) and (7) shows that two-body
heating will dominate over radiative cooling if
\begin{equation}
\sqrt{\frac{32\,\pi}{3}}\frac{G^2\,\ln\Lambda\,M_{\rm DM}\,
\varrho_{\rm DM}}{\varrho_{\rm g}(N_{\rm A}\,X_{\rm H})^2
\Lambda(T)\sigma_{1D}}
> 1\, .
\end{equation}
Since heating and cooling are two--body processes they both scale
as $\varrho^2$. If we define $f\equiv\varrho_{\rm g}/\varrho_{\rm DM}$
and use convenient units for other quantities, we find that this
inequality is equivalent to
\begin{equation}
M_{\rm DM} < M_{\rm crit} \equiv
4~10^9\,\sigma_{100}\,(\ln\Lambda)_5\,f^{-1}_{0.05}\,\Lambda_{-23},
\end{equation}
where $\sigma_{100}$ is the 1D velocity dispersion in units of
100\,km\,s$^{-1}$,
$f_{0.05}$ the local baryon fraction divided by 0.05, $(\ln\Lambda)_5$ the
Coulomb logarithm divided by 5, and $\Lambda_{-23}$ the cooling function in
units of $10^{-23}\,$erg\,cm$^{-3}$ per (H atom cm$^{-3}$)$^2$,
respectively; here we have assumed $X_{\rm H} = 0.76$.
Identifying $\sigma_{1D}$ with the
corresponding virial temperature, {i.e.,} \thinspace $\sigma_{100}=1.2\sqrt{T_6}$,
where $T_6$ is temperature in units of $10^6$\,K, we can write
this critical mass in the alternative form
\begin{equation}
\label{masscrit}
M_{\rm crit} =
5~10^9\,\mbox{M$_{\hbox{$\odot$}}$}\,\sqrt{T_6}\,(\ln\Lambda)_5\,f^{-1}_{0.05}\,\Lambda_{-23}.
\end{equation}
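Equation (10) is straightforward to evaluate numerically. The sketch below (an editorial aside, not part of the paper) encodes it and, with the parameters adopted in section 3 ($T_6=1.5$, $\ln\Lambda\approx 2.5$, a 5\% baryon fraction), recovers the particle mass of $7.7~10^8\,M_\odot$ quoted there as critical for $\Lambda_{-23}=0.25$:

```python
import math

def m_crit(T6, lnLambda=5.0, f=0.05, Lambda23=1.0):
    """Critical dark matter particle mass of eq. (10), in solar masses."""
    return 5e9 * math.sqrt(T6) * (lnLambda / 5.0) * (0.05 / f) * Lambda23

# Parameters of the cooling runs of section 3: T_6 = 1.5, ln Lambda ~ 2.5,
# baryon fraction 5%; the quoted critical cooling coefficient is
# Lambda_-23 = 0.25 for a particle mass of 7.7e8 solar masses.
assert abs(m_crit(1.5, lnLambda=2.5, Lambda23=0.25) - 7.7e8) < 1e7
```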
At first sight it is surprising that our critical mass turns out to be
of galactic scale; only atomic and gravitational constants contribute
to the right-hand side of the inequality in equation (8), and we will
see that the temperature dependence of equation (10) is quite weak
over the range of interest. This ``coincidence'' reflects the
well-known fact that the characteristic masses of galaxies
appear to be determined by the condition that the cooling time for gas
in a protogalactic dark halo should be comparable to the halo
formation time ({e.g.,} \thinspace Rees \& Ostriker 1977; White \& Rees 1978).
It is important to note that equation (9) is purely local and makes
no specific assumptions about hydrostatic equilibrium or about the
relative distributions of gas and dark matter. Only in equation (10)
do we implicitly adopt such assumptions when we identify the gas
temperature with the dark matter velocity dispersion. In practice,
this is a weak assumption which is approximately correct in most
situations of interest. We also note that our derivation is not
specific to any particular numerical treatment of hydrodynamics;
it depends only on the assumption that the dark matter is represented
by particles of mass $M_{\rm DM}$. Our critical mass should be
relevant for almost all the numerical methods currently in use to
carry out cosmological hydrodynamics simulations.
The arguments given above apply only if the local cooling time is
comparable to or longer than the local dynamical time. If cooling
can occur on a much shorter timescale, the gas will lose its
internal energy faster than the typical encounter time and two-body
effects will be unable to reheat it. Such ``catastrophic'' cooling is
unaffected by the process we are discussing. Another complication,
which we will not discuss further, is that the energy deposited by an
encounter may not produce local heating, but may be transported by
sound waves to other regions of the system before it is dissipated.
On the basis of this analysis, we can make the following predictions for
galaxy formation experiments:
\begin{itemize}
\item In simulations where radiative cooling is not included, energy
will be steadily transferred from the dark matter to the gas. This
will lead to a gradual expansion of the gas component in supposedly
equilibrium systems.
\item In simulations which include radiative processes but
where the mass of a dark matter particle exceeds the critical value
of equations (9) and (10), cooling will be suppressed
wherever the cooling time is comparable to
or longer than the local dynamical time ({i.e.,} \thinspace in the cooling flow regime).
\end{itemize}
\begin{figure}
\mbox{\epsfxsize=1\hsize\epsffile{mass.eps}}
\caption[]{\label{colfig}Critical mass for dark matter particles
(according to equation (\ref{masscrit})) as
a function of the virial temperature of the simulated system. The
different curves correspond to different cooling functions:
primordial composition (solid line), solar metallicity (dashed--dotted),
$\frac{1}{10}$ of solar metallicity (dashed--triple dotted), and pure
bremsstrahlung (dotted).}
\end{figure}
Let us identify $T$ with the virial temperature of a dark matter halo
and assume gas and dark matter to be distributed similarly.
For a given cooling function, we can then use equation (10) to calculate
the critical mass of dark matter particles. Figure 1 shows the result
as a function of $T$. These calculations assume a baryon fraction of
5\% but are easily scaled to other values. For a cooling function
appropriate to solar metallicity gas, the critical
dark--matter particle mass is $\sim 10^{11}\,$M$_\odot$ for all temperatures
between $10^5$ and $10^8\,$K, hence for objects ranging from dwarf
galaxy halos to rich galaxy clusters. Similarly, for a metallicity
of $0.1\,Z_\odot$, the critical mass is $\sim 10^{10}\,$M$_\odot$ for
$T$ between $2\times 10^4$ and $5\times 10^6\,$K, the whole range
relevant to galaxy halos. We therefore come to the surprising
conclusion that one should use the same dark matter particle mass
when simulating small galaxies as when simulating
galaxy clusters. For gas of primordial composition, a realistic
galaxy/cluster formation simulation requires two--body heating to
be unimportant for any object with a virial temperature in the range
$10^5$ to $10^8$\,K, implying that the dark matter particle mass should not
exceed about $2~10^9$\,M$_\odot$. A simulation of a rich cluster would
then need about half a million particles within the virial radius,
a criterion which is failed by all simulations published so
far. Nevertheless, for cluster simulations with several thousand
particles, two--body heating times are comparable to the Hubble time
only in the inner regions, so it is possible that only the core
structure of the gas is affected by numerical artifacts.
Figure 1 also shows a cooling function due to bremsstrahlung alone.
This approximates the extreme case of cooling in the presence of a
strong UV background, where collisionally excited line emission can
be almost completely suppressed ({e.g.,} \thinspace Efstathiou 1992). In this case,
$\Lambda\propto \sqrt{T}$, and so
$M_{\rm crit}\propto T$. For a dwarf galaxy ($T_{\rm vir} = 10^5\,$K),
dark matter particle masses are required to be below $10^8\,\mbox{M$_{\hbox{$\odot$}}$}$,
{i.e.,} \thinspace the galaxy halo should be represented by several hundred particles.
Finally we note that $M_{\rm crit}$ is the value of the dark matter
particle mass for which radiative cooling and artificial two--body
heating are equal. To get realistic results any numerical simulation
should use particle masses which are at least a factor of two or three
smaller than $M_{\rm crit}$.
\section{Numerical verification}
In the previous section we concluded that for parameters
typical of current galaxy formation experiments, two--body heating
can substantially affect the properties of the gas. We predict that
in simulations without cooling the gas in equilibrium systems will
slowly expand, while in simulations that include radiative losses the
gas may still be prevented from cooling where it should.
In this section we will test these predictions by
means of some numerical experiments. These were performed using the
smoothed particle hydrodynamics code GRAPESPH (Steinmetz 1996).
We choose initial conditions which are relevant for galaxy
formation simulations but which also allow easy control over
experimental parameters. In particular, we choose the initial
conditions proposed by Katz (1991): a homogeneous, overdense, and
rigidly rotating sphere of total mass $8.1\times 10^{11}\,$M$_\odot$ with
superposed small-scale density fluctuations drawn from a CDM
power spectrum. The evolution was simulated with different particle
numbers, $N= 250$, 500, 1000, 4000 and 17000. The initial conditions
for the lower resolution simulations were obtained by randomly sampling
those of the model with 17000 particles. This object collapses at
$z\sim 2$ but the simulations were followed over a Hubble time and
the structure was analysed only after $z= 1$ when gas and dark
matter have relaxed and are approximately in equilibrium.
For simulations including cooling, the cooling was switched on
only after $z = 1$. (Throughout we assume $\Omega=1$ and $H_0=50$
km~s$^{-1}\,$Mpc$^{-1}$.) The density profiles of the relaxed system have
$\varrho\propto r^{-2}$ between 20 and 90\,kpc; the profile is steeper
at larger and shallower at smaller radii, resembling the ``universal''
halo structure proposed by Navarro, Frenk \& White (1996). The gas
temperature is almost uniform within $50\,$kpc, but drops at larger radii.
\subsection{Simulations without cooling}
\begin{figure}
\mbox{\epsfxsize=1\hsize\epsffile{proftime.eps}}
\caption[]{\label{profile}Time evolution of the cumulative mass profiles for
dark matter
(upper curves) and gas (lower curves) for three different resolutions: 4000
particles in each component (left), 250 particles in each component
(middle), and 250 gas particles but 4000 dark matter particles
(right). The curves represent epochs 0~Gyr (solid), 4~Gyr (dotted) and
8~Gyr (dashed) after hydrostatic equilibrium has been established.}
\end{figure}
In Figure \ref{profile} we show the cumulative mass profiles for gas and dark
matter in simulations with differing resolution. The dark matter
profiles show no significant evolution in any of these models. (The
fluctuations at small radii for $N=250$ are just
statistical noise.) In contrast, the gas distribution expands
substantially for $N=250$, but only slightly for $N=4000$.
To demonstrate that this expansion reflects the mass of the dark
matter particles we also show a simulation with 250 gas
particles and 4000 dark matter particles, {i.e.,} \thinspace gas and dark matter particles
have roughly the same mass. In this case the expansion of the gas
component is again small, and with the exception of noise effects
at very small radii, the gas profile is
compatible with that obtained using 4000 gas particles.
\begin{figure}[t]
\mbox{\epsfxsize=1\hsize\epsffile{entro.eps}}
\caption[]{\label{entro}Initial versus final specific entropy (in units of
($N_{\rm A}\,k_{\rm b}$)) of gas particles for simulations with
(from top left to bottom right) 17000, 1000, 500, 250 particles in
each of the two components, and with 250/4000 and 1000/17000
gas/dark matter particles.}
\end{figure}
It is also instructive to look at the heating of the gas in terms of its
entropy
evolution. After hydrostatic equilibrium is established, the specific entropy
$s=\ln(T^{1.5}/\varrho)$ of a gas particle should be constant, {i.e.,} \thinspace $\frac{d}{dt}s =
0$. As shown in Figure
\ref{entro}, this is indeed the case for the $N=17000$ run. There is an
almost perfect linear relation between initial and final entropy. This also
demonstrates that for high particle numbers the amount of spuriously generated
entropy due to the artificial viscosity is negligible. Only at very large
radii is a slight scatter visible, which can be explained by some dynamical
evolution close to
the virial radius. For smaller particle numbers, an evolution in $s$ becomes
visible: when $N=1000$ only particles with the lowest
entropies are affected, while for $N<500$ the whole system
is affected. The low entropy gas exhibits the strongest evolution
because it lies near the center of the halo and so has the shortest
heating time. Comparison with the 250/4000 and 1000/17000 models
shows that entropy generation is not significantly affected
by numerical resolution in the gas component, but is determined
primarily by the mass resolution of the dark matter component: lowering
the mass of the dark matter particles at constant gas resolution
suppresses the artificial heating. Note that two--body heating also
occurs when dark matter particle masses are smaller than those of
fluid elements since, unlike the standard stellar dynamical case, the
fluid elements are at rest when the system is in equilibrium.
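For a monatomic ideal gas an adiabatic change obeys $T\propto\varrho^{2/3}$, so the entropy variable $s=\ln(T^{1.5}/\varrho)$ used above is indeed conserved in the absence of heating; a minimal illustration (editorial aside, with arbitrary numbers):

```python
import math

def specific_entropy(T, rho):
    """s = ln(T^1.5 / rho), the entropy variable used in the text."""
    return math.log(T**1.5 / rho)

# Adiabatic compression of a monatomic ideal gas: T scales as rho^(2/3),
# so s must be unchanged (purely illustrative numbers).
T1, rho1 = 1.0e6, 1.0
rho2 = 8.0
T2 = T1 * (rho2 / rho1) ** (2.0 / 3.0)
assert abs(specific_entropy(T2, rho2) - specific_entropy(T1, rho1)) < 1e-9
```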
\subsection{Simulations with cooling}
We now consider simulations which include radiative cooling. To simplify
comparison of our simulations with the predictions of our analytic
model, we use a schematic cooling function for which $\Lambda={\rm
const}$ above a lower cutoff of $10^4\,$K. As mentioned above, we
switch cooling on only after the initial relaxation phase. Thus the
phase we analyse begins with a relaxed halo, in which the gas has the
virial temperature and is in hydrostatic equilibrium within the dark
matter potential well. We choose $\Lambda$ to be
relatively small so that we can demonstrate two--body heating effects clearly
using about 1000 particles. Statistical noise then has little effect
on our conclusions. These physical conditions are similar to a
cooling flow. We took the mass of a dark matter particle to be
$7.7~10^8\,M_{\odot}$, assumed a gas fraction of 5\%, and performed
two simulations, one with $\Lambda_{-23}=0.4$, the other with
$\Lambda_{-23}=0.1$. The half-mass radius of the collapsed system is
then 60\,kpc, while the gravitational softening is 5\,kpc giving
a Coulomb logarithm of about 2.5. According to equation
(\ref{masscrit}), for the appropriate virial temperature, $T_6=1.5$, our
chosen particle mass corresponds to the critical value for a
cooling coefficient of $\Lambda_{-23}=0.25$, midway between those of
our two simulations. We expect, therefore, that in
one of these two models the gas will be able to cool while in the
other it will not. The two
simulations were run for the same number of nominal cooling time
scales, {i.e.,} \thinspace the model
with the lower cooling coefficient was run for a correspondingly longer
time. In both cases the final cooling radius should be about
23\,kpc implying that about 16 per cent of the gas should be able to
cool. For an additional comparison we also consider a second model
with $\Lambda_{-23}=0.1$ which differs from the first only in that
the dark matter is represented by 17000 particles. Thus two--body heating
effects are suppressed by a factor of 17.
\begin{figure}
\mbox{\epsfxsize=0.9\hsize\epsffile{cool.eps}}
\caption[]{\label{colmod}Density (top) and temperature (bottom) as a function
of radius for two simulations starting from identical initial
conditions but with $\Lambda_{-23}=0.4$ (left) and $\Lambda_{-23}=0.1$ (middle).
These simulations have evolved for the same number of cooling
times since virialisation. The simulation on
the right has the same number of gas particles but 17 times more
dark matter particles. Its evolution time and $\Lambda_{-23}$ value
are identical to those of the model in the middle column. The vertical
dotted line
corresponds to the radius where the cooling time of the initial model
equals the time for which cooling was allowed.
}
\end{figure}
In Figure \ref{colmod} we show density and temperature profiles for the
final states of the three simulations. In the model with
$\Lambda_{-23}=0.4$ almost all the gas within the cooling radius has started to
cool, and a large fraction has already settled at $10^4$\,K, the
cutoff of the assumed cooling function. The gas distribution responds
dynamically to this cooling. The density near the center has already
increased by three orders of magnitude. The situation is different when
$\Lambda_{-23} = 0.1$. Although the simulation has evolved for
the same number of cooling times, only those particles which were dense enough to
cool catastrophically ($t_{\rm cool} \la t_{\rm dyn}$) have cooled and
the density has only increased by an order of magnitude. Almost no gas in the
cooling-flow regime ( $t_{\rm dyn} < t_{\rm cool} < t_{\rm Hubble}$) has cooled
down and the total fraction of cooled gas is reduced by a factor of 2.
Increasing the number of dark matter particles without altering the
cooling eliminates this difference (Fig.~\ref{colmod}, right). Again
all gas within the cooling radius can cool and settles into a distribution
similar to that of the $\Lambda_{-23}=0.4$ run.
Since more dynamical
times are now available to react to the loss of pressure support,
and since the central potential cusp of the dark halo is now better
defined, the central gas density increases even more dramatically in
this case. These experiments show
impressively how two--body heating can alter the dynamics and
thermodynamics of the gas, especially in a cooling flow situation.
\section{Summary and Discussion}
We have analyzed how two--body encounters between fluid elements and dark
matter particles can lead to spurious heating of the gas component
in galaxy formation experiments. We have shown both analytically
and numerically that this process can affect not only the
thermodynamic state, but also the dynamics of the gas. Our analytic
work establishes an upper bound to the mass of a dark matter
particle for two--body heating to be subdominant when simulating
any given physical situation. Our numerical simulations show that
the predicted effect is indeed present, and that the critical mass of
equation (\ref{masscrit}) provides a reliable guide for designing
numerical experiments so that two--body heating does not cause a
qualitative change in their outcome. Simulations of cooling flows and
of galaxies forming in the presence of a strong UV background are
particularly susceptible to two-body heating, and must therefore be
designed with particular care.
Although the effect we have discussed is not
important in the catastrophic cooling regime, numerical simulations
have shown that when cooling occurs only in this regime, and no
additional physics is included,
it is not possible to make galaxy disks as large as those observed.
Disk-like objects do form but have too little angular momentum and
so are too concentrated (Navarro \& Benz 1991, Navarro \& White 1994,
Navarro \& Steinmetz 1996). Furthermore, the observed specific angular
momenta of giant spiral disks are so large that they must have formed
late and so most plausibly in the cooling flow regime ({e.g.,} \thinspace White
1991). The solution to this problem may be, as is usually claimed,
that feedback from stellar evolution is an
indispensable ingredient of galaxy formation; such feedback could
delay the condensation of gas so that most of it cools late,
and with relatively little loss of angular momentum to the dark
matter. Current semi-analytic models include a treatment of such
feedback processes and suggest that a substantial
fraction of the matter in the present galaxy population may have
condensed in the quasi-cooling-flow regime
(White \& Frenk 1991, Kauffmann, Guiderdoni \& White 1993; Fabian
\& Nulsen 1994). Thus, numerical simulations which, as a result of poor
resolution, avoided excessive catastrophic cooling at early times may
nevertheless have missed an important ingredient of galaxy formation
because of two-body heating, in particular the physics most relevant to
the formation of spiral disks.
Our findings lead us to conclude that the common
practice of using the same number of dark matter and gas particles in
cosmological and galaxy formation simulations may be unwise.
The computing time spent per gas particle is usually much higher than for
a dark matter particle, especially if a multiple time-step scheme is
used. It may, therefore, be advantageous to use several times more dark
matter particles than gas particles. The total CPU time will be only moderately
increased while two-body heating can be suppressed by a substantial factor.
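The trade-off can be made concrete with a toy cost model (our illustration; the per-particle cost ratio $c_{\rm gas} : c_{\rm dm} = 10:1$ is an assumed number, not a measurement from any particular code):

```python
# Toy cost model for the multi-resolution suggestion above (illustrative only;
# the cost ratio c_gas : c_dm = 10 : 1 is an assumed number).
def total_cost(n_gas, n_dm, c_gas=10.0, c_dm=1.0):
    """CPU time per step if each gas particle costs c_gas and each
    dark matter particle costs c_dm, with c_gas >> c_dm as stated in the text."""
    return n_gas * c_gas + n_dm * c_dm

base = total_cost(1e5, 1e5)
more_dm = total_cost(1e5, 4e5)   # 4x more dark matter particles
# Two-body heating scales with the dark matter particle mass, i.e. roughly
# as 1/N_dm, so here it is suppressed 4x while the total cost grows modestly.
print(more_dm / base)
```

With these assumed numbers the total cost rises by well under a factor of two while the dark matter particle mass, and hence the heating rate, drops fourfold.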
Many simulations of individual galaxies and clusters apply a multi-mass
technique (Porter 1985; Katz \& White 1993; Navarro \& White 1994) in which the
tidal field due to surrounding matter is represented by a relatively small
number of massive particles. These massive particles are supposed to stay
outside the high resolution region of interest. If, however, some of them
accidentally pass through a forming galaxy, our experiments demonstrate that they
may prevent cooling of low density gas. It is, therefore, important that
simulations which use such techniques should be checked to ensure that
contamination by massive particles is kept to an acceptably low level.
Although our numerical experiments have used smoothed particle
hydrodynamics, our analytic theory makes no assumption about the
underlying numerical technique other than that the dark matter is
represented by discrete particles, and that sound waves and shocks
generated in the gas component will be dissipated as heat. As a
result, the considerations of this paper should apply to all the
techniques currently used to simulate cosmological
hydrodynamics and galaxy formation.
\section{Introduction}
An experimental realization of the
overscreened multichannel Kondo model has been vigorously sought
since the discovery of its non-Fermi liquid (NFL) behavior by
Nozi\`{e}res and Blandin (NB) \cite{blandin}.
D. L. Cox \cite{cox} pointed out that the NFL behavior
in heavy fermion systems like $ Y_{1-x} U_{x} Pd_{3} $ may be explained
by the two-channel quadrupolar Kondo effect, but the observed electrical
conductivity of such systems is linear in $ T $, in contrast to the $ \sqrt{T} $
behavior of the two-channel Kondo model (2CK).
Vlad\'{a}r and Zawadowski \cite{zaw}
suggested that a non-magnetic impurity tunneling
between two sites in a metal can be mapped to the 2CK in which
the roles of channels and spins in the original formulation are interchanged.
Ralph {\sl et al.} \cite{ralph}
proposed that the conductance signals observed in ballistic
metal point contacts may be due to two-channel Kondo scattering from two-level
tunneling systems; the conductance exponent $ 1/2 $ and the magnetic
field dependence observed in such devices
are indeed consistent with those predicted by Affleck-Ludwig's (AL) Conformal
Field Theory (CFT) solution of the 2CK \cite{affleck1,affleck2};
however, an alternative interpretation was also proposed \cite{wingreen}.
Moustakas and Fisher \cite{fisher1} reexamined the problem of the electron assisted
tunneling of a heavy particle between two sites in a metal. In addition to bare hopping
( $ \Delta_{0} $ term in Eq. \ref{start0} ) and one electron assisted hopping term
( $ \Delta_{1} $ term in Eq. \ref{start0} ) found previously in Ref.\cite{zaw},
they found that an {\em extra}
two electron assisted hopping term ( $ \Delta_{2} $ term in Eq. \ref{start0} )
also plays an important role. Treating all these
important processes carefully, they concluded that more than four channels (including spin)
are needed in order to localize the impurity. In Ref. \cite{fisher2}, they wrote down
an effective Hamiltonian which includes all the important processes and
employed Emery-Kivelson (EK)'s Abelian Bosonization solution of the 2CK
\cite{emery} to investigate the full phase diagram of this Hamiltonian; they found that
the two electron assisted hopping term plays a {\em similar} role to the bare hopping term.
However, they overlooked the important fact
that the canonical transformation
operator $ U=e^{i S^{z} \Phi_{s} } $ in EK's solution is a boundary condition
changing operator \cite{xray}; therefore,
their analysis of the symmetry of the fixed points and of the operator contents
near these fixed points
is not complete. They did not calculate the electrical conductivity, which is
the most important experimentally measurable quantity.
Furthermore, the nature of the stable Fermi liquid
fixed point was also left unexplored.
Affleck and Ludwig (AL) ~\cite{bound1,xray,ludwig},
using Conformal Field Theory, pointed out that for {\em any general} quantum impurity
problem, the impurity degrees of freedom completely {\em disappear} from the description
of the low temperature fixed point and leave behind conformally
invariant {\em boundary conditions}. CFT can also be used to classify all the possible
boundary operators near any low temperature fixed points and calculate any correlation
functions. For four bulk Dirac fermions, which correspond to eight
Majorana fermions, the non-interacting theory
possesses $ SO(8) $ symmetry.
Maldacena and Ludwig (ML) \cite{ludwig} showed that finding the symmetry
of the fixed points is exactly equivalent
to finding the boundary conditions of the eight Majorana fermions
at the fixed points; the boundary conditions turn out to be {\em linear}
in the basis which separates charge, spin and flavor.
ML reduced the description of the fixed points to free chiral bosons plus
different {\em linear} boundary conditions.
The linear boundary
conditions can also be transformed into the boundary conditions in the original
fermion basis by the triality transformation Eq.\ref{second} \cite{witten}.
The boundary conditions
in the original fermion basis only fall into
two classes: {\em NFL}
fixed points where the original fermions are scattered into spinors;
{\em Fermi liquid} (FL) fixed points where the original fermions suffer
only phase shifts at the boundary.
The important step in the CFT approach developed by AL is the identification
of the fusion rules at various fixed points. Although the fusion rule is simple
in the multichannel Kondo model, it is usually very difficult to identify in more
complicated models like the one discussed in this paper.
Recently, using EK's Abelian Bosonization approach
to the 2CK, the author developed a simple
and powerful method to study a certain class of quantum impurity models
with four bulk fermions. The method can identify very quickly
all the possible boundary fixed points and their maximum symmetry,
therefore circumventing the difficult task of identifying the fusion rules at
different fixed points or lines of fixed points.
It can also demonstrate the physical picture at the boundary explicitly \cite{powerful}.
In this paper, using the method developed in Ref.\cite{powerful} and paying special
attention to the boundary condition changing nature of $ U=e^{i S^{z} \Phi_{s} } $,
we re-investigate the full phase diagram of the present problem.
In Sec. II, we Abelian bosonize the effective Hamiltonian. By using the
Operator Product Expansion (OPE) \cite{cardy}, we derive the
Renormalization Group (RG) flow equations near the weak coupling line of fixed points,
and thereby identify the two {\em independent} crossover scales. In the following
sections, we analyze all the possible fixed points of the bosonized Hamiltonian.
In Sec.III, we find {\em a line of NFL fixed points} which continuously interpolates
between the 2CK fixed point and the one channel two impurity
Kondo (2IK) fixed point \cite{twoimp}; its symmetry is
$ U(1) \times O(1) \times O(5) $ \cite{ising}. This line of NFL fixed points is unstable:
it has {\em one} relevant direction with scaling dimension 1/2. It also has one marginal
operator in the spin sector which is responsible for this line. The OPE of this marginal operator
and the leading irrelevant operator will always generate the relevant operator.
For a general position on the line, although the leading exponents of
the specific heat, the {\em hopping} susceptibility
and the electron conductivity $ C_{imp}, \chi^{h}_{imp}, \sigma(T) $ are the same as those of the 2CK,
the finite size spectrum depends on the position on the line;
no universal relations can be found among the amplitudes of the
three quantities. Only at the 2CK point on the line can universal ratios be formed.
However, at the 2IK point, the coefficient of $ \sqrt{T} $ vanishes;
we find two dimension 5/2 operators which lead to
$ \sigma(T) \sim 2 \sigma_{u}(1 + T^{3/2}) $;
{\em no} universal ratio can be formed either.
In Sec.IV, the additional {\em NFL fixed point} found by MF is shown to have the symmetry
$ O(7) \times O(1) $, and therefore is the same fixed point as the 2IK. This fixed point
is also unstable: it has {\em two} relevant directions with scaling dimension 1/2.
Because the leading irrelevant operators near this NFL fixed point
are {\em first order Virasoro descendants} with scaling dimension 3/2 which
can be written as total imaginary time derivatives, they can be dropped.
The subleading irrelevant operators with dimension 2 give $ C_{imp} \sim T $.
However, because the 'orbital field' in Eq. \ref{start}
couples to a {\em non-conserved } current, $ \chi^{h}_{imp} \sim \log T $.
We also find two dimension 5/2 irrelevant operators, one of them contributes to the leading
low temperature conductivity $ \sigma(T) \sim 2 \sigma_{u}(1+ T^{3/2}) $. No universal
ratios can be formed near this NFL fixed point.
In Sec. V, we find the system flows to a stable {\em line of FL fixed points}
which continuously interpolates between the non-interacting
fixed point and the 2 channel spin-flavor Kondo (2CSFK) fixed point discussed by the
author in Ref. \cite{spinflavor}; its symmetry is $ U(1) \times O(6) \sim U(4) $.
Along this line of fixed points, the electron fields of even and
odd parity under interchanging the two sites suffer opposite continuously-changing
phase shifts. We also discuss the effect of the marginal operator in the charge sector
due to the P-H symmetry breaking and compare it with the marginal operator
in the spin sector. In Sec. VI, we analyse the effective Hamiltonian in
an external magnetic field, which breaks the channel symmetry. In Sec. VII,
all the scaling functions for the physical measurable quantities including
the real spin susceptibility are derived in different regimes.
In the final section, the relevance
of the results of this paper to the experiments is examined;
some implications for a non-magnetic impurity hopping among three sites with
triangular symmetry are also given.
In Appendix A, the finite size spectrum of one {\em complex} fermion is listed.
In Appendix B, the boundary conditions in the original fermion basis are derived
by both Bosonization method and $\gamma $ matrix method.
In Appendix C, the results on the additional NFL fixed point in Sec. IV. are
rederived in a different basis.
\section{ Bosonization of the effective Hamiltonian and the weak coupling analysis}
We start from the following effective Hamiltonian for a non-magnetic
impurity hopping between two sites
in a metal first obtained by MF \cite{fisher1}, \cite{fisher2} :
\begin{eqnarray}
H &= & H_{0}
+ V_{1} (\psi^{\dagger}_{1 \sigma} \psi_{1 \sigma}+
\psi^{\dagger}_{2 \sigma} \psi_{2 \sigma}) \nonumber \\
& + & V_{2} (\psi^{\dagger}_{1 \sigma} \psi_{2 \sigma}+
\psi^{\dagger}_{2 \sigma} \psi_{1 \sigma} )
+ V_{3} ( d^{\dagger}_{1} d_{1}- d^{\dagger}_{2} d_{2} )
(\psi^{\dagger}_{1 \sigma} \psi_{1 \sigma}- \psi^{\dagger}_{2 \sigma}
\psi_{2 \sigma})
\nonumber \\
& + & d^{\dagger}_{2} d_{1} ( \frac{ \Delta_{0} }{2 \pi \tau_{c} }
+\frac{ \Delta_{1} }{2} \sum_{\sigma} \psi^{\dagger}_{1 \sigma}
\psi_{2 \sigma}+ \Delta_{2} 2 \pi \tau_{c}
\psi^{\dagger}_{1 \uparrow} \psi_{2 \uparrow}
\psi^{\dagger}_{1 \downarrow} \psi_{2 \downarrow} ) +h.c. + \cdots
\label{start0}
\end{eqnarray}
Here the two sites $ 1,2 $ ( the two real spin directions $\uparrow, \downarrow $ )
play the role of the two spin directions $ \uparrow, \downarrow $
( the two channels $ 1, 2 $ ) in the
magnetic Kondo model. All the couplings have been made to be {\em dimensionless}.
As emphasized by MF, even if $ \Delta_{1}, \Delta_{2} $
are initially negligible, they will be generated at lower energy scales; $ \cdots $
stands for irrelevant terms \cite{note}. In the following, we use the notation of
the magnetic Kondo model and rewrite the above Hamiltonian as:
\begin{eqnarray}
H &= & H_{0} + V_{1} J_{c} (0) + 2 V_{2} J^{x}(0) +
4 V_{3} S^{z} J^{z}(0) +\Delta_{1} (J^{x}(0) S^{x} + J^{y}(0) S^{y} )
\nonumber \\
& + & \frac{ \Delta_{0} }{ \pi \tau_{c} } S^{x}
+\Delta_{2} 2 \pi \tau_{c} ( S^{-}
\psi^{\dagger}_{1 \uparrow} \psi_{1 \downarrow}
\psi^{\dagger}_{2 \uparrow} \psi_{2 \downarrow} +h.c.) + h(\int dx J^{z}(x) + S^{z})
\label{start}
\end{eqnarray}
where $ S^{+} =d_{1}^{\dagger} d_{2}, S^{-} =d_{2}^{\dagger} d_{1},
S^{z} =\frac{1}{2} ( d_{1}^{\dagger} d_{1} - d_{2}^{\dagger} d_{2} ) $.
We also add a 'uniform field' which corresponds to strain or pressure
in the real experiment of Ref.\cite{ralph}. The real magnetic field
will break the channel symmetry; its effect will be discussed in Sec. VI.
Hamiltonian Eq. \ref{start} has the global $ Z_{2} \times
SU_{f}(2) \times U_{c}(1) $ symmetry and Time reversal symmetry.
Under the $ Z_{2} $ symmetry in the spin sector : $ \psi_{i \uparrow}
\longleftrightarrow \psi_{i \downarrow } ,
S^{x} \rightarrow S^{x}, S^{y} \rightarrow - S^{y}, S^{z}
\rightarrow -S^{z} $.
Under Time reversal symmetry \cite{atten} : $ i \rightarrow -i,
\psi_{L} \rightarrow \psi_{R},
S^{x} \rightarrow S^{x}, S^{y} \rightarrow - S^{y},
S^{z} \rightarrow S^{z} $. The potential scattering term $ V_{1} $ is
the {\em only } term which breaks P-H symmetry:
$\psi_{i \alpha}(x) \rightarrow \epsilon_{\alpha \beta}
\psi^{\dagger}_{j \beta}(x) $. In contrast to the 2CK, we only have $ Z_{2} $ symmetry
in the spin sector; the total spin current to which $ h $ couples is not conserved.
In the following, we closely follow the notations of Emery-Kivelson
\cite{emery}. Abelian-bosonizing the four bulk Dirac fermions separately :
\begin{equation}
\psi_{i \alpha }(x )= \frac{ P_{i \alpha}}{\sqrt{ 2 \pi \tau_{c} }}
e ^{- i \Phi_{i \alpha}(x) }
\label{first}
\end{equation}
Where $ \Phi_{i \alpha} (x) $ are the real chiral bosons satisfying
the commutation relations
\begin{equation}
[ \Phi_{i \alpha} (x), \Phi_{j \beta} (y) ]
= \delta_{i j} \delta_{\alpha \beta} i \pi sgn( x-y )
\end{equation}
The cocycle factors
have been chosen as: $ P_{1 \uparrow} = P_{1 \downarrow } = e^{i \pi N_{1 \uparrow} }
, P_{2 \uparrow} = P_{2 \downarrow } = e^{i \pi (
N_{1 \uparrow} + N_{ 1 \downarrow} +N_{2 \uparrow} )} $.
It is convenient to introduce the following charge, spin, flavor,
spin-flavor bosons:
\begin{equation}
\left ( \begin{array}{c} \Phi_{c} \\
\Phi_{s} \\
\Phi_{f} \\
\Phi_{sf} \end{array} \right )
=\frac{1}{2} \left ( \begin{array}{c} \Phi_{1 \uparrow }+ \Phi_{1\downarrow }+
\Phi_{2 \uparrow }+ \Phi_{2 \downarrow } \\
\Phi_{1 \uparrow }- \Phi_{1\downarrow }+
\Phi_{2 \uparrow }- \Phi_{2 \downarrow } \\
\Phi_{1 \uparrow }+ \Phi_{1\downarrow }-
\Phi_{2 \uparrow }- \Phi_{2 \downarrow } \\
\Phi_{1 \uparrow }- \Phi_{1\downarrow }-
\Phi_{2 \uparrow }+ \Phi_{2 \downarrow } \end{array} \right )
\label{second}
\end{equation}
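As an aside we add here: the transformation Eq.\ref{second} is an orthogonal rotation of the four bosons, which is what guarantees that the new chiral bosons obey the same commutation relations. A minimal numerical check:

```python
import numpy as np

# Rows: Phi_c, Phi_s, Phi_f, Phi_sf in the basis
# (Phi_{1 up}, Phi_{1 down}, Phi_{2 up}, Phi_{2 down}).
M = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1,  1, -1],
                    [1,  1, -1, -1],
                    [1, -1, -1,  1]])

# M M^T = 1: the transformation is orthogonal, so the new bosons obey
# the same commutators [Phi_a(x), Phi_b(y)] = i pi delta_ab sgn(x - y).
assert np.allclose(M @ M.T, np.eye(4))
print("orthogonal:", np.allclose(M @ M.T, np.eye(4)))
```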
Following the four standard steps in EK solution: {\em step 1 :}
writing the Hamiltonian in terms of the chiral bosons Eq.\ref{second}.
{\em step 2:} making the canonical transformation
$ U= \exp ( -i V_{1} \Phi_{c} (0)+
i S^{z} \Phi_{s}(0)) $. {\em step 3:} shifting the spin boson
by $ \partial_{x} \Phi_{s} \rightarrow \partial_{x}\Phi_{s} +
\frac{h}{v_{F}} $ \cite{gogolin}. {\em step 4:} making the following refermionization:
\begin{eqnarray}
S^{x} &= & \frac{ \widehat{a}}{\sqrt{2}} e^{i \pi N_{sf}},~~~
S^{y}= \frac{ \widehat{b}}{\sqrt{2}} e^{i \pi N_{sf} },~~~
S^{z}= -i \widehat{a} \widehat{b} = d^{\dagger} d - \frac{1}{2} \nonumber \\
\psi_{sf} & = & \frac{1}{\sqrt{2}}( a_{sf} - i b_{sf} ) =
\frac{1}{\sqrt{ 2 \pi \tau_{c}}} e^{i \pi N_{sf} }
e^{-i \Phi_{sf} } \nonumber \\
\psi_{s,i} & = & \frac{1}{\sqrt{2}}( a_{s,i} - i b_{s,i} )=
\frac{1}{\sqrt{ 2 \pi \tau_{c}}} e^{i \pi d^{\dagger} d } e^{i \pi N_{sf} }
e^{-i \Phi_{s} }
\label{ek}
\end{eqnarray}
Note $ \psi_{s,i}(x) $ defined above contains the impurity operator $ e^{ i \pi
d^{\dagger} d } $ in order to satisfy the anti-commutation relations with the other
fermions.
The transformed Hamiltonian $ H^{\prime}= U H U^{-1} $ can be written
in terms of the Majorana fermions as:
\begin{eqnarray}
H^{\prime} &= & H_{0}
+ 2 y \widehat{a} \widehat{b} a_{s,i}(0) b_{sf}(0)
+ 2 q \widehat{a} \widehat{b} a_{s,i}(0) b_{s,i}(0)-i \widehat{a} \widehat{b} q \frac{h}{v_{F}}
\nonumber \\
& - & i \frac{ \Delta_{1}}{\sqrt{ 2 \pi \tau_{c} }} \widehat{a} b_{sf}(0)
+i \frac{ \Delta_{+}}{\sqrt{ 2 \pi \tau_{c} }} \widehat{b} a_{s,i}(0)
+i \frac{ \Delta_{-}}{\sqrt{ 2 \pi \tau_{c} }} \widehat{a} b_{s,i}(0)
\label{group}
\end{eqnarray}
where $ y=2 V_{2}, q=\frac{1}{ \pi} ( V_{3}- \frac{ \pi v_{F}}{2} )
, \Delta_{\pm} = \Delta_{0} \pm \Delta_{2} $.
As observed by MF, the above equation clearly indicates that
the two electron assisted hopping term plays a similar role
to the bare hopping term.
From the OPE \cite{cardy}
of the various operators in Eq.~\ref{group},
the R. G. flow equations near the weak coupling fixed point $ q=0 $ are \cite{anti}
\begin{eqnarray}
\frac{d \Delta_{+}}{d l} & = & \frac{1}{2} \Delta_{+} + 2y \Delta_{1} \nonumber \\
\frac{d \Delta_{-}}{d l} & = & \frac{1}{2} \Delta_{-} \nonumber \\
\frac{d \Delta_{1}}{d l} & = & \frac{1}{2} \Delta_{1} + 2y \Delta_{+} \nonumber \\
\frac{d y}{d l} & = & \Delta_{1} \Delta_{+} \nonumber \\
\frac{d q}{d l} & = & \Delta_{+} \Delta_{-}
\label{weak1}
\end{eqnarray}
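As an illustrative numerical sketch (our addition, with assumed bare couplings), the flow equations above can be Euler-integrated to exhibit the crossover scale: a coupling of scaling dimension 1/2 grows as $ \Delta e^{l/2} $ and reaches order one at $ l^{*} \lesssim 2 \ln (1/\Delta) $, i.e. $ T_{K} \sim D \Delta^{2} $.

```python
import numpy as np

def flow(state, dl=1e-3, cutoff=1.0):
    """Euler-integrate the weak-coupling RG equations above until the
    largest hopping amplitude reaches the cutoff; returns l* and the state."""
    dp, dm, d1, y, q = state  # Delta_+, Delta_-, Delta_1, y, q
    l = 0.0
    while max(abs(dp), abs(dm), abs(d1)) < cutoff:
        dp, dm, d1, y, q = (dp + dl * (0.5 * dp + 2 * y * d1),
                            dm + dl * (0.5 * dm),
                            d1 + dl * (0.5 * d1 + 2 * y * dp),
                            y + dl * (d1 * dp),
                            q + dl * (dp * dm))
        l += dl
    return l, (dp, dm, d1, y, q)

# With bare couplings of 1e-3 the crossover happens somewhat before
# l = 2 ln(1e3), because the generated y term accelerates the flow.
lstar, _ = flow((1e-3, 1e-3, 1e-3, 0.0, 0.0))
print(lstar, 2 * np.log(1e3))
```

Note that $ \Delta_{-} $ flows purely exponentially, while $ \Delta_{+} $ and $ \Delta_{1} $ feed each other through $ y $; this is the origin of the two independent crossover scales identified below.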
MF got the same RG flow equations ( Eq.23 in Ref.\cite{fisher2} )
near the weak coupling line of fixed points by using Anderson-Yuval Coulomb gas picture.
Eq. \ref{weak1} shows that $ \Delta_{+}, \Delta_{-}, \Delta_{1} $ have the same
scaling dimension 1/2 at $ q=0 $, so they are equally important.
Define $\tilde{b}_{s,i}(x), \tilde{b}_{sf,i}(x)$ (and similarly $\tilde{ q }, \tilde{y} $) by
\begin{equation}
\left( \begin{array}{c} \tilde{b}_{s,i}(x) \\
\tilde{b}_{sf,i}(x) \end{array} \right )
= \frac{1}{ \Delta_{K} } \left( \begin{array}{cc}
\Delta_{1} & \Delta_{-} \\
- \Delta_{-} & \Delta_{1} \\
\end{array} \right )
\left( \begin{array}{c} b_{s,i}(x) \\
b_{sf}(x) \end{array} \right )
\label{twist}
\end{equation}
Where $ \Delta_{K} =\sqrt{ \Delta_{1}^{2} +\Delta_{-}^{2} } $.
Eq.\ref{group} can be rewritten as:
\begin{eqnarray}
H^{\prime} = H_{0}
+ 2 \tilde{q} \widehat{a} \widehat{b} a_{s,i}(0) \tilde{b}_{s,i}(0)
+ 2 \tilde{y} \widehat{a} \widehat{b} a_{s,i}(0) \tilde{b}_{sf,i}(0) \nonumber \\
- i \frac{ \Delta_{K}}{\sqrt{ 2 \pi \tau_{c} }}
\widehat{a} \tilde{b}_{sf,i}(0)
+i \frac{ \Delta_{+}}{\sqrt{ 2 \pi \tau_{c} }} \widehat{b} a_{s,i}(0)
-i \widehat{a}\widehat{b} q
\frac{h}{v_{F}}
\label{final}
\end{eqnarray}
The R. G. flow equations which are equivalent to Eq.\ref{weak1} are (we set $ h=0 $ )
\begin{eqnarray}
\frac{d \Delta_{+}}{d l} & = & \frac{1}{2} \Delta_{+} + 2 \tilde{y} \Delta_{1} \nonumber \\
\frac{d \Delta_{K}}{d l} & = & \frac{1}{2} \Delta_{K} + 2 \tilde{y} \Delta_{+} \nonumber \\
\frac{d \theta }{d l} & = & 4 \tilde{q} \frac{\Delta_{+}}{\Delta_{K}} \nonumber \\
\frac{d \tilde{y}}{d l} & = & \Delta_{K} \Delta_{+} \nonumber \\
\frac{d \tilde{q}}{d l} & = & 0
\label{weak2}
\end{eqnarray}
Where the angle $\theta $ is defined by $ \cos\theta =\frac { \Delta_{-}^{2}
- \Delta_{1}^{2} }{ \Delta_{K}^{2}},
\sin \theta =\frac { 2 \Delta_{-} \Delta_{1} }{ \Delta_{K}^{2}} $.
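Note that these definitions are automatically consistent, since
\begin{equation}
\cos^{2} \theta + \sin^{2} \theta = \frac{ ( \Delta_{-}^{2} - \Delta_{1}^{2} )^{2}
+ 4 \Delta_{-}^{2} \Delta_{1}^{2} }{ \Delta_{K}^{4} }
= \frac{ ( \Delta_{-}^{2} + \Delta_{1}^{2} )^{2} }{ \Delta_{K}^{4} } = 1,
\end{equation}
and they imply $ \frac{\Delta_{1}}{ \Delta_{K}} =\sqrt{ \frac{1-\cos \theta}{2}},
\frac{\Delta_{-}}{ \Delta_{K}} =\sqrt{ \frac{1+\cos \theta}{2}} $, the combinations
which appear repeatedly below.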
The crossover scale from the weak coupling fixed point $ q=0 $ to
a given point on the line of NFL fixed points to be discussed in Sec. III is
$ T_{K1} \sim D ( \Delta_{K} )^{2} $; the crossover scale to the additional NFL fixed point
to be discussed in Sec. IV
is $ T_{K2} \sim D (\Delta_{+})^{2} $.
As emphasized in \cite{xray,powerful}, the canonical transformation $ U $ is
a boundary condition changing operator;
the transformed field $ \psi^{\prime}_{s}(x) $ is related to the original
field $ \psi_{s}(x) $ by \cite{not}:
\begin{equation}
\psi^{\prime}_{s}(x) = U^{-1} \psi_{s,i}(x) U =
e^{i \pi d^{\dagger} d } e^{i \pi S^{z} sgnx} \psi_{s}(x)=-isgnx \psi_{s}(x)
\label{define}
\end{equation}
As expected, the impurity spin $ S^{z} $ {\em drops } out in the prefactor of
the above equation.
The above equation can be written out explicitly in terms of the Majorana fermions:
\begin{eqnarray}
a^{\prime L}_{s}(0)=- b^{L}_{s}(0),~~~~ b^{\prime L}_{s}(0)= a^{L}_{s}(0) \nonumber \\
a^{\prime R}_{s}(0)= b^{R}_{s}(0),~~~~ b^{\prime R}_{s}(0)=- a^{R}_{s}(0)
\label{general}
\end{eqnarray}
We find that the physical picture can be more easily demonstrated in
the corresponding action:
\begin{eqnarray}
S &= & S_{0} + \frac{\gamma_{1}}{2} \int d\tau \widehat{ a}(\tau)
\frac{ \partial \widehat{a}(\tau)}{\partial \tau}
- i \frac{ \Delta_{K}}{\sqrt{ 2 \pi \tau_{c} }} \int d \tau
\widehat{a}(\tau) \tilde{b}_{sf,i}(0, \tau)
\nonumber \\
& + & \frac{\gamma_{2}}{2} \int d\tau \widehat{b}(\tau)
\frac{ \partial \widehat{b}(\tau)}{\partial \tau}
+i \frac{ \Delta_{+}}{\sqrt{ 2 \pi \tau_{c} }} \int d \tau
\widehat{b}(\tau) a_{s,i}(0, \tau)
\nonumber \\
& + & 2 \tilde{q} \int d \tau \widehat{a}(\tau) \widehat{b}(\tau)
a_{s,i}(0, \tau) \tilde{b}_{s,i}(0,\tau)
+ 2 \tilde{y} \int d \tau \widehat{a}(\tau) \widehat{b}(\tau)
a_{s,i}(0, \tau) \tilde{b}_{sf,i}(0, \tau)
\label{action}
\end{eqnarray}
When performing the RG analysis of the action $ S $, we keep \cite{known}
(1) $\gamma_{2}=1 $ and $ \Delta_{K}$ fixed;
(2) $\gamma_{1}=1 $ and $ \Delta_{+}$ fixed;
(3) $\Delta_{K} $ and $ \Delta_{+}$ fixed.
We will identify all the possible fixed points or the line of fixed
points and derive the R. G. flow equations near these fixed points
in the following sections respectively.
\section{ The line of NFL fixed points }
If $ \tilde{q}=\tilde{y} =\Delta_{+} =0 $,
this fixed point is located
at $ \gamma_{1}=0, \gamma_{2}=1 $, where $\widehat{b} $ decouples,
but $ \widehat{a} $ loses its kinetic energy and becomes a
Grassmann Lagrange multiplier; integrating $ \widehat{a} $ out leads to
the {\em boundary conditions} :
\begin{equation}
\tilde{b}^{L}_{sf,i}(0)= -\tilde{b}^{R}_{sf,i}(0)
\label{bound1p}
\end{equation}
We also have the trivial boundary conditions
\begin{equation}
a^{L}_{s,i}(0)=a^{R}_{s,i}(0) ,~~ \tilde{b}^{L}_{s,i}(0)= \tilde{b}^{R}_{s,i}(0)
\label{bound2p}
\end{equation}
By using Eqs.\ref{twist},\ref{define}, we find the boundary conditions in $ H^{\prime} $ \cite{scaling}
\begin{equation}
a^{\prime L}_{s}(0) = a^{\prime R}_{s}(0),~~~~
\left( \begin{array}{c} b^{\prime L}_{s}(0) \\
b^{\prime L}_{sf}(0) \end{array} \right )
= \left( \begin{array}{cc}
-\cos \theta & \sin \theta \\
\sin \theta & \cos \theta \\
\end{array} \right )
\left( \begin{array}{c} b^{\prime R}_{s}(0) \\
b^{\prime R}_{sf}(0) \end{array} \right )
\end{equation}
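A quick numerical aside (our addition): for every $ \theta $ the $ 2 \times 2 $ boundary matrix above is orthogonal with determinant $ -1 $, i.e. a reflection in the $ (b^{\prime}_{s}, b^{\prime}_{sf}) $ plane, so the Majorana anticommutation relations are preserved along the whole line while the two endpoint conditions discussed below are smoothly connected.

```python
import numpy as np

def R(theta):
    """Boundary matrix acting on (b'_s, b'_sf) at x = 0."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[-c, s],
                     [ s, c]])

for theta in np.linspace(0.0, np.pi, 21):
    M = R(theta)
    # Orthogonal: the Majorana norm is preserved for every theta.
    assert np.allclose(M @ M.T, np.eye(2))
    # det = -1: a reflection, not a rotation, for every theta.
    assert np.isclose(np.linalg.det(M), -1.0)
print(R(np.pi).round(6))  # theta = pi endpoint
print(R(0.0))             # theta = 0 endpoint
```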
By using Eq.\ref{general}, we get the corresponding boundary conditions in $ H $
\begin{equation}
b^{L}_{s}(0) = -b^{R}_{s}(0) ,~~~ \psi^{L}_{a}(0) = e^{i 2 \delta} \psi^{R}_{a}(0),~~~
\theta=2 \delta
\label{vectornf}
\end{equation}
where $ \psi_{a}(0) = a_{s}(0) - i b_{sf}(0)$.
It is evident that at the fixed point, the impurity degrees of freedom
totally {\em disappear} and leave behind the conformally invariant boundary
conditions Eq.\ref{vectornf}.
These are {\em NFL} boundary conditions. {\em In contrast to} the boundary
conditions discussed previously ~\cite{ludwig,powerful,flavor,spinflavor},
although the boundary
conditions are still linear in the basis which separates charge, spin and flavor,
they are {\em not } in any four of the Cartan subalgebras of $ SO(8) $ group \cite{witten},
therefore {\em cannot} be expressed in terms of the chiral bosons in Eq.\ref{second}.
However, Eq.\ref{vectornf} indicates that
it is more convenient to regroup the Majorana fermions as \cite{decouple,cut}
\begin{eqnarray}
\psi^{n}_{s} & = & a_{s}-i b_{sf}= e^{-i \Phi^{n}_{s}} \nonumber \\
\psi^{n}_{sf} & = & a_{sf}+i b_{s}= e^{-i \Phi^{n}_{sf}}
\label{jinwu}
\end{eqnarray}
The boundary condition Eq.\ref{vectornf} can be expressed in terms of
the {\em new} bosons
\begin{equation}
\Phi^{n}_{s,L}=\Phi^{n}_{s,R}+ \theta,~~~~ \Phi^{n}_{sf,L}= - \Phi^{n}_{sf,R}
\label{obi}
\end{equation}
As pointed out in Ref. \cite{ludwig}, in the basis of Eq.\ref{second},
the {\em physical fermion fields} transform
as the spinor representation of $ SO(8) $; therefore, in order to find the corresponding
boundary conditions in the physical fermion basis, we have to find the $ 16 \times 16 $
dimensional spinor representation of the boundary conditions Eq. \ref{vectornf}.
The derivation of the boundary conditions is given in Appendix B,
the results are found to be :
\begin{equation}
\psi^{L}_{i \pm } = e^{ \pm i \theta/2 } S^{R}_{i \pm}, ~~~~~
S^{L}_{i \pm } = e^{ \pm i \theta/2 } \psi^{R}_{i \pm}
\label{spinor1}
\end{equation}
Where the new fields are defined by :
\begin{equation}
\psi_{i \pm } = \frac{1}{\sqrt{2}}
( \psi_{i \uparrow} \pm \psi_{i \downarrow} ) , ~~~~~
S_{i \pm } = \frac{1}{\sqrt{2}}
( S_{i \uparrow} \pm S_{i \downarrow} )
\end{equation}
It can be checked explicitly that the above boundary conditions satisfy all the symmetry
requirements (namely $ Z_{2} \times SU_{f}(2) \times U_{c}(1) $, Time reversal and
P-H symmetry).
Fermions with the even and the odd parity under $ Z_{2} $ symmetry are scattered into the
collective excitations of the corresponding parity which fit into
the $ S $ spinor representation
of $ SO(8) $; therefore the one particle S matrix and the residual electrical
conductivity are the {\em same}
as those of the 2CK. This is a {\em line of NFL fixed points } with $ g=\sqrt{2} $
and the symmetry $ O(1) \times U(1) \times O(5) $ which interpolates continuously between
the 2CK fixed point and the 2IK fixed point.
If $ \theta=\pi $, namely $ \Delta_{-}=0 $, the fixed point
symmetry is enlarged to $ O(3) \times O(5) $ which is
the fixed point symmetry of the 2CK. If $ \theta=0 $, namely $ \Delta_{1}=0 $, the fixed point
symmetry is enlarged to $ O(1) \times O(7) $ which is
the fixed point symmetry of the 2IK(b) (Fig.\ref{picture}).
The finite size $ -l < x < l $ spectrum
( in terms of unit $ \frac{ \pi v_{F} }{ l} $ )
can be obtained from the free fermion spectrum with {\em both}
periodic (R) {\em and} anti-periodic (NS) boundary conditions by twisting the
three Majorana fermions in the spin sector:
changing the boundary condition of the Majorana fermion $ b_{s} $ from NS sector
to R sector or vice versa, twisting the complex fermion $\psi_{s} $ by a
continuous angle $\theta =2 \delta $.
The finite size spectrum of one complex fermion is derived in Appendix A.
The complete finite size spectrum of this line of NFL fixed points
is listed in Table \ref{nflpositive} if $ 0< \delta < \frac{\pi}{2} $ and
in Table \ref{nflnegative} if $ -\frac{\pi}{2} < \delta < 0 $.
The ground state energy is
$ E_{0} = \frac{1}{16} + \frac{1}{2} (\frac{\delta}{\pi} )^{2} $ with degeneracy $d=2$,
the first excited energy is
$ E_{1}-E_{0} = \frac{3}{8} - \frac{ |\delta| }{2\pi} $ ( if $ -\frac{\pi}{4}
< \delta < \frac{\pi}{2} $ ) or
$ E_{1} -E_{0} = \frac{1}{2} + \frac{\delta} {\pi} $
(if $ -\frac{\pi}{2} < \delta < -\frac{\pi}{4} $) with $ d=4 $.
If $ \delta=\pm \frac{\pi}{2} ( \delta=0 ) $,
the finite size spectrum of the 2CK ( the 2IK) is recovered.
The finite size spectrum of the 2CK and the 2IK are listed in Tables \ref{2CK}
and \ref{2IK} respectively.
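The $ \delta $ dependence of the spectrum can be summarized in a short numerical sketch (our addition; the endpoint gaps $ 1/8 $ and $ 3/8 $ are read off directly from the formulas quoted above):

```python
import numpy as np

def E0(delta):
    """Ground-state energy in units of pi v_F / l (degeneracy 2)."""
    return 1.0 / 16.0 + 0.5 * (delta / np.pi) ** 2

def gap(delta):
    """First excitation energy E1 - E0 (degeneracy 4)."""
    if -np.pi / 4 < delta < np.pi / 2:
        return 3.0 / 8.0 - abs(delta) / (2.0 * np.pi)
    # second branch, valid for -pi/2 < delta <= -pi/4
    return 0.5 + delta / np.pi

eps = 1e-9
# the two branches of the gap join continuously at delta = -pi/4
assert np.isclose(gap(-np.pi / 4 + eps), gap(-np.pi / 4 - eps), atol=1e-6)
# delta -> pi/2 (2CK point): gap 1/8;  delta = 0 (2IK point): gap 3/8
assert np.isclose(gap(np.pi / 2 - eps), 1.0 / 8.0, atol=1e-6)
assert np.isclose(gap(0.0), 3.0 / 8.0)
print(E0(0.0), E0(np.pi / 2))
```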
The local correlation functions at this line of NFL fixed points are \cite{powerful}
\begin{equation}
\langle \widehat{a}(\tau) \widehat{a}(0) \rangle = \frac{1}{\tau},~~~~
\langle \tilde{b}_{sf,i}(0, \tau) \tilde{b}_{sf,i}(0,0) \rangle = \frac{\gamma^{2}_{1}}{\tau^{3}}
\label{local1}
\end{equation}
We can also read off the scaling dimensions of the various fields:
$ [\widehat{b}]=0, [\widehat{a}]=1/2, [a_{s,i}]=[\tilde{b}_{s,i}]=1/2,
[\tilde{b}_{sf,i}]=3/2 $.
\begin{figure}
\epsfxsize=10 cm
\centerline{\epsffile{hopping.eps}}
\caption{ Phase diagram of a non-magnetic impurity hopping between two sites
in a metal. The {\em line of NFL fixed points } has the symmetry $ O(1) \times U(1) \times O(5) $
with $ g=\sqrt{2} $ which interpolates continuously between
the 2CK fixed point and the 2IK fixed point.
If $ \theta=\pi $, the fixed point
symmetry is enlarged to $ O(3) \times O(5) $ which is
the fixed point symmetry of the 2CK. If $ \theta=0 $, the fixed point
symmetry is enlarged to $ O(1) \times O(7) $ which is
the fixed point symmetry of the 2IK(b). This {\em line of NFL fixed points } is unstable,
there is {\em one} relevant operator with dimension 1/2 which drives the system
to the {\em line of FL fixed points }. There is also a marginal operator
along the line.
This {\em line of FL fixed points } has the symmetry $ U(1) \times O(6) \sim U(4) $ with $ g=1 $
which interpolates continuously between the non-interacting
fixed point and the 2CSFK fixed point.
If $ \theta=0 $, the fixed point
symmetry is enlarged to $ O(8) $ which is the fixed point symmetry of the
non-interacting electrons. If $ \theta=\pi $, the fixed point
symmetry is enlarged to $ O(2) \times O(6) $ which is the fixed point
symmetry of the 2CSFK. This {\em line of FL fixed points } is stable.
There is a marginal operator along this line.
The additional {\em NFL fixed point } with $ g=\sqrt{2} $ has the symmetry $ O(1) \times O(7) $
which is the fixed point symmetry of the 2IK(a).
This additional NFL fixed point is also {\em unstable}.
There are {\em two} relevant terms with scaling
dimension 1/2, any linear combination of the two terms will
drive the system to a given point of the {\em line of FL fixed points }.
See the text for the detailed discussions on the physical properties
of these fixed points or lines of fixed points and the crossovers between them.}
\label{picture}
\end{figure}
As shown in Ref.\cite{powerful}, at the line of fixed points, the impurity degrees of freedom
completely disappear: $ \widehat{b} $ decouples and $ \widehat{a} $ turns into
the {\em non-interacting} scaling field at the fixed point
\begin{equation}
\widehat{a}(\tau) \sim \tilde{b}_{sf,i}(0, \tau)
\end{equation}
The corresponding scaling field in $ H $ is
\begin{equation}
\frac{1}{\Delta_{K}}( -\Delta_{-} a_{s} + \Delta_{1} b_{sf})
\end{equation}
Following Ref.\cite{flavor}, we find the impurity spin turns into
\begin{eqnarray}
S_{x}(\tau) &\sim & - i( \widehat{b} b_{s} +
\frac{ \Delta_{1}}{\Delta_{K}}a_{s}b_{sf} ) +\cdots \nonumber \\
S_{y}(\tau) & \sim & i( \widehat{b} a_{s} +
\frac{1}{\Delta_{K}}( -\Delta_{-} a_{s} + \Delta_{1} b_{sf})b_{s} ) +\cdots \nonumber \\
S_{z}(\tau) & \sim & i \widehat{b}
\frac{1}{\Delta_{K}}( -\Delta_{-} a_{s} + \Delta_{1} b_{sf}) +\cdots
\end{eqnarray}
Where $\cdots $ stands for higher dimension operators \cite{flavor} and
$ \frac{\Delta_{1}}{ \Delta_{K}} =\sqrt{ \frac{1-\cos \theta}{2}},
\frac{\Delta_{-}}{ \Delta_{K}} =\sqrt{ \frac{1+\cos \theta}{2}} $.
The impurity spin-spin correlation function $ \langle S^{a}(\tau) S^{a}(0) \rangle
\sim 1/\tau $.
From Eq.\ref{group}, it is easy to see
that even though the $ y $ term itself is irrelevant near this line of fixed points,
Eq. \ref{weak1} shows that,
when combined with the $ \Delta_{0} $ term, it will generate the
$ \Delta_{1}, \Delta_{2} $ terms, which
must be taken into account at this line of fixed points. The $ y $ term is ``dangerously''
irrelevant; a similar ``dangerously'' irrelevant term occurred
in the two channel flavor anisotropic Kondo model \cite{flavor}.
This line of NFL fixed points is {\em unstable}.
The $ \Delta_{+} $ term in Eq. \ref{action} has scaling
dimension 1/2 and is therefore {\em relevant}; this term was first discovered
by MF \cite{fisher1}. The OPE of $ a_{s,i} $
with itself will generate the dimension 2 energy momentum tensor
of this Majorana fermion $ a_{s,i}(0,\tau) \frac{ \partial a_{s,i}(0,\tau) }{\partial \tau} $;
the OPE of this energy momentum tensor with $ a_{s,i} $ will generate a first order descendant
field of this primary field with dimension 3/2 $ L_{-1} a_{s,i}(0,\tau) =
\frac{ \partial a_{s,i}(0,\tau) }{\partial \tau} $ \cite{des}.
The $ \tilde{q} $ term is the leading irrelevant operator, with scaling
dimension 3/2; it therefore contributes to
\begin{equation}
C_{imp} \sim (\tilde{q})^{2} T\log T
\label{heat1}
\end{equation}
where $ \tilde{q} = \sqrt{\frac{1-\cos \theta}{2}} q
+ \sqrt{\frac{1+\cos \theta}{2}} y $.
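For orientation, the $ T \log T $ form in Eq.\ref{heat1} follows from the standard second order estimate for a dimension 3/2 boundary operator; schematically, up to signs and numerical constants,
\begin{eqnarray}
\delta F_{imp} & \sim & - \tilde{q}^{2} \int_{\tau_{c}}^{\beta-\tau_{c}} d\tau
\left[ \frac{\pi T}{\sin (\pi T \tau)} \right]^{3} \nonumber \\
& \sim & {\rm analytic~in~} T + c\, \tilde{q}^{2}\, T^{2} \log T
\end{eqnarray}
with $ c $ a numerical constant, so that $ C_{imp} = -T \partial^{2} F_{imp}/\partial T^{2} \sim (\tilde{q})^{2} T \log T $.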
It is important to note that there is a {\em marginal} operator
$ \partial \Phi^{n}_{s}(0) $ in the {\em spin} sector which changes the angle
$ \theta $ in Eq.\ref{obi}. This operator is very {\em different} from
the marginal operator $ \partial \Phi_{c}(0) $ in the charge sector
which changes the angle $ \theta_{ph} $ in Eq.\ref{charge}. Combined with
the leading irrelevant operator, the spin marginal operator will always generate the dimension
1/2 relevant operator. This indicates that the existence of the line of NFL fixed
points and the existence of one relevant operator are intimately connected.
However, from Eqs.\ref{final}, \ref{local1}, only the $ q $ term contributes
to the impurity susceptibility
\begin{equation}
\chi^{h}_{imp} \sim q^{2} \log T
\label{sus1}
\end{equation}
As shown in Ref.\cite{line}, the Wilson ratio $ R=8/3 $ is universal for any
{\em general} spin anisotropic 2CK. However, near this line of NFL fixed points, $ R $
is {\em not} universal.
The $ \gamma_{1} $ term has dimension 2.
The $ \tilde{y} $ term has scaling dimension 5/2; in $ H^{\prime} $, it can be written as
\begin{equation}
\tilde{y} :\widehat{a}(\tau) \frac{\partial \widehat{a}(\tau)}{\partial \tau}: a_{s,i}(0,\tau)
\sim :\tilde{b}_{sf,i}(0, \tau) \frac{\partial \tilde{b}_{sf,i}(0, \tau) }{\partial \tau}: a_{s,i}(0,\tau)
\label{later}
\end{equation}
where $ \tilde{y} = -\sqrt{\frac{1+\cos \theta}{2}} q
+ \sqrt{\frac{1-\cos \theta}{2}} y $.
This operator content is fully consistent with the following CFT analysis
at the special point $ \theta=\pi $:
if $ \theta=\pi $, the fixed point symmetry is $ O(3) \times O(5) $,
so the CFT analysis of Ref.\cite{line} can be borrowed.
Under the restriction of $ Z_{2} $ and Time-Reversal symmetry \cite{atten},
it is easy to see that there is only one relevant
operator, $ \phi^{1} $, with scaling dimension 1/2, which can be identified
as $ a_{s,i} $, and two leading irrelevant
operators with scaling dimension 3/2: one is the spin 0 operator
$ T^{0}_{0} $ which is {\em Virasoro primary} and can be identified as the $ q $ term;
the other is the spin 1 operator $ L_{-1} \phi^{1} = \frac{ d \phi^{1} }{ d \tau} $
which is the {\em first order Virasoro descendant}.
By Eq.\ref{twist} and Eq.\ref{define}, the spin-0 and spin-1 leading irrelevant operators
in $ H^{\prime} $ become \cite{check}
\begin{eqnarray}
\tilde{q} \widehat{a}(\tau) \widehat{b}(\tau)
a_{s,i}(0, \tau) \tilde{b}_{s,i}(0,\tau) \sim
\tilde{b}_{sf,i}(0, \tau) a_{s,i}(0, \tau) \tilde{b}_{s,i}(0,\tau) \nonumber \\
= b_{s,i}(0,\tau) b_{sf}(0, \tau) a_{s,i}(0, \tau)
= b^{\prime}_{s}(0,\tau) b_{sf}(0, \tau) a^{\prime}_{s}(0, \tau) \nonumber \\
\frac{\partial a_{s,i}(0,\tau) }{\partial \tau} \sim
\frac{\partial a^{\prime}_{s}(0,\tau) }{\partial \tau} ~~~~~~~~~~~~~~~~~~~~~~
\end{eqnarray}
By Eq.\ref{general}, the corresponding spin-0 and spin-1 operators in $ H $ are
\begin{equation}
\tilde{q} b_{s}(0,\tau)( a_{s}(0, \tau) b_{sf}(0, \tau)),~~~
\frac{\partial b_{s}(0,\tau) }{\partial \tau}
\end{equation}
Obviously, only the coefficient $\tilde{q} $ {\em depends} on the angle
$\theta $ which specifies the position on the line of NFL fixed points.
They can be written in terms of the {\em new} bosons introduced in Eq.\ref{jinwu},
\begin{equation}
\cos \Phi^{n}_{sf}(0,\tau) \partial \Phi^{n}_{s}(0,\tau),~~~
\frac{\partial }{\partial \tau} \cos \Phi^{n}_{sf}(0,\tau)
\label{leading}
\end{equation}
It is easy to see that the marginal operator $ \partial \Phi^{n}_{s}(0) $ makes
no contribution to the Green function.
Following Ref.\cite{powerful}, the first order correction to the single particle L-R Green
functions due to the spin-0 operator can be
calculated ( $ x_{1} >0, x_{2} <0 $ )
\begin{eqnarray}
\langle \psi_{1 +}( x_{1},\tau_{1} ) \psi^{\dagger}_{1 +}( x_{2},\tau_{2} ) \rangle \sim
\tilde{q} \int d \tau \langle \psi_{1+}(x_{1}, \tau_{1})
\cos \Phi^{n}_{sf}(0,\tau) \partial \Phi^{n}_{s}(0,\tau)
\psi^{\dagger}_{1+}(x_{2}, \tau_{2}) \rangle \nonumber \\
\sim \tilde{q} \int d\tau
\langle e^{-\frac{i}{2} \Phi_{c}( x_{1}, \tau_{1} )} e^{\frac{i}{2} \Phi_{c}( x_{2}, \tau_{2} )}\rangle
\langle e^{-\frac{i}{2} \Phi^{n}_{s}( x_{1}, \tau_{1} )} \partial \Phi^{n}_{s}(0, \tau)
e^{\frac{i}{2} ( \Phi^{n}_{s}( x_{2}, \tau_{2} ) + \theta )}\rangle \nonumber \\
\times \langle e^{-\frac{i}{2} \Phi_{f}( x_{1}, \tau_{1} )} e^{\frac{i}{2} \Phi_{f}( x_{2}, \tau_{2} )}\rangle
\langle e^{-\frac{i}{2} \Phi^{n}_{sf}( x_{1}, \tau_{1} )} e^{ i \Phi^{n}_{sf}( 0, \tau )}
e^{-\frac{i}{2} \Phi^{n}_{sf}( x_{2}, \tau_{2} )}\rangle \nonumber \\
\sim \tilde{q} e^{i \theta/2} ( z_{1}- \bar{z}_{2})^{-3/2} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\label{sin}
\end{eqnarray}
where $ z_{1}=\tau_{1}+i x_{1} $ is in the upper half plane,
$ \bar{z}_{2}=\tau_{2}+i x_{2} $ is in the lower half plane.
The first order correction to the single particle L-R Green
functions due to the spin-1 operator
vanishes because the spin-1 leading irrelevant operator can be written
as a total derivative and the three point functions are periodic functions of
the imaginary time.
The bosonized form of the $\tilde{y} $ term in Eq.\ref{later} in $ H $ is \cite{bose}
\begin{eqnarray}
\tilde{y} [ \cos \theta :\cos 2\Phi^{n}_{s}(0,\tau): -\frac{1}{2} :( \partial \Phi^{n}_{s}(0,\tau))^{2}:
~~~~~~~~~~\nonumber \\
-\sin \theta (:\sin 2\Phi^{n}_{s}(0,\tau): - \partial^{2} \Phi^{n}_{s}(0,\tau)) ]
\cos \Phi^{n}_{sf}(0,\tau)
\end{eqnarray}
The first order correction due to this dimension 5/2 operator is
\begin{eqnarray}
\langle \psi_{1 +}( x_{1},\tau_{1} ) \psi^{\dagger}_{1 +}( x_{2},\tau_{2} ) \rangle \sim
~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber \\
\tilde{y} \int d\tau
\langle e^{-\frac{i}{2} \Phi^{n}_{s}( x_{1}, \tau_{1} )} [-\frac{1}{2}
:(\partial \Phi^{n}_{s}(0, \tau))^{2}:+\sin \theta \partial^{2} \Phi^{n}_{s}(0,\tau) ]
e^{\frac{i}{2} ( \Phi^{n}_{s}( x_{2}, \tau_{2} ) + \theta )}\rangle \nonumber \\
\times \langle e^{-\frac{i}{2} \Phi^{n}_{sf}( x_{1}, \tau_{1} )} e^{ i \Phi^{n}_{sf}( 0, \tau )}
e^{-\frac{i}{2} \Phi^{n}_{sf}( x_{2}, \tau_{2} )}\rangle
\sim \tilde{y} e^{i \theta/2} (i C_{1} + C_{2} \sin \theta ) ( z_{1}- \bar{z}_{2})^{-5/2} ~~~~~~
\label{cos}
\end{eqnarray}
where $ C_{1}, C_{2} $ are real numbers.
From Eq.\ref{leading}, it is easy to identify another dimension 5/2 operator
\begin{equation}
: ( \partial \Phi^{n}_{s}(0,\tau))^{2}: \cos \Phi^{n}_{sf}(0,\tau)
\label{onemore}
\end{equation}
The contribution from this operator can be similarly calculated.
By using the following OPE:
\begin{eqnarray}
: e^{-\frac{i}{2} \Phi( z_{1} )}: : e^{\frac{i}{2} \Phi( z_{2})}: =
(z_{1}-z_{2})^{-1/4}-\frac{i}{2}(z_{1}-z_{2})^{3/4} :\partial \Phi(z_{2}):
\nonumber \\
-\frac{i}{4}(z_{1}-z_{2})^{7/4} :\partial^{2} \Phi(z_{2}):
-\frac{1}{8}(z_{1}-z_{2})^{7/4} : (\partial \Phi(z_{2}) )^{2}: + \cdots
\label{ope}
\end{eqnarray}
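Eq.\ref{ope} is the standard free chiral boson vertex operator OPE. With the convention $ \langle \Phi(z_{1}) \Phi(z_{2}) \rangle = -\log(z_{1}-z_{2}) $ it follows from
\begin{equation}
:e^{ia\Phi(z_{1})}: :e^{ib\Phi(z_{2})}: = (z_{1}-z_{2})^{ab}\,
:e^{i(a+b)\Phi(z_{2}) + ia(z_{1}-z_{2})\partial \Phi(z_{2})
+ \frac{ia}{2}(z_{1}-z_{2})^{2}\partial^{2} \Phi(z_{2}) + \cdots}:
\end{equation}
with $ a=-b=-\frac{1}{2} $; expanding the exponential reproduces the coefficients $ -\frac{i}{2}, -\frac{i}{4}, -\frac{1}{8} $ term by term.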
we get the three point functions
\begin{eqnarray}
\langle e^{-\frac{i}{2} \Phi( z_{1} )} \partial \Phi(z) e^{\frac{i}{2} \Phi( z_{2})} \rangle
& = & \frac{-i/2}{ (z_{1}-z)(z_{1}-z_{2})^{-3/4}(z-z_{2})} \nonumber \\
\langle e^{-\frac{i}{2} \Phi( z_{1} )} \partial^{2} \Phi(z) e^{\frac{i}{2} \Phi( z_{2})} \rangle
& = & \partial_{z}[ \frac{-i/2}{ (z_{1}-z)(z_{1}-z_{2})^{-3/4}(z-z_{2})} ]
\label{odd}
\end{eqnarray}
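As a check, the first line of Eq.\ref{odd} has exactly the form dictated by global conformal invariance for three fields of weights $ h_{1}=h_{2}=\frac{1}{8} $ and $ h=1 $:
\begin{equation}
\langle \phi_{1}(z_{1}) \phi(z) \phi_{2}(z_{2}) \rangle \propto
(z_{1}-z)^{-(h_{1}+h-h_{2})} (z-z_{2})^{-(h+h_{2}-h_{1})} (z_{1}-z_{2})^{-(h_{1}+h_{2}-h)}
\end{equation}
which yields the exponents $ 1, 1 $ and $ -\frac{3}{4} $ appearing above; the second line then follows by differentiation with respect to $ z $.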
By the conformal Ward identity, we can write the three point function with the energy momentum
tensor
\begin{eqnarray}
\langle e^{-\frac{i}{2} \Phi( z_{1} )} T(z) e^{\frac{i}{2} \Phi( z_{2})} \rangle
= \frac{1/8}{ (z_{1}-z)^{2}(z_{1}-z_{2})^{-7/4}(z-z_{2})^{2}}
\label{even}
\end{eqnarray}
In Ref.\cite{affleck2}, AL found that in the multi-channel Kondo model,
the electron self-energy has both real and imaginary
parts which are {\em non-analytic} functions of the frequency $ \omega $.
In the presence of P-H symmetry, the imaginary part is an {\em even} function of $ \omega $
and the real part is an {\em odd} function of $ \omega $,
because the two parts are related by the Kramers-Kronig relation. Only the part
of the self-energy which is both {\em imaginary} and an {\em even} function of $ \omega $ contributes
to the electron conductivity. The factor $ i $ interchanges the {\em real} and {\em imaginary}
parts. In evaluating Eq.\ref{cos}, we used the important fact that the two three point functions
in Eqs.\ref{odd},\ref{even} differ by the factor $ i $.
By conformal transforming Eq.\ref{sin} to finite temperature,
we get the {\em leading} term of the low temperature conductivity from channel one
and parity $ + $ fermions
\begin{equation}
\sigma_{1 +}(T) \sim \frac{\sigma_{u}}{2} (1- \tilde{q} \sin \frac{\theta}{2} \sqrt{T}),~~~~
\sigma_{u}= \frac{ 2 \pi (e \rho_{F} v_{F} )^{2} }{ 3 n_{i} }
\label{sign1}
\end{equation}
where $ \rho_{F} $ is the density of states {\em per spin per channel} at the Fermi energy
and $ n_{i} $ is the impurity density.
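The $ \sqrt{T} $ in Eq.\ref{sign1} can be traced to the standard conformal transformation of Eq.\ref{sin} to finite temperature, which amounts to the replacement
\begin{equation}
( z_{1}- \bar{z}_{2})^{-3/2} \longrightarrow
\left[ \frac{\beta}{\pi} \sin \frac{\pi ( z_{1}- \bar{z}_{2})}{\beta} \right]^{-3/2},
~~~~ \beta = \frac{1}{T}
\end{equation}
so that, relative to the free propagator $ (z_{1}-\bar{z}_{2})^{-1} $, a first order insertion of a dimension $ \Delta $ boundary operator contributes a relative factor $ \sim T^{\Delta-1} $: $ T^{1/2} $ for $ \Delta=\frac{3}{2} $, and $ T^{3/2} $ for the $ \Delta=\frac{5}{2} $ operators of Eq.\ref{cos}.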
Similarly \cite{ambi}, we get the {\em leading} term of the low temperature conductivity
from channel one and parity $ - $ fermions
\begin{equation}
\sigma_{1 -}(T) \sim \frac{\sigma_{u}}{2} (1- \tilde{q} \sin \frac{\theta}{2} \sqrt{T})
\label{sign2}
\end{equation}
Because of the global $ SU(2)_{f} $ symmetry in the flavor sector, the same equations hold in
channel 2.
Even though the symmetry in the spin sector is $ O(1) \times U(1) $ instead of the $ O(3) $ of the
2CK, the two channel and two parity fermions make the {\em same} leading contribution to the total conductivity
\begin{equation}
\sigma(T) \sim 2\sigma_{u} (1- \tilde{q} \sin \frac{\theta}{2} \sqrt{T})
\label{resist1}
\end{equation}
For $ \theta= \pi $, namely at the 2CK, $\tilde{q}=q $, and two universal ratios can be formed
from Eqs.\ref{heat1},\ref{sus1},\ref{resist1}.
For $ \theta=0 $, namely at the 2IK, $ \tilde{q}=y $, the coefficient of $ \sqrt{T} $ vanishes.
It is evident that the 2nd order correction (actually any {\em even} order correction)
to the Green function vanishes; the 3rd order correction would lead to $ T^{3/2} $,
but its coefficient still vanishes because it is $\propto \sin \frac{\theta}{2}=0 $, since {\em odd} order
corrections carry the same $ i $ factor.
By conformal transforming Eq.\ref{cos} to finite temperature \cite{connect},
we get the {\em next-leading} term of the low temperature electrical conductivity
\begin{equation}
\sigma_{1+} \sim \tilde{y}( \cos \frac{\theta}{2} C_{1} + \sin \frac{\theta}{2} \sin \theta C_{2}) T^{3/2}
\label{higher}
\end{equation}
Putting $ \theta =0 $ ( then $\tilde{y}=-q $ ) in the above equation
and adding the contribution from the operator in Eq.\ref{onemore},
we get the leading term at the 2IK fixed point:
\begin{equation}
\sigma(T) \sim 2 \sigma_{u} (1+ T^{3/2})
\label{resist11}
\end{equation}
It is evident that even at the 2IK point, no universal ratios can be formed.
For general $\theta $, the leading low temperature behaviors of the three physically measurable
quantities are given by Eqs.\ref{heat1},\ref{sus1},\ref{resist1}; no universal ratios can be
formed.
The potential scattering term $ V_{1} $ is a marginal operator which
causes a phase shift in the charge sector:
\begin{equation}
\Phi_{c,L}= \Phi_{c,R} + \theta_{ph}
\label{charge}
\end{equation}
The symmetry of the fixed point is reduced to $ O(1) \times U(1) \times O(3) \times U(1) $,
Eqs.\ref{sign1}, \ref{sign2} become:
\begin{eqnarray}
\sigma_{1 +}(T) \sim \frac{\sigma_{u}}{2} (1- \tilde{q} \sin \frac{\theta +\theta_{ph}}{2} \sqrt{T}) \nonumber \\
\sigma_{1 -}(T) \sim \frac{\sigma_{u}}{2} (1- \tilde{q} \sin \frac{\theta-\theta_{ph}}{2} \sqrt{T})
\end{eqnarray}
It is easy to see that in the presence of P-H symmetry breaking, {\em in contrast to}
the 2CK, the different parity fermions do make different contributions to the
conductivity; Eq.\ref{resist1} becomes
\begin{equation}
\sigma(T) \sim 2\sigma_{u} (1- \tilde{q} \sin \frac{\theta}{2} \cos \frac{\theta_{ph}}{2} \sqrt{T})
\label{times}
\end{equation}
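The factor $ \cos \frac{\theta_{ph}}{2} $ arises simply from summing the two parity channels:
\begin{equation}
\sin \frac{\theta+\theta_{ph}}{2} + \sin \frac{\theta-\theta_{ph}}{2}
= 2 \sin \frac{\theta}{2} \cos \frac{\theta_{ph}}{2}
\end{equation}
The same identity with $ \sin \rightarrow \cos $ produces the $ \cos \frac{\theta_{ph}}{2} $ in the $ T^{3/2} $ term below.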
Eq.\ref{higher} becomes
\begin{eqnarray}
\sigma_{1+} & \sim & \tilde{y}( \cos \frac{\theta + \theta_{ph}}{2} C_{1} +
\sin \frac{\theta+\theta_{ph}}{2} \sin \theta C_{2}) T^{3/2} \nonumber \\
\sigma_{1-} & \sim & \tilde{y}( \cos \frac{\theta - \theta_{ph}}{2} C_{1} +
\sin \frac{\theta-\theta_{ph}}{2} \sin \theta C_{2}) T^{3/2}
\end{eqnarray}
The $ T^{3/2} $ term in the total conductivity becomes
\begin{equation}
\sim \tilde{y} \cos \frac{\theta_{ph}}{2}( \cos \frac{\theta}{2} C_{1} + \sin \frac{\theta}{2} \sin \theta C_{2}) T^{3/2}
\end{equation}
The total conductivity at the 2IK becomes
\begin{equation}
\sigma(T) \sim 2 \sigma_{u} (1+ \cos \frac{\theta_{ph}}{2} T^{3/2})
\end{equation}
As shown by AL \cite{affleck1}, P-H symmetry breaking does not change the leading results of the
specific heat and susceptibility.
In this section, we discussed two marginal operators, one in the spin sector, $ \partial \Phi^{n}_{s}(0) $,
the other in the charge sector, $ \partial \Phi_{c}(0) $. Both make {\em no} contributions
to the conductivity and make only subleading contributions to the thermodynamic
quantities. However, $ \partial \Phi^{n}_{s}(0) $ is much more important.
Combined with the leading irrelevant operator in the spin sector, it will
generate the dimension 1/2 relevant operator which will always make the line
of NFL fixed points unstable. Furthermore, as shown by Eqs.\ref{heat1},\ref{sus1},
even the coefficients of the leading terms of the thermodynamic quantities
depend on the position on the line through $ \partial \Phi^{n}_{s}(0) $.
\section{Additional NFL fixed point}
If $ \tilde{q}=\tilde{y} =\Delta_{K} =0 $,
this fixed point is located
at $ \gamma_{1}=1, \gamma_{2}=0 $, where $\widehat{a} $ decouples and
$ \widehat{b} $ loses its kinetic energy and becomes a
Grassmann Lagrange multiplier; integrating $ \widehat{b} $ out leads to
the {\em boundary conditions}:
\begin{equation}
a^{L}_{s,i}(0)=-a^{R}_{s,i}(0)
\end{equation}
We also have the trivial boundary conditions
\begin{equation}
b^{L}_{s,i}(0)= b^{R}_{s,i}(0),
~~~~~ b^{L}_{sf}(0)= b^{R}_{sf}(0)
\end{equation}
Using Eq.\ref{general}, the above boundary conditions of $ H^{\prime} $
correspond to the following boundary conditions of $ H $:
\begin{equation}
a^{s}_{L}(0) = -a^{s}_{R}(0) ,~~~ b^{s}_{L}(0) = b^{s}_{R}(0) ,~~~
b^{sf}_{L}(0) = b^{sf}_{R}(0)
\label{twofold}
\end{equation}
The above boundary conditions can be expressed in terms of the {\em new}
chiral bosons in Eq.\ref{jinwu}:
\begin{equation}
\Phi^{n}_{s,L}(0) = - \Phi^{n}_{s,R}(0) + \pi
\end{equation}
In terms of {\em new} physical fermions, it reads:
\begin{equation}
\psi_{ i \pm,L}(0) = e^{ \pm i \frac{\pi}{2} } S_{i \pm,R}(0)
\end{equation}
This is an {\em NFL fixed point} with $ g=\sqrt{2} $ and the
symmetry $ O(1) \times O(7) $ which is the fixed point symmetry of the 2IK(a)
(Fig.~\ref{picture}). At this fixed point, the original electrons scatter into
the collective excitations which fit into the $ S $ spinor representation of SO(8).
The finite size spectrum is listed in Table \ref{2IK}.
The local correlation functions at this fixed point are
\begin{equation}
\langle \widehat{b}(\tau) \widehat{b}(0) \rangle \sim \frac{1}{\tau}, ~~~~
\langle a_{s,i}(0, \tau) a_{s,i}(0,0) \rangle \sim \frac{\gamma^{2}_{2}}{\tau^{3}}
\label{local2}
\end{equation}
We can also read off the scaling dimensions of the various fields:
$ [\widehat{a}]=0, [\widehat{b}]=1/2, [b_{s,i}]=[b_{sf}]=1/2, [a_{s,i}]=3/2 $.
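These dimensions follow from the local correlation functions via the standard relation for a boundary operator $ O $ of scaling dimension $ [O] $,
\begin{equation}
\langle O(\tau) O(0) \rangle \sim \frac{1}{\tau^{2[O]}}
\end{equation}
so Eq.\ref{local2} gives $ [\widehat{b}]=\frac{1}{2} $ and $ [a_{s,i}]=\frac{3}{2} $.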
This NFL fixed point is less likely to be observed in experiments, because
it has {\em two} relevant directions, $ \Delta_{1},\Delta_{-} $.
Similar to the discussions in the last section, at the 2IK fixed point,
$ \widehat{a} $ decouples and $ \widehat{b} $ turns into
the {\em non-interacting} scaling field at the fixed point
\begin{equation}
\widehat{b}(\tau) \sim a_{s,i}(0, \tau)
\end{equation}
The corresponding scaling field in $ H $ is
\begin{equation}
-b_{s}(0,\tau)
\end{equation}
The impurity spin turns into
\begin{eqnarray}
S_{x}(\tau) & \sim & i \widehat{a} a_{s} + \cdots \nonumber \\
S_{y}(\tau) & \sim & i( \widehat{a} b_{s} +a_{s} b_{s} ) +\cdots \nonumber \\
S_{z}(\tau) & \sim & i \widehat{a}b_{s} +\cdots
\end{eqnarray}
The impurity spin-spin correlation function $ \langle S^{a}(\tau) S^{a}(0) \rangle
\sim 1/\tau $.
On the line of NFL fixed points discussed in the previous section, if $ \theta =0$,
the fixed point symmetry is enlarged to that of the 2IK(b) (Fig.~\ref{picture}).
Although these two fixed points
have the same symmetry, and therefore the same finite size spectrum,
the {\em allowed} boundary operators are very {\em different}.
This additional NFL fixed point is also {\em unstable}.
$ \Delta_{1}, \Delta_{-} $ are {\em two} relevant terms with scaling
dimension 1/2; any linear combination of the two terms will
drive the system to a given point on the line of FL fixed points to be discussed
in the following section. They will generate two dimension $ 3/2 $
leading irrelevant operators $L_{-1} b_{sf}(0,\tau), L_{-1}b_{s,i}(0,\tau) $
respectively \cite{des} and two dimension 2 subleading irrelevant operators
(energy-momentum tensors )
$ b_{sf}(0,\tau) \frac{ \partial b_{sf}(0,\tau) }{\partial \tau}
, b_{s,i}(0,\tau) \frac{ \partial b_{s,i}(0,\tau) }{\partial \tau} $.
As explained in the last section, because the two leading irrelevant operators
are {\em first order Virasoro descendants}, they make {\em no} contributions.
The $\gamma_{2} $ term also has dimension 2. In all, there are {\em three} dimension 2 subleading
irrelevant operators, which contribute
\begin{equation}
C_{imp} \sim T
\label{heat2}
\end{equation}
From Eqs.\ref{final},\ref{local2}, we get the susceptibility
\begin{equation}
\chi^{h}_{imp} \sim q^{2} \log T
\label{sus2}
\end{equation}
By Eq.\ref{define}, the leading and subleading irrelevant operators in $ H^{\prime} $ become
\begin{eqnarray}
\frac{\partial b_{sf}(0,\tau)}{\partial \tau},~~~
\frac{\partial b^{\prime}_{s}(0,\tau)}{\partial \tau} ~~~~~~~~~~~~~ \nonumber \\
b_{sf}(0,\tau) \frac{ \partial b_{sf}(0,\tau) }{\partial \tau},~~~
b^{\prime}_{s}(0,\tau) \frac{ \partial b^{\prime}_{s}(0,\tau) }{\partial \tau} \nonumber \\
\gamma_{2} \widehat{b}(\tau) \frac{\partial \widehat{b}(\tau)}{\partial \tau}
= \gamma_{2} a^{\prime}_{s}(0,\tau)
\frac{\partial a^{\prime}_{s}(0,\tau)}{\partial \tau}
\end{eqnarray}
By Eq.\ref{general}, the corresponding operators in $ H $ are
\begin{eqnarray}
\frac{\partial b_{sf}(0,\tau)}{\partial \tau},~~~ \frac{\partial a_{s}(0,\tau)}{\partial \tau} ~~~~~~~~~~~~~~~~~~~~~
\nonumber \\
b_{sf}(0,\tau) \frac{ \partial b_{sf}(0,\tau) }{\partial \tau},~~~~
a_{s}(0,\tau) \frac{ \partial a_{s}(0,\tau) }{\partial \tau},~~~
\gamma_{2} b_{s}(0, \tau) \frac{\partial b_{s}(0,\tau)}{\partial \tau}
\label{twofold1}
\end{eqnarray}
They can be written in terms of the {\em new} bosons \cite{omit}
\begin{eqnarray}
\frac{\partial}{\partial \tau} \sin \Phi^{n}_{s}(0,\tau),~~~
\frac{\partial}{\partial \tau} \cos \Phi^{n}_{s}(0,\tau) \nonumber \\
\pm \cos 2\Phi^{n}_{s}(0,\tau)- \frac{1}{2} :( \partial \Phi^{n}_{s}(0,\tau))^{2}: \nonumber \\
\gamma_{2} ( \cos 2\Phi^{n}_{sf}(0,\tau)-
\frac{1}{2} :( \partial \Phi^{n}_{sf}(0,\tau))^{2}:)
\label{change}
\end{eqnarray}
The first order corrections to the single particle L-R Green function
due to the 1st and the 2nd operators in Eq.\ref{change} vanish.
Those due to the 3rd and 4th operators are \cite{omit}
\begin{eqnarray}
\langle \psi_{1 +}( x_{1},\tau_{1} ) \psi^{\dagger}_{1 +}( x_{2},\tau_{2} ) \rangle
\sim ~~~~~~~~~~~~~~~~~~~~~
\nonumber \\
\int d \tau \langle e^{-\frac{i}{2} \Phi^{n}_{s}( x_{1}, \tau_{1} )}
( \pm \cos 2\Phi^{n}_{s}(0,\tau) -\frac{1}{2} :( \partial \Phi^{n}_{s}(0,\tau))^{2}:)
e^{\frac{i}{2} (- \Phi^{n}_{s}( x_{2}, \tau_{2} )+ \pi)}\rangle =0
\end{eqnarray}
The 5th operator in Eq.\ref{change} makes {\em no} contributions to the Green function either:
\begin{eqnarray}
& \langle & \psi_{1 +}( x_{1},\tau_{1} ) \psi^{\dagger}_{1 +}( x_{2},\tau_{2} ) \rangle \sim
\gamma_{2} \int d \tau
\langle e^{\frac{i}{2} (-\Phi^{n}_{s}( x_{1}, \tau_{1} ) +\pi )}
e^{-\frac{i}{2} \Phi^{n}_{s}( x_{2}, \tau_{2} ) }\rangle \nonumber \\
& \times & \langle e^{-\frac{i}{2} \Phi^{n}_{sf}( x_{1}, \tau_{1} )}
( \cos 2\Phi^{n}_{sf}(0,\tau) -\frac{1}{2} :( \partial \Phi^{n}_{sf}(0,\tau))^{2}:)
e^{\frac{i}{2} \Phi_{sf} ( x_{2}, \tau_{2} )}\rangle =0
\end{eqnarray}
It is easy to see that higher order corrections due to the above operators also vanish.
The $ y $ and $ q $ terms are two irrelevant operators
with scaling dimension 5/2; they can be written in $ H^{\prime} $ as
\begin{equation}
: \widehat{b}(\tau) \frac{\partial \widehat{b}(\tau)}{\partial \tau}: b_{sf}(0,\tau),~~~
: \widehat{b}(\tau) \frac{\partial \widehat{b}(\tau)}{\partial \tau}: b_{s,i}(0,\tau)
\end{equation}
The bosonized forms in $ H $ are
\begin{eqnarray}
( \cos 2\Phi^{n}_{sf}(0,\tau)- \frac{1}{2} :( \partial \Phi^{n}_{sf}(0,\tau))^{2}:)
\sin \Phi^{n}_{s}(0,\tau) \nonumber \\
( \cos 2\Phi^{n}_{sf}(0,\tau)- \frac{1}{2} :( \partial \Phi^{n}_{sf}(0,\tau))^{2}:)
\cos \Phi^{n}_{s}(0,\tau)
\end{eqnarray}
The first order correction due to the first operator is \cite{connect}
\begin{eqnarray}
& \langle & \psi_{1 +}( x_{1},\tau_{1} ) \psi^{\dagger}_{1 +}( x_{2},\tau_{2} ) \rangle \sim
i \int d \tau
\langle e^{-\frac{i}{2} \Phi^{n}_{s}( x_{1}, \tau_{1} )} e^{i \Phi^{n}_{s}(0,\tau)}
e^{\frac{i}{2} (- \Phi^{n}_{s}( x_{2}, \tau_{2} ) +\pi) }\rangle \nonumber \\
& \times & \langle e^{-\frac{i}{2} \Phi^{n}_{sf}( x_{1}, \tau_{1} )}
:( \partial \Phi^{n}_{sf}(0,\tau))^{2}:
e^{\frac{i}{2} \Phi_{sf}( x_{2}, \tau_{2} )}\rangle
\sim i (z_{1}-\bar{z}_{2})^{-5/2}
\label{save}
\end{eqnarray}
The above integral is essentially the same as the first part of that in Eq.\ref{cos}.
As explained in the preceding section, due to the factor of $ i $ difference from the first
operator, the 2nd operator makes {\em no} contribution to the conductivity.
The other two dimension 5/2 operators are \cite{flavor}
\begin{equation}
L_{-2} b_{sf}(0,\tau) = \frac{3}{4} \frac{\partial^{2}}{\partial \tau^{2}} \sin \Phi^{n}_{s}(0,\tau),~~~
L_{-2} a_{s}(0,\tau) = \frac{3}{4} \frac{\partial^{2}}{\partial \tau^{2}} \cos \Phi^{n}_{s}(0,\tau)
\end{equation}
Because they can still be written as total derivatives, they make no contributions to
the Green functions.
Conformal transformation of Eq.\ref{save} to finite temperature and scaling analysis \cite{twoimp}
lead to
\begin{equation}
\sigma(T) \sim 2\sigma_{u} (1+ T^{3/2})
\label{resist2}
\end{equation}
There is {\em no} chance to form universal ratios from Eqs.\ref{heat2},\ref{sus2},\ref{resist2}.
In Appendix C, similar calculations in the {\em old} boson basis Eq.\ref{second} are performed.
\section{ The line of FL fixed points }
If $ \tilde{q}=\tilde{y} =0 $,
this fixed point is located
at $ \gamma_{1}= \gamma_{2}=0 $, where both $\widehat{a} $ and
$ \widehat{b} $ lose their kinetic energies and become two
Grassmann Lagrange multipliers; integrating them out leads to
the following {\em boundary conditions}:
\begin{equation}
a^{L}_{s,i}(0)=-a^{R}_{s,i}(0),
~~~~~ \tilde{b}^{L}_{sf,i}(0)= -\tilde{b}^{R}_{sf,i}(0)
\end{equation}
We also have the trivial boundary conditions
\begin{equation}
\tilde{b}^{L}_{s,i}(0)= \tilde{b}^{R}_{s,i}(0)
\end{equation}
Following the same procedures as those in the discussion of
the line of NFL fixed points, the above boundary
conditions correspond to the boundary conditions in $ H $
\begin{equation}
b^{s}_{L}(0) = b^{s}_{R}(0), ~~~~~ \psi^{a}_{L}(0) = e^{i \theta} \psi^{a}_{R}(0)
\label{vectorf}
\end{equation}
The above boundary condition can be expressed in terms of
the {\em new} bosons in Eq.\ref{jinwu}
\begin{equation}
\Phi^{n}_{s,L}=\Phi^{n}_{s,R}+ \theta
\label{fermi}
\end{equation}
As discussed for the line of NFL fixed points, the physical fermions transform as
the spinor representation
of the above boundary conditions; the corresponding boundary conditions are
derived in Appendix B, with the result
\begin{equation}
\psi^{L}_{i \pm } = e^{ \pm i \theta/2 } \psi^{R}_{i \pm}
\label{last}
\end{equation}
It can be checked explicitly that the above boundary conditions
satisfy all the symmetry requirements.
The fermion fields with the even and the odd parity under $ Z_{2} $
suffer opposite continuously changing phase shifts
$ \pm \frac{\theta}{2} $
along this line of fixed points. Depending on the sign of $ \Delta_{0} $, the impurity
will occupy either the even parity state with $ S^{x} =\frac{1}{2} $ or
the odd parity state $ S^{x}= -\frac{1}{2} $. This simple physical picture
should be expected from the starting Hamiltonian Eq.\ref{start}:
If we keep the $ \Delta_{0} $ term fixed, then the $ V_{3} $ term is irrelevant
and the $ y $ term ( $ y=2 V_{2} $ ) is exactly marginal; it causes
continuous opposite phase shifts for $ \psi_{i \pm} $.
The $ V_{3} $ term will generate
the dimension 2 operator $ : J^{z}(0)J^{z}(0) : $, and the OPE of the $ y $ term
and the $ V_{3} $ term will generate the $ S^{z}J^{y}(0) $ term; this term will
generate {\em another} dimension 2 operator $ : J^{y}(0) J^{y}(0) : $.
The impurity spin-spin correlation function $ \langle S^{z}(\tau) S^{z}(0) \rangle
\sim ( \frac{ V_{3}}{ \Delta_{0}})^{2} \frac{1}{ \tau^{2}} $ is
consistent with Eq.\ref{kick}.
The one particle $ S $ matrix is $ S^{\pm}_{1} = e^{ \pm i \theta/2 } $,
and the residual conductivity is $ \sigma(0) = 2\sigma_{u}/(1- \cos \frac{\theta}{2} ) $.
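As a check on the limits of this formula:
\begin{equation}
\sigma(0)\Big|_{\theta=\pi} = \frac{2\sigma_{u}}{1-\cos \frac{\pi}{2}} = 2\sigma_{u},
~~~~ \sigma(0)\Big|_{\theta \rightarrow 0} \rightarrow \infty
\end{equation}
consistent with $ \sigma(0)=2\sigma_{u} $ at the 2CSFK point $ \theta=\pi $ quoted later in this section, and with asymptotically free electrons at $ \theta=0 $, where the residual conductivity is limited only by the potential scattering term omitted in this section.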
This is a {\em line of FL fixed points } with $ g=1 $ and the symmetry $ U(1)
\times O(6) \sim U(4) $ which interpolates continuously between the non-interacting
fixed point and the 2CSFK fixed point found by the author
in Ref. \cite{spinflavor}.
If $ \theta=0 $, namely $ \Delta_{1}=0 $, the residual conductivity comes from
the potential scattering term discussed in Sec. V, which is omitted in this section;
the fixed point
symmetry is enlarged to $ O(8) $ which is the fixed point symmetry of the
non-interacting electrons. We can see this very intuitively from Eq. \ref{start}:
if $ y= \Delta_{1} = \Delta_{2}=0 $, then they will remain zero, the impurity will
be in either the even parity state or the odd parity state depending on the sign
of $ \Delta_{0} $, the electrons and the impurity
will asymptotically decouple, therefore the electrons are asymptotically free \cite{deg}.
Non-zero $ \Delta_{2} $ will {\em not} change the above physical picture,
because it will not generate $ y $ and $ \Delta_{1} $ terms, however
if $ \Delta_{+} = \Delta_{0} + \Delta_{2}=0 $, the 2IK(b) is reached, and
if $ \Delta_{-} = \Delta_{0} - \Delta_{2}=0 $, the 2IK(a) is reached.
As shown in Fig.\ref{picture}, both fixed points will flow to the free fixed point
asymptotically. If $ \theta=\pi $, namely $ \Delta_{-}=0 $, $ \sigma(0)=2 \sigma_{u} $,
the fixed point
symmetry is enlarged to $ O(2) \times O(6) $, which is the fixed point
symmetry of the 2CSFK.
This line of FL fixed points is {\em stable}. There is {\em no} relevant operator.
There is one marginal operator in the spin sector $ \partial \Phi^{n}_{s}(0) $
which changes the angle $ \theta $ in Eq. \ref{fermi}.
The $ \gamma_{1} $ and $\gamma_{2} $ terms, which are the leading irrelevant operators,
have dimension 2; they lead to typical Fermi liquid behaviors.
The $ \tilde{q} $ term has dimension 3; in $ H^{\prime} $, it can be written as
$ : \widehat{b}(\tau) \frac{\partial \widehat{b}(\tau)}{\partial \tau} :
b_{s,i}(0,\tau) b_{sf}(0, \tau) $.
The $ \tilde{y} $ term has dimension 4; in $ H^{\prime} $, it can be written as
$ :\widehat{a}(\tau) \frac{\partial \widehat{a}(\tau)}{\partial \tau}:
: \widehat{b}(\tau) \frac{\partial \widehat{b}(\tau)}{\partial \tau} : $.
This operator content
is completely consistent with our direct analysis mentioned above.
This complete agreement provides a very powerful check on the method developed
in Ref.\cite{powerful}.
The local correlation functions at this fixed point are
\begin{eqnarray}
\langle \widehat{a}(\tau) \widehat{a}(0) \rangle & \sim & \frac{1}{\tau}, ~~~~
\langle \tilde{b}_{sf,i}(0, \tau) \tilde{b}_{sf,i}(0,0) \rangle \sim \frac{\gamma^{2}_{1}}{\tau^{3}}
\nonumber \\
\langle \widehat{b}(\tau) \widehat{b}(0) \rangle & \sim & \frac{1}{\tau}, ~~~~
\langle a_{s,i}(0, \tau) a_{s,i}(0,0) \rangle \sim \frac{\gamma^{2}_{2}}{\tau^{3}}
\end{eqnarray}
From the above equation, we can also read off the scaling dimensions of the various fields
$ [\widehat{a}]=[\widehat{b}]=1/2, [\tilde{b}_{s,i}]=1/2,
[a_{s,i}]=[\tilde{b}_{sf,i}]=3/2 $.
At the fixed point, $ \widehat{a}, \widehat{b} $ turn into
the {\em non-interacting} scaling fields in $ H^{\prime} $
\begin{eqnarray}
\widehat{a}(\tau) & \sim & \tilde{b}_{sf,i}(0, \tau)=
\frac{1}{\Delta_{K}}( -\Delta_{-} b_{s,i}(0,\tau)
+ \Delta_{1} b_{sf}(0,\tau)) \nonumber \\
\widehat{b}(\tau) & \sim & a_{s,i}(0, \tau)
\end{eqnarray}
The corresponding two scaling fields in $ H $ are
\begin{eqnarray}
\frac{1}{\Delta_{K}}( -\Delta_{-} a_{s} + \Delta_{1} b_{sf}) \nonumber \\
-b_{s}(0, \tau) ~~~~~~~~~~~~~~~
\end{eqnarray}
The impurity spin turns into
\begin{eqnarray}
S_{x}(\tau) &\sim &- i \frac{\Delta_{1}}{\Delta_{K}} a_{s} b_{sf} +\cdots \nonumber \\
S_{y}(\tau) & \sim & i( a_{s} b_{s} +
\frac{1}{\Delta_{K}}( -\Delta_{-} a_{s} + \Delta_{1} b_{sf})b_{s} ) +\cdots \nonumber \\
S_{z}(\tau) & \sim & i \frac{1}{\Delta_{K}}( -\Delta_{-} a_{s} + \Delta_{1} b_{sf})b_{s} +\cdots
\end{eqnarray}
The impurity spin-spin correlation function shows typical FL behavior
\begin{equation}
\langle S^{a}(\tau) S^{a}(0) \rangle \sim 1/\tau^{2}
\label{kick}
\end{equation}
The two leading irrelevant operators in $ H $ become
\begin{eqnarray}
\gamma_{1} \widehat{a}(\tau) \frac{\partial
\widehat{a}(\tau)}{\partial \tau} = ~~~~~~~~~~~~~~~~~~~~~~~~~
\nonumber \\
(\frac{\Delta_{-}}{\Delta_{K}})^{2} a_{s} \frac{\partial a_{s}(\tau)}{\partial \tau}
- 2\frac{ \Delta_{-} \Delta_{1}}{ \Delta^{2}_{K}} a_{s} \frac{\partial b_{sf}(\tau)}{\partial \tau}
+ (\frac{\Delta_{1}}{\Delta_{K}})^{2} b_{sf} \frac{\partial b_{sf}(\tau)}{\partial \tau}
\nonumber \\
= \cos \theta :\cos 2\Phi^{n}_{s}(0,\tau): -\frac{1}{2} :( \partial \Phi^{n}_{s}(0,\tau))^{2}:
~~~~~~~~~~\nonumber \\
-\sin \theta (:\sin 2\Phi^{n}_{s}(0,\tau): - \partial^{2} \Phi^{n}_{s}(0,\tau))~~~~~~~~~~~~~~~~~
\nonumber \\
\gamma_{2} \widehat{b}(\tau) \frac{\partial \widehat{b}(\tau)}{\partial \tau}
\sim \gamma_{2}( :\cos 2\Phi^{n}_{sf}(0,\tau): -\frac{1}{2} :( \partial \Phi^{n}_{sf}(0,\tau))^{2}:)
\label{win}
\end{eqnarray}
Although the first operator does depend on the angle $\theta $, its scaling dimension
remains 2; therefore it will not affect the exponents of
any physically measurable quantities.
We refer the readers to Ref.\cite{flavor} for detailed similar calculations of
the single particle Green function and the electron conductivity.
Second order corrections to the single particle Green functions from the two leading
irrelevant operators lead to $ \sigma(T) \sim \sigma(0) + C(\theta) T^{2} $.
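The $ T^{2} $ law is the usual dimension count for a Fermi liquid: each insertion of a dimension $ \Delta=2 $ boundary operator of strength $ \lambda $ carries a factor $ T^{\Delta-1}=T $, the first order terms give no contribution, and the second order terms give, schematically,
\begin{equation}
\frac{\delta \sigma}{\sigma(0)} \sim \left( \lambda T^{\Delta-1} \right)^{2}
= \lambda^{2} T^{2}, ~~~~ \Delta=2
\end{equation}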
\section{ The effects of external magnetic field}
According to Ref.\cite{fisher1}, the parameters in Eq.\ref{start} are
\begin{eqnarray}
V_{1} &= & \pi \rho_{F} V \\
V_{2} &= & \pi \rho_{F} V \frac{\sin k_{F} R}{ k_{F} R} \\
V_{3} &= & \pi \rho_{F} V \sqrt{ 1- (\frac{\sin k_{F} R}{ k_{F} R})^{2} }
\end{eqnarray}
The external magnetic field $ H $ breaks the $ SU(2) $ flavor ( the real spin ) symmetry:
it causes the energy band of spin $ \uparrow $ electrons to shift downwards and that of spin $\downarrow $
electrons to shift upwards. Channel 1 and 2 electrons then have {\em different} Fermi momenta and therefore
couple to the impurity with {\em different} strengths. Setting the external strain $ h=0 $,
the Hamiltonian is
\begin{eqnarray}
H &= & H_{0} +i \delta v_{F} \int dx ( \psi^{\dagger}_{1 \alpha}(x)
\frac{ \partial \psi_{1 \alpha}(x)}{ \partial x}
- \psi^{\dagger}_{2 \alpha}(x) \frac{ \partial \psi_{2 \alpha}(x)}{ \partial x} ) \nonumber \\
&+ & V_{1} J_{c} (0) + \delta V_{1} J^{z}_{f} (0) +
2 V_{2} J^{x}(0) + 2 \delta V_{2} \tilde{J}^{x}(0)
\nonumber \\
& + & 4 V_{3} S^{z} J^{z}(0) +\Delta_{1} (J^{x}(0) S^{x} + J^{y}(0) S^{y} ) \nonumber \\
&+ & 4 \delta V_{3} S^{z} \tilde{J}^{z}(0) + \delta \Delta_{1} ( \tilde{J}^{x}(0) S^{x} + \tilde{J}^{y}(0) S^{y} )
\nonumber \\
& + & \frac{ \Delta_{0} }{ \pi \tau_{c} } S^{x}
+\Delta_{2} 2 \pi \tau_{c} ( S^{-}
\psi^{\dagger}_{1 \uparrow} \psi_{1 \downarrow}
\psi^{\dagger}_{2 \uparrow} \psi_{2 \downarrow} +h.c.)
\label{field}
\end{eqnarray}
where $ \tilde{J}^{a}(x)=J^{a}_{1}(x)-J^{a}_{2}(x) $ and all the $\delta $ terms are
$ \sim H $.
The term $ \frac{\delta v_{F}}{ 2 \pi} \int dx(
\partial \Phi_{c}(x) \partial \Phi_{f}(x) +
\partial \Phi_{s}(x) \partial \Phi_{sf}(x) ) $ does not couple to the impurity,
and therefore can be neglected.
It is important to observe that the bare hopping term and the two-electron assisted hopping
term are {\em not} affected by the magnetic field. Following Ref.\cite{flavor},
the transformed Hamiltonian $ H^{\prime}= U H U^{-1} $ is
\begin{eqnarray}
H^{\prime} &= & H_{0}+ 2 y \widehat{a} \widehat{b} a_{s,i}(0) b_{sf}(0)
+ 2 \delta y \widehat{a} \widehat{b} a_{sf}(0) b_{s,i}(0) \nonumber \\
&+ & 2 q \widehat{a} \widehat{b} a_{s,i}(0) b_{s,i}(0)
+ 2 \delta q \widehat{a} \widehat{b} a_{sf}(0) b_{sf}(0)
\nonumber \\
& - & i \frac{ \Delta_{1}}{\sqrt{ 2 \pi \tau_{c} }} \widehat{a} b_{sf}(0)
+ i \frac{ \delta \Delta_{1}}{\sqrt{ 2 \pi \tau_{c} }} \widehat{b} a_{sf}(0)
\nonumber \\
& + & i \frac{ \Delta_{+}}{\sqrt{ 2 \pi \tau_{c} }} \widehat{b} a_{s,i}(0)
+i \frac{ \Delta_{-}}{\sqrt{ 2 \pi \tau_{c} }} \widehat{a} b_{s,i}(0)
\end{eqnarray}
Performing the rotation Eq.\ref{twist}, the above equation can be rewritten as
\begin{eqnarray}
H^{\prime} & = & H_{0}
+ 2 \tilde{q} \widehat{a} \widehat{b} a_{s,i}(0) \tilde{b}_{s,i}(0)
+ 2 \delta \tilde{q} \widehat{a} \widehat{b} a_{sf}(0) \tilde{b}_{sf,i}(0) \nonumber \\
& + & 2 \tilde{y} \widehat{a} \widehat{b} a_{s,i}(0) \tilde{b}_{sf,i}(0)
+ 2 \delta\tilde{y} \widehat{a} \widehat{b} a_{sf}(0) \tilde{b}_{s,i}(0)
\nonumber \\
& - & i \frac{ \Delta_{K}}{\sqrt{ 2 \pi \tau_{c} }}
\widehat{a} \tilde{b}_{sf,i}(0)
+i \frac{ \Delta_{+}}{\sqrt{ 2 \pi \tau_{c} }} \widehat{b} a_{s,i}(0)
+i \frac{ \delta \Delta_{1}}{\sqrt{ 2 \pi \tau_{c} }} \widehat{b} a_{sf}(0)
\label{sy}
\end{eqnarray}
It is evident that the magnetic field $ H $ introduces another relevant operator
with scaling dimension 1/2. Under the combination of the two relevant directions in
the above equation, the system flows to the line of FL fixed points with the boundary
conditions
\begin{equation}
\Phi^{n}_{s,L}=\Phi^{n}_{s,R}+ \theta_{s},~~~~ \Phi^{n}_{sf,L}=\Phi^{n}_{sf,R}+ \theta_{sf}
\end{equation}
If $ H=0 $, then $ \theta_{sf}=0 $ and the boundary condition Eq.\ref{fermi} is recovered.
If $ \Delta_{+}=0 $, then $ \theta_{sf}=\pi $.
There are two marginal operators along this line of FL fixed points:
one, $ \partial \Phi^{n}_{s}(0) $, is in the spin sector and
changes the angle $ \theta_{s} $;
the other, $ \partial \Phi^{n}_{sf}(0) $, is in the spin-flavor sector and
changes the angle $ \theta_{sf} $. The corresponding boundary conditions
in the original fermion basis can be similarly worked out as in the last section.
\section{ Scaling analysis of the physical measurable quantities }
In this section, following the methods developed in Refs.\cite{qc,detail},
and also considering the corrections due to the {\em leading} irrelevant operators,
we derive the scaling functions of the conductivity, the impurity specific heat and the
susceptibilities:
\begin{equation}
A(T, \Delta_{+}, H, \lambda) = F( \frac{a \Delta_{+}}{\sqrt{T}}, \frac{b H}{\sqrt{T}}, \lambda \sqrt{T} )
\label{cond}
\end{equation}
where $ a,b $ are non-universal metric factors which depend on $ \theta, \theta_{ph} $
and on the cutoff of the low energy theory; the dependence on $ \theta $ is due to the existence
of the exactly marginal operator $ \partial \Phi^{n}_{s}(0) $ in the spin sector \cite{irre}.
The Kondo temperature is given by $ T_{K} \sim \lambda^{-2} $.
We confine ourselves to $ T < T_{K} $, so that $ \lambda \sqrt{T} $ is a small parameter, and
we expand the right hand side of Eq.\ref{cond} in terms of the leading irrelevant operator
\begin{equation}
A(T, \Delta_{+}, H, \lambda) = f_{0}( \frac{a \Delta_{+}}{\sqrt{T}}, \frac{b H}{ \sqrt{T}} ) +
\lambda \sqrt{T} f_{1}( \frac{ a \Delta_{+}}{\sqrt{T}}, \frac{b H}{ \sqrt{T}} ) +
(\lambda \sqrt{T})^{2} f_{2}( \frac{ a \Delta_{+}}{\sqrt{T}}, \frac{b H}{ \sqrt{T}} ) + \cdots
\label{expand}
\end{equation}
For simplicity, we only consider $ \Delta_{+} \neq 0 $ or $ H \neq 0 $. The general
case Eq.\ref{cond} can be discussed along similar lines as in Ref.\cite{read}.
From Eq.\ref{sy}, it is easy to observe that
the $ \Delta_{+} $ term and the magnetic field $ H $ term play very similar roles. In
the following, we only explicitly derive the scaling function in terms of
$ \Delta_{+} $. The scaling functions in the presence of $ H $ can be
obtained by replacing $ \Delta_{+} $ by $ H $.
As discussed in Sec. V, depending on the sign of $ \Delta_{+} $, the impurity is in either even parity
or odd parity states, but the physically measurable quantities
should not depend on whether the system flows to FL1 (even parity)
or FL2 (odd parity), so the above scaling function should depend only on $ |\Delta_{+}| $.
In the following, we discuss the scaling functions of $ \sigma(T)-\sigma(0), C_{imp}, \chi^{h}_{imp}
,\chi_{imp} $ respectively. Here $ \chi^{h}_{imp}, \chi_{imp} $ are the impurity
hopping and spin susceptibility respectively.
{\em The scaling function of the conductivity}
We look at the two different limits of the $ f $ functions.
Keeping $ T < T_{K} $ fixed and letting $ \Delta_{+} \rightarrow 0 $, the system is in the Quantum Critical (QC)
regime controlled by the line of NFL fixed points. We can perform perturbative expansions in terms of
$ \Delta_{+}/\sqrt{T} $. As discussed in Refs.\cite{powerful,flavor},
the {\em overall sign ambiguity} in the spinor representation
of the NFL boundary condition Eq.\ref{obi} should be fixed by the requirement of
symmetry and analyticity \cite{ambi}.
The perturbative expansions are
\begin{eqnarray}
\sigma_{i +}(T, \Delta_{+}, \lambda=0 ) - \sigma(0) & \sim &
\frac{\Delta_{+}}{\sqrt{T}} + ( \frac{\Delta_{+}}{\sqrt{T}} )^{3} + \cdots \nonumber \\
\sigma_{i -}(T, \Delta_{+}, \lambda=0 ) - \sigma(0) & \sim &
-\frac{\Delta_{+}}{\sqrt{T}} - ( \frac{\Delta_{+}}{\sqrt{T}} )^{3} + \cdots
\label{cancel}
\end{eqnarray}
The total conductivity $ \sigma(T, \Delta_{+}, \lambda=0 ) -\sigma(0) = 0 $, therefore
$ f_{0}(x) \equiv 0 $. The conductivities
from the different parities have to cancel each other; otherwise, we would get a {\em non-analytic}
dependence on a {\em small} magnetic field at {\em finite} temperature \cite{thank}.
\begin{eqnarray}
f_{1}( x ) & = & 1 + x^{2} +x^{4} + \cdots, ~~~~ x \ll 1
\end{eqnarray}
Substituting the above equation into Eq.\ref{expand}, we get
\begin{equation}
\sigma(T, \Delta_{+}, \lambda)-\sigma(0) =
\lambda \sqrt{T} + \frac{ \lambda \Delta_{+}^{2}}{ \sqrt{T}} + \cdots
\end{equation}
Keeping $ \Delta_{+} $ fixed but small and letting $ T \rightarrow 0 $, the system is in the FL1 (or FL2) regime,
and the conductivity should reduce to the FL form
\begin{eqnarray}
f_{0}( x ) & \equiv & 0 \nonumber \\
f_{1}( x ) & = & |x| + |x|^{-3} + \cdots, ~~~~ x \gg 1 \nonumber \\
f_{2}( x ) & \equiv & 0
\end{eqnarray}
Substituting the above equation into Eq.\ref{expand}, we have
\begin{equation}
\sigma(T, \Delta_{+}, \lambda)-\sigma(0) = \lambda |\Delta_{+} | + (\lambda |\Delta_{+} | )^{3}
+ ( \lambda |\Delta_{+}|^{-3} + \cdots ) T^{2}+\cdots
\label{test1}
\end{equation}
The above equation indicates that the coefficient of $ T^{2} $ diverges
as $ \lambda | \Delta_{+} |^{-3} $, instead of $ \Delta_{+}^{-4} $, as we approach
the line of NFL fixed points.
This means that the relevant operator $ \Delta_{+} $ with scaling dimension 1/2, combined
with the leading irrelevant operator $ \lambda $ with scaling dimension $ -1/2 $
near the line of NFL fixed points, will turn into one of the irrelevant operators $ \lambda_{FL,-2} $
with scaling dimension $ -2 $ near the line of FL fixed points
\begin{equation}
\lambda_{FL,-2} \sim \lambda |\Delta_{+}|^{-3}
\end{equation}
First order perturbation in this operator leads to Eq.\ref{test1}.
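To make the origin of the $ T^{2} $ coefficient explicit, one can substitute the large-$x$ form of $ f_{1} $ into Eq.\ref{expand}; the following bookkeeping is our own sketch, with the non-universal metric factor $ a $ suppressed:
\begin{eqnarray}
\lambda \sqrt{T} f_{1}( \frac{\Delta_{+}}{\sqrt{T}} )
& = & \lambda \sqrt{T} ( | \frac{\Delta_{+}}{\sqrt{T}} |
+ | \frac{\Delta_{+}}{\sqrt{T}} |^{-3} + \cdots ) \nonumber \\
& = & \lambda |\Delta_{+}| + \lambda |\Delta_{+}|^{-3} T^{2} + \cdots
\end{eqnarray}
so the $ |x|^{-3} $ tail of $ f_{1} $ is precisely the term responsible for the $ \lambda |\Delta_{+}|^{-3} T^{2} $ contribution in Eq.\ref{test1}.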
{\em The scaling function of the impurity specific heat}
In the QC regime, the perturbative expansions give (up to possible logarithm):
\begin{eqnarray}
g_{0}( x ) & = & x^{2} + x^{4} + \cdots, ~~~~ x \ll 1 \nonumber \\
g_{1}( x ) & \equiv & 0 \nonumber \\
g_{2}( x ) & = & 1 + x^{2} + \cdots, ~~~~ x \ll 1
\end{eqnarray}
Substituting the above equation into Eq.\ref{expand}, we get
\begin{equation}
C_{imp} = \frac{\Delta_{+}^{2}}{T} + \frac{\Delta_{+}^{4}}{T^{2}} + \lambda^{2} T \log T
+ \lambda^{2} \Delta_{+}^{2} + \cdots
\end{equation}
It is known that there are {\em accidental logarithmic violations} of scaling when the number of
channels is two \cite{affleck1}. This violation has nothing to do with the existence of
marginally irrelevant operators \cite{irre}. Similar violations occur in itinerant magnetism \cite{millis}.
In the FL regime, the impurity specific heat should reduce to the FL form
\begin{eqnarray}
g_{0}( x ) & = & x^{-2} + \cdots, ~~~~ x \gg 1 \nonumber \\
g_{2}( x ) & = & c + \cdots, ~~~~ x \gg 1
\end{eqnarray}
Substituting the above equation into Eq.\ref{expand}, we get
\begin{equation}
C_{imp} = T( \Delta_{+}^{-2} + \lambda^{2} + \cdots) + \cdots
\label{test2}
\end{equation}
The above equation indicates that the coefficient of $ T $ diverges
as $ \Delta_{+}^{-2} $ as we approach the line of NFL fixed points.
This means that the relevant operator $ \Delta_{+} $ with scaling dimension 1/2
near the line of NFL fixed points will turn into one of the leading irrelevant operators $ \lambda_{FL,-1} $
with scaling dimension $ -1 $ near the line of FL fixed points
\begin{equation}
\lambda_{FL,-1} \sim \Delta_{+}^{-2}
\end{equation}
First order perturbation in this operator leads to Eq.\ref{test2}. However, as shown
in Eq.\ref{test1}, this leading irrelevant operator makes {\em no} contribution to the {\em total}
conductivity, even though it contributes to the even and odd parity conductivities separately
(see Eq.\ref{cancel}).
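A parallel bookkeeping (again our own sketch, with the metric factor $ a $ suppressed) shows how the large-$x$ form of $ g_{0} $ generates the divergent linear-$T$ coefficient:
\begin{equation}
g_{0}( \frac{\Delta_{+}}{\sqrt{T}} ) = ( \frac{\Delta_{+}}{\sqrt{T}} )^{-2} + \cdots
= \Delta_{+}^{-2} T + \cdots
\end{equation}
which is the first term of Eq.\ref{test2}.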
{\em The scaling function of the impurity hopping susceptibility }
In the QC regime, the perturbative expansions give
(up to possible logarithm)
\begin{equation}
\chi^{h}_{imp} = \lambda^{2} \log T + \frac{ \lambda^{2} \Delta_{+}^{2} }{T} + \cdots
\end{equation}
In the FL regime
\begin{equation}
\chi^{h}_{imp} = \lambda^{2} \log 1/\Delta_{+}^{2} + \cdots
\end{equation}
The exact crossover function can be calculated along the EK line in Eq.\ref{final}.
In the FL regime, the Wilson Ratio $ R= T \chi^{h}_{imp}/ C_{imp} \sim \lambda^{2}
\Delta_{+}^{2} \log 1/ \Delta^{2}_{+} $ is very small as $ \Delta_{+} \rightarrow 0 $.
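The smallness of the Wilson ratio can be checked directly from the FL forms quoted above; dropping all numerical prefactors and keeping only the leading $ T \Delta_{+}^{-2} $ term of $ C_{imp} $,
\begin{equation}
R = \frac{ T \chi^{h}_{imp} }{ C_{imp} }
\sim \frac{ T \lambda^{2} \log 1/\Delta_{+}^{2} }{ T \Delta_{+}^{-2} }
= \lambda^{2} \Delta_{+}^{2} \log 1/ \Delta^{2}_{+}
\end{equation}
which indeed vanishes as $ \Delta_{+} \rightarrow 0 $.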
{\em The scaling function of the impurity spin susceptibility}
In this part, we set $ \Delta_{+}=0 $ and consider finite $ H $.
In the QC regime, the perturbative expansions give (up to possible logarithm)
\begin{eqnarray}
h_{0}( x ) & = & \log aT + \cdots, ~~~~ x \ll 1 \nonumber \\
h_{1}( x ) & \equiv & 0 \nonumber \\
h_{2}( x ) & = & c_{2} + x^{2} + \cdots, ~~~~ x \ll 1
\end{eqnarray}
Substituting the above equation into Eq.\ref{expand}, we get
\begin{equation}
\chi_{imp} = \log aT + \lambda^{2} T + \lambda^{2} H^{2} + \cdots
\end{equation}
In the FL regime, it was shown in Ref.\cite{gogolin2} that
\begin{equation}
\chi_{imp} = \log aH + \cdots
\label{test3}
\end{equation}
Actually, the whole crossover functions $ g_{0}(x), h_{0}(x) $ have been
calculated along the EK line in Ref. \cite{gogolin2}.
\section{ Discussions on experiments and Conclusions}
In this paper, we have mapped out the rich phase diagram of a non-magnetic impurity
hopping between two sites in a metal (Fig.~\ref{picture}). As discussed
in Sec. IV, the NFL fixed point with the symmetry 2IK(a) is very unlikely
to be observed, although it has very interesting behaviors: $ C_{imp} \sim T,
\chi^{h}_{imp} \sim \log T $ and $ \sigma(T) \sim 2\sigma_{u}(1- T^{3/2}) $.
The peculiar behaviors of $ C_{imp}, \chi^{h}_{imp} $ are due to the fact that the `orbital
field' couples to a {\em non-conserved} current.
Ralph {\sl et al.} found that the experimental data show
$ \sqrt{T} $ behavior for $ 0.4K< T <T_{K1} \sim 4K $ and concluded that
the two-site system falls in the Quantum Critical (QC) regime
controlled by the ``2CK fixed point''. They also
discussed the crossover to the FL regime in the presence of a
magnetic field $ H $, which acts as a channel
anisotropy of scaling dimension 1/2 \cite{flavor}, and in the presence of an asymmetry
of the two sites, which acts as a local magnetic field
of scaling dimension 1/2 \cite{ralph}.
As first pointed out by MS, even when the two sites are exactly symmetric,
so that the two channels are exactly equivalent, there is
another dimension 1/2 operator $ \Delta_{+} $ which will
drive the system to the FL regime \cite{fisher2}.
In this paper, we find that the ``2CK fixed point'' is actually a line of NFL fixed points
which interpolates continuously between the 2CK and the 2IK(a).
Gan \cite{gan} showed that, under a {\em different} canonical transformation
than the one employed in this paper \cite{line}, the 2IK model can be mapped to
the 2CK model. This paper has discussed the two apparently different fixed points
in a unified framework. Although P-H symmetry breaking is a relevant
perturbation in the original 2IK model discussed in Refs.\cite{twoimp,gan},
its effect is trivial in this model. Because the two models have
different {\em global} symmetries, the allowed boundary operators are {\em different},
even though the fixed point is exactly the {\em same}.
We discovered a marginal operator in the spin sector which is responsible for this
line of NFL fixed points.
In a real system, there is always P-H symmetry breaking, therefore there is
always a marginal operator in the charge sector. Eq.\ref{times} shows that
the coefficient of $ \sqrt{T} $ depends on this breaking anyway. However,
the marginal operator identified in this paper is in the spin sector; combined with
the leading irrelevant operator which contributes to the $ \sqrt{T} $ behavior
of the conductivity, it will always generate the dimension $ 1/2 $ relevant operator.
The existence of the line of NFL fixed points and the existence of the relevant operator
are closely related.
There is no reason why the coefficient of this relevant operator should be so small
that it can be neglected in the scaling analysis of the last section.
The crossover scale from the weak coupling fixed point $ q=0 $
to {\em any} point on the line of NFL fixed points is given by the Kondo scale
$ T_{K1} \sim D ( \Delta_{K} )^{2} \sim \lambda^{-2} $ (see Eq.\ref{cond}),
the crossover scale
from a given point on the line of NFL fixed points to the corresponding point
on the line of FL fixed points is set by $ \Lambda \sim D (\Delta_{+})^{2} $,
and the finite size
scaling at finite temperature $ T $ leads to the universal scaling function Eq.\ref{cond}.
Because there is no reliable way to estimate the magnitude of $ \Delta_{+} $,
which is always present, it is very hard to say whether the experimental situation
does fall in the QC regime controlled by any fixed point on this line.
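The quoted crossover scales follow from standard boundary power counting (a sketch under the stated scaling dimensions): a coupling $ g $ to a boundary operator of scaling dimension $ d $ grows under scaling as $ g(\Lambda) \sim g (D/\Lambda)^{1-d} $, and the crossover occurs when $ g(\Lambda) \sim 1 $,
\begin{equation}
\Lambda \sim D g^{1/(1-d)}, ~~~~ d=\frac{1}{2} ~\Rightarrow~ \Lambda \sim D g^{2},
\end{equation}
which reproduces both $ T_{K1} \sim D \Delta_{K}^{2} $ and $ \Lambda \sim D \Delta_{+}^{2} $.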
The experiment measured the magnetic field
dependence of the conductance signal. {\em Assuming} $ \Delta_{+} $ is so small that it
can be neglected, the scaling relation Eq.\ref{test1} in the field $ H $ shows
that in the FL regime the conductance should depend on $ \lambda |H| $,
which is consistent with the experimental finding \cite{ralph}.
The coefficient of $ T^{2} $ of the conductivity should scale as $ H^{-3} $.
Because $ T_{K} \sim \lambda^{-2} \sim 4K $ and the lowest temperature
achieved in the experiment
is $ T_{min} \sim 0.1 K $, if $ 0.1 K < |H| < \lambda^{-1} \sim \sqrt{T_{K}} = 2K $, then
Eq.\ref{test2} shows that the linear $ T $ coefficient of the impurity
specific heat should scale as $ H^{-2} $, and Eq.\ref{test3} shows that the impurity
susceptibility should scale as $ \log |H| $.
So far, there is
no experimental test of these scaling relations in this range of magnetic field.
It should be possible to extract $ \chi_{imp} $ from experimental data,
because the impurity does not carry real spin. There is no
difficulty caused by the conduction electrons and the impurity
having different Land\'{e} factors \cite{line}.
It is very difficult, but not impossible, to measure $ \chi^{h}_{imp} $ by applying pressure
and strain to the system, because $ \chi^{h}_{imp} $ here is the hopping susceptibility.
The difficulty caused by the conduction electrons and the impurity
having different Land\'{e} factors is not an issue here either.
Unfortunately, {\em no} universal ratios can
be formed among the amplitudes of the three quantities except at the 2CK point on this
line of NFL fixed points. This is because (1) the strain
couples to a non-conserved current and (2) this is a line of NFL fixed points instead of a single fixed point.
The experiment also measured the conductance signal when a finite voltage $ V $
is applied to the point contacts and found $ e V/ T $ scaling in the temperature range $ 0.4K < T < 4 K $.
It is not clear how the position on this line of NFL fixed points enters the expressions
of the non-equilibrium conductivity calculations. This question deserves to be addressed
seriously \cite{delft}.
Ref.\cite{three} showed that a non-magnetic
impurity hopping among 3 sites arranged in a triangle
is delocalized,
and that the metal shows either one-channel Kondo Fermi liquid behavior
or 2CK non-Fermi liquid behavior.
They also conjectured that there may be
a ``NFL fixed point'' possessing the local conformal symmetry
$ SU_{3}(2) \times SU_{2}(3) \times U(1) $
separating these two fixed points.
The insights gained from the two-site problem discussed in this paper
imply that ``the NFL fixed point'' separating the one-channel Kondo
fixed point and the 2CK fixed point may be a {\em line of NFL fixed points} instead of
a single NFL fixed point. The symmetry of this line of NFL fixed points is an interesting
open question, but it should be {\em smaller}
than $ SU(3) $, just as the symmetry in the spin sector of the line of NFL fixed points in the
two-site problem is $ U(1) \times O(1) $, which is smaller than
$ SU(2) $ \cite{sloppy}. The higher
symmetry $SU(3)$ is realized at just one point on this line of NFL fixed points.
It was shown in Ref.\cite{three} that the 2CK fixed point with the
symmetry $SU(2)$ can be realized in $ C_{3v} $ or higher symmetry, because it is
indeed possible for the ground state of the impurity-electron complex
to transform as the $ \Gamma_{3} $ representation of the $ C_{3v} $ group,
and therefore to be a doublet. This NFL fixed point was also shown to be stable.
Similarly, Ref.\cite{three} pointed out that the stable $ SU(3) $ NFL fixed point
can be realized in a system of a non-magnetic impurity hopping among the
tetrahedral or octahedral sites in a cubic crystal when the ground
state is a triplet. However, as the symmetry gets higher, the NFL fixed
point with the higher symmetry becomes less likely to be realized in
experiments, because the number of relevant processes increases.
\centerline{\bf ACKNOWLEDGMENTS}
We thank A. W. W. Ludwig, C. Vafa, E. Zaslow for very helpful
discussions on the spinor
representation of $ SO(8) $. We also thank D. Fisher, B. Halperin,
N. Read, S. Sachdev for very interesting discussions. I am indebted
to A. Millis for very helpful discussions on the last three sections.
This research was supported by
NSF Grants Nos. DMR 9106237, DMR9400396 and Johns Hopkins University.
\begin{table}
\begin{tabular}{ |c|c|c|c|c| }
$ O(1) $ & $ U(1) $ & $ O(5) $ & $ \frac{l}{v_{F} \pi}( E-E_{0}) $ & Degeneracy \\ \hline
$ R $ & $ NS_{\delta} $ & $ NS $ & 0 & 2 \\ \hline
$ NS $ & $ R_{\delta} $ & $ R $ & $ \frac{3}{8}-\frac{\delta}{2 \pi} $ & 4 \\ \hline
$ R $ & $ NS_{\delta} $ & $ NS+1st $ & $ \frac{1}{2} $ & 10 \\ \hline
$ NS $ & $ R_{\delta}+1st $ & $ R $ & $ \frac{3}{8}+\frac{\delta}{2 \pi} $ & 8
\end{tabular}
\caption{ The finite size spectrum at the line of NFL fixed points with the symmetry $ O(1) \times U(1) \times O(5) $
when $ 0 < \delta < \frac{\pi}{2} $. $ E_{0}=\frac{1}{16} +\frac{1}{2} (\frac{\delta}{\pi})^{2} $.
$ NS_{\delta} $ is the state achieved by twisting $ NS $ by the angle $ 2 \delta $,
namely $ \psi(-l)=-e^{i 2 \delta} \psi(l) $.
$ R_{\delta} $ is the state achieved by twisting $ R $ by the angle $ 2 \delta $,
namely $ \psi(-l)=e^{i 2 \delta} \psi(l) $. NS+1st is the first excited state in the NS sector, {\sl etc}.
Only when $ \frac{\pi}{4} < \delta < \frac{\pi}{2} $ does the 4th row have lower energy than the 5th row.
If $ \delta=0 $, the symmetry is enlarged to $ O(1) \times O(7) $, the finite size spectrum of the 2IK fixed point
is recovered. If $ \delta=\frac{\pi}{2} $, the symmetry is enlarged to $ O(3) \times O(5) $, the finite size spectrum
of the 2CK fixed point is recovered. }
\label{nflpositive}
\end{table}
\begin{table}
\begin{tabular}{ |c|c|c|c|c| }
$ O(1) $ & $ U(1) $ & $ O(5)$ & $ \frac{l}{v_{F} \pi}( E-E_{0}) $ & Degeneracy \\ \hline
$ R $ & $ NS_{\delta} $ & $ NS $ & 0 & 2 \\ \hline
$ NS $ & $ R_{\delta} $ & $ R $ & $ \frac{3}{8}+\frac{\delta}{2 \pi} $ & 4 \\ \hline
$ R $ & $ NS_{\delta}+1st $ & $ NS $ & $ \frac{1}{2} +\frac{\delta}{\pi} $ & 4 \\ \hline
$ R $ & $ NS_{\delta} $ & $ NS+1st $ & $ \frac{1}{2} $ & 10 \\ \hline
$ NS+1st $ & $ R_{\delta}$ & $ R $ & $ \frac{3}{8}+\frac{\delta}{2 \pi} +\frac{1}{2} $ & 4
\end{tabular}
\caption{ The finite size spectrum at the line of NFL fixed points with the symmetry $ O(1) \times U(1) \times O(5) $
when $ -\frac{\pi}{2} < \delta < 0 $.
Only when $ -\frac{\pi}{4} < \delta < 0 $ does the 3rd row have lower energy than the 4th row.
If $ \delta=0 $, the symmetry is enlarged to $ O(1) \times O(7) $, the finite size spectrum of the 2IK fixed point
is recovered. If $ \delta=-\frac{\pi}{2} $, the symmetry is enlarged to $ O(3) \times O(5) $,
the finite size spectrum of the 2CK fixed point is recovered. }
\label{nflnegative}
\end{table}
\begin{table}
\begin{tabular}{ |c|c|c|c| }
$ O(3) $ & $ O(5) $ & $ \frac{l}{v_{F}\pi}( E-\frac{3}{16}) $ & Degeneracy \\ \hline
$ R $ & $ NS $ & 0 & 2 \\ \hline
$ NS $ & $ R $ & $ \frac{1}{8} $ & 4 \\ \hline
$ R $ & $ NS+1st $ & $ \frac{1}{2} $ & 10 \\ \hline
$ NS+1st $ & $ R $ & $ \frac{5}{8} $ & 12 \\ \hline
$ R+1st $ & $ NS $ & 1 & 6 \\
$ R $ & $ NS+2nd $ & 1 & 20 \\ \hline
$ NS $ & $ R+1st $ & $ 1+\frac{1}{8} $ & 20 \\
$ NS+2nd $ & $ R $ & $ 1+\frac{1}{8} $ & 12
\end{tabular}
\caption{ The finite size spectrum of the 2CK fixed point}
\label{2CK}
\end{table}
\begin{table}
\begin{tabular}{ |c|c|c|c| }
$ O(1) $ & $ O(7) $ & $ \frac{l}{v_{F} \pi}( E-\frac{1}{16}) $ & Degeneracy \\ \hline
$ R $ & $ NS $ & 0 & 2 \\ \hline
$ NS $ & $ R $ & $ \frac{3}{8} $ & 8 \\ \hline
$ R $ & $ NS+1st $ & $ \frac{1}{2} $ & 14 \\ \hline
$ NS+1st $ & $ R $ & $ \frac{7}{8} $ & 8 \\ \hline
$ R $ & $ NS+2nd $ & 1 & 42 \\
$ R+1st $ & $ NS $ & 1 & 2 \\
\end{tabular}
\caption{ The finite size spectrum of the 2IK fixed point}
\label{2IK}
\end{table}
\begin{table}
\begin{tabular}{ |c|c|c|c| }
$ U(1) $ & $ O(6) $ & $ \frac{l}{v_{F} \pi}( E-E_{0}) $ & Degeneracy \\ \hline
$ NS_{\delta} $ & $ NS $ & 0 & 1 \\ \hline
$ R_{\delta} $ & $ R $ & $ \frac{1}{2}-\frac{\delta}{2 \pi} $ & 8 \\ \hline
$ NS_{\delta} $ & $ NS+1st $ & $ \frac{1}{2} $ & 6 \\ \hline
$ R_{\delta}+1st $ & $ R $ & $ \frac{1}{2}+\frac{\delta}{2 \pi} $ & 16 \\ \hline
$ NS_{\delta} +1st $ & $ NS $ & $ \frac{1}{2}+\frac{\delta}{\pi} $ & 2
\end{tabular}
\caption{ The finite size spectrum at the line of FL fixed points with the symmetry $ U(1) \times O(6) $
when $ 0 < \delta < \frac{\pi}{2} $. $ E_{0}=\frac{1}{2} (\frac{\delta}{\pi})^{2} $.
If $ \delta=0 $, the symmetry is enlarged to $ O(8) $, the finite size spectrum
of the free fermion fixed point is recovered.
If $ \delta=\frac{\pi}{2} $, the symmetry is enlarged to $ O(2)\times O(6) $, the finite size spectrum
of the 2CSFK is recovered.}
\label{flpositive}
\end{table}
\begin{table}
\begin{tabular}{ |c|c|c|c| }
$ U(1) $ & $ O(6) $ & $ \frac{l}{v_{F} \pi}( E-E_{0}) $ & Degeneracy \\ \hline
$ NS_{\delta} $ & $ NS $ & 0 & 1 \\ \hline
$ R_{\delta} $ & $ R $ & $ \frac{1}{2}+\frac{\delta}{2 \pi} $ & 8 \\ \hline
$ NS_{\delta}+1st $ & $ NS $ & $ \frac{1}{2}+\frac{\delta}{\pi} $ & 2 \\ \hline
$ NS_{\delta} $ & $ NS+1st $ & $ \frac{1}{2} $ & 6 \\ \hline
$ NS_{\delta} +2nd $ & $ NS $ & $ 2(\frac{1}{2}+\frac{\delta}{\pi}) $ & 1 \\ \hline
$ NS_{\delta} +1st $ & $ NS+1st $ & $ 1+\frac{\delta}{\pi} $ & 12 \\ \hline
$ R_{\delta}+1st $ & $ R $ & $ \frac{3}{2}(1+\frac{\delta}{ \pi}) $ & 16
\end{tabular}
\caption{ The finite size spectrum at the line of FL fixed points with the symmetry $ U(1) \times O(6) $
when $ -\frac{\pi}{2} < \delta < 0 $. $ E_{0}=\frac{1}{2} (\frac{\delta}{\pi})^{2} $.
If $ \delta=0 $, the symmetry is enlarged to $ O(8) $, the finite size spectrum
of the free fermion fixed point is recovered.
If $ \delta=-\frac{\pi}{2} $, the symmetry is enlarged to $ O(2)\times O(6) $, the finite size spectrum
of the 2CSFK is recovered.}
\label{flnegative}
\end{table}
\begin{table}
\begin{tabular}{ |c|c|c| }
$ O(8) $ & $ \frac{l}{v_{F} \pi} E $ & Degeneracy \\ \hline
$ NS $ & 0 & 1 \\ \hline
$ R $ & $ \frac{1}{2} $ & 16 \\
$ NS+1st $ & $ \frac{1}{2} $ & 8 \\ \hline
$ NS+2nd $ & 1 & 28 \\ \hline
$ NS+3rd $ & $ \frac{3}{2} $ & 64 \\
$ R+1st $ & $ \frac{3}{2} $ & 8
\end{tabular}
\caption{ The finite size spectrum of the free fermions with both NS and R sectors \cite{double1}}
\label{free}
\end{table}
\begin{table}
\begin{tabular}{ |c|c|c|c| }
$ O(2) $ & $ O(6) $ & $ \frac{l}{v_{F} \pi}( E-\frac{1}{8}) $ & Degeneracy \\ \hline
$ R $ & $ NS $ & 0 & 2 \\ \hline
$ NS $ & $ R $ & $ \frac{1}{4} $ & 8 \\ \hline
$ R $ & $ NS+1st $ & $ \frac{1}{2} $ & 12 \\ \hline
$ NS+1st $ & $ R $ & $ \frac{3}{4} $ & 16 \\ \hline
$ R $ & $ NS+2nd $ & 1 & 30 \\
$ R+1st $ & $ NS $ & 1 & 4 \\
\end{tabular}
\caption{ The finite size spectrum of the 2CSFK fixed point \cite{double2}}
\label{2CSFK}
\end{table}
\section{Introduction} The experimental and theoretical study of
heavy hadrons typically focuses on the ground state $D^{(*)}$ and
$B^{(*)}$ mesons, and on the lightest baryons, $\Lambda_c$ and
$\Lambda_b$. This is hardly surprising, since these states are the
most copiously produced, and the most long-lived, making detailed
experiments possible. Nonetheless, these are but the lightest states
in a tower of excitations. These excited states are also
interesting, for a variety of reasons. First, one can use them as a
laboratory to study the heavy quark (and light flavor) symmetries
which are crucial to much of heavy hadron phenomenology. Second,
there are new applications of heavy quark symmetry, leading to new
questions about QCD, which only arise in the study of the more
complicated spin structure of excited heavy hadrons. Third, certain
experiments involving these excitations yield new information which
is directly applicable to the physics of the ground state
heavy hadrons.
In this talk, I will briefly survey a variety of such issues. I
begin with a review of hadron dynamics in the heavy quark
limit,\footnote{The reader who desires a more extensive introduction to
Heavy Quark Effective Theory, with thorough references to the
original literature, may consult a number of excellent reviews.~\cite{reviews}}
and of the simple
spectroscopic predictions which follow from it. Certain of these
predictions are currently not well satisfied, casting doubt either on
the data or on its interpretation. I will then turn to the strong
decays of heavy mesons and show that a consistent inclusion of
subleading effects can resolve an otherwise puzzling discrepancy. The
third topic will be the production of excited heavy hadrons in
fragmentation processes, and the fourth will be the production of
heavy hadrons in semileptonic decays. In the latter case, we will see
that it is possible to extract information from such processes which
is useful for improving the extraction of the CKM matrix element
$|V_{cb}|$ from semileptonic $B$ decays.
In addressing these four topics, I do not pretend to review the
entire field of excited bottom and charmed hadrons. Rather, I hope
to illustrate the rich interplay between theory and experiment which
these recent developments make possible.
\section{The Heavy Quark Limit}\label{HQL}
Consider a hadron containing a single heavy quark $Q$, where by
``heavy'' we mean that its mass satisfies the condition
$m_Q\gg\Lambda_{\rm QCD}\sim500\,{\rm MeV}$. Ultimately, of course, we will apply
this analysis to physical charm and bottom quarks, with
$m_c\approx1.5\,{\rm GeV}$ and $m_b\approx4.8\,{\rm GeV}$, which may not be well
into this asymptotic regime. Hence, at times it will be important
to include corrections which are subleading in an expansion in
$\Lambda_{\rm QCD}/m_Q$. For now, however, let us assume that we are in a regime
where this ``heavy quark limit'' applies.
A heavy hadron is a bound state consisting of the heavy quark and
many light degrees of freedom. These light degrees of freedom
include valence quarks and antiquarks, sea quarks and antiquarks,
and gluons, in a complex configuration determined by
nonperturbative strong interactions. These
interactions are characterized by the dimensional scale $\Lambda_{\rm QCD}$,
the scale at which the strong coupling $\alpha_s$ becomes of order 1;
in particular, $\Lambda_{\rm QCD}$ is the typical energy associated with
the four-momenta carried by the light degrees of freedom. Hence it
is also the typical energy of quanta exchanged with the
heavy quark in the bound state. Since $m_Q\gg\Lambda_{\rm QCD}$, the heavy
quark does not recoil upon exchanging such quanta with the light
degrees of freedom. This is the simple physical content of the heavy
quark limit: {\it $Q$ acts as a static source of chromoelectric
field, so far as the light degrees of freedom are concerned.}
In a more covariant language, the four-velocity $v^\mu$ of
$Q$ is unchanged by the strong interactions. Because the heavy quark does not
recoil from its interactions with
the light degrees of freedom, they are insensitive to its mass, so
long as $m_Q\gg\Lambda_{\rm QCD}$. This is analogous to the
statement in quantum electrodynamics that the electronic wave
function is the same in hydrogen and deuterium.
There is also a condition on the spin of the heavy quark, which
couples to the light degrees of freedom primarily through the
chromomagnetic interaction. Since the
chromomagnetic moment of $Q$ is given by $g\hbar/2m_Q$, this
interaction also vanishes in the heavy quark limit. Not only is
the velocity of the heavy quark unchanged by soft QCD, but so is the
orientation of its spin.
Hence, if the light degrees of freedom have nonzero angular momentum
$J_\ell$, then the states with total $J=J_\ell+{1\over2}$ and
$J=J_\ell-{1\over2}$ are degenerate. This is analogous to the
statement in quantum electrodynamics that the
hyperfine splitting in hydrogen is much smaller than the electronic
excitation energies. Thus we have new {\it symmetries\/} of the
spectrum of QCD in the heavy quark limit.~\cite{IW} These lead to new ``good''
quantum numbers, the excitation energy and the total angular momentum
of the light degrees of freedom, which can be sensibly defined only in
this limit.
If we have $N_h$ heavy quarks, $Q_1\ldots Q_{N_h}$, then the heavy
quark symmetry group is $SU(2N_h)$. These symmetries yield
relations between the properties of hadrons containing a single
heavy quark, including masses, partial decay widths, and weak form
factors. These relations can often be sharpened by the systematic
inclusion of effects which are subleading in the $1/m_Q$ expansion.
\section{Spectroscopy}\label{SPEC}
The simplest heavy quark relations are those for the
spectroscopy of states containing a single heavy quark. Heavy
hadron spectroscopy differs from that for hadrons containing only
light quarks because we may specify separately the spin quantum
numbers of the light degrees of freedom and of the heavy quark.
The constituent quark model can serve as a useful guide for
enumerating these states. Of course, we should not take this model
too seriously, as it has certain unphysical features, such as
drawing an additional distinction between spin and orbital angular
momentum of the valence quarks, and including explicitly neither sea
quarks nor gluons. So remember in what follows that any mention of
constituent quarks is purely for the purpose of counting quantum
numbers.
\subsection{Heavy mesons}
In the constituent quark model, a heavy meson consists of a heavy
quark and a light antiquark, each with spin ${1\over2}$, in a
wavefunction with a given excitation energy and a given orbital
angular momentum. There is no natural zero-point with respect to
which to define energies in this confined system, but differences
between energy levels $E_\ell$ of the light antiquark are
well defined. The antiquark can have any integral orbital angular
momentum, $L=0,1,2,\ldots$, with parity $(-1)^L$. Combined with
the intrinsic spin-parity $S_\ell^P={1\over2}^-$ of the antiquark, we
find states with total spin-parity
\begin{equation}
J_\ell^P=\case12^\pm,\case32^\pm,\case52^\pm,\ldots\,.
\end{equation}
This is then added to the spin parity $S_Q^P={1\over2}^+$ of the
heavy quark, to yield states with total angular momentum
\begin{equation}
J^P= 0^\pm,1^\pm,2^\pm,\ldots\,.
\end{equation}
In the limit $m_Q\to\infty$, the two states with a given $J_\ell^P$
are degenerate.
As an example, let us consider the charmed mesons.
Our quark model intuition tells us correctly that the ground state
light degrees of freedom have the quantum numbers of a light antiquark
in an $s$ wave, so $J_\ell^P={1\over2}^-$ and the two degenerate
states have $J^P=0^-$ and $1^-$. This is indeed what is observed:
a $0^-$ $D$ meson with mass approximately $1870\,{\rm MeV}$, and a slightly
heavier $1^-$ $D^*$ at about $2010\,{\rm MeV}$. (I will keep to
approximate masses for now, as I do not want to concern myself with
small isospin splittings which complicate the situation in an
unimportant way.) The nonzero splitting between the $D$ and the
$D^*$ is an effect of order $1/m_c$; this splitting scales as
$\Lambda_{\rm QCD}^2/m_c$ in the heavy quark expansion.
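As a rough numerical check of this scaling, taking $m_c\approx1.4\,{\rm GeV}$ (an illustrative value only),
\begin{equation}
D^*-D\approx140\,{\rm MeV}\approx{\Lambda_{\rm QCD}^2\over m_c}
\quad\Longrightarrow\quad
\Lambda_{\rm QCD}\approx\sqrt{(0.14)(1.4)}\,{\rm GeV}\approx440\,{\rm MeV}\,,
\end{equation}
a perfectly reasonable value for the scale associated with the light degrees of freedom.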
As the next excitation, we might expect to
find the antiquark in a $p$ wave. With the antiquark spin, we find
light degrees of freedom with $J_\ell^P={1\over2}^+$ and
$J_\ell^P={3\over2}^+$, each leading to an (almost) degenerate
doublet of states. The doublet with $J^P=0^+$ and $1^+$
has not been observed, presumably because it is very broad (see
Section~\ref{STRONG}). The other doublet, with $J_\ell^P={3\over2}^+$,
consists of the
$D_1(2420)$ and $D_2^*(2460)$. The splitting between the $D_1$ and
the $D_2^*$, again a $1/m_c$ effect, is not related to the $D-D^*$
splitting.
The heavy quark symmetries imply relations between the spectra of
the bottom and charmed meson systems. Because the mass of a heavy
hadron can be decomposed in the form $M_H=m_Q+E_\ell$, the entire
spectrum of bottom mesons can be determined from the charmed mesons
once the quark mass difference $m_b-m_c$ is known.\footnote{The
difficult question of how properly to define heavy quark masses is
not really relevant to heavy hadron spectroscopy. For now, it is
best to take $m_Q$ to denote the pole mass at some fixed order in
QCD perturbation theory.} This difference can be found, for
example, from the ground state mesons. Taking a spin average to
eliminate the hyperfine energy,
\begin{equation}
\overline D = \case14(D+3D^*)\,,\qquad
\overline B =\case14(B+3B^*)\,,
\end{equation}
and letting the states stand for their masses, we
find
\begin{equation}
\overline B-\overline D=m_b-m_c\approx3.34\,{\rm GeV}\,.
\end{equation}
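Numerically, using the approximate ground state masses quoted below ($D\approx1869$, $D^*\approx2010$, $B\approx5279$, $B^*\approx5325\,{\rm MeV}$),
\begin{eqnarray}
\overline D&=&\case14(1869+3\times2010)\,{\rm MeV}\approx1975\,{\rm MeV}\,,\nonumber\\
\overline B&=&\case14(5279+3\times5325)\,{\rm MeV}\approx5314\,{\rm MeV}\,,\nonumber
\end{eqnarray}
so that $\overline B-\overline D\approx3339\,{\rm MeV}$.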
One then finds relations for the excited states, such as
\begin{equation}
\overline B_1-\overline D_1=\overline B-\overline D\,,
\end{equation}
where $\overline D_1={1\over8}(3D_1+5D_2^*)$ is the appropriate spin average.
Including strange quarks, one finds
similar relations, such as $\overline B_s-\overline D_s=\overline
B-\overline D$. There are also relations which exploit the known
scaling of the hyperfine splitting in the heavy quark limit. Since
$D^*-D\sim\Lambda_{\rm QCD}^2/m_c$, we find
\begin{eqnarray}
(B^*)^2-B^2&=&(D^*)^2-D^2\,,\\
(B_{s2}^*)^2-B_{s1}^2&=&(D_{s2}^*)^2-D_{s1}^2\,,
\end{eqnarray}
and so on.
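Numerically, with the same approximate masses as before,
\begin{eqnarray}
(D^*)^2-D^2&\approx&(2010^2-1869^2)\,{\rm MeV}^2\approx0.55\,{\rm GeV}^2\,,\nonumber\\
(B^*)^2-B^2&\approx&(5325^2-5279^2)\,{\rm MeV}^2\approx0.49\,{\rm GeV}^2\,,\nonumber
\end{eqnarray}
in agreement at the $10\%$ level, as appropriate for a relation which is violated only by subleading terms in the heavy quark expansion.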
\begin{table}
\caption{The observed charmed and bottom mesons.}
\vspace{0.4cm}
\centerline{
\begin{tabular}{|c|ccccccc|}
\hline
\multicolumn{2}{|c}{Spin}&\multicolumn{3}{c}{$D$ system~~\cite{CLEO,PDG}}
&\multicolumn{3}{c|}{$B$ system~~\cite{OPAL,PDG}}\\
\hline
$J_\ell^P$&$J^P$&state&$M$ (MeV)
&$\Gamma$ (MeV)&state&$M$ (MeV)&$\Gamma$ (MeV)\\
\hline
${1\over2}^-$&$0^-$&$D^0$&1865&$\tau=0.42\,$ps&$B^0$&5279&$\tau=1.50\,$ps\\
&&$D^\pm$&1869&$\tau=1.06\,$ps&$B^\pm$&5279&$\tau=1.54\,$ps\\
&&$D_s$&1969&$\tau=0.47\,$ps&$B_s$&5375&$\tau=1.34\,$ps\\
\cline{2-8}
&$1^-$&$D^{*0}$&2007&$<2.1$&$B^*$&5325&\\
&&$D^{*\pm}$&2010&$<0.13$&&&\\
&&$D_s^*$&2110&$<4.5$&$B_s^*$&&\\
\hline
${1\over2}^+$&$0^+$&$D_0^*$&&&$B_0^*$&&\\
\cline{2-8}
&$1^+$&$D_1'$&&&$B_1'$&&\\
\hline
${3\over2}^+$&$1^+$&$D_1^0$&$2421\pm3$&$20\pm7$&$B_1$&5725&20\\
&&$D_1^\pm$&$2425\pm3$&$26\pm9$&&&\\
&&$D_{s1}$&2535&$<2.3$&$B_{s1}$&5874&1\\
\cline{2-8}
&$2^+$&$D_2^{*0}$&$2465\pm4$&$28\pm10$&$B_2^*$&5737&25\\
&&$D_2^{*\pm}$&$2463\pm4$&$27\pm12$&&&\\
&&$D_{s2}^*$&$2573\pm2$&$16\pm6$&$B_{s2}^*$&5886&1\\
\hline
\end{tabular}}
\label{mesontable}
\end{table}
The charmed and bottom mesons which have been identified are listed
in Table~\ref{mesontable}, along with their widths (which will be
of interest in Section~\ref{STRONG}). Given the measured properties of
the charmed mesons, we can make a set of predictions for the
bottom system,
\begin{eqnarray}
B^*-B=&\!\!\!\!\!\!52\,{\rm MeV}&\qquad(46\,{\rm MeV})\\
\overline B_1 =&5789\,{\rm MeV}&\qquad(5733\,{\rm MeV})\\
B_s^*-B_s=&\!\!\!\!\!\!14\,{\rm MeV}&\qquad(12\,{\rm MeV})\\
\overline B_{s1}=&5894\,{\rm MeV}&\qquad(5882\,{\rm MeV})\\
B^*_{s2}-B_{s1}=&\!\!\!\!\!\!13\,{\rm MeV}&\qquad(12\,{\rm MeV})\,.
\end{eqnarray}
The experimental values are given in parentheses. We can
estimate the accuracy with which we expect these predictions to hold by
considering the size of the largest omitted term in the expansion
in $1/m_B$ and $1/m_c$. For relations between spin-averaged
quantities, this is
\begin{equation}
\Lambda_{\rm QCD}^2\left({1\over2m_c}-{1\over2m_b}\right)\sim50\,{\rm MeV}\,,
\end{equation}
while for relations involving hyperfine splittings, we have
\begin{equation}
\Lambda_{\rm QCD}^3\left({1\over4m_c^2}-{1\over4m_b^2}\right)\sim5\,{\rm MeV}\,.
\end{equation}
These estimates are confirmed by the results given above.
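For illustration, taking $\Lambda_{\rm QCD}\approx400\,{\rm MeV}$, $m_c\approx1.4\,{\rm GeV}$ and $m_b\approx4.8\,{\rm GeV}$ (representative values only),
\begin{eqnarray}
\Lambda_{\rm QCD}^2\left({1\over2m_c}-{1\over2m_b}\right)&\approx&
(0.16)(0.357-0.104)\,{\rm GeV}\approx40\,{\rm MeV}\,,\nonumber\\
\Lambda_{\rm QCD}^3\left({1\over4m_c^2}-{1\over4m_b^2}\right)&\approx&
(0.064)(0.128-0.011)\,{\rm GeV}\approx7\,{\rm MeV}\,,\nonumber
\end{eqnarray}
in line with the estimates quoted above.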
The relations we have derived here follow rigorously from QCD in
the heavy quark limit, $m_Q\to\infty$. Of course, they also arise
in phenomenological models of hadrons, such as the nonrelativistic
constituent quark model. In fact, an important test of any such
model is that it have the correct heavy quark limit. Since the
constituent quark model has this property, it reproduces these
predictions as well. However, unlike the heavy quark limit, the
constituent quark model is not in any sense a controlled {\it
approximation\/} to QCD, and it is impossible to estimate the error
in a quark model prediction in any meaningful way.
One intriguing feature of the quark model is that it makes accurate
predictions for many light hadrons, too. It is not clear whether
these successes have a simple explanation, or even a single one.
Perhaps, at some length scales, nonrelativistic constituent quarks
really are appropriate degrees of freedom. Perhaps its success
lies in its closeness to the large $N_c$ limit of QCD,~\cite{Manohar} in
which quark pair production is also suppressed. Whatever the
proper explanation, it is important to keep in mind that relations
which follow solely from the quark model do not have the same status as
those that follow from real {\it symmetries\/} of QCD, such as
heavy quark symmetry or light flavor $SU(3)$.
\subsection{Heavy baryons}
Because heavy baryons contain two light quarks, their flavor
symmetries are more interesting than those of the heavy mesons;
however, because they are more difficult to produce, less is known
experimentally about the spectrum of heavy baryon excitations. For
simplicity, let us restrict ourselves to states in which the light
quarks have no orbital angular momentum. Then, given two quarks
each with spin ${1\over2}$, the light degrees of freedom can be in
an antisymmetric state of total angular momentum $J_\ell^P=0^+$ or a
symmetric state with $J_\ell^P=1^+$. By the Pauli exclusion
principle, if neither light quark is a strange quark then the
spin and isospin are the same. The exclusion principle also
prohibits a $J_\ell^P=0^+$ state with two strange quarks.
\begin{table}
\caption{The lowest lying charmed baryons. Isospin is denoted by
$I$, strangeness by $S$.}
\vspace{0.4cm}
\centerline{
\begin{tabular}{|l|l|llllr|l|}
\hline
Name&$J^P$&$s_\ell$&$L_\ell$&$J^P_\ell$&$I$&$S$&Decay\\
\hline
$\Lambda_c$&$\case12^+$&0&0&$0^+$&0&0&weak\\
$\Sigma_c$&$\case12^+$&1&0&$1^+$&1&0&
$\Lambda_c\pi$, $\Lambda_c\gamma$, weak\\
$\Sigma^*_c$&$\case32^+$&1&0&$1^+$&1&0&$\Lambda_c\pi$\\
$\Xi_c$&$\case12^+$&0&0&$0^+$&$\case12$&$-1$&weak\\
$\Xi'_c$&$\case12^+$&1&0&$1^+$&$\case12$&$-1$&$\Xi_c\gamma$, $\Xi_c\pi$\\
$\Xi^*_c$&$\case32^+$&1&0&$1^+$&$\case12$&$-1$&$\Xi_c\pi$\\
$\Omega_c$&$\case12^+$&1&0&$1^+$&0&$-2$&weak\\
$\Omega^*_c$&$\case32^+$&1&0&$1^+$&0&$-2$&$\Omega_c\gamma$\\
\hline
\end{tabular}}
\label{baryontable}
\end{table}
When the spin of the heavy quark is included, the $J_\ell^P=0^+$
state becomes a baryon with spin-parity $J^P={1\over2}^+$, while the
$J_\ell^P=1^+$ state becomes a doublet of baryons with
$J^P={1\over2}^+$ and $J^P={3\over2}^+$. The quantum numbers of
the charmed baryons are listed in Table~\ref{baryontable}, along
with their expected decays. Note that the dominant decay channels
of the higher mass $J^P={1\over2}^+$ states $\Sigma_c$ and $\Xi_c'$
are determined by the available phase space. If emission of a pion
is possible, then they will decay strongly; if not, then they will decay
weakly or electromagnetically, depending on their charge.
Again, there are heavy quark symmetry relations between the bottom
and charmed systems. The hyperfine interaction between the heavy
quark and the $J_\ell=1$ light degrees of freedom is removed by the
spin averages
\begin{eqnarray}
\overline\Sigma_c&=&\case13\left(\Sigma_c+2\Sigma_c^*\right)\\
\overline\Xi_c&=&\case13\left(\Xi_c'+2\Xi_c^*\right)\\
\overline\Omega_c&=&\case13\left(\Omega_c+2\Omega_c^*\right)\,.
\end{eqnarray}
Then we find heavy quark relations of the form
\begin{eqnarray}
\Lambda_b-\Lambda_c&=&\overline B-\overline D\label{hqrel1}\\
\overline\Sigma_b-\Lambda_b&=&\overline\Sigma_c-\Lambda_c\label{hqrel2}\\
{\Sigma_b^*-\Sigma_b\over\Sigma_c^*-\Sigma_c}&=&
{B^*-B\over D^*-D}\,.\label{hqrel3}
\end{eqnarray}
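As a first numerical test of the relation (\ref{hqrel1}), using $\Lambda_b\approx5623\,{\rm MeV}$ and $\Lambda_c\approx2285\,{\rm MeV}$,
\begin{equation}
\Lambda_b-\Lambda_c\approx3338\,{\rm MeV}\,,\qquad
\overline B-\overline D\approx3339\,{\rm MeV}\,,
\end{equation}
so this heavy quark relation is satisfied almost exactly.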
We can also use light flavor $SU(3)$ symmetry to relate the
nonstrange charmed baryons to the charmed baryons with strange
quarks. There are three relations which include corrections of order
$m_s$,~\cite{Savage}
\begin{eqnarray}
\Xi_c'&=&\case12\left(\Sigma_c+\Omega_c\right)\label{su3rel1}\\
\Xi_c^*&=&\case12\left(\Sigma^*_c+\Omega^*_c\right)\label{su3rel2}\\
\Sigma_c^*-\Sigma_c&=&\Xi_c^*-\Xi_c'\,.\label{su3rel3}
\end{eqnarray}
There is another relation in which corrections of order $m_s$ are
{\it not\/} systematically included,
\begin{equation}
\overline\Sigma_c-\Lambda_c=\overline\Xi_c'-\Xi_c\,;\label{su3rel4}
\end{equation}
however, since the analogous relation in the charmed meson system,
\begin{equation}
\overline D_{s1}-\overline D_s=\overline D_1 -\overline D\,,
\end{equation}
works
to within a few MeV, we will use this one as well.
The lightest observed heavy baryons are listed in
Table~\ref{baryondata}, along with their masses and the decay channels
in which they have been identified. I identify the observed states by
provisional names, while in the penultimate column I give the
conventional assignment of quantum numbers to these states. These
assignments are motivated primarily by the quark model.
Let us compare the predictions of heavy quark and flavor $SU(3)$
symmetry to these experimental results.~\cite{Falk} The heavy quark
constraints~(\ref{hqrel1}) and~(\ref{hqrel2}) are both satisfied to
within $10\,$MeV. However, the hyperfine relation (\ref{hqrel3}) is
badly violated. One finds
$(\Sigma_b^*-\Sigma_b)/(\Sigma_c^*-\Sigma_c)\approx0.84\pm0.21$, too
large by more than a factor of two! (I have ignored the correlation
between the errors on the masses of the $\Sigma_b$ and the $\Sigma^*_b$,
thereby overestimating the total uncertainty.) Clearly, if these data are
correct then there is a serious crisis for the application of heavy
quark symmetry to the charm and bottom baryons. On the other hand,
this crisis rests {\it entirely} on the reliability of the DELPHI
measurement~\cite{DELPHI} of these states.
The situation is somewhat better for the $SU(3)$ relations, although
not perfect. The first equal spacing rule~(\ref{su3rel1}) yields the
prediction $\Xi'_c=2577\,$MeV, somewhat large but probably within the
experimental error. The second rule (\ref{su3rel2}) cannot be tested,
as the $\Omega^*_c$ state has not yet been found. The third
rule~(\ref{su3rel3}) yields the prediction $\Xi'_c=2578\,$MeV, again,
reasonably consistent with experiment. (In fact the precise agreement
of these two sum rules might lead one to expect that, when confirmed,
the mass of the $\Xi_c'$ will be somewhat higher than its present
central value.) By contrast, the final $SU(3)$
relation~(\ref{su3rel4}) fails by approximately 60~MeV, almost an
order of magnitude worse than for the charmed mesons. However, this
relation is not on the same footing as the others, so its failure is
not as significant as that of the heavy quark relation (\ref{hqrel3}).
\begin{table}
\caption{The observed heavy baryon states, with their conventional
and alternative identities. Isospin multiplets have been averaged
over. Statistical and systematic errors have, for simplicity, been
added in quadrature. The approximate masses of the proposed new
states are given in parentheses.}
\vspace{0.4cm}
\centerline{
\begin{tabular}{|l|lll|l|l|}
\hline
State&Mass (MeV)&Ref.&Decay&Conventional&Alternative \\
\hline
$\Lambda_c$&$2285\pm1$&~\cite{PDG}&weak&$\Lambda_c$&$\Lambda_c$\\
&(2380)&&weak&absent&$\Sigma_c^{0,++}$\\
&(2380)&&$\Lambda_c+\gamma$&absent&$\Sigma_c^+$\\
$\Sigma_{c1}$&$2453\pm1$&~\cite{PDG}&$\Lambda_c+\pi$&$\Sigma_c$&
$\Sigma^*_c$\\
$\Sigma_{c2}$&$2519\pm2$&~\cite{CLEO96}&$\Lambda_c+\pi$&
$\Sigma^*_c$&?\\
$\Xi_c$&$2468\pm2$&~\cite{PDG}&weak&$\Xi_c$&$\Xi_c$\\
$\Xi_{c1}$&$2563\pm15$\ (?)&~\cite{WA89}
&$\Xi_c+\gamma$&$\Xi'_c$&$\Xi'_c$\\
$\Xi_{c2}$&$2644\pm2$&~\cite{CLEO95}&$\Xi_c+\pi$&$\Xi^*_c$&$\Xi^*_c$\\
$\Omega_c$&$2700\pm3$&~\cite{E687}&weak&$\Omega_c$&$\Omega_c$\\
$\Omega_c^*$&not yet seen&&&&\\
\hline
$\Lambda_b$&$5623\pm6$&~\cite{PDG,CDF96}&weak&$\Lambda_b$&$\Lambda_b$\\
&(5760)&&weak&absent&$\Sigma_b^\pm$\\
&(5760)&&$\Lambda_b+\gamma$&absent&$\Sigma_b^0$\\
$\Sigma_{b1}$&$5796\pm14$&~\cite{DELPHI}&$\Lambda_b+\pi$&
$\Sigma_b$&$\Sigma^*_b$\\
$\Sigma_{b2}$&$5852\pm8$&~\cite{DELPHI}&$\Lambda_b+\pi$&
$\Sigma^*_b$&?\\
\hline
\end{tabular}}
\label{baryondata}
\end{table}
What is going on here? One possibility is that the heavy quark
relations are simply no good for the spectroscopy of charmed baryons.
Of course, we would like to avoid this glum conclusion, because it
would call into question other applications of heavy quark symmetry to
charmed hadrons, such as the treatment of exclusive semileptonic $B$
decays used to extract $|V_{cb}|$. Another possibility is that
the data are not correct. This is not unlikely, particularly as the
discrepancy rests primarily on the single DELPHI measurement.
However, let us look for an alternative resolution, in which we take
the reported data seriously, within their reported errors. As the data change
in the future, so perhaps will the motivation for such an alternative.
Let us, then, reinterpret the data under the constraint that the heavy
quark and $SU(3)$ symmetries be imposed explicitly, including the
dubious relation~(\ref{su3rel4}).~\cite{Falk} Then if we identify the
observed
$\Xi_{c1}$ with the $\Xi'_c$ state, the $SU(3)$ relations lead to
the prediction $\Sigma_c=2380\,{\rm MeV}$. If this is true, then it cannot
be correct to identify the $\Sigma_c$ with the observed $\Sigma_{c1}$;
rather, the $\Sigma_c$ would correspond to a state below threshold for
the decay $\Sigma_c\to\Lambda_c+\pi$, which is yet to be seen.
The observed $\Sigma_{c1}$ must then be the $\Sigma_c^*$, while the
observed $\Sigma_{c2}$ is some more highly excited baryon, perhaps an
orbital excitation. The new assignments are given in the final column
of Table~\ref{baryondata}.
A similar reassignment must be applied to the bottom baryons as well.
The $\Sigma_b$ is now assumed to be below $\Lambda_b+\pi$ threshold,
while the $\Sigma_{b1}$ is identified as the $\Sigma_b^*$. Then the
poorly behaved symmetry predictions improve remarkably. For example,
let us take the masses of the new states to be $\Sigma_c=2380\,$MeV
and $\Sigma_b=5760\,$MeV. Then the hyperfine splitting ratio
(\ref{hqrel3}) improves to
$(\Sigma_b^*-\Sigma_b)/(\Sigma_c^*-\Sigma_c)=0.49$, and the $SU(3)$
relation (\ref{su3rel4}) between the $s_\ell=0$ and $s_\ell=1$ states
is satisfied to within $5\,$MeV. The heavy quark
relation~(\ref{hqrel1}) is unaffected, while the
constraint~(\ref{hqrel2}) for the $\overline\Sigma_Q$ excitation
energy is satisfied to within $20\,$MeV, which is quite reasonable.
Only the $SU(3)$ equal spacing rules~(\ref{su3rel1})
and~(\ref{su3rel3}) suffer from the change. The former relation now
fails by $23\,$MeV. The latter now fails by $8\,$MeV, but the
discrepancies are in {\it opposite\/} directions, and the two
relations cannot be satisfied simultaneously by shifting the mass of
the $\Xi'_c$. With these new assignments, intrinsic $SU(3)$ violating
corrections of the order of $15\,$MeV seem to be unavoidable. In this
context, a confirmation of the $\Xi_c'$ state is very important. If
the mass were to be remeasured to be approximately 2578~MeV, then
$SU(3)$ violation under the conventional assignments would be
extremely small and we might be more disinclined to relinquish them.
Still, with respect to the symmetry predictions as a whole, the new
scenario is quite an improvement over the old. The heavy quark and
$SU(3)$ flavor symmetries have been resurrected. We can improve the
agreement further if we allow the measured masses to vary within their
reported $1\sigma$ errors. One set of allowed masses is
$\Sigma_c=2375\,$MeV, $\Sigma^*_c=2453\,$MeV, $\Xi'_c = 2553\,$MeV,
$\Xi^*_c=2644\,$MeV, $\Sigma_b=5760\,$MeV, and
$\Sigma^*_b=5790\,$MeV. For this choice, the $SU(3)$ relations
(\ref{su3rel1}), (\ref{su3rel3}) and (\ref{su3rel4}) are satisfied to
within $15\,$MeV,
$13\,$MeV and $4\,$MeV, respectively. The hyperfine ratio
(\ref{hqrel3}) is $(\Sigma_b^*-\Sigma_b)/(\Sigma_c^*-\Sigma_c)=0.38$,
and $\overline\Sigma_b-\Lambda_b$ is equal to
$\overline\Sigma_c-\Lambda_c$ to within $15\,$MeV. This is better
agreement with the symmetries than we even have a right to expect.
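The arithmetic behind these ratios is simple. With the reassigned masses,
\begin{equation}
{\Sigma_b^*-\Sigma_b\over\Sigma_c^*-\Sigma_c}
={5796-5760\over2453-2380}\approx0.49\,,
\end{equation}
while for the $1\sigma$-adjusted set,
\begin{equation}
{\Sigma_b^*-\Sigma_b\over\Sigma_c^*-\Sigma_c}
={5790-5760\over2453-2375}\approx0.38\,,
\end{equation}
to be compared with $(B^*-B)/(D^*-D)\approx46/141\approx0.33$.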
Of course, this new proposal implies certain issues of its own. The
most striking question is whether the new $\Sigma_c$ and $\Sigma_b$
states are already ruled out. Consider the $\Sigma_c$, since much
more is known experimentally about charmed baryons.
The $\Sigma_c$ is an isotriplet, so it comes in the charge states
$\Sigma_c^0$, $\Sigma_c^+$ and $\Sigma_c^{++}$. With the proposed
mass, these states are too light to decay strongly, to
$\Lambda_c^++\pi$. Instead, the $\Sigma_c^+$ will decay radiatively,
\begin{equation}
\Sigma_c^+\to\Lambda_c^++\gamma\,,\nonumber
\end{equation}
while the others decay weakly, via channels such as
\begin{eqnarray}
\Sigma_c^{++,0}&\to&\Sigma^\pm+\pi^+\nonumber\\
&\to&p+\pi^\pm+K_S\nonumber\\
&\to&\Sigma^\pm+\ell^++\nu\,.\nonumber
\end{eqnarray}
The challenge, then, is either to find these states or conclusively to
rule them out.
We should also note that nonrelativistic constituent quark models
typically do not favor such light $\Sigma_c^{(*)}$ and $\Sigma_b^{(*)}$ states as I
have suggested here. (See, for example, recent papers by
Lichtenberg~\cite{Lich} and Franklin.~\cite{Fran}) These models often have
been successful at predicting hadron masses, and are thus, not
unreasonably, quite popular. However, despite common
misperceptions,~\cite{Lich,Fran} they are {\it less\/} general, and
make substantially {\it more\/} assumptions, than a treatment based
solely on heavy quark and $SU(3)$ symmetry. A reasonable quark model
respects these symmetries in the appropriate limit, as well as
parametrizing deviations from the symmetry limit. Such models
therefore cannot be reconciled simultaneously with the heavy quark
limit and with the reported masses of the $\Sigma_b$ and
$\Sigma_b^*$. Hence, the predictions of this analysis follow
experiment in pointing to physics beyond the constituent quark model.
While the historical usefulness of this model for hadron spectroscopy
may deepen one's suspicion of the DELPHI data on $\Sigma_{b1,2}$, such
speculation is beyond the scope of this discussion. To reiterate, I
have taken the masses and errors of all states as they have been
reported to date; as they evolve in the future, so, of course, will
the theoretical analysis.
\section{Strong Decays of Excited Charmed Mesons}\label{STRONG}
Let us turn now from the spectroscopic implications of heavy quark
symmetry to its implications for the strong decays of excited
hadrons. We will focus on the system for which there is the most, and
most interesting, data available: the excited charmed mesons.
As we saw in Section~\ref{SPEC}, there are two doublets of $p$-wave
charmed mesons, one with $J_\ell^P=\case12^+$ and one with
$J_\ell^P=\case32^+$. The former correspond to the physical states
$D_0^*$ and $D_1'$, the latter to $D_1$ and $D_2^*$. Note that the
$D_1$ and $D_1'$ both have $J^P=1^+$, being distinguished by their light
angular momentum $J_\ell^P$, which is a good quantum number only in
the limit $m_c\to\infty$.
The $D_0^*$ and $D_1'$ decay via $s$-wave pion emission,
\begin{eqnarray}
D^*_0&\to&D+\pi\nonumber\\
D_1'&\to&D^*+\pi\,.\nonumber
\end{eqnarray}
If their masses do not lie close to the threshold for this decay, then
these states can easily be quite broad, with widths of order 100 MeV
or more. Hence they could be very difficult to identify
experimentally, and in fact no such states have yet been found. By
contrast, the $D_1$ and $D_2^*$ are constrained by heavy quark symmetry to
decay via $d$-wave pion emission. The channels which are allowed are
\begin{eqnarray}\label{d12trans}
D_1&\to&D^*+\pi\nonumber\\
D_2^*&\to&D^*+\pi\nonumber\\
D_2^*&\to&D+\pi\,.
\end{eqnarray}
Because their decay rates are suppressed by a power of $|{\bf
p}_\pi|^5$, these states could be much narrower than the $D_0^*$ and
$D_1'$. In fact, resonances decaying in these channels have been
identified, and the properties of the $D_1(2420)$ and the $D_2^*(2460)$
are given in Table~\ref{mesontable}.
Since pion emission is a transition of the light degrees of freedom
rather than of the heavy quark, all of the decays (\ref{d12trans})
are really a single nonperturbative process, differentiated only by
the relative orientation of the spins of the heavy quark and the
initial and final light degrees of freedom. Hence the three
transitions are related to each other by heavy quark symmetry.
In the strict limit $m_c\to\infty$, both the $D_1$ and $D_2^*$ and
the $D$ and $D^*$ are degenerate doublets, so the factor of $|{\bf
p}_\pi|^5$ is the same in all three decays. The finite hyperfine
splittings $D^*-D\approx140\,{\rm MeV}$ and $D_2^*-D_1\approx40\,{\rm MeV}$ are
effects of order $1/m_c$, but their influence on $|{\bf p}_\pi|^5$ is
substantial. Hence we will account for this factor explicitly by
invoking heavy quark symmetry at the level of the {\it matrix
elements\/} responsible for the decays, and using the physical
masses to compute the phase space. A straightforward calculation
then yields two predictions for the full and partial widths:~\cite{ILW}
\begin{eqnarray}
&&{\Gamma(D_2^*\to D+\pi)/\Gamma(D_2^*\to D^*+\pi)}
=2.3\label{d2pred}\\
&&{\Gamma(D_1)/\Gamma(D_2^*)}=0.30\,.\label{d1pred}
\end{eqnarray}
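To make the kinematic content of these predictions explicit: with the physical masses (and the charged pion mass assumed), the two-body decay momenta are approximately
\begin{equation}
|{\bf p}_\pi(D_2^*\to D\pi)|\approx505\,{\rm MeV}\,,\qquad
|{\bf p}_\pi(D_2^*\to D^*\pi)|\approx390\,{\rm MeV}\,,
\end{equation}
so the phase space factor alone gives $(505/390)^5\approx3.7$; the remaining suppression down to 2.3 comes from the ratio of spin-counting coefficients fixed by heavy quark symmetry.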
For comparison, the experimental ratios are
\begin{eqnarray}
&&{\Gamma(D_2^*\to D+\pi)/\Gamma(D_2^*\to D^*+\pi)}
=2.2\pm0.9\\
&&{\Gamma(D_1)/\Gamma(D_2^*)}=0.71\,.
\end{eqnarray}
We see that the first relation works very well, while the second
fails miserably. This unfortunate prediction raises a similar
question as we faced earlier: is this a sign of a {\it general\/}
failure of heavy quark symmetry as applied to charmed mesons, or can
it be understood {\it within\/} the heavy quark expansion? Naturally,
we would much prefer this latter outcome, for the familiar reason that
we want very much to believe we can trust this expansion for the
charmed mesons in other contexts.
Explanations for the failure of the prediction~(\ref{d1pred}) have
been offered in the past. One is to suppose a small mixing,~\cite{ILW}
of order $1/m_c$, of the narrow $D_1$ with the broad $s$-wave $D_1'$.
Since these states have the same total angular momentum and parity,
$J^P=1^+$, such mixing is allowed when corrections for finite $m_c$ are
included. A small mixing, of the size one might reasonably expect at
this order, could easily double the width of the physical $D_1$. This
is a plausible explanation, and could well contribute at some level,
but for two reasons it is somewhat unlikely to be the dominant effect.
First, there is no evidence~\cite{CLEO} for an $s$-wave component in the
angular distribution of ${\bf p}_\pi$ in the decay $D_1\to
D^*+\pi$. Although such a component could have escaped
undetected by a conspiracy of unknown final-state interaction phases, such a
situation is certainly not the generic one. Second, there is no
evidence for an equivalent mixing between the strange analogues
$D_{s1}$ and $D_{s1}'$, which would broaden the observed $D_{s1}$
unacceptably.~\cite{ChoTriv} Of course, light flavor $SU(3)$ might do
a poor job of predicting a mixing angle, which is actually a ratio of
matrix elements both of which receive $SU(3)$ corrections. So, while
this explanation is not ruled out, neither does this evidence give one
particular confidence in it.
Another possibility is that the width of the $D_1$ receives a large
contribution from two pion decays to the $D$, either
nonresonant,~\cite{FalkLuke}
\begin{equation}
D_1\to D+\pi+\pi\,,\nonumber
\end{equation}
or through an intermediate $\rho$ meson,~\cite{rhodecay}
\begin{equation}
D_1\to D+\rho\to D+\pi+\pi\,.\nonumber
\end{equation}
Again, the problem is that there is no experimental evidence for such
an effect. Also, it is somewhat difficult, within the schemes in which
such decays are discussed, to broaden the $D_1$ enough to match fully
the experimental width. Hence, we are motivated to continue to search
for a more elegant and plausible explanation, which does not force us
to give up heavy quark symmetry for charmed mesons.
The answer, it turns out, lies in studying the heavy quark expansion
for the excited charmed mesons at subleading order in $1/m_c$. In this
case, we need a theory which contains both charmed mesons and soft
pions, coupled in the correct $SU(3)$ invariant way. Such a
technology is heavy hadron chiral perturbation
theory.~\cite{HHCPT} While the formalism is in some ways more than we
need, as it includes complicated pion self-couplings which will play
no role here, it is useful in that it allows us to keep track of all
the symmetries in the problem mechanically (and correctly).
Heavy hadron chiral perturbation theory accomplishes three things.
First, it builds in the heavy quark and chiral $SU(3)$ symmetries
explicitly. Second, it implements a momentum expansion for the pion
field, in powers of $\partial_\mu\pi/\Lambda_\chi$, where the chiral
symmetry breaking scale is $\Lambda_\chi\approx 1\,{\rm GeV}$. Finally, and
very important in the present context, it allows one to include
symmetry breaking corrections in a {\it systematic\/} way.
To implement the symmetries, the Lagrangian must be built out of
objects which carry representations not just of the Lorentz group, but
of the heavy quark and $SU(3)$ symmetries as well. Clearly, these
objects must contain both members of a single heavy meson doublet
of fixed $J_\ell^P$, and depend explicitly on the heavy meson
velocity. For the ground state mesons $D$ and $D^*$, this is the
``superfield''~\cite{FalkLuke,FGGW,traceform}
\begin{equation}
H_a = {(1+\rlap/v)\over2\sqrt2}
\left[ D_a^{*\mu}\gamma_\mu-D_a\gamma^5\right]\,,
\end{equation}
where the index on the $D^*$ is carried by the polarization vector.
Under heavy quark spin rotations $Q$, Lorentz transformations $L$, and
$SU(3)$ transformations $U$, $H_a$ transforms respectively as
\begin{eqnarray}
H_a&\to&S_QH_a\\
H_a&\to&S_LH_aS_L^\dagger\\
H_a&\to&H_aU_{ab}^\dagger\,.
\end{eqnarray}
Here $S_Q$ and $S_L$ are the spinor representations of the Lorentz
group, and $U_{ab}$ is the usual matrix representation of the vector subgroup
of spontaneously broken chiral $SU(3)$ symmetry. There are similar
superfields for the excited mesons,~\cite{FalkLuke,traceform}
\begin{eqnarray}
S_a &=& {(1+\rlap/v)\over2\sqrt2}
\left[ D_{1a}^{\prime\mu}\gamma_\mu\gamma^5-D_{0a}^*\right]\,,\\
T^\mu_a &=& {(1+\rlap/v)\over2\sqrt2}\left[
D_{2a}^{*\mu\nu}\gamma_\nu
-D_{1a}^\nu\sqrt{\case32}\,\gamma^5(\delta^\mu_\nu-\case13
\gamma_\nu\gamma^\mu+\case13\gamma_\nu v^\mu)\right]\,,
\end{eqnarray}
transforming in analogous ways. The superfields $H_a$, $S_a$ and $T^\mu_a$ all
have mass dimension $\case32$. The pion fields appear as in
ordinary chiral perturbation theory; since we will not be interested in
pion self-couplings, we will just recall the linear term in the
exponentiation of the pion fields,
\begin{equation}
A_\mu={1\over f_\pi}\partial_\mu{\Pi}+\dots\,,
\end{equation}
where ${\Pi}$ is the matrix of Goldstone boson fields and
$f_\pi\approx132\,{\rm MeV}$.
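For reference, in the usual convention the Goldstone matrix is
\begin{equation}
\Pi=\pmatrix{
{1\over\sqrt2}\pi^0+{1\over\sqrt6}\eta&\pi^+&K^+\cr
\pi^-&-{1\over\sqrt2}\pi^0+{1\over\sqrt6}\eta&K^0\cr
K^-&\overline K^0&-{2\over\sqrt6}\eta\cr}\,,
\end{equation}
so that, for example, single charged pion emission enters through $\partial_\mu\pi^\pm/f_\pi$.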
We now build a Lagrangian out of invariant combinations of these
elements. At leading order we get the terms responsible for the $d$-wave
decay of the $D_1$ and $D_2^*$,
\begin{equation}
{h\over\Lambda_\chi}\,{\rm Tr}\,\left[\overline H\,
T^\mu\gamma^\nu\gamma^5
(iD_\mu A_\nu+iD_\nu A_\mu)\right]+{\rm h.c.}\,,
\end{equation}
and for the
$s$-wave decay of the $D_0^*$ and $D_1'$,
\begin{equation}
f\,{\rm Tr}\,\left[\overline H\,
S\gamma^\mu\gamma^5A_\mu\right]
+{\rm h.c.}\,.
\end{equation}
Expanding these interactions in terms of the individual fields, we find
the same symmetry predictions (\ref{d2pred}) and (\ref{d1pred}) as
before.
However, now we would like to go further, and include the leading
corrections of order $1/m_c$ in the effective
Lagrangian.~\cite{FaMe95} To understand how to do this, we turn to the
expansion of QCD in the heavy quark limit, given by the heavy quark
effective theory. This Lagrangian is written in terms of an effective
HQET field $h(x)$, which satisfies the conditions~\cite{Georgi}
\begin{equation}
{1+\rlap/v\over2}h(x)=h(x)
\end{equation}
and
\begin{equation}
i\partial_\mu h(x)=k_\mu h(x)\,,
\end{equation}
where $k^\mu=p_c^\mu-m_cv^\mu$ is the ``residual momentum'' of the
charm quark. Including the leading corrections, the HQET Lagrangian
takes the form~\cite{Georgi,FGL}
\begin{equation}
{\cal L}_{\rm HQET} = \bar hiv\cdot Dh+{1\over2m_c}\bar h(iD)^2h
+{1\over2m_c}\bar h\sigma^{\mu\nu}(\textstyle{1\over2}gG_{\mu\nu})h
+\dots\,.
\end{equation}
The effect of the subleading terms $\bar h(iD)^2h$ and $g\bar
h\sigma^{\mu\nu}G_{\mu\nu}h$ on the chiral expansion may be treated in
the same manner as other symmetry breaking perturbations to the
fundamental theory such as finite light quark masses. Namely, we
introduce a ``spurion'' field which carries the same representation of
the symmetry group as does the perturbation in the fundamental theory,
and then include this spurion in the chiral lagrangian in the most
general symmetry-conserving way. When the spurion is set to the
constant value which it has in QCD, the symmetry breaking is
transmitted to the effective theory. In the case of finite light
quark masses, for example, the symmetry breaking term in QCD is $\bar
q M_qq$, where $M_q={\rm diag}(m_u,m_d,m_s)$. Introducing a spurion
$M_q$ which transforms as $M_q\to LM_qR^\dagger$ under chiral $SU(3)$,
we then include terms in the ordinary chiral lagrangian such as
$\mu\,{\rm Tr}\,[M_q\Sigma+M_q\Sigma^\dagger]$.
In the present case, only the second of the two correction terms in
${\cal L}_{\rm HQET}$ violates the heavy spin symmetry. We include
its effect in the chiral lagrangian by introducing a spurion
$\Phi_s^{\mu\nu}$ which transforms as $\Phi_s^{\mu\nu}\to
S_Q\Phi_s^{\mu\nu}S_Q^\dagger$ under a heavy quark spin rotation
$S_Q$. This spurion is introduced in the most general manner
consistent with heavy quark symmetry, and is then set to the constant
$\Phi_s^{\mu\nu}=\sigma^{\mu\nu}/2m_c$ to yield the leading spin
symmetry violating corrections to the chiral lagrangian. We will
restrict ourselves to terms in which $\Phi_s^{\mu\nu}$ appears exactly
once.
The simplest spin symmetry violating effect is to break the degeneracy
of the heavy meson doublets. This occurs through the terms
\begin{equation}
\lambda_H\,{\rm Tr}\,\left[\overline H\Phi_s^{\mu\nu}H\sigma_{\mu\nu}
\right]
-\lambda_S\,{\rm Tr}\,\left[\overline S\Phi_s^{\mu\nu}S
\sigma_{\mu\nu}\right]
-\lambda_T\,{\rm Tr}\,\left[\overline T^\alpha\Phi_s^{\mu\nu}T_\alpha
\sigma_{\mu\nu}\right]\,.
\end{equation}
The dimensionful coefficients are fixed once the masses of the mesons
are known. For the ground state $D$ and $D^*$, for example, we find
\begin{equation}
\lambda_H={1\over8}\left[M^2_{D^*}-M^2_D\right]=(260\,{\rm MeV})^2\,.
\end{equation}
This value is entirely consistent with what one would obtain, instead,
with the $B$ and $B^*$ mesons. For the $D_1$ and $D_2^*$, we find
\begin{equation}
\lambda_T={3\over16}\left[M^2_{D_2^*}-M^2_{D_1}\right]=(190\,{\rm MeV})^2\,.
\end{equation}
Note that $\sqrt{\lambda_H}$ and $\sqrt{\lambda_T}$ are of order
hundreds of MeV, the scale of the strong interactions.
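These numbers are easy to check. With the approximate masses
$M_{D^*}\approx2010\,{\rm MeV}$ and $M_D\approx1870\,{\rm MeV}$ (rounded
values, quoted only for illustration),
\begin{equation}
\lambda_H\approx{1\over8}\left[(2010)^2-(1870)^2\right]\,{\rm MeV}^2
\approx(260\,{\rm MeV})^2\,,
\end{equation}
while $M_{B^*}\approx5325\,{\rm MeV}$ and $M_B\approx5279\,{\rm MeV}$ give
$\lambda_H\approx(250\,{\rm MeV})^2$, illustrating the consistency between
the charm and bottom systems noted above.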
We are interested in the spin symmetry violating corrections to
transitions in the class $T^\mu\to H+\pi$, which will arise from terms
analogous to ${\cal L}_d$ but with one occurrence of
$\Phi_s^{\mu\nu}$. The spin symmetry, along with the symmetries which
constrained ${\cal L}_d$, requires that any such term be of the
generic form
\begin{equation}
{1\over\Lambda_\chi}{\rm Tr}\,\left[\overline H\Phi_s^{\mu\nu}T^\alpha
C_{\mu\nu\alpha\beta\kappa}\gamma^5
\left(iD^\beta A^\kappa+iD^\kappa A^\beta\right)\right]+{\rm h.c.}\,,
\end{equation}
where $C_{\mu\nu\alpha\beta\kappa}$ is an arbitrary product of Dirac
matrices and may depend on the four-velocity $v^\lambda$. This would
seem to allow for a lot of freedom, but it turns out that there is
only a {\it single\/} spin symmetry-violating term which respects both parity
and time
reversal invariance:
\begin{equation}
{\cal L}_{d1} = {h_1\over2m\Lambda_\chi}{\rm Tr}\,
\left[\overline H\sigma^{\mu\nu}T^\alpha
\sigma_{\mu\nu}\gamma^\kappa\gamma^5
\left(iD_\alpha A_\kappa+iD_\kappa A_\alpha\right)\right]+{\rm h.c.}\,.
\end{equation}
We expect the new coefficient $h_1$, which has mass dimension one, to
be of order hundreds of MeV.
The mixing of $D_1$ and $D_1'$ is also a spin symmetry violating
effect which arises at order $1/m_c$. There is a corresponding
operator in the chiral lagrangian which is responsible for this,
\begin{equation}\label{Lmix}
{\cal L}_{\rm mix} = g_1{\rm Tr}\,\left[\overline S\Phi_s^{\mu\nu}T_\mu
\sigma_{\nu\alpha}v^\alpha\right]+{\rm h.c.}\,.
\end{equation}
However, we will neglect this term for now. It is straightforward to
include both ${\cal L}_{d1}$ and ${\cal L}_{\rm mix}$ in a more
complete analysis.~\cite{FaMe95}
We now compute the partial widths for the decays of the $D_1$ and the
$D_2^*$ at subleading order in the $1/m_c$ expansion. We find
\begin{eqnarray}\label{d2widths}
\Gamma(D_2^{*0}\to D\pi)&=&{1\over10\pi}\,{m_D\over M_{D_2^*}}
\,{4|{\bf p}_\pi|^5\over
\Lambda_\chi^2f_\pi^2}\left[h-{h_1\over m_c}\right]^2\\
\Gamma(D_2^{*0}\to D^*\pi)&=&{3\over20\pi}\,{M_{D^*}\over M_{D_2^*}}
\,{4|{\bf p}_\pi|^5\over
\Lambda_\chi^2f_\pi^2}\left[h-{h_1\over m_c}\right]^2\\
\Gamma(D_1\to D^*\pi)&=&{1\over4\pi}\,{M_{D^*}\over M_{D_1}}
\,{4|{\bf p}_\pi|^5\over
\Lambda_\chi^2f_\pi^2}\left[\left(h+{5h_1\over 3m_c}\right)^2
+{8h_1^2\over9m_c^2}\right]\,,
\end{eqnarray}
where in each expression $|{\bf p}_\pi|^5$ is computed using the
actual phase space for that decay. Setting $h_1=0$ would reduce these
results to the leading order predictions. Note that the ratio of
partial widths of the $D_2^*$ is independent of $h_1$, and so is {\it
unchanged\/} by the inclusion of $1/m_c$ effects. However, the ratio
of the widths of the $D_1$ and the $D_2^*$ receives a large
correction,
\begin{equation}
{\Gamma(D_1)/\Gamma(D_2^*)}=0.30\left[1+{16\over3}{h_1\over m_c h}
+\dots\right]\,.
\end{equation}
{}From the width of the $D_2^*$, and taking $\Lambda_\chi=1\,{\rm GeV}$, we
find $h\approx 0.3$. Then we see that even for a modest coefficient
$h_1\approx100\,{\rm MeV}$, we get a correction to the ratio of widths of
order 100\%!
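The size of this correction is easy to make explicit. Taking
$h\approx0.3$, $h_1\approx100\,{\rm MeV}$, and an illustrative charm quark
mass $m_c\approx1.5\,{\rm GeV}$,
\begin{equation}
{16\over3}\,{h_1\over m_ch}\approx{16\over3}\times
{0.1\,{\rm GeV}\over(1.5\,{\rm GeV})(0.3)}\approx1.2\,,
\end{equation}
a correction to the ratio of widths of order 100\%.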
What we have learned, then, is that a $1/m_c$ correction of the
canonical size, with no tuning of parameters, naturally leaves one of
these predictions alone while destroying the other. In this sense, we
understand the failure of the bad prediction {\it within\/} the heavy
quark expansion. This is what we mean by saying that heavy quark
symmetry (or any symmetry) ``works''. It need not be the case that
every prediction of the symmetry limit be well satisfied by the
data. Rather, it is crucial that deviations from the symmetry limit
can be understood {\it within a systematic expansion in the small
parameters which break the symmetry.} When a symmetry works in this
sense, we retain predictive power even in cases when the symmetry predictions
behave poorly.
\section{Production of Heavy Hadrons via Fragmentation}\label{FRAG}
Before heavy hadrons can decay, they must be produced. The
production of a heavy hadron proceeds in two steps. First, the
heavy quark itself must be created; because of its large mass, this
process takes place over a time scale which is very short. Second,
some light degrees of freedom assemble themselves about the heavy
quark to make a color neutral heavy hadron, a process which involves
nonperturbative strong interactions and typically takes much
longer. If the heavy quark is produced with a large velocity in the
center of mass frame, and if there is plenty of available energy,
then production of these light degrees of freedom will be local in
phase space and independent of the light degrees of freedom in the
initial state. This is the fragmentation regime. We will see that
heavy quark symmetry simplifies the description of heavy hadron
production via fragmentation, because, as before, it allows us to
separate certain properties of the heavy quark from those of
the light degrees of freedom. This is particularly important in the
production of excited heavy hadrons, for which the behavior of the
spin of the light degrees of freedom can be quite interesting.
Our consideration of heavy quark fragmentation will lead us to
consider two related questions:~\cite{FaPe94}
\par\noindent 1.~What are the nonperturbative features of the
fragmentation process? In particular, can we exploit heavy quark
symmetry to isolate and study the spin of the light degrees of
freedom?
\par\noindent 2.~What is the fate of a polarized heavy quark created
in the hard interaction? Is any initial polarization preserved
until the heavy quark undergoes weak decay?
\par\noindent We will see that an understanding of the first
question will cast a useful light on the second. In the latter
case, the excited heavy baryons will play a significant role.
The analysis depends on following the spins of the heavy quark and
the light degrees of freedom separately through the three phases of
fragmentation, life of the state, and decay. The net interaction of
the heavy and light angular momenta $S_Q$ and $J_\ell$ depends both
on the strength of the coupling between them and on the length of
time they have to interact. Of course, the coupling between the
spins is small in the heavy quark limit, because it is mediated by
the chromomagnetic moment of the heavy quark. This moment scales as
$1/m_Q$, so the time $\tau_s$ it takes for the heavy and light spins
to precess once about each other is of order $m_Q/\Lambda_{\rm QCD}^2$, much
longer than typical time scales associated with the strong
interactions.
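For a rough sense of scale (with purely illustrative inputs
$m_c\approx1.5\,{\rm GeV}$ and $\Lambda_{\rm QCD}\approx400\,{\rm MeV}$),
\begin{equation}
\tau_s\sim{m_c\over\Lambda_{\rm QCD}^2}\approx
{1.5\,{\rm GeV}\over(0.4\,{\rm GeV})^2}\approx9\,{\rm GeV}^{-1}\,,
\end{equation}
compared with $1/\Lambda_{\rm QCD}\approx2.5\,{\rm GeV}^{-1}$; the two
scales are separated by a factor of order $m_Q/\Lambda_{\rm QCD}$, which
grows in the heavy quark limit.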
This fact is enough to assure that the heavy quark spin is
essentially frozen during the process of fragmentation itself.
Since fragmentation is purely a phenomenon of nonperturbative QCD,
it takes place on a time scale of order $1/\Lambda_{\rm QCD}\ll\tau_s$. Hence
there is not enough time for the relatively weak spin-exchange
interactions to take place.
Naively, one can say something similar when the heavy quark fragments
to an excited hadron which decays via a strong transition of the
light degrees of freedom. The time scale of a strong transition is
set by nonperturbative QCD and should be comparable to the
fragmentation time. Thus, one might expect generically that the
lifetime $\tau$ of the state satisfies $\tau\ll\tau_s$, and the
heavy quark spin continues to be frozen in place during the life of
the excited hadron. However, if the energy available in the decay
is not much larger than $m_\pi$, the lightest hadron which can be
emitted in a strong transition, then $\tau$ can be increased by the
limited phase space. The most dramatic example is $D^*$ decay,
which is so close to threshold that the strong ($D^*\to D+\pi$) and
electromagnetic ($D^*\to D+\gamma$) widths are almost equal.
So we must treat excited hadrons on a case by case basis, depending
on the relative sizes of $\tau$ and $\tau_s$. For simplicity, we
will consider here only two extreme cases. Let the excited heavy
doublet be composed of a hadron $H$ of spin $J$ and mass $M$ and a
hadron $H^*$ of spin $J+1$ and mass $M^*$. The first possibility is
the ``naive'' one $\tau_s\gg\tau$, where $H$ and $H^*$ are formed
and then decay before the angular momenta $S_Q$ and $J_\ell$ have a chance to
interact. In this case, there is no depolarization of the heavy
quark spin $S_Q$, if one was present initially. Similarly, when
$H$ and $H^*$ decay strongly, the light degrees of freedom in the
decay carry away any information about the spin state in which they
were produced. Note that the very spin-exchange interaction which
is inhibited here is the one responsible for the hyperfine
splitting between $H$ and $H^*$. Hence, under these conditions the resonances
are almost completely {\it overlapping,} with a width
$\Gamma=1/\tau$ satisfying $\Gamma\gg|M^*-M|$. This is another
consequence of the effective decoupling of $S_Q$ and $J_\ell$,
which are independent good quantum numbers of the resonances.
The second possibility is the opposite extreme, $\tau\gg\tau_s$.
This corresponds to heavy hadrons which decay weakly or
electromagnetically, or to strong decays which are severely
suppressed by phase space. Here the spins $S_Q$ and $J_\ell$ have
plenty of time to interact, precessing about each other many times
before $H$ and $H^*$ decay. There is at least a partial degradation
of any initial polarization of $Q$, as well as a degradation of any
information about the fragmentation process which may be carried by
the light degrees of freedom. The signature of this situation is
that the states $H$ and $H^*$ are well separated resonances, since
the chromomagnetic interactions have ample opportunity to produce a
hyperfine splitting much larger than the width,
$|M^*-M|\gg\Gamma$. In contrast with the first case, here the
heavy and light spins are resolved into states of definite total
spin $J$.
\subsection{Production and decay of $D_1$ and $D_2^*$}
We will consider two examples, the first of which is the production
and decay of the excited charmed mesons $D_1$ and $D_2^*$. We see
from Table~\ref{mesontable} that the splitting between these
states is 35~MeV, while their widths are approximately 20~MeV.
This makes them somewhat of an intermediate case; however, for
simplicity let us treat them in the ``widely separated resonances''
limit. A more precise treatment which takes into
account their finite widths is straightforward but not very
pedagogically enlightening.~\cite{FaMe95,FaPe94}
We must follow the orientations of the spins $S_Q$ and $J_\ell$
through the following sequence of events:
\par\noindent 1.~The charm quark is created in some hard
interaction.
\par\noindent 2.~Light degrees of freedom with $J_\ell^P=\case32^+$
are created in the process of fragmentation.
\par\noindent 3.~The spins $S_Q$ and $J_\ell$ precess about each
other, resolving the states $D_1$ and $D_2^*$ of definite total
angular momentum $J$.
\par\noindent 4.~The $D_1$ or the $D_2^*$ decays via $d$-wave pion
emission. We can measure the direction of this pion with respect
to the spatial axis along which the fragmentation took place.
\par\noindent The light degrees of freedom can be produced with
helicity $h=\pm\case32$ or $h=\pm\case12$ along the fragmentation
axis. While parity invariance of the strong interactions requires
that the probabilities for helicities $h$ and $-h$ are identical,
the relative production of light degrees of freedom with
$|h|=\case32$ versus $|h|=\case12$ is determined in some
complicated and incalculable way by strong dynamics. Let the
quantity $w_{3/2}$ denote the probability that $|h|=\case32$,
\begin{equation}
w_{3/2}=P(h=\case32)+P(h=-\case32)\,.
\end{equation}
Then $1-w_{3/2}$ is the probability that $|h|=\case12$. Completely
isotropic production corresponds to $w_{3/2}=\case12$. We have identified a
new nonperturbative parameter of QCD, which is well defined only in
the heavy quark limit.
This new parameter can be measured in the strong decay of the
$D_2^*$ or $D_1$. For example, consider the angular distribution
of the pion with respect to the fragmentation axis in the decay
$D_2^*\to D+\pi$. This is a decay of the light degrees of freedom
in the excited hadron, so it will depend on their initial
orientation (that is, on $w_{3/2}$) and on the details of the
precession of $J_\ell$ around $S_Q$ during the lifetime of the
$D_2^*$. Following the direction of $J_\ell$ through
fragmentation, precession and decay, we find the distribution
\begin{equation}\label{d2todpi}
{1\over\Gamma}{{\rm d}\Gamma\over{\rm d}\cos\theta}=
\case14\left[1+3\cos^2\theta-6w_{3/2}
(\cos^2\theta-\case13)\right]\,.
\end{equation}
This distribution is isotropic only when $w_{3/2}=\case12$, that
is, when the light degrees of freedom are produced isotropically in
the fragmentation process. Similar distributions are found in the
decays $D_2^*\to D^*+\pi$ and $D_1\to D^*+\pi$.
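As a consistency check on (\ref{d2todpi}), setting $w_{3/2}=\case12$ gives
\begin{equation}
{1\over\Gamma}{{\rm d}\Gamma\over{\rm d}\cos\theta}
=\case14\left[1+3\cos^2\theta-3\left(\cos^2\theta-\case13\right)\right]
=\case12\,,
\end{equation}
and for any value of $w_{3/2}$ the distribution integrates to unity over
$-1\le\cos\theta\le1$, as it must.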
A fit of ARGUS data~\cite{ARGUS} to the expression (\ref{d2todpi}) seems
to indicate that a small value of $w_{3/2}$ is preferred; while the
errors are large, we find that
$w_{3/2}<0.24$ at the 90\% confidence level.~\cite{FaPe94} It would be
nice to confirm this result with a sharper measurement, and not only for
the charmed mesons but in the bottom system as well. Since $w_{3/2}$
is intrinsically nonperturbative, we do not have any real
theoretical understanding of why it should be small, although
perturbative calculations of fragmentation production of the $B_c$
system in the limit $m_c\ll m_b$ yield small $w_{3/2}$ as
well.~\cite{ChenWise,Yuan}
\subsection{Polarization of $\Lambda_b$ at SLC/LEP}
After warming up with the excited charmed mesons, we are set to
address a somewhat more practical question: What is the polarization of
$\Lambda_b$ baryons produced at the $Z$ pole? This question is
motivated by the fact that $b$ quarks produced in the decay of the
$Z$ are 94\% polarized left-handed. Since the $\Lambda_b$ is
composed of a $b$ quark and light degrees of freedom with zero
net angular momentum, the orientation of a $\Lambda_b$ is identical
to the orientation of the $b$ quark inside it. Similarly, the $b$
quark spin does not precess inside a $\Lambda_b$. Hence if a
$b$ quark produced at the $Z$ fragments to a $\Lambda_b$, then
those baryons should inherit the left-handed polarization of the
quarks and reveal it in their weak decay.
Unfortunately, life is not that simple. Two recent measurements of
$\Lambda_b$ polarization from LEP are~\cite{DELPHI1,ALEPH}
\begin{eqnarray}
&&P(\Lambda_b) =0.08^{+0.35}_{-0.29}{\rm (stat.)}^{+0.18}_{-0.16}
{\rm (syst.)}\qquad {\rm (DELPHI)}\,,\nonumber\\
&&P(\Lambda_b) =0.26^{+0.20}_{-0.25}{\rm (stat.)}^{+0.12}_{-0.13}
{\rm (syst.)}\qquad {\rm (ALEPH)}\,,\nonumber
\end{eqnarray}
both a long way from $P(\Lambda_b) =0.94$. The reason is that not
all $b$ quarks which wind up as $\Lambda_b$ baryons get there
directly. In particular, they can fragment to the excited baryons
$\Sigma_b$ and $\Sigma_b^*$, which then decay to
$\Lambda_b$ via pion emission. If the excited states, which
have light degrees of freedom with $S_\ell=1$, live long enough,
then the $b$ quark will precess about $S_\ell$ and the polarization
will be degraded. The result will be a net sample of $\Lambda_b$'s
with a polarization less than 94\%, as is in fact observed.
In addition to the requirement that $\tau>\tau_s$ for the
$\Sigma_b^{(*)}$, any depolarization of $\Lambda_b$'s by this
mechanism depends on two unknown quantities:
\par\noindent 1.~The production rate $f$ of $\Sigma_b^{(*)}$ relative
to $\Lambda_b$. Isospin and spin counting enhance $f$ by a factor
of nine, while the mass splitting between $\Sigma_b^{(*)}$ and
$\Lambda_b$ suppresses it; studies based on the Lund Monte Carlo
indicate $f\approx0.5$ with a very large uncertainty.~\cite{LUND}
\par\noindent 2.~The orientation of the spin $S_\ell$ with respect
to the fragmentation axis. This orientation, which is
nonperturbative in origin, reflects the possible helicities
$h=1,0,-1$. By analogy with the treatment of the heavy mesons, we
define~\cite{FaPe94}
\begin{equation}
w_1=P(h=1)+P(h=-1)\,.
\end{equation}
In this case, isotropic production corresponds to $w_1=\case23$. We
may measure $w_1$ from the angle of the pion with respect to the fragmentation
axis in the decay
$\Sigma_b^*\to\Lambda_b+\pi$,
\begin{equation}\label{sigtolampi}
{1\over\Gamma}{{\rm d}\Gamma\over{\rm d}\cos\theta}=
\case14\left[1+3\cos^2\theta-\case92w_1
(\cos^2\theta-\case13)\right]\,.
\end{equation}
It turns out that the decay $\Sigma_b\to\Lambda_b+\pi$ is isotropic in
$\cos\theta$ for any value of $w_1$.
The polarization retention of the $\Lambda_b$ may be computed in
terms of $f$ and $w_1$. As before, it is more tedious than
instructive to present the general case in which the $\Sigma_b$ and
the $\Sigma_b^*$ may partially overlap, so let us restrict to the
extreme situation $\tau\gg\tau_s$. Then the polarization of the
observed $\Lambda_b$'s is $P(\Lambda_b)=R(f,w_1)P(b)$, where
$P(b)=94\%$ is the initial polarization of the $b$ quarks,
and~\cite{FaPe94}
\begin{equation}
R(f,w_1) = {1+\case19(1+4w_1)f\over1+f}\,.
\end{equation}
Note that for $f=0$ (no $\Sigma_b^{(*)}$'s are produced),
$R(0,w_1)=1$ and there is no depolarization. For the Lund value
$f=0.5$, $R$ ranges between 0.70 and 0.85.
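The extremes of this range follow directly from the expression for
$R(f,w_1)$ with the Lund value $f=0.5$:
\begin{equation}
R(0.5,0)={1+{1\over18}\over1.5}\approx0.70\,,\qquad
R(0.5,1)={1+{5\over18}\over1.5}\approx0.85\,.
\end{equation}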
Can the very low measured values of $P(\Lambda_b)$ be accommodated
by the present data on the $\Sigma_b^{(*)}$? The situation is
still unclear. On the one hand, the same DELPHI analysis which found
such surprising masses for the excited bottom baryons reported
$w_1\approx0$ and $1<f<2$ with large uncertainty.~\cite{DELPHI} If this
is confirmed, and if the conventional identification of the bottom
baryons is correct, then a polarization in the range
$P(\Lambda_b)\approx40\%-50\%$ is easy to accommodate. On the other
hand, CLEO's recent announcement~\cite{CLEO96} of the $\Sigma_c^*$ was
accompanied by a measurement $w_1=0.71\pm0.13$, consistent with
isotropic fragmentation. Recall that by heavy quark symmetry, $w_1$
measured in the charm and bottom systems must be the same, so this
result is inconsistent with the report from DELPHI. Clearly, further
measurements are needed to resolve this situation.
\section{Weak Decays}\label{WEAK}
We now turn to our final topic, the production of excited charmed
hadrons in semileptonic $B$ decays. The branching fraction of
\begin{equation}
B\to(D_1,D_2^*)+\ell+\nu
\end{equation}
has been measured by two groups with roughly
consistent results:~\cite{branch}
\begin{eqnarray}\label{D12data}
{\rm OPAL}:&&\qquad(34\pm7)\%\\
{\rm CLEO}:&&\qquad<30\%\ {\rm at\ 90\%\ c.l.}\,.
\end{eqnarray}
It is not unreasonable to assume that this measurement will
eventually be improved, and any discrepancies resolved. The
question is, what can we learn from it? How useful would an effort
to improve this measurement really be?
I will propose that it would be extremely useful. First, because
studying the production of excited charmed mesons in $B$ decay
gives us direct information about QCD, and second, because through
this insight into QCD we can dramatically reduce the single most
nettlesome theoretical uncertainty in the extraction of $|V_{cb}|$ from
inclusive semileptonic $B$ decays, namely the dependence on the $b$ quark mass.
The heavy quark expansion and perturbative QCD may be used to analyze
semileptonic and radiative $B$ decays in a systematic expansion in
powers of $1/m_b$ and $\alpha_s(m_b)$.~\cite{CGG,SV,FLS94,MW} Since the
energy $m_b-m_c$ which is released in such a decay is large compared to
$\Lambda_{\rm QCD}$, we may invoke the duality of the partonic and hadronic
descriptions of the process. The idea is that sufficiently inclusive
quantities may be computed at the level of quarks and gluons, if the
interference between the short-distance and long-distance physics may
be neglected. Except near the boundaries of phase space, this is
usually the case if the ratio of typical long wavelengths
($\sim1/\Lambda_{\rm QCD}$) to typical short wavelengths ($\sim1/m_b$) is
sufficiently large. While it is reasonable to expect parton-hadron
duality to hold for arbitrarily large energy releases, its
application at the $b$ scale requires a certain amount of
faith.~\cite{FDW}
Consider a $B$ meson with initial momentum $p_B^\mu=m_B v^\mu$, which
decays into leptons with total momentum $q^\mu$ and a hadronic state
$X_c$ with momentum $p_X^\mu=p_B^\mu-q^\mu$. Since we are interested
in the properties of the hadrons which are produced, we define the
kinematic invariants~\cite{FLS96}
\begin{eqnarray}
s_H&=&p_X^2\\
E_H&=&p_X\cdot p_B/m_B\,,
\end{eqnarray}
which are, respectively, the invariant mass of the hadrons and their
total energy in the $B$ rest frame. We then compute the doubly
differential distribution ${\rm d}\Gamma/{\rm d} s_H{\rm d} E_H$ using the heavy
quark expansion. First, we use the optical theorem to relate the
semileptonic decay rate for fixed $q^\mu$ to the imaginary part of a
forward scattering amplitude,
\begin{eqnarray}
&&\sum_{X_c}\int{\rm d} q\,\big|\langle X_c(p_X)(\ell\nu)(q)|
{\cal O}_W|B\rangle\big|^2\nonumber\\
&&\qquad
=\case12 G_F^2\int{\rm d} q\,L_{\mu\nu}(q)\int{\rm d}^4x\,e^{iq\cdot x}\,\langle B|
T\{J_{bc}^{\dagger\mu}(x),J_{bc}^\nu(0)\}|B\rangle\,.
\end{eqnarray}
Here ${\cal O}_W=(G_F/\sqrt2)J_{cb}^\mu J_{\ell\mu}$ is the product of
left-handed currents responsible for the semileptonic decay $b\to
c\ell\nu$, and
\begin{equation}
L_{\mu\nu}=\case13\left(q_\mu q_\nu-q^2g_{\mu\nu}\right)
\end{equation}
is the tensor derived from the squared leptonic matrix element. The next step
is to expand the time-ordered product
$T\{J_{bc}^{\dagger\mu}(x),J_{bc}^\nu(0)\}$ in inverse powers of
$1/m_b$, using the operator product expansion and the heavy quark
effective theory. This yields an infinite sum of operators written in
terms of the effective field $h(x)$, which we will truncate at order
$1/m_b^2$. Finally, we write the matrix elements of the form
$\langle B|\bar h\cdots h|B\rangle$ in terms of parameters given by the heavy
quark expansion.
Once we have the differential distribution ${\rm d}\Gamma/{\rm d} s_H{\rm d} E_H$, we
can weight with powers of the form $s_H^n E_H^m$ and integrate to
compute moments of $s_H$ and $E_H$. Of course the $(n,m)=(0,0)$
moment is just the semileptonic partial width $\Gamma$. The moments
of $s_H$, which will be of particular interest, are sensitive to the
production of excited charmed hadrons such as the $D_1$ and $D_2^*$.
Our results will be in terms of four QCD and HQET parameters, since we
keep only terms up to order $1/m_b^2$:
\par\noindent 1.~The strong coupling constant $\alpha_s(m_b)$. We get
powers of $\alpha_s(m_b)/\pi$ when we compute the radiative corrections
to the time-ordered product.
\par\noindent 2.~The ``mass'' $\bar\Lambda$ of the light degrees of
freedom, defined by~\cite{Luke}
\begin{equation}
\bar\Lambda=\lim_{m_b\to\infty}\left[m_B-m_b\right]\,.
\end{equation}
Because the quark mass which appears in this expression is the pole
mass, $m_b=m_b^{\rm pole}$, the quantity $\bar\Lambda$ suffers from an
infrared renormalon ambiguity~\cite{renormalons} of order $\sim100\,{\rm MeV}$.
This ambiguity affects the interpretation of $\bar\Lambda$, and so we
must treat with caution any expression in which it appears. For
comparison with data, it is preferable to use expressions in which the
renormalon ambiguity can be shown to cancel.
\par\noindent 3.~The ``kinetic energy'' $\lambda_1$ of the heavy quark,
defined by~\cite{FaNe}
\begin{equation}
\lambda_1=\lim_{m_b\to\infty}\langle B|\bar b(iD)^2 b|B\rangle/2m_B\,.
\end{equation}
Note that $\lambda_1$ is not exactly the $b$ quark kinetic energy (or
rather, its negative), since there are gauge fields in the covariant
derivative. Relative to the $b$ quark's rest energy, its
nonrelativistic kinetic energy is suppressed by $1/m_b^2$.
\par\noindent 4.~The energy of the $b$ quark due to its hyperfine
interaction with the light degrees of freedom, given by~\cite{FaNe}
\begin{equation}
\lambda_2=\lim_{m_b\to\infty}\langle B|\case12 g
\bar b\sigma^{\mu\nu}G_{\mu\nu}b|B\rangle/6m_B\,.
\end{equation}
This is the only one of the four parameters in which the spin of the $b$
enters directly. We can extract $\lambda_2$ from the $B^*-B$ mass
splitting, which yields\,\footnote{Because the chromomagnetic operator
is renormalized, $\lambda_2(\mu)$ actually depends slightly on the
renormalization scale.~\cite{FGL,EiHill} The number we give here is
$\lambda_2(m_b)$.}
\begin{equation}
\lambda_2=0.12\,{\rm GeV}^2\,.
\end{equation}
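At this order the hyperfine splitting gives $M_{B^*}^2-M_B^2=4\lambda_2$,
so with the approximate masses $M_{B^*}\approx5325\,{\rm MeV}$ and
$M_B\approx5279\,{\rm MeV}$ (illustrative values),
\begin{equation}
\lambda_2\approx{1\over4}\left[(5325)^2-(5279)^2\right]\,{\rm MeV}^2
\approx0.12\,{\rm GeV}^2\,.
\end{equation}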
We will present results which include heavy quark corrections up
through order $1/m_b^2$, and radiative corrections up through two
loops. Actually, the two loop corrections are only partially
computed, with just those pieces proportional to $\beta_0\alpha_s^2$,
where $\beta_0=11-\case23n_f$ is the leading coefficient in the QCD
beta function. We may hope that this piece dominates the two loop
term, because of the large numerical coefficient $\beta_0$; in fact,
for other calculations for which the full two loop result is known,
this is usually the case. For semileptonic $B$ decay, the
full two loop calculation has not been completed.
We will present results for the semileptonic partial width, and for
the first moment of the hadronic invariant mass spectrum. It is
convenient to substitute all appearances of the charm and bottom quark
masses with spin-averaged meson masses, using the expansion
\begin{equation}
\overline m_B=m_b+\bar\Lambda-{\lambda_1\over2m_b}+\dots\,,
\end{equation}
and analogously for charm. Then the coefficients which appear below
are functions of the measured ratio $\overline m_D/\overline m_B$,
with no hidden dependence on unknown quark masses. For the
semileptonic partial width, we find~\cite{FLS96}
\begin{eqnarray}\label{width}
\Gamma(B\to X_c\ell\nu)&=&{G_F^2|V_{cb}|^2\over192\pi^3}m_B^5\, 0.369
\bigg[1-1.54{\alpha_s(m_b)\over\pi}-1.43\beta_0
{\alpha_s^2(m_b)\over\pi^2}\nonumber\\
&&\qquad\qquad-1.65{\bar\Lambda\over m_B}\left(1-0.87{\alpha_s(m_b)
\over\pi}\right)-0.95{\bar\Lambda^2\over m_B^2}\nonumber\\
&&\qquad\qquad-3.18{\lambda_1\over m_B^2}+0.02{\lambda_2\over m_B^2}
+\dots\bigg]\,,
\end{eqnarray}
and for the average hadronic invariant mass,~\cite{FLS96}
\begin{eqnarray}\label{moment}
\langle s_H-\overline m_D^2\rangle &=& m_B^2\bigg[
0.051{\alpha_s(m_b)\over\pi}+0.096\beta_0{\alpha_s^2(m_b)\over\pi^2}
\nonumber\\
&&\qquad\qquad+0.23{\bar\Lambda\over m_B}\left(1+0.43{\alpha_s(m_b)
\over\pi}\right)+0.26{\bar\Lambda^2\over m_B^2}\nonumber\\
&&\qquad\qquad+1.01{\lambda_1\over m_B^2}-0.31{\lambda_2\over m_B^2}
+\dots\bigg]\,.
\end{eqnarray}
We include a subtraction of $\overline m_D^2$ in the invariant mass so
that the theoretical expression will start at order $\alpha_s$ and
$\bar\Lambda$. The heavy quark expansion seems to be under control, as
the corrections proportional to $\lambda_1$ and $\lambda_2$ are at
the level of a few percent. However, this is not true of the expansion
in perturbative QCD. Since $\beta_0\alpha_s/\pi\approx0.6$, we see
that the two loop corrections to (\ref{width}) and (\ref{moment}) are
as large as the one loop terms.
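Indeed, with $n_f=4$ light flavors and $\alpha_s(m_b)\approx0.22$ (an
illustrative value),
\begin{equation}
\beta_0\,{\alpha_s(m_b)\over\pi}
=\left(11-\case23\cdot4\right){0.22\over\pi}\approx0.6\,.
\end{equation}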
This is real trouble! With such a poorly behaved perturbation series,
these expressions are not trustworthy. Actually, there is a problem
with the nonperturbative corrections, too, since they contain the
ambiguous parameter $\bar\Lambda$. How, then, can we use this theory
to do reliable phenomenology?
Remarkably, these two problems are actually connected, and can be used
to solve each other. The renormalon ambiguity of $\bar\Lambda$ arises
from the poor behavior of QCD perturbation theory at high orders in
the series for $m_b^{\rm pole}$. Perhaps it is the same poor
behavior which manifests itself in the perturbation series for
$\Gamma$ and $\langle s_H\rangle$. If so, then the solution is to
eliminate $\bar\Lambda$ in favor of some unambiguous {\it physical\/}
quantity, solving both problems at once.
In fact, it can be shown that this is precisely the
case.~\cite{renormalons} The bad perturbation series in $\Gamma$ arises
from the indirect dependence of the theoretical expression on the pole
mass $m_b^{\rm pole}$, through $\bar\Lambda$. One way to eliminate
$\bar\Lambda$ is to write it in terms of $\langle s_H-\overline
m_D^2\rangle$, which can be measured. We then find~\cite{FLS96}
\begin{eqnarray}\label{width2}
\Gamma(B\to X_c\ell\nu)&=&{G_F^2|V_{cb}|^2\over192\pi^3} m_B^5\, 0.369
\bigg[1-1.17{\alpha_s(m_b)\over\pi}-0.74\beta_0
{\alpha_s^2(m_b)\over\pi^2}\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad-7.17{\langle s_H-\overline m_D^2\rangle
\over m_B^2}+\dots\bigg]\,,
\end{eqnarray}
omitting the small terms of order $1/m_b^2$. Note that the size of
the two loop term has shrunk by a factor of two with this
rearrangement. We have regained some measure of control over the
perturbation series.\footnote{This improvement may be interpreted as
an increase in the BLM renormalization scale~\cite{BLM} from $\mu_{\rm
BLM}=0.16m_B$ to $\mu_{\rm BLM}=0.38m_B$.~\cite{FLS96}}
The moral of this exercise is that while it is perfectly fine to keep
$\bar\Lambda$ in intermediate steps in calculations, it should be
eliminated from predictions of physical quantities. By the same
token, any extraction of $\bar\Lambda$ from the data is ambiguous, in
the sense that it is necessarily polluted with an infrared renormalon
ambiguity and a corresponding poorly behaved perturbation series.
We can use the data (\ref{D12data}) to derive an experimental lower
bound on $\langle s_H-\overline m_D^2\rangle$. Taking the relative
branching ratio to be 27\%, consistent with all measurements, we find
\begin{equation}
\langle s_H-\overline m_D^2\rangle\ge0.49\,{\rm GeV}^2\,.
\end{equation}
We can translate this into a bound on $\bar\Lambda$, which at one loop
yields
\begin{equation}
\bar\Lambda_{\rm one\ loop}
>\left[0.33-0.07\left({\lambda_1\over0.1\,{\rm GeV}^2}\right)\right]\,{\rm GeV}\,.
\end{equation}
Note that our prejudice is that $\lambda_1<0$, so it is probably
conservative to ignore the small $\lambda_1$ term. When two loop
corrections (proportional to $\beta_0\alpha_s^2$) are included, the
bound is weakened to
\begin{equation}
\bar\Lambda_{\rm two\ loop}
>\left[0.26-0.07\left({\lambda_1\over0.1\,{\rm GeV}^2}\right)\right]\,{\rm GeV}\,.
\end{equation}
The instability of these bounds when radiative corrections are
included is a direct reflection of the renormalon ambiguity.
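For reference, the two displayed lower bounds can be packaged as a small numerical helper; the coefficients are copied from the inequalities above, and $\lambda_1$ is supplied in ${\rm GeV}^2$:

```python
def lambda_bar_bound(lam1=0.0, order="one"):
    """Lower bound on Lambda-bar (GeV) at one or two loops,
    with lam1 = lambda_1 in GeV^2 (coefficients from the text)."""
    const = {"one": 0.33, "two": 0.26}[order]
    return const - 0.07 * (lam1 / 0.1)

# Since the prejudice is lambda_1 < 0, setting lam1 = 0 only weakens
# the bound, which is why ignoring the lambda_1 term is conservative.
```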
Of more interest is the bound on $|V_{cb}|$ from the improved relation
(\ref{width2}), which has no (leading) renormalon ambiguity.
Including two loop corrections, we find~\cite{FLS96}
\begin{equation}
|V_{cb}|>\left[0.040-0.00028\left({\lambda_1\over0.1\,{\rm GeV}^2}\right)\right]
\left({\tau_B\over 1.60\,{\rm ps}}\right)^{-1/2}\,.
\end{equation}
We have left explicit the dependence on the lifetime $\tau_B$ of the
$B$ meson. The contribution of the two loop correction to this bound
is 0.002, well within reason. If, to be conservative, we instead take
the relative branching ratio to be 20\%, then the bound becomes
$|V_{cb}|>0.038$.
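The dependence of the bound on $\lambda_1$ and on the $B$ lifetime can likewise be read off the displayed inequality; a minimal sketch, with $\lambda_1$ in ${\rm GeV}^2$ and $\tau_B$ in picoseconds:

```python
def vcb_bound(lam1=0.0, tau_B=1.60):
    """Two-loop lower bound on |V_cb| from the displayed inequality.
    A longer lifetime scales the bound down as tau_B^(-1/2)."""
    return (0.040 - 0.00028 * (lam1 / 0.1)) * (tau_B / 1.60) ** -0.5
```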
{}From this point of view, of course, the ideal experiment would measure
$\langle s_H-\overline m_D^2\rangle$ directly, as well as higher
moments such as $\langle (s_H-\overline m_D^2)^2\rangle$. Such a
program could lead to the best possible measurement of $|V_{cb}|$, with
theoretical uncertainties at the level of a few percent.
\section{Conclusions}
We have seen that excited heavy hadrons have a lot to teach us about
both QCD and physics at short distances. The phenomenology of these
hadrons is extremely rich. We have illustrated their potential by
discussing their spectroscopy, strong decays and production in
fragmentation and semileptonic decay, but by no means need this
exhaust the possibilities. Dedicated theoretical and experimental
study of these states will pay real physics dividends in the upcoming
$B$ Factory era.
\section*{Acknowledgements}
It is a great pleasure to thank the organizers of this Johns Hopkins
Workshop for a stimulating conference and their warm hospitality.
This work was supported by the National Science Foundation under
Grant No.~PHY-9404057 and National Young Investigator Award
No.~PHY-9457916; by the Department of Energy under Outstanding
Junior Investigator Award No.~DE-FG02-94ER40869; and by the Alfred
P.~Sloan Foundation.
\section*{References}
\section{Introduction}
The aim of this paper is to develop a systematic theory of non-abelian
Seiberg-Witten
equations. The equations we introduce and study are associated with a
$Spin^G(4)$-structure on a 4-manifold, where $G$ is a closed subgroup of the
unitary group
$U(V)$ containing the central involution $-{\rm id}_V$. We call these equations the
$G$-monopole
equations. For $G=S^1$, one recovers the classical (abelian) Seiberg-Witten
equations
[W], and
the case $G=Sp(1)$ corresponds to the ``quaternionic monopole equations''
introduced in [OT5].
Fixing the determinant of the connection component in the $U(2)$-monopole
equations, one
gets the so called $PU(2)$-monopole equations, which should be regarded
as a twisted version
of quaternionic monopole equations and will be extensively studied in the
second part of this
paper.
It is known ([OT4], [OT5], [PT2]) that the most natural way to prove the
equivalence between
Donaldson theory and Seiberg-Witten theory is to consider a suitable
moduli space of
non-abelian monopoles. In [OT5] it was shown that an $S^1$-quotient of
a moduli space of
quaternionic monopoles should give a homological equivalence between
a fibration over a
union of Seiberg-Witten moduli spaces and a fibration over certain
$Spin^c$-moduli spaces
[PT1].
By the same method, but using moduli spaces of $PU(2)$-monopoles
instead of quaternionic
monopoles, one should be able to express any Donaldson polynomial
invariant
in terms of Seiberg-Witten invariants associated with the
\underbar{twisted} abelian
monopole equations of [OT6].
The idea can be extended to get information about the
Donaldson theories associated with an arbitrary symmetry
group $G$, by relating the
corresponding polynomial invariants to Seiberg-Witten-type
invariants associated with smaller
symmetry groups. One has only to consider a suitable moduli
space of $G$-monopoles and to
notice that this moduli space contains distinguished closed
subspaces of ``reducible solutions''.
The reducible solutions with trivial spinor-component can
be identified with $G$-instantons,
and all the other reductions can be regarded as monopoles
associated with a smaller group.
It is important to point out that, if the base manifold is a
K\"ahler surface, one has
Kobayashi-Hitchin-type correspondences (see [D], [DK], [K],
[LT] for the instanton case) which
give a complex geometric description of the moduli spaces
of $SU(2)$, $U(2)$ or
$PU(2)$-monopoles (see section 2). The first two cases were
already studied in [OT5] and
[OT1]. In the algebraic case one can explicitly compute such
moduli spaces of non-abelian
monopoles and prove the existence of a projective
compactification. The points
corresponding to instantons and abelian monopoles can be
easily identified (see also [OST]).\\
The theory has interesting extensions to manifolds of other dimensions.
On Riemann surfaces
for instance, one can use moduli spaces of $PU(2)$-monopoles to reduce
the
computation of the volume or the Chern numbers of a moduli space of
semistable rank-2 bundles to computations on the symmetric powers of the base, which
occur in the moduli
space of $PU(2)$-monopoles as subspaces of abelian reductions. \\
The present paper is divided into two parts: The first deals with the
general theory of
$Spin^G$-structures and $G$-monopole equations. We give classification
theorems for
$Spin^G$-structures in principal bundles, and an explicit description of
the set of equivalence
classes in the cases $G=SU(2)$, $U(2)$, $PU(2)$. Afterwards we introduce
the $G$-monopole
equations in a natural way by coupling the Dirac harmonicity condition for
a pair formed by a
connection and a spinor, with the vanishing condition for a
generalized moment map. This first
part ends with a section dedicated to the concept of reducible
solutions of the
$G$-monopole equations. Describing the moduli spaces of
$G$-monopoles around the
reducible loci is the first step in order to express the
Donaldson
invariants associated with the symmetry group $G$ in terms
of Seiberg-Witten-type
reductions.\\
In the second part of the paper, we give a complex geometric
interpretation of the moduli
spaces of $PU(2)$-monopoles in terms of stable oriented pairs,
by proving a
Kobayashi-Hitchin type correspondence. Using this result, we describe
a simple
example of moduli space of $PU(2)$-monopoles on ${\Bbb P}^2$, which
illustrates in a concrete
case how our moduli spaces can be used to relate Donaldson and
Seiberg-Witten
invariants.
In order to be able to give general explicit formulas relating the
Donaldson polynomial
invariants to Seiberg-Witten invariants, it remains to construct
$S^1$-equivariant smooth perturbations of the moduli spaces of
$PU(2)$-monopoles, to
construct an Uhlenbeck compactification of the perturbed moduli spaces,
and finally to
give explicit descriptions of the ends of the (perturbed) moduli spaces.
The first two problems are treated in [T1], [T2]. Note that the proofs of the
corresponding
transversality results for other moduli spaces of non-abelian connections
coupled with
harmonic spinors ([PT1], [PT2]) are not complete ([T1]). The third problem, as
well as
generalizations to larger symmetry groups will be treated in a future paper.
I thank Prof. Christian Okonek for encouraging me to write this paper,
as well
as for the careful reading of the text and his valuable suggestions.
\section{G-Monopoles on 4-manifolds}
\subsection{The group $Spin^G$ and $Spin^G$-structures}
\subsubsection{$Spin^G$-structures in principal bundles}
Let $G\subset U(V)$ be a closed subgroup of the unitary group of
a Hermitian vector space $V$,
suppose that $G$ contains the central involution $-{\rm id}_V$, and denote
by ${\germ g}\subset u(V)$ the
Lie algebra of $G$. We put
$$Spin^G :=Spin \times_{{\Bbb Z}_2} G \ .
$$
By definition we get the following fundamental exact sequences:
$$\begin{array}{c}1\longrightarrow Spin \longrightarrow Spin^G \stackrel{\delta}\longrightarrow
\qmod{G}{{\Bbb Z}_2}\longrightarrow
1
\\
1\longrightarrow G\longrightarrow Spin^G \stackrel{\pi}{\longrightarrow } SO \longrightarrow 1
\\
1\longrightarrow {\Bbb Z}_2\longrightarrow Spin^G\textmap{(\pi,\delta)} SO\times
\qmod{G}{{\Bbb Z}_2}\longrightarrow 1
\end{array}\eqno{(*)}$$
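For the groups that play a role later in this paper, the construction specializes
as follows:
$$Spin^{S^1}=Spin^c\ ,\qquad Spin^{Sp(1)}=:Spin^h\ ,\qquad
Spin^{U(2)}=Spin\times_{{\Bbb Z}_2}U(2)\ .$$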
Note first that there are well defined morphisms
$${\rm ad}_G:Spin^G\longrightarrow O({\germ g})\ ,\ {\rm Ad}_{G}:Spin^G \longrightarrow {\rm Aut}(G)
$$
induced by the morphisms ${\rm ad}:G\longrightarrow O({\germ g})$ and ${\rm Ad}:G\longrightarrow
{\rm Aut}( G)$.
If $P^G$ is a principal $Spin^G$-bundle, we denote by ${\Bbb G}(P^G)$,
${\scriptscriptstyle|}\hskip
-4pt{{\germ g}}(P^G)$ the fibre bundles
$P^G\times_{{\rm Ad}_G}G$,
$P^G\times_{{\rm ad}_G}{\germ g}$. The group of sections
$${\cal G}(P^G):=\Gamma({\Bbb G}(P^G))
$$
in ${\Bbb G}(P^G)$ can be identified with the group of bundle-automorphisms
of $P^G$ over
the $SO$-bundle $P^G\times_\pi SO$. After a suitable Sobolev completion
${\cal
G}(P^G)$
becomes a Hilbert Lie group, whose Lie algebra is the corresponding Sobolev
completion
of
$\Gamma({\scriptscriptstyle|}\hskip -4pt{\g}(P^G))$.
We put also
$$ \delta(P^G):=P^G\times_{\delta} \left(\qmod{G}{{\Bbb Z}_2}\right)\ .$$
Note that ${\Bbb G}(P^G)$ can be identified with the bundle
$\delta(P^G)\times_{\bar{\rm Ad}} G$
associated with the $\qmod{G}{{\Bbb Z}_2}$-bundle $\delta(P^G)$.
Let $P$ be a principal $SO(n)$-bundle over a topological space $X$. A
\underbar{$Spin^G(n)$}-\underbar{structure} in $P$ is a bundle
morphism $P^G\longrightarrow P$ of
type $\pi$, where $P^G$ is a principal $Spin^G(n)$-bundle over $X$.
Equivalently, a
$Spin^G(n)$-structure in
$P$ can be regarded as a pair consisting of a
$Spin^G(n)$-bundle $P^G$ and an orientation preserving linear
isometry
$$\gamma:P\times_{SO(n)}{\Bbb R}^n\longrightarrow P^G\times_{\pi}{\Bbb R}^n $$
(called the \underbar{Clifford} \underbar{map} of the
structure).
Two $Spin^G$-structures $P^G_0\textmap{\sigma_0} P$,
$P_1^G\textmap{\sigma_1} P$ in $P$ are called \underbar{equivalent},
if the $Spin^G$
bundles
$P^G_0$, $P_1^G$ are isomorphic over $P$.
If $(X,g)$ is an oriented Riemannian $n$-manifold, a
$Spin^G(n)$-structure in $X$ is a $Spin^G(n)$-structure $P^G\longrightarrow P_g$ in
the bundle $P_g$ of oriented $g$-orthonormal coframes of $X$. This is
equivalent to
the data of a pair $(P^G,\gamma)$, where $P^G$ is a $Spin^G(n)$-bundle and
$\gamma:\Lambda^1_X\stackrel{\simeq}{\longrightarrow} P^G\times_\pi{\Bbb R}^n$ is a linear
orientation-preserving isometry. Here
$\Lambda^1_X$ stands for the cotangent bundle of $X$, endowed with the dual
$SO(n)$-structure.
\vspace{5mm}
Let $X$ be a fixed paracompact topological space. Note that there is a
natural map
$H^1(X,\underline {{G}/{{\Bbb Z}_2}}) {\longrightarrow} H^2(X,{\Bbb Z}_2)$, which we denote by $w$. If
$G=Spin(k)$,
$w$ coincides with the usual morphism $w_2$ defined on the set of
$SO(k)$-bundles.
By the
third exact sequence in
$(*)$ we get the following simple classification result
\begin{pr} The map $P^G\longmapsto
(P^G\times_\pi SO,\delta(P^G))$ defines a surjection of the set of
isomorphism classes of
$Spin^G$-bundles onto the set of isomorphism classes of pairs $(P,\Delta)$
consisting of an
$SO$-bundle and a $\qmod{G}{{\Bbb Z}_2}$-bundle satisfying $w_2(P)+w(\Delta)=0$.
Two
$Spin^G$-bundles have the same image if and only if they are congruent modulo
the natural
action of $H^1(X,{\Bbb Z}_2)$ in $H^1(X,\underline{Spin^G})$.
\end{pr}
{\bf Proof: } Indeed, the natural morphism
$H^1(X,\underline{SO_{}}\times\underline{G/{\Bbb Z}_2})
\longrightarrow
H^2(X,{\Bbb Z}_2)$ is given by $(P,\Delta)\longmapsto (w_2(P)+w(\Delta))$.
\hfill\vrule height6pt width6pt depth0pt \bigskip
\\
For instance, we have the following result
\begin{pr} Let $X$ be a 4-manifold. The group $H^1(X,{\Bbb Z}_2)$ acts trivially
on the
set of
(equivalence classes of) $Spin^c(4)$-bundles over $X$. Equivalence classes of
$Spin^c(4)$-bundles over $X$ are classified by pairs $(P,\Delta)$
consisting of
an
$SO(4)$-bundle $P$ and an
$S^1$-bundle
$\Delta$ with $w_2(P)+w_2(\Delta)=0$.
\end{pr}
{\bf Proof: } Using the identification (see [OT1], [OT3])
$$Spin^c(4)=\{(a,b)\in U(2)\times U(2)|\ \det a=\det b\}\ ,$$
we get an
exact sequence
$$1\longrightarrow Spin^c(4)\longrightarrow U(2)\times U(2)\longrightarrow S^1\longrightarrow 1\ .
$$
Using this, one can prove that, on 4-manifolds, the data of an
(equivalence class of)
$Spin^c(4)$-bundle
is equivalent
to the data of a pair of $U(2)$-bundles having isomorphic determinant line
bundles. The
action of $H^1(X,{\Bbb Z}_2)$ is given by tensoring with flat line bundles with
structure group
${\Bbb Z}_2$. The Chern class of such line bundles is 2-torsion, hence the assertion
follows
from the classification of unitary vector bundles on 4-manifolds in terms
of Chern
classes.
\hfill\vrule height6pt width6pt depth0pt \bigskip
The classification of the $Spin^G$-structures in a given $SO$-bundle $P$ is a
more delicate
problem.
\begin{pr} Fix a $Spin^G$-structure $\sigma:P^G\longrightarrow P$ in $P$. Then
the set of equivalence classes of $Spin^G$-structures in $P$ can be identified
with the
cohomology set $H^1(X,{\Bbb G}(P^G ))$ of the sheaf of sections in the bundle
${\Bbb G}(P^G )$.
\end{pr}
Recall that ${\Bbb G}(P^G)$ can be identified with the bundle $\delta(P^G)
\times_{\bar{\rm Ad}} G$
associated with the $\qmod{G}{{\Bbb Z}_2}$-bundle $\delta(P^G)$.
Therefore we get the exact sequence of bundles of groups
$$1\longrightarrow {\Bbb Z}_2\longrightarrow{\Bbb G}(P^G)\longrightarrow \delta(P^G)\times_{{\rm Ad}}
\left(\qmod{G}{{\Bbb Z}_2}\right)\longrightarrow 1\ .
$$
The third term coincides with the gauge group of automorphisms of
$\delta(P^G)$. The cohomology set
$H^1\left(X,\delta(P^G)\times_{\bar{\rm Ad}}
\left(\qmod{G}{{\Bbb Z}_2}\right)\right)$ of the
associated sheaf coincides with the pointed set of (equivalence classes of)
$\qmod{G}{{\Bbb Z}_2}$-bundles over
$X$ with distinguished element $\delta(P^G)$. This shows that
$\qmod{H^1(X,{\Bbb G}(P^G))}{H^1(X,{\Bbb Z}_2)}$ can be identified with the set of
$\qmod{G}{{\Bbb Z}_2}$-bundles $\Delta$ with $w(\Delta)=w(\delta(P^G))$.
Therefore
\begin{pr} The map
$$(\sigma:P^G\longrightarrow P) \longmapsto \delta(P^G)$$
is a surjection of the set of (equivalence classes of) $Spin^G$-structures
in $P$ onto the
set of
$\qmod{G}{{\Bbb Z}_2}$-bundles $\Delta$ satisfying $w(\Delta)+w_2(P)=0$.
Two
$Spin^G$-structures have the same image if and only if they are congruent
modulo the
natural action of
$H^1(X,{\Bbb Z}_2)$.
\end{pr}
Proposition 1.1.2 and the proposition below show that the
classification of
$Spin^G$-structures in the
$SO$-bundle $P$ is in general different from the classification
of $Spin^G$-bundles with
associated $SO$-bundle isomorphic to $P$.
\begin{pr}\hfill{\break}
1. If $G=S^1$ then $Spin^{S^1}=Spin^c$, ${\Bbb G}(P^G)=X\times S^1$,
hence the
set of $Spin^c$-structures in $P$ is an $H^1(X,\underline{S}^1)=
H^2(X,{\Bbb Z})$-torsor if it is
non-empty. The $H^1(X,{\Bbb Z}_2)$-action on the set of $Spin^c$-structures
in $P$ factorizes
through a ${\rm Tors}_2 H^2(X,{\Bbb Z})$-action, which is free and whose
orbits coincide with
the fibres of the determinant map $(\sigma:P^c\longrightarrow P)\longmapsto
\delta(P^c)$.\\
2. Suppose that $X$ is a 4-manifold, $P$ is an $SO$-bundle over $X$
and that $G$ is one of
the following:\\ a) $SU(r)$, $r\geq 2$;\\
b) $U(r)$, $r\geq 2$, $r$ even;\\
c) $Sp(r)$, $r\geq 1$.\\
Then $H^1(X,{\Bbb Z}_2)$ acts trivially on the set of $Spin^G$-structures in
$P$, hence the
classification of $Spin^G$-structures in $P$ reduces to the classification of
$\qmod{G}{{\Bbb Z}_2}$-bundles over $X$.
\end{pr}
{\bf Proof: } \\
1. The first assertion follows immediately from Propositions 1.1.3 and 1.1.4. \\
2. Let
$\sigma_i:P^G_i\longrightarrow P$, $i=0,\ 1$ be two $Spin^G$-structures in $P$.
We consider the
locally trivial bundle $Iso_P(P^G_1,P^G_0)$ whose fibre over $x\in X$ consists
of isomorphisms
$\rho_x:(P^G_1)_x\longrightarrow (P^G_0)_x$ of right $Spin^G$-spaces which make the
following
diagram commutative.
$$\begin{array}{rcl}
(P^G_1)_x&\stackrel{\rho_x}{\longrightarrow }&{(P^G_0)_x}_{\phantom{X_{X_{X_X}}} }\\
%
{\scriptstyle\sigma_{1x}}\searrow&&\swarrow{\scriptstyle
\sigma_{\scriptscriptstyle
0x}}\\
& P_x&
\end{array}
$$
$Iso_P(P^G_1,P^G_0)$ is a principal bundle in the sense of Grothendieck
with structure
group bundle ${\Bbb G}(P^G_0)$. The $Spin^G$-structures $\sigma_i$ are equivalent
if and only if
$Iso_P(P^G_1,P^G_0)$ admits a section. Consider first the case $G=SU(r)$
($r\geq 2$) or
$Sp(r)$ ($r\geq 1$). Since
$\pi_i\left([Iso_P(P^G_1,P^G_0)]_x\right)=0$ for $i\leq 2$ and
$\pi_3\left([Iso_P(P^G_1,P^G_0)]_x\right)$ can be canonically identified
with ${\Bbb Z}$,
the
obstruction $o(\sigma_1,\sigma_0)$ to the existence of such a section is an
element in
$H^4(X,{\Bbb Z})$. Assume now that $\sigma_1=\lambda\sigma_0$ for some
$\lambda\in
H^1(X,{\Bbb Z}_2)$ and let $p:\tilde X\longrightarrow X$ be the cover associated with
$\ker\lambda\subset\pi_1(X)$. It is easy to see that one has
$o(p^*(\sigma_1),p^*(\sigma_0))=p^*(o(\sigma_1,\sigma_0))$. But, since
$p^*(\lambda)=0$, we get $p^*(\sigma_1)=p^*(\sigma_0)$ hence
$o(p^*(\sigma_1),p^*(\sigma_0))=0$. Since $p^*:H^4(X,{\Bbb Z})\longrightarrow
H^4(\tilde X,{\Bbb Z})$ is injective for a 4-manifold $X$, the assertion follows
immediately.
Finally consider $G=U(r)$. When $r\geq 2$ is even, the determinant map
$U(r)\longrightarrow S^1$
induces a morphism $Spin^{U(r)}
\longrightarrow S^1$. If $\sigma_1=\lambda\sigma_0$, then there is a natural
identification
$P^G_1\times_{\det} S^1=P^G_0\times_{\det} S^1$, hence, denoting this
line bundle by $L$,
we get a subbundle
$Iso_{P,L}(P^G_1,P^G_0)$ of $Iso_P(P^G_1,P^G_0)$ consisting fibrewise of
isomorphisms
$(P^G_1)_x\longrightarrow (P^G_0)_x$ over $P_x\times L_x$. Since the standard fibre of
$Iso_{P,L}(P^G_1,P^G_0)$ is $SU(r)$, the same argument as above shows that
this bundle
admits a section, hence $\sigma_1$ and $\sigma_0$ are equivalent.
\hfill\vrule height6pt width6pt depth0pt \bigskip
\subsubsection{$Spin^G(4)$-structures on 4-manifolds and spinor bundles}
Let ${\Bbb H}_{\pm}$ be two copies of the quaternionic skewfield, regarded as right
quaternionic
vector spaces. The canonical left actions of $Sp(1)$ in ${\Bbb H}_{\pm}$ define an
orthogonal representation of the group
$$Spin(4)=Sp(1)\times Sp(1)=SU(2)\times SU(2)$$
in ${\Bbb H}\simeq {\rm Hom}_{{\Bbb H}}({\Bbb H}_+,{\Bbb H}_-)$, which gives the standard identification
$$\qmod{SU(2)\times SU(2)}{{\Bbb Z}_2}=SO({\Bbb H})=SO(4)$$
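Concretely, with the conventions above, a pair of unit quaternions
$(q_+,q_-)\in Sp(1)\times Sp(1)$ acts on $h\in{\Bbb H}\simeq
{\rm Hom}_{{\Bbb H}}({\Bbb H}_+,{\Bbb H}_-)$ by
$$(q_+,q_-)\cdot h=q_-\,h\,\bar q_+\ ,$$
and the kernel of this action is precisely $\{\pm({\rm id},{\rm id})\}$.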
Therefore, the group $Spin^G(4)=\qmod{SU(2)\times SU(2)\times G}{{\Bbb Z}_2}$
comes with 2
unitary representations
$$\lambda_\pm: Spin^G(4)\longrightarrow U({\Bbb H}_{\pm}\otimes_{\Bbb C} V)$$
obtained by coupling the natural representation of $G$ in $V$ with the
spinorial
representations $p_{\pm}:Spin(4)=SU(2)\times SU(2)\longrightarrow SU(2)$.
There are well defined adjoint morphisms
$$ {\rm ad}_{\pm}:Spin^G(4)\longrightarrow O(su(2))\ ,\ \ {\rm Ad}_{\pm}:Spin^G(4)\longrightarrow
{\rm Aut}(SU(2))$$
induced by the projections $p_{\pm}$ and the corresponding adjoint
representations
associated with the Lie group $SU(2)$. If $P^G$ is a $Spin^G(4)$-bundle,
we denote by
${\rm ad}_{\pm}(P^G)$, ${\rm Ad}_{\pm}(P^G)$ the corresponding bundles with
fibres $su(2)$,
$SU(2)$ associated with $P^G$.
The \underbar{spinor} \underbar{vector} \underbar{bundles} associated
with a
$Spin^G(4)$-bundle $P^G$ are defined by
$$\Sigma^{\pm}=\Sigma^{\pm}(P^G):=P^G\times_{\lambda_{\pm}}
({\Bbb H}_{\pm}\otimes_{\Bbb C}
V)\ .
$$
The bundles ${\rm ad}_{\pm}(P^G)$, ${\scriptscriptstyle|}\hskip -4pt{\g}(P^G)$ are real subbundles of the
endomorphism bundle
${\rm End}_{\Bbb C}(\Sigma^{\pm})$. The bundle ${\Bbb G}(P^G)$ acts fibrewise unitarily
in the
bundles $\Sigma^{\pm}$. On the other hand, the identification ${\Bbb H}\simeq
{\rm Hom}_{\Bbb H}({\Bbb H}_+,{\Bbb H}_-)$ defines a real embedding
\begin{equation}
P^G\times_\pi{\Bbb H}\stackrel{ }{\longrightarrow}{\rm Hom}_{{\Bbb G}(P^G)}(\Sigma^+,
\Sigma^-)\subset{\rm Hom}_{\Bbb C}(
\Sigma^+,\Sigma^-)
\end{equation}
of the $SO(4)$-vector bundle $P^G\times_\pi{\Bbb H}$ in the bundle
${\rm Hom}_{{\Bbb G}(P^G)}(\Sigma^+,\Sigma^-)$ of
${\Bbb C}$-linear morphisms $\Sigma^+\longrightarrow \Sigma^-$ which commute with the
${\Bbb G}(P^G)$-action.
The data of a $Spin^G(4)$-structure with principal bundle $P^G$ on an
oriented Riemannian 4-manifold $X$ is equivalent to the data of an
orientation-preserving
isomorphism $\Lambda^1_X\stackrel{\gamma}{\longrightarrow}P^G\times_\pi{\Bbb H}$,
which defines (via
the monomorphism in (1)) a
\underbar{Clifford}
\underbar{multiplication}
$(\Lambda^1\otimes{\Bbb C})\otimes \Sigma^+
\longrightarrow \Sigma^-$ commuting with the ${{\Bbb G}(P^G)}$ actions in $\Sigma^{\pm}$.
Moreover,
as in the classical $Spin^c(4)$ case [OT1], [OT3], we also have induced
identifications (which multiply the norms by 2)
$$\Gamma:\Lambda^2_{\pm}\longrightarrow {\rm ad}_{\pm}(P^G)\ .
$$
\subsubsection{Examples}
1. $Spin^c(4)$-structures:\\
The group $Spin^c(4):=Spin^{U(1)}(4)$ can be identified with the
subgroup
$$G_2:=\{(a,b)\in U(2)\times U(2)|\ \det a=\det b\}
$$
of $U(2)\times U(2)$. Via this identification, the map $\delta:Spin^c(4)\longrightarrow
S^1\simeq\qmod{S^1}{{\Bbb Z}_2}$ in the exact sequence
$$1\longrightarrow Spin(4)\longrightarrow Spin^c(4)\stackrel{\delta}{\longrightarrow} S^1\longrightarrow 1
$$
is given by the formula $\delta(a,b)=\det a=\det b$. The spinor bundles come
with
identifications
$$\det\Sigma^+\stackrel{\simeq}{\rightarrow}\det\Sigma^-
\stackrel{\simeq}{\rightarrow}
P^c\times_\delta{\Bbb C}\ .$$
The $SO(4)$-vector bundle $P^c\times_\pi {\Bbb H}$ associated with a
$Spin^c(4)$-bundle
$P^c$ can be identified with the bundle
${\Bbb R} SU(\Sigma^+\Sigma^-)\subset{\rm Hom}(\Sigma^+,\Sigma^-)$ of real
multiples of isometries of determinant 1.
Using these facts, it is easy to see that a $Spin^c(4)$-structure can be recovered
from the
data of the spinor bundles, the identification between the determinant line
bundles
and the
Clifford map. More precisely
\begin{pr} The data of a $Spin^c(4)$-structure in the
$SO(4)$-bundle $P$ over $X$ is equivalent to the data of a
triple consisting of:\\
i) A pair of $U(2)$-vector bundles $\Sigma^{\pm}$.\\
ii) A unitary isomorphism $\det\Sigma^+\stackrel{\iota}{\rightarrow}\det
\Sigma^-$.\\
iii) An orientation-preserving linear isometry
$$\gamma:P\times_{SO(4)}{\Bbb R}^4 \rightarrow{\Bbb R} SU(\Sigma^+,\Sigma^-)\ .$$
\end{pr}
{\bf Proof: } Given a triple $(\Sigma^{\pm},\iota,\gamma)$, we define $P^c$ to be the
manifold
over $X$
$$
\begin{array}{cl}
P^c:=\left\{ [x,(e_1^+,e_2^+),(e_1^-,e_2^-) ]|\right.& x\in X,\
(e_1^\pm,e_2^\pm)\ {\rm an\ orthonormal\ basis\ in\ }\Sigma^{\pm}_x,\\
&\left. \iota_*( e_1^+\wedge e_2^+)=e_1^-\wedge e_2^-\right\}\ .
\end{array}
$$
Every triple $[x,(e_1^+,e_2^+),(e_1^-,e_2^-)]\in P^c_x$ defines an
orthonormal orientation-compatible basis in ${\Bbb R} SU(\Sigma^+_x,\Sigma^-_x)$
which is
given with respect to the frames $(e_1^\pm,e_2^\pm)$ by the Pauli matrices.
Using
the isomorphism $\gamma$, we get a bundle morphism from $P^c$ onto the
orthonormal
oriented frame bundle of
$P\times_{SO(4)}{\Bbb R}^4$, which can be canonically identified with $P$.
\hfill\vrule height6pt width6pt depth0pt \bigskip
Let $P$ be a principal $SO(4)$-bundle, $P^c\stackrel{{\germ c}_0} \longrightarrow P$
a fixed
$Spin^c(4)$-structure in $P$, $\Sigma^{\pm}$ the associated spinor
bundles, and
$$\gamma_0:P\times_{SO(4)}{\Bbb R}^4\longrightarrow P^c\times_\pi{\Bbb H}=
{\Bbb R} SU(\Sigma^+,\Sigma^-)$$
the
corresponding Clifford map. For every $m\in H^2(X,{\Bbb Z})$ let $L_m$ be
a Hermitian line
bundle of Chern class
$m$. The fixed identification $\det \Sigma^+\textmap{\simeq}\det \Sigma^-$
induces
an identification $\det \Sigma^+\otimes L_m\textmap{\simeq}\det
\Sigma^-\otimes
L_m$, and the map
$$\gamma_m: P\times_{SO(4)}{\Bbb R}^4\longrightarrow {\Bbb R} SU(\Sigma^+\otimes L_m,\Sigma^-
\otimes L_m)\ ,\
\gamma_m(\eta):=\gamma_0(\eta)\otimes{\rm id}_{L_m}$$
is the Clifford map of a
$Spin^c(4)$-structure ${\germ c}_m$ in $P$ whose spinor bundles are $\Sigma^{\pm}
\otimes L_m$.
Using the results in the previous section (see also [H], [OT1], [OT6]) we
get
\begin{pr} \hfill{\break}
i) An $SO(4)$-bundle $P$ admits a $Spin^c(4)$-structure iff $w_2(P)$ admits
integral
lifts.\\
ii) The set of isomorphism classes of $Spin^c(4)$-structures in an
$SO(4)$-bundle $P$ is
either empty or is an $H^2(X,{\Bbb Z})$-torsor. If
$\gamma_0$ is a fixed
$Spin^c(4)$-structure in the
$SO(4)$-bundle
$P$, then the map
$m\longmapsto {\germ c}_m$ defines a bijection between $H^2(X,{\Bbb Z})$ and
the set of
(equivalence classes of) $Spin^c(4)$-structures in $P$. \\
%
iii) [HH] If $(X,g)$ is a compact oriented Riemannian 4-manifold,
then
$w_2(P_g)$ admits integral lifts. In particular any compact oriented
Riemannian
4-manifold admits $Spin^c(4)$-structures.\\
\end{pr}
2. $Spin^h(4)$-structures: \\
The quaternionic spin group is defined by $Spin^h:=Spin^{Sp(1)}$.
By the classification results 1.1.4 and 1.1.5 we get
\begin{pr} Let $P$ be an $SO$-bundle over a compact oriented 4-manifold $X$.
The map
$$\left[\sigma:P^h\longrightarrow P\right]\longmapsto [\delta(P^h)]$$
defines a 1-1 correspondence between the set of isomorphism classes of
$Spin^h$-structures in $P$ and the set of isomorphism classes of
$PU(2)$-bundles $\bar
P$ over $X$ with $w_2(\bar P)=w_2(P)$. The latter set can be
identified ([DK], p.41) with
$$\{p\in{\Bbb Z}|\ p\equiv w_2(P)^2\ {\rm mod}\ 4\}$$
via the Pontrjagin class-map.
\end{pr}
In dimension 4, the group $Spin^h(4)$ can be identified with the
quotient
$$\qmod{SU(2)\times SU(2) \times SU(2)}{\{\pm({\rm id},{\rm id},{\rm id})\}}\ ,$$
hence there is an exact sequence
\begin{equation}1\longrightarrow {\Bbb Z}_2\longrightarrow SU(2)\times SU(2) \times SU(2)\longrightarrow
Spin^h(4)\longrightarrow
1\ .
\end{equation}
Let $G_3$ be the group
$$G_3:=\{(a,b,c)\in U(2)\times U(2)\times U(2)|\ \det a=\det b=\det c\}
$$
We have an exact sequence
$$1\longrightarrow S^1\longrightarrow G_3\longrightarrow Spin^h(4)\longrightarrow 1$$
extending the exact sequence (2). If $X$ is any manifold, the induced
map
$H^1(X,\underline{Spin^h(4)})\longrightarrow H^2(X,\underline{\phantom{(}S^1})=
H^3(X,{\Bbb Z})$
factorizes as
$$H^1(X,\underline{Spin^h(4)})\stackrel{\pi}{\longrightarrow}
H^1(X,\underline{SO(4)})\stackrel{w_2}{\longrightarrow} H^2(X,{\Bbb Z}_2)\longrightarrow
H^2(X,S^1)\ .$$
Therefore a $Spin^h(4)$-bundle $P^h$ admits a $G_3$-reduction iff
the second
Stiefel-Whitney class $w_2(P^h\times_\pi SO(4))$ of the associated
$SO(4)$-bundle
admits an integral lift. On the other hand, the data of a $G_3$-structure
in a
$SO(4)$-bundle $P$ is equivalent to the data of a triple consisting of a
$Spin^c(4)$-structure
$P^c\longrightarrow P$ in $P$, a $U(2)$-bundle $E$, and an isomorphism
$$P^c\times_\delta{\Bbb C}\textmap{\simeq}\det E\ .$$
Therefore (see [OT5]),
\begin{pr} Let $P$ be a principal $SO(4)$-bundle whose second Stiefel-Whitney
class $w_2(P)$
admits an integral lift. There is a 1-1 correspondence between isomorphism
classes of
$Spin^h(4)$-structures in $P$ and equivalence classes of triples consisting
of a
$Spin^c(4)$-structure $P^c\longrightarrow P$ in $P$, a $U(2)$-bundle $E$, and an
isomorphism
$P^c\times_\delta {\Bbb C}\textmap{\simeq}\det E$.
Two triples are equivalent if,
after tensoring the first with an $S^1$-bundle, they become isomorphic
over $P$.
\end{pr}
Let us identify $\qmod{SU(2) \times SU(2)}{{\Bbb Z}_2}$ with $SO(4)=SO({\Bbb H})$ as
explained
above, and
denote by
$$\pi_{ij}:Spin^h\longrightarrow SO(4) \ \ \ 1\leq i<j\leq 3$$
the three epimorphisms associated with the three projections of the product
$SU(2)\times SU(2)\times SU(2)$ onto $SU(2)\times SU(2)$. Note that
$\pi_{12}=\pi$.
The spinor bundles
$\Sigma^{\pm}(P^h)$ associated with a principal $Spin^h(4)$-bundle $P^h$ are
%
$$\Sigma^+(P^h)=P^h\times_{\pi_{13}}{\Bbb C}^4\ ,\ \ \Sigma^-(P^h)=
P^h\times_{\pi_{23}}{\Bbb C}^4 $$
This shows in particular that the Hermitian 4-bundles $\Sigma^{\pm}(P^h)$
come with
\underbar{a} \underbar{real} \underbar{structure} and compatible
trivializations of
$\det(\Sigma^{\pm}(P^h))$.
Suppose now that the $Spin^h(4)$-bundle $P^h$ admits a $G_3$-lifting,
and consider
the associated triple
$(P^c,E,P^c\times_\delta{\Bbb C}\textmap{\simeq}\det E)$, and let
$\Sigma^{\pm}$ be the spinor bundles associated with $P^c$. The spinor bundles
$\Sigma^{\pm}(P^h)$ of
$P^h$ and the automorphism-bundle ${\Bbb G}(P^h)$ can be expressed
in terms of the
$G_3$-reduction as follows
$$\Sigma^{\pm}(P^h)=[\Sigma^{\pm}]^{\vee}\otimes E=
\Sigma^{\pm}\otimes E^{\vee}\ ,\ \
{\Bbb G}(P^h)=SU(E) \ .
$$
Moreover, the associated $PU(2)$-bundle $\delta(P^h)=
P^h\times_\delta PU(2)$ is naturally
isomorphic to the $S^1$-quotient of the unitary frame bundle $P_E$ of $E$.
\vspace{0.5cm}\\ \\
3. $Spin^{U(2)}$-structures: \\
Consider the $U(2)$ spin group
$$Spin^{U(2)}:=Spin\times_{{\Bbb Z}_2} U(2)\ ,$$
and let $p:U(2)\longrightarrow PU(2)$ be the canonical projection. The map
$$p\times\det: U(2)\longrightarrow PU(2)\times S^1$$
induces an isomorphism
$\qmod{U(2)}{\{\pm{\rm id}\}}=PU(2)\times S^1$. Therefore the map
$\delta:Spin^{U(2)}\longrightarrow
\qmod{U(2)}{\{\pm{\rm id}\}}$ can be written as a pair $(\bar\delta,\det)$
consisting of a
$PU(2)-$ and an
$S^1$-valued morphism. We have exact sequences
\begin{equation}
\begin{array}{c}1\longrightarrow Spin \longrightarrow Spin^{U(2)}\textmap{(\bar\delta,\det)}
PU(2)\times
S^1\longrightarrow 1 \\
\\
1\longrightarrow U(2)\longrightarrow Spin^{U(2)}\textmap {\pi} SO \longrightarrow 1 \\ \\
1\longrightarrow {\Bbb Z}_2\longrightarrow Spin^{U(2)} \textmap{(\pi,\bar\delta,\det)} SO
\times PU(2)\times S^1
\longrightarrow 1 \\ \\
1 \longrightarrow SU(2)\longrightarrow Spin^{U(2)}\textmap{(\pi,\det)} SO \times S^1 \longrightarrow 1 \ .
\end{array}
\end{equation}
Let $P^u\longrightarrow P $ be a $Spin^{U(2)}$-structure in a $SO$-bundle $P$ over $X$.
An important role will be played by the subbundles
$${\Bbb G}_0(P^u):=P^u\times_{{\rm Ad}_{U(2)}}
SU(2)\ ,\ \ {\scriptscriptstyle|}\hskip -4pt{\g}_0(P^u):=P^u\times_{{\rm Ad}_{U(2)}}su(2)$$
of ${\Bbb G}(P^u)=P^u\times_{{\rm Ad}_{U(2)}}U(2)$,
${\scriptscriptstyle|}\hskip -4pt{\g}(P^u):=P^u\times_{{\rm Ad}_{U(2)}} u(2)$ respectively. The group of sections
$${\cal G}_0(P^u):=\Gamma(X,{\Bbb G}_0(P^u))$$
in ${\Bbb G}_0(P^u)$ can be identified with the group of automorphisms of $P^u$ over
the $SO\times S^1$-bundle $P\times_X (P^u\times_{\det} S^1)$.
By Propositions 1.1.4, 1.1.5 we get
\begin{pr} Let $P$ be a principal $SO$-bundle, $\bar P$ a $PU(2)$-bundle, and
$L$ a Hermitian line bundle over $X$.\\
i) $P$ admits a $Spin^{U(2)}$-structure $P^u
\rightarrow P$ with
$$P^u\times_{\bar \delta}PU(2)\simeq\bar P\ ,\ \ P^u\times_{\det}{\Bbb C}\simeq L$$
iff $w_2(P)=w_2(\bar P)+\overline c_1(L)$, where $\overline c_1(L)$ is
the mod 2 reduction of $c_1(L)$.\\
ii) If the base $X$ is a compact oriented 4-manifold, then the
map
$$P^u\longmapsto \left([P^u\times_{\bar\delta} PU(2)]
,[P^u\times_{\det}{\Bbb C}]\right)$$
defines a 1-1 correspondence between the set of
isomorphism classes of
$Spin^{U(2)}$-struc\-tures in $P$ and the set of pairs of isomorphism
classes
$([\bar P],[L])$, where
$\bar P$ is a $PU(2)$-bundle and $L$ an $S^1$-bundle with $w_2(
P)=w_2(\bar P)+\overline c_1(L)$. The latter set can be identified with
$$\{(p,c)\in H^4(X,{\Bbb Z})\times H^2(X,{\Bbb Z}) |\ p\equiv (w_2(P)+ \bar c)^2\ {\rm
mod}\ 4\}
$$
\end{pr}
\hfill\vrule height6pt width6pt depth0pt \bigskip
The group $Spin^{U(2)}(4)=\qmod{SU(2) \times SU(2)
\times U(2)}{\{\pm({\rm id},{\rm id},{\rm id})\}}$
fits in the exact sequence
$$1\longrightarrow S^1 \longrightarrow \tilde G_3\longrightarrow Spin^{U(2)}(4)\longrightarrow 1\ ,$$
where
$$\tilde G_3:=\{(a,b,c)\in U(2)\times U(2)\times U(2)|\ \det a=\det b\}\ ,$$
and a $Spin^{U(2)}(4)$-bundle $P^u$ admits a $\tilde G_3$-reduction iff
$w_2(P^u\times_\pi SO(4))$ has integral lifts. Therefore, as in
Proposition 1.1.9, we get
\begin{pr} Let $P$ be an $SO(4)$-bundle whose second Stiefel-Whitney
class admits
integral lifts.
There is a 1-1 correspondence between isomorphism classes of
$Spin^{U(2)}$-structures in $P$ and equivalence classes of pairs
consisting of a $Spin^c(4)$-structure $P^c\longrightarrow P$ in
$P$ and a $U(2)$-bundle $E$. Two pairs are considered equivalent if,
after tensoring
the first one with a line bundle, they become isomorphic over $P$.
\end{pr}
Suppose that the $Spin^{U(2)}(4)$-bundle $P^u$ admits a $\tilde
G_3$-reduction, let
$(P^c,E)$ be the pair associated with this reduction, and let $\Sigma^{\pm}$
be the spinor
bundles associated with $P^c$. Then the associated bundles
$\Sigma^{\pm}(P^u)$,
$\bar\delta(P^u)$,
$\det(P^u)$, ${{\Bbb G}(P^u)}$, ${\Bbb G}_0(P^u)$
can be expressed in terms of the pair $(P^c,E)$ as follows:
$$\Sigma^{\pm}(P^u)=[\Sigma^{\pm}]^{\vee}\otimes
E=\Sigma^{\pm}\otimes (E^{\vee}\otimes[\det(P^u)]) \ ,\ \
\bar\delta(P^u)\simeq
\qmod{P_E}{S^1}\ , $$ $$ \det(P^u)\simeq \det (P^c)^{-1}\otimes (\det E)\ , \ \
{\Bbb G}(P^u)=U(E),\ {\scriptscriptstyle|}\hskip -4pt{\g}(P^u)=u(E)\ ,$$
$$ \ {\Bbb G}_0(P^u)=SU(E),\ {\scriptscriptstyle|}\hskip -4pt{\g}_0(P^u)=su(E)\ .
$$
\subsection{The G-monopole equations}
\subsubsection{Moment maps for families of complex structures}
Let $(M,g)$ be a Riemannian manifold, and ${\cal J}\subset
A^0(so(T_M))$ a family of complex structures on
$M$ with the property that $(M,g,J)$ is a K\"ahler manifold, for
every $J\in {\cal J}$. We
denote by $\omega_J$ the K\"ahler form of this K\"ahler manifold.
Let $G$ be a compact
Lie group acting on $M$ by isometries which are holomorphic with
respect to any $J\in{\cal
J}$. Let $U$ be a fixed subspace of $A^0(so(T_M))$ containing the
family ${\cal J}$, and suppose for simplicity that $U$ is finite dimensional. We
define the \underbar{total} \underbar{K\"ahler} \underbar{form}
$\omega_{\cal J}\in
A^2(U^{\vee})$ by the formula
$$\langle\omega_{\cal J},u\rangle=g(u(\cdot),\cdot)\ .$$
\begin{dt} Suppose that the total K\"ahler form is closed and
$G$-invariant.
A map $\mu:M \longrightarrow {\rm Hom}({\germ g}, U^{\vee})$ will
be called a ${\cal
J}$-moment map for the
$G$-action in
$M$ if the following two identities hold: \\
1. $\mu(ag)=({\rm ad}_g\otimes{\rm id}_{U^{\vee}})(\mu(a))$ $\forall\ a\in M,\ g\in G$.\\
2. $d(\langle\mu,{\alpha}\rangle)=\iota_{\alpha^{\#}}\omega_{\cal J}$ in
$A^1(U^{\vee})$
$\forall\
\alpha\in {\germ g}$, where $\alpha^{\#}$ denotes the vector field associated
with $\alpha$.
\end{dt}
In many cases ${\germ g}$ comes with a natural ${\rm ad}$-invariant euclidean metric.
A map $\mu:M
\longrightarrow {\germ g}\otimes U$ will be called also a moment map if its composition
with the
morphism ${\germ g}\otimes U\longrightarrow {\germ g}^{\vee}\otimes U^{\vee}$ defined by the
euclidean structures
in ${\germ g}$ and $U$ is a moment map in the above sense. Similarly, the total
K\"ahler form can be
regarded (at least in the finite dimensional case) as an element in
$\omega_{\cal J}\in
A^2 (U)$.
Note that if $\mu$ is a moment map with respect to ${\cal J}$, then
for every
$J\in{\cal J}$ the map $\mu_J:=\langle \mu,J\rangle:M\longrightarrow{\germ g}^{\vee}$ is a
moment map for
the $G$-action in $M$ with respect to the symplectic structure
$\omega_J$.
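Indeed, pairing the identity 2. in the definition with a fixed $J\in{\cal J}$ gives
$$d\langle\mu_J,\alpha\rangle=\langle d\langle\mu,\alpha\rangle,J\rangle=
\langle\iota_{\alpha^{\#}}\omega_{\cal J},J\rangle=\iota_{\alpha^{\#}}\omega_J\ ,$$
while the equivariance of $\mu_J$ follows directly from that of $\mu$.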
\
\begin{re} Suppose
that the total K\"ahler form $\omega_{\cal J}$ is $G$-invariant and closed.
Let $\mu:M\longrightarrow
{\rm Hom}({\germ g}, U^{\vee})$ be a ${\cal J}$-moment map for a
free $G$-action and suppose that $\mu$ is a submersion at every point in
$\mu^{-1}(0)$.
Then
$\omega_{\cal J}$ descends to a closed
$U^{\vee}$-valued 2-form on the quotient
manifold $\qmod{\mu^{-1}(0)}{G}$. In particular,
in this case, all the 2-forms
$\omega_J$ descend to closed 2-forms on this quotient, but they may be
degenerate.
\end{re}
\vspace{3mm}
{\bf Examples:}\hfill{\break}\\
1. Hyperk\"ahler manifolds: \\
Let $(M,g,(J_1,J_2,J_3))$ be a hyperk\"ahler manifold [HKLR]. The
three complex structures
$J_1,J_2,J_3$ span a sub-Lie algebra $U\subset A^0(so(T_M))$
naturally isomorphic to
$su(2)$. Suppose for simplicity that $Vol(M)=1$. The sphere $S(U,\sqrt
2)\subset U$ of
radius
$\sqrt 2$ contains the three complex structures and for any
$J\in S(U,\sqrt 2)$ we get a K\"ahler manifold $(M,g,J)$. Suppose that
$G$ acts on $M$ preserving the hyperk\"ahler structure. A hyperk\"ahler
moment map $\mu:M\longrightarrow
{\germ g}\otimes su(2)$ in the sense of [HKLR] can be regarded as a moment
map with respect to the family $S(U,\sqrt 2)$ in the sense above. If the
assumptions in the Remark above are fulfilled, then the forms
$(\omega_J)_{J\in S(U,\sqrt 2)}$ descend to
\underbar{symplectic} forms on the quotient $\qmod{\mu^{-1}(0)}{G}$,
which can be endowed with a natural hyperk\"ahler structure in this
way [HKLR].\\ \\
2. Linear hyperk\"ahler spaces:\\
Let $G$ be a compact Lie group and
$G\subset U(W)$ a unitary representation of
$G$. A moment map for the
$G$-action on $W$ is given by
$$\mu_G(w)=-{\rm Pr}_{\germ g}\left(\frac{i}{2}(w\otimes\bar w)\right)
$$
where ${\rm Pr}_{\germ g}:u(W)\longrightarrow {\germ g}={\germ g}^{\vee}$ is the orthogonal
projection, i.e. the adjoint of the inclusion ${\germ g}\hookrightarrow
u(W)$. Any other moment map can be obtained by adding a constant central
element in ${\germ g}$.
In the special case of the standard left action of $SU(2)$ in ${\Bbb C}^2$, we
denote by $\mu_0$
the associated moment map. This is given by
$$\mu_0(x)=-\frac{i}{2}(x\otimes\bar x)_0 \ ,
$$
where $(x\otimes\bar x)_0$ denotes the trace-free component of the Hermitian
endomorphism $x\otimes\bar x$.
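In explicit terms, writing $x=(x_1,x_2)\in{\Bbb C}^2$, one gets
$$\mu_0(x)=-\frac{i}{2}\left(\matrix{\frac{1}{2}(|x_1|^2-|x_2|^2)&x_1\bar x_2\cr
x_2\bar x_1&\frac{1}{2}(|x_2|^2-|x_1|^2)}\right)\in su(2)\ ,$$
which is the quadratic term familiar from the abelian Seiberg-Witten equations.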
Consider now the scalar extension
$M:={\Bbb H}\otimes_{{\Bbb C}} W$. Left multiplications by quaternionic units define a
$G$-invariant
hyperk\"ahler structure in $M$. The corresponding family of complex
structures is
parametrized by the radius $\sqrt 2$-sphere $S$ in the space of imaginary
quaternions
identified with $su(2)$.
Define the quadratic map $\mu_{0,G}:{\Bbb H}\otimes_{\Bbb C} W\longrightarrow su(2)\otimes {\germ g}$ by
$$\mu_{0,G}(\Psi)={\rm Pr}_{[su(2)\otimes {\germ g}]}(\Psi\otimes\bar\Psi) \ .
$$
It acts on tensor monomials by
$$x\otimes w\stackrel{\mu_{0,G}}{\longmapsto} -4\mu_0(x)\otimes\mu_G(w)\in
su(2)\otimes {\germ g}\subset {\rm Herm}({\Bbb H}\otimes_{\Bbb C} W)\ .
$$
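The factor $-4$ compensates the signs and the factors $\frac{i}{2}$ in the two abelian moment maps: one computes
$$-4\mu_0(x)\otimes\mu_G(w)=-4\cdot\frac{i^2}{4}\,(x\otimes\bar x)_0\otimes
{\rm Pr}_{\germ g}(w\otimes\bar w)=(x\otimes\bar x)_0\otimes{\rm Pr}_{\germ g}(w\otimes\bar w)\ ,$$
which is the $[su(2)\otimes{\germ g}]$-component of $(x\otimes w)\otimes\overline{(x\otimes w)}$.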
It is easy to see that $-\frac{1}{2}\mu_{0,G}$ is a moment map for the
$G$ action in $M$ with respect to the linear hyperk\"ahler structure in $M$
introduced above.
\\ \\
3. Spaces of spinors:\\
Let $P^G$ be a $Spin^G(4)$-bundle over a compact Riemannian manifold $(X,g)$.
The corresponding spinor bundles $\Sigma^{\pm}(P^G)$ have
${\Bbb H}_{\pm}\otimes_{\Bbb C} V$ as standard fibres. Any section $J\in
\Gamma(X,S({\rm ad}_{\pm}(P^G),\sqrt 2))$ in the radius $\sqrt 2$-sphere bundle
associated to
$ad_{\pm}(P^G)$ gives a complex (and hence a K\"ahler) structure in
$A^0(\Sigma^{\pm}(P^G))$.
Therefore (after suitable Sobolev completions)
the space of
sections $$\Gamma(X,S({\rm ad}_{\pm}(P^G),\sqrt 2))$$ can be regarded as a family
of K\"ahler
structures in the space of sections $A^0(\Sigma^{\pm}(P^G))$ endowed with
the standard
$L^2$-Euclidean metric. Define a quadratic map $\mu_{0,{\cal
G}}:A^0(\Sigma^{\pm}(P^G))\longrightarrow A^0(ad_{\pm}(P^G)\otimes {\scriptscriptstyle|}\hskip -4pt{\g})$ by sending an
element
$\Psi\in A^0(\Sigma^{\pm}(P^G))$ to the section in
$ad_{\pm}(P^G)\otimes{\scriptscriptstyle|}\hskip -4pt{\g}$ given by
the fibrewise projection of $\Psi\otimes\bar\Psi\in
A^0({\rm Herm}(\Sigma^{\pm}(P^G)))$.
Then $-\frac{1}{2}\mu_{0,{\cal G}}:A^0(\Sigma^{\pm}(P^G))\longrightarrow
A^0({\rm ad}_{\pm}(P^G)\otimes
{\scriptscriptstyle|}\hskip -4pt{\g})\subset{\rm Hom}(A^0({\scriptscriptstyle|}\hskip -4pt{\g}), A^0({\rm ad}_{\pm}(P^G))^{\vee})$ can be
regarded as a $\Gamma(X,S({\rm ad}_{\pm}(P^G),\sqrt 2))$-moment map for the
natural action of the
gauge group ${\cal G}$. \\
\\
4. Spaces of connections on a 4-manifold:\\
Let $(X,g)$ be a compact oriented Riemannian 4-manifold, $G\subset U(r)$ a
compact Lie
group, and $P$ a principal $G$-bundle over $X$. The space of connections
${\cal A}(P)$ is an
euclidean affine space modelled on $A^1({\rm ad}(P))$, and the gauge group ${\cal
G}:=\Gamma(X,P\times_{Ad}G)$ acts from the left by
$L^2$-isometries. The space of
almost complex structures in $X$ compatible with the metric and the
orientation can be
identified with the space of sections in the sphere bundle
$S(\Lambda^2_+,\sqrt 2)$ under the map which associates to an almost
complex structure
$J$ the K\"ahler form
$\omega_J:=g(\cdot,J(\cdot))$ [AHS]. On the other hand any almost complex
structure
$J\in \Gamma(X,S(\Lambda^2_+,\sqrt 2))$ induces a gauge invariant
\underbar{integrable} complex structure in the affine space ${\cal A}(P)$
by identifying
$A^1({\rm ad}(P))$ with $A^{01}_J({\rm ad}(P)^{{\Bbb C}})$.
The total K\"ahler form of this family is the element
$\Omega\in A^2_{{\cal A}(P)}(A^2_{+,X})$ given by
$$\Omega(\alpha,\beta) = {\rm Tr}(\alpha\wedge\beta)^+ \ ,
$$
where $\alpha$, $\beta\in A^1({\rm ad}(P))$.
Consider the map $ F^+ :{\cal A}(P)\longrightarrow
A^0({\rm ad}(P)\otimes \Lambda^2_{X,+})\subset {\rm Hom}(A^0({\rm ad}(P)),(A^2_+)^{\vee})$
given by
$A\longmapsto F_A^+$.
It satisfies the equivariance property 1. in Definition 1.2.1. Moreover,
for every
$A\in{\cal A}(P)$,
$\alpha\in A^1({\rm ad}(P))=T_A({\cal A}(P))$,
$\varphi\in A^0({\rm ad}(P))=Lie({\cal G})$ and $\omega\in A^2_+$ we have
(denoting by $\delta$
the exterior derivative on ${\cal A}(P)$)
$$\left\langle(\iota_{\varphi^{\#}}\Omega)(\alpha)-
\langle\delta_A(F^+)
(\alpha),\varphi\rangle,\omega\right\rangle
=\langle
d^+[{\rm Tr}(\varphi\wedge\alpha)],\omega\rangle=
\int_X{\rm Tr}(\varphi\wedge\alpha)\wedge
d\omega \ .
$$
This formula means that the second condition in Definition 1.2.1 holds
up to 1-forms on ${\cal A}(P)$ with values in the subspace ${\rm im}[d^+: A^1_X
\rightarrow
A^2_{X,+} ]$. Let $\bar\Omega$ be the image of $\Omega$ in $A^2_{{\cal
A}(P)}\left[\qmod{A^2_{X,+}}{{\rm im}(d^+)}\right]$. Putting
$${\cal A}^{ASD}_{reg}=\{A\in{\cal A}(P)|\ F_A^+=0,\ {\cal G}_A=Z(G),\
H^0_A=H^2_A=0\}$$
we see that $\bar\Omega$ descends to a closed
$\left[\qmod{A^2_{X,+}}{{\rm im}(d^+)}\right]$-valued 2-form
$[\bar\Omega]$ on the moduli space of regular anti-selfdual connections
${\cal M}^{ASD}_{reg}:=\qmod{{\cal A}^{ASD}_{reg}}{{\cal G}}$. Thus we
may consider the
map
$F^+$ as a $\Gamma(X,S(\Lambda^2_+,\sqrt 2))$-moment map modulo $d^+$-exact
forms for the action of the gauge group on ${\cal A}(P) $.
Note that in the case $G=SU(2)$ taking $L^2$-scalar product of
$\frac{1}{8\pi^2}[\bar\Omega]$ with a harmonic selfdual form $\omega\in
{\Bbb H}^2_+$ defines a de Rham representant of Donaldson's
$\mu$-class associated with the Poincar\'e dual of $[\omega]$.
The following simple consequence of the above observations can be regarded
as the
starting point of Seiberg-Witten theory.
\begin{re} The data of a $Spin^G(4)$-structure in the Riemannian manifold
$(X,g)$ gives an
isometric isomorphism $\frac{1}{2}\Gamma:\Lambda^2_{+}\longrightarrow {\rm ad}_{+}(P^G)$. In
particular we get an identification between the two families
$\Gamma(X,S({\rm ad}_{+}(P^G),\sqrt 2))$ and $\Gamma(X,S(\Lambda^2_{+},\sqrt 2))$ of
complex structures in $A^0(\Sigma^{+}(P^G))$ and ${\cal A}(\delta(P^G))$
studied
before. Consider the action of the gauge group ${\cal G}:=\Gamma(X,{\Bbb G})$ on
the product ${\cal A}(\delta(P^G))\times A^0(\Sigma^{+}(P^G))$ given by
$$[(A,\Psi),f]\longmapsto (\delta(f)(A),f (\Psi))\ .
$$
This action admits a (generalized)
moment map modulo
$d^+$-exact forms (with respect to the family
$\Gamma(X,S({\rm ad}_{+}(P^G),\sqrt 2))$) which
is given by the formula
$$(A,\Psi)\longmapsto F_A^{+}-\Gamma^{-1}(\mu_{0,{\cal G}}(\Psi)) \ .
$$
\end{re}
\subsubsection{Dirac harmonicity and the $G$-monopole equations}
Let $P^G$ be a $Spin^G$-bundle. Using the third exact sequence in $(*)$
sect. 1.1, we see
that the data of a connection in $P^G$ is equivalent to the data of a pair
consisting of a
connection in
$P^G\times_\pi SO$, and a connection in $\delta(P^G)$. In particular, if
$P^G\longrightarrow P_g$ is
a $Spin^G(n)$-structure in the frame bundle of an oriented Riemannian
$n$-manifold $X$,
then the data of a connection $A$ in $\delta(P^G)$ is equivalent to the
data of a
connection
$B_A$ in $P^G$ lifting the Levi-Civita connection in $P_g$. Suppose now
that $n=4$, and
denote as usual by $\gamma: \Lambda^1\longrightarrow {\rm Hom}_{\Bbb G}(\Sigma^+(P^G),
\Sigma^-(P^G))$
the Clifford map of a fixed
$Spin^G(4)$-structure $P^G\stackrel{\sigma}\longrightarrow P_g$, and by
$\Gamma:\Lambda^2_{\pm}\longrightarrow
{\rm ad}_{\pm}(P^G)$ the induced isomorphisms. We define the Dirac
operators
${\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_A^{\pm}$ associated with $A\in{\cal A}(\delta(P^G))$ as the
composition
$$A^0(\Sigma^{\pm}(P^G))\textmap{\nabla_{{B_A}}}
A^1(\Sigma^{\pm}(P^G))\textmap{\cdot\gamma} A^0(\Sigma^{\mp}(P^G)) \ .
$$
We put also
$$\Sigma(P^G):=\Sigma^+(P^G)\oplus \Sigma^-(P^G)\ ,\ \
{\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_A:={\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_A^+\oplus{\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_A^-:A^0(\Sigma(P^G))\longrightarrow A^0(\Sigma(P^G))\ .$$
Note that ${\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_A$ is a self-adjoint first order elliptic operator.
\begin{dt} A pair $(A,\Psi)\in{\cal A}(P^G)\times A^0(\Sigma(P^G))$
will be called
(Dirac) harmonic if ${\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_A\Psi=0$.
\end{dt}
The harmonicity condition is obviously invariant with respect to the
gauge group ${\cal
G}(P^G):=\Gamma(X,{\Bbb G}(P^G))$. The monopole equations associated to
$\sigma$ couple the
two gauge invariant equations we introduced above: the vanishing of
the ``moment map''
(cf. 1.4.1) of the gauge action with respect to the family of complex
structures
$\Gamma(X,S({\rm ad}_+(P^G),\sqrt 2))$ in the affine space
${\cal A}(\delta(P^G))\times A^0(\Sigma^+(P^G))$ and the Dirac
harmonicity.
\begin{dt} Let $P^G\textmap{\sigma} P$ be a $Spin^G(4)$-structure on
the compact
oriented Riemannian 4-manifold $X$.
The associated Seiberg-Witten equations for a pair
$(A,\Psi)\in {\cal A}(\delta(P^G)) \times A^0(\Sigma^+(P^G))$ are
$$\left\{\begin{array}{ccc}
{\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_A\Psi&=&0\\
\Gamma(F_A^+)&=&\mu_{0,{\cal G}}(\Psi)
\end{array}\right. \eqno{(SW^\sigma)}$$
\end{dt}
The solutions of these equations modulo the gauge group will be
called $G$-monopoles.
The case $G=S^1$ corresponds to the classical (abelian)
Seiberg-Witten theory. The case
$G=SU(2)$ was extensively studied in [OT5], and from a physical
point of view in [LM].
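In the abelian case the quadratic term can be made explicit: for $G=S^1$ the
second equation takes the familiar form
$$\Gamma(F_A^+)=(\Psi\bar\Psi)_0\ ,$$
where $(\Psi\bar\Psi)_0$ denotes the trace-free part of the Hermitian endomorphism $\Psi\otimes\bar\Psi$ of $\Sigma^+(P^G)$.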
\begin{re} If the Lie algebra ${\germ g}$ of $G$ has non-trivial center $z({\germ g})$,
then the moment
map of the gauge action in $A^0(\Sigma^+(P^G))$ is not unique. In this
case it is
more natural to consider the family of equations
$$\left\{\begin{array}{ccc}
{\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_A\Psi&=&0\\
\Gamma(F_A^+)&=&\mu_{0,{\cal G}}(\Psi)+\beta \ ,
\end{array}\right. \eqno{(SW^\sigma_\beta)}$$
obtained by adding in the second equation a
section
$$\beta\in A^0({\rm ad}_+(P^G)\otimes z({\germ g}))\simeq A^2_+(X,z({\germ g}))\ .$$
\end{re}
In the case $G=S^1$ the equations of this form are called \underbar{twisted}
\underbar{monopole} equations [OT6]. If
$b_+(X)=1$, the invariants defined using moduli spaces of twisted
monopoles depend in an
essential way on the twisting term $\beta$ ([LL], [OT6]).
The particular case $G=U(2)$ requires a separate discussion, since
in this case
$\delta(U(2))\simeq PU(2)\times S^1$ and, correspondingly, the bundle
$\delta(P^u)$
associated with a $Spin^{U(2)}(4)$-structure $P^u\textmap{\sigma} P_g$
splits as the
product
$$\delta(P^u)=\bar\delta(P^u)\times_X\det(P^u)$$
of a $PU(2)$-bundle with a $U(1)$-bundle. The data
of a connection in $P^u$ lifting the Levi-Civita connection in $P_g$ is
equivalent to the data
of a pair
$A=(\bar A,a)$ formed by a connection $\bar A$ in $\bar\delta(P^u)$ and a
connection $a$ in
$\det(P^u)$. An alternative approach regards the connection
$a\in {\cal A}(\det(P^u))$ as a
parameter (not an unknown !) of the equations, and studies the
corresponding monopole
equations for a pair $(\bar A,\Psi)\in {\cal A}(\bar\delta(P^u))\times
A^0(\Sigma^+)$.
$$\left\{\begin{array}{ccc}
{\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_{\bar A,a}\Psi&=&0\\
\Gamma(F_{\bar{A}}^+)&=&\mu_{0,0}(\Psi)
\end{array}\right. \eqno{(SW^\sigma_a)}$$
Here ${\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_{\bar A,a}$ denotes the Dirac operator associated to the
connection in $P^u$
which lifts the Levi-Civita connection in $P_g$, the connection $\bar A$
in the
$PU(2)$-bundle
$\bar\delta(P^u)$ and the connection $a$ in the $S^1$-bundle $\det P^u$;
the quadratic
map $\mu_{0,0}$ sends a spinor $\Psi\in A^0(\Sigma^+(P^u))$ to the
projection of the
endomorphism
$(\Psi\otimes\bar\Psi)\in A^0({\rm Herm}(\Sigma^+(P^u)))$ on
$A^0({\rm ad}_+(P^u)\otimes{\scriptscriptstyle|}\hskip -4pt{\g}_0(P^u))$.
The natural gauge group
leaving the equations invariant is the group ${\cal G}_0(P^u):=
\Gamma(X,{\Bbb G}_0(P^u))$ of
automorphisms of the bundle $P^u$ over the bundle-product $P_g\times_X
\det(P^u)$, and
$-\mu_{0,0}$ is the
$\Gamma(X,S({\rm ad}_+,\sqrt 2))$-moment map for the ${\cal G}_0(P^u)$-action in the
configuration space. There is no ambiguity in choosing the moment map of the
${\cal G}_0(P^u)$-action, so there is \underbar{no} natural way to
perturb these equations besides varying the connection-parameter $a\in{\cal
A}(\det(P^u))$.
Since the connection-component of the unknown is a
$PU(2)$-connection, these equations will be called the
$PU(2)$-\underbar{monopole} \underbar{equations}, and its solutions
modulo the
gauge group ${\cal G}_0(P^u)$ will be called
$PU(2)$-monopoles.
Note that if the $Spin^{U(2)}(4)$-structure $P^u\longrightarrow P_g$ is associated
with the pair $(P^c
\longrightarrow P_g,E)$ (Proposition 1.1.11), the quadratic map $\mu_{0,0}$ sends
a spinor $\Psi\in
A^0\left(\Sigma^+(P^c)\otimes [E^{\vee}\otimes\det (P^u)]\right)$ to the
projection of
$$(\Psi\otimes\bar\Psi)\in
A^0\left({\rm Herm}\left(\Sigma^+(P^c)
\otimes[E^{\vee}\otimes\det (P^u)]\right)\right)$$
on
$A^0\left(su(\Sigma^+)\otimes su([E^{\vee}
\otimes\det (P^u)])\right)$.
\begin{re} The data of a $Spin^h(4)$-structure in $X$ is equivalent to
the data of a
$Spin^{U(2)}$-structure $P^u\textmap{\sigma} P$ together with a
trivialization of the
$S^1$-bundle $\det(P^u)$. The corresponding $SU(2)$-Seiberg-Witten
equations coincide with
the $PU(2)$-equations $SW^\sigma_\theta$ associated with the trivial
connection
$\theta$ in the trivial bundle $\det(P^u)$.
\end{re}
We shall always regard the $SU(2)$-monopole equations as
special $PU(2)$-monopole
equations. In particular we shall use the notation
$\mu_{0,{\cal G}}=\mu_{0,0}$ if ${\cal
G}$ is the gauge group associated with a $Spin^h(4)$-structure.
\begin{re} The moduli space of
$PU(2)$-monopoles of the form $(\bar A,0)$ can be identified
with a moduli space of
anti-selfdual
$PU(2)$-connections, modulo the gauge group ${\cal G}_0$. The
natural morphism of
${\cal G}_0$ into the usual $PU(2)$-gauge group of
bundle automorphisms of $\bar\delta(P^u)$ is a local isomorphism
but in general it is
not surjective (see [LT] ). Therefore the space of $PU(2)$-monopoles
of the form $(\bar
A,0)$ is a finite cover of the corresponding Donaldson moduli space of
$PU(2)$-instantons.
\end{re}
\begin{re} Let $G$ be a compact Lie group endowed with a central involution
$\iota$ and an
arbitrary unitary representation $\rho:G\longrightarrow U(V)$ with
$\rho(\iota)=-{\rm id}_V$. One can
associate to any $Spin^G(4)$-bundle the spinor bundles $\Sigma^{\pm}$ of
standard fibre
${\Bbb H}_{\pm}\otimes V$. Endow the Lie algebra
${\germ g}$ with an
${\rm ad}$-invariant metric. Then one can define
$\mu_{0,G}$ using the adjoint of the map ${\germ g}\longrightarrow u(V)$ instead of the
orthogonal projection, and
the
$G$-monopole equations make sense in this more general framework.
\end{re}
\subsubsection{Reductions}
Let $H\subset G\subset U(V)$ be a closed subgroup of $G$ with $-{\rm id}_V\in H$.
Let $P^G\textmap{\sigma} P$ be a $Spin^G $-structure in the $SO$-bundle
$P$.
\begin{dt} A $Spin^H$-reduction of $\sigma$ is a subbundle $P^H$ of
$P^G$ with structure group $Spin^H\subset Spin^G$.
\end{dt}
Note that such a reduction $P^H\hookrightarrow P^G$ defines a reduction
$\delta(P^H)\hookrightarrow\delta(P^G)$ of the structure group of the
bundle $\delta(P^G)$
from $\qmod{G}{{\Bbb Z}_2}$ to
$\qmod{H}{{\Bbb Z}_2}$, hence it defines in particular an injective linear
morphism ${\cal
A}(\delta(P^H))\hookrightarrow{\cal A}(\delta(P^G))$ between the associated
affine spaces
of connections.
Let now $V_0$ be an $H$-invariant subspace of $V$.
Consider a $Spin^G(4)$-structure $P^G\textmap{\sigma} P$ in the
$SO(4)$-bundle $P$, and
a $Spin^H(4)$-reduction $P^H\stackrel{\rho}{\hookrightarrow} P^G$ of
$\sigma$. Let
$\Sigma^{\pm}(P^H,V_0)$ be the spinor bundles associated with $P^H$ and the
$Spin^H(4)$-representation in
${\Bbb H}^{\pm}\otimes_{\Bbb C} V_0$.
The inclusion $V_0\subset V$ induces bundle inclusions of the associated
spinor bundles
$\Sigma^{\pm}(P^H,V_0)\hookrightarrow \Sigma^{\pm}(P^G)$. Suppose now that
$P_g$ is the frame-bundle of a compact oriented Riemannian 4-manifold, choose
$A\in{\cal
A}(\delta(P^H))\subset {\cal A}(\delta(P^G))$, and let $B_A$ be the
induced connection in $P^G$. Then the spinor bundles $\Sigma^{\pm}(P^H,V_0)$ become
$B_A$-parallel subbundles of
$\Sigma^{\pm}(P^G)$, and the Dirac operator
$${\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_A:\Sigma(P^G)\longrightarrow \Sigma(P^G)$$
maps $\Sigma(P^H,V_0)$ into itself. Therefore the set of Dirac-harmonic pairs
associated with
$(\sigma\circ\rho,V_0)$ can be identified with a subset of the set of
Dirac-harmonic
pairs associated with $(\sigma,V)$.
The group $G$ acts on the set
$$\{(H,V_0)|\ H\subset G\ {\rm closed\ subgroup},\ V_0\subset V\ {\rm is}\
H-{\rm
invariant}\}
$$
of subpairs of $(G,V)$ by $[g,(H,V_0)]\longmapsto( {\rm Ad}_g(H),g(V_0))$.
Moreover, for any
$Spin^H(4)$-reduction
$P^H\hookrightarrow P^G$ of
$\sigma$ and any element $g\in G$ we get a reduction
$P^{{\rm Ad}_g(H)}\hookrightarrow P^G$ of
$\sigma$ and subbundles $\Sigma^{\pm}(P^{{\rm Ad}_g(H)},g(V_0))$ of the spinor
bundles
$\Sigma^{\pm}(P^G)$.
\begin{dt} A subpair $(H,V_0)$ of $(G,V)$ with $-{\rm id}_V\in H$ will be
called admissible
if
$\mu_G|_{V_0}$ takes values in ${\germ h}$ or, equivalently, if $\langle ik(v),
v\rangle=0$ for all
$k\in{\germ h}^{\bot_{{\germ g}}}$ and $v\in V_0$.
\end{dt}
Therefore, if $(H,V_0)$ is admissible, then $\mu_G|_{V_0}$ can be
identified with the
moment map $\mu_H$ associated with the $H$-action in $V_0$ (with
respect to the
metric in ${\germ h}$ induced from ${\germ g}$ -- see Remark 1.2.9). If, in general,
$E$ is a system of equations on a configuration space ${\cal A}$, we denote
by ${\cal
A}^E$ the space of solutions of this system, endowed with the induced topology.
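As an illustration, the subpair $(T_{SU(2)},{\Bbb C}\oplus\{0\})$ of $(SU(2),{\Bbb C}^{\oplus 2})$ is admissible: for $v=(v_1,0)$ one has
$$(v\otimes\bar v)_0=\frac{|v_1|^2}{2}\left(\matrix{1&0\cr 0&-1}\right)\ ,$$
so $\mu_{SU(2)}(v)=-\frac{i}{2}(v\otimes\bar v)_0$ is diagonal and hence takes values in the Lie algebra of $T_{SU(2)}$.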
\begin{pr} Let $(H,V_0)$ be an admissible subpair of $(G,V)$. A
$Spin^H(4)$-reduction
$P^H\textmap{\rho} P^G$ of the $Spin^G(4)$-structure $P^G\textmap{\sigma}
P_g$
induces an inclusion
$$\left[{\cal A}(\delta(P^H))\times
A^0(\Sigma^+(P^H))\right]^{SW^{\sigma\circ\rho}}\subset
\left[{\cal A}(\delta(P^G))\times A^0(\Sigma^+(P^G))\right]^{SW^{\sigma}}$$
which is equivariant with respect to the actions of the two gauge groups.
\end{pr}
\begin{dt} Let $(H,V_0)$ be an admissible subpair. A solution $(A,\Psi)\in
\left[{\cal
A}(\delta(P^G))\times A^0(\Sigma^+(P^G))\right]^{SW^\sigma}$ will be called
\underbar{reducible} \underbar{of} \underbar{type} $(H,V_0)$, if it belongs
to the image of
such an inclusion, for a suitable reduction $P^H\textmap{\rho} P^G$.
\end{dt}
If $(H,V_0)$ is
admissible, $H\subset H'$ and $V_0$ is $H'$-invariant, then $(H',V_0)$ is
also admissible.
An admissible pair
$(H,V_0)$
will be called \underbar{minimal} if $H$ is minimal in the set of closed
subgroups
$H'\subset G$ such that $(H',V_0)$ is an admissible subpair of $(G,V)$. The
sets of (minimal)
admissible pairs is
invariant
under the natural $G$-action. We list the conjugacy classes of proper
minimal admissible
subpairs in the cases
$(SU(2),{\Bbb C}^{\oplus 2})=(Sp(1),{\Bbb H})$,
$(U(2),{\Bbb C}^{\oplus 2})$,
$(Sp(2),{\Bbb H}^{\oplus 2})$. Fix first the maximal tori
$$
T_{SU(2)}:=\left\{\left(\matrix{z&0\cr 0& z^{-1}}\right)|z\in S^1\right\}\ ,\ \
T_{U(2)}:=\left\{\left(\matrix{u&0\cr 0& v }\right)| u,v\in S^1\right\}
$$
$$T_{Sp(2)}:=\left\{\left(\matrix{u&0\cr 0& v }\right)|u,v\in S^1\right\}
$$
\\
On the right we list the minimal admissible subpairs of the pair on the left:
$$\begin{array}{ll}
(SU(2),{\Bbb C}^{\oplus 2}):\ \ \ &(\{\pm1\}\ ,\ \{0\}) \\ \\
&(T_{SU(2)}\ ,\ {\Bbb C}\oplus\{0\} )\\ \\
\end{array}
$$
%
$$\begin{array}{llcrl}
(U(2),{\Bbb C}^{\oplus 2}):\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
&(\{\pm1\}&, &\{0\}) &\\ \\
&\left(\left\{\left(\matrix{\zeta&0\cr0&\pm1}\right)|\zeta\in
S^1\right\}\right.&,&\left.
\phantom{\matrix{1\cr1}}{\Bbb C}\times\{0\}\right) \\
\\
\end{array}
$$
%
$$\begin{array}{llcrl}
(Sp(2),{\Bbb H}^{\oplus 2}):\ \ \ \ \ \ \ \ \ &(\{\pm 1\}&, &\{0\}) &\\ \\
&\left(\left\{\left(\matrix{\zeta&0\cr0&\pm1}\right)|\zeta\in
T_{Sp(1)}\right\}\right.&,&\left.
\phantom{\matrix{1\cr1}}{\Bbb C}\oplus\{0_{\Bbb H}\}\right)\\
&\left(\left\{\left(\matrix{\zeta&0\cr0&\pm1}\right)|\zeta\in\ \ Sp(1)\
\ \right\}\right.&,&\left.
\phantom{\matrix{1\cr1}}{\Bbb H}\oplus\{0_{\Bbb H}\}\right)\\ \\
\end{array}
$$
\begin{re} Fix a maximal torus $T$ of $G$ with Lie algebra ${\germ t}$, and let
${\fam\meuffam\tenmeuf
W}\subset {\germ t}^{\vee}$ be the weights of the induced $T$-action in $V$. Let
$V=\bigoplus\limits_{\alpha\in{\fam\meuffam\tenmeuf W}} V_\alpha$ be the corresponding
decomposition of
$V$ in weight spaces. If $(T,V')$ is a subpair of $(G,V)$, then $V'$ must
be a sum of
weight subspaces, i.e. there exist ${\fam\meuffam\tenmeuf W}'\subset {\fam\meuffam\tenmeuf W}$ such that
$V'=\bigoplus\limits_{\alpha\in{\fam\meuffam\tenmeuf W}'} V'_\alpha$, with $0\ne
V'_\alpha\subset
V_\alpha$. When $G$ is one of the classical
groups $SU(n)$, $U(n)$, $Sp(n)$ and $V$ the corresponding canonical
$G$-module, it follows
easily that $(T,V')$ is admissible iff $|{\fam\meuffam\tenmeuf W}'|=1$. Notice that there
is a natural action of
the Weyl group $\qmod{N(T)}{T}$ in the set of abelian subpairs of the form
$(T,V')$.
\end{re}
The case of the $PU(2)$-monopole equations needs a separate discussion: Fix a
$Spin^{U(2)}(4)$-structure $\sigma:P^u\longrightarrow P_g$ in $P_g$ and a connection
$a$ in
the
line bundle $\det (P^u)$.
In this case the admissible pairs are by definition equivalent to one of
$$(H,\{0\})\ ,\ \ H\subset U(2)\ {\rm with}\ -{\rm id}_V\in H\ ;\ \
(T_{U(2)},{\Bbb C}\oplus\{0\})\ .
$$
An abelian reduction $P^{ T_{U(2)} }\stackrel{\rho}{\hookrightarrow} P^u$ of
$\sigma$ gives rise to a pair of $Spin^c$-structures $({\germ c}_1:P^c_1\longrightarrow P_g,
{\germ c}_2:P^c_2\longrightarrow P_g)$ whose determinant line bundles come with an
isomorphism
$\det(P^c_1)\otimes\det(P^c_2)=[\det (P^u)]^2$. Moreover, the $PU(2)$-bundle
$\bar\delta(P^u)$ comes with an $S^1$-reduction $\bar\delta(P^u)=
P^{S^1}\times_\alpha
PU(2)$ where $[P^{S^1}]^2= \det(P^c_1)\otimes\det(P^c_2)^{-1}$ and
$\alpha:S^1\longrightarrow
PU(2)$ is the standard embedding $\zeta\longmapsto\left[\left(\matrix{\zeta&0\cr
0&1}\right)\right]$. Since we have fixed the connection $a$ in $\det (P^u)$,
the data of a
connection $\bar A\in{\cal A}(\bar\delta(P^u))$ which reduces to
$P^{ T_{U(2)} }$ via $\rho$ is equivalent to the data of a connection
$a_1\in{\cal A}(\det(P^c_1))$.
Moreover, we have a natural parallel inclusion $\Sigma^{\pm}(P^c_1)\subset
\Sigma^{\pm}(P^u)$. Consider the following twisted abelian monopole
equations
[OT6] for a pair $(A_1,\Psi_1)\in {\cal A}(\det (P^c_1))\times
A^0(\Sigma^{\pm}(P^c_1))$
$$\left\{\begin{array}{ccc}
{\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_{A_1}\Psi_1&=&0\\
\Gamma(F_{A_1}^+)&=&(\Psi_1\bar\Psi_1)_0+\Gamma(F_a^+) \ .
\end{array}\right. \eqno{(SW^{{\germ c}_1}_{\Gamma(F_a^+)})}$$
Taking in Remark 1.2.6 as twisting term the form $\beta=\Gamma(F_a^+)$, we get
\begin{pr}A $Spin^{T_{U(2)}}$-reduction
$P^{ T_{U(2)}}\stackrel{\rho}{\hookrightarrow} P^u$
of the $Spin^{U(2)}(4)$-structure $P^u\textmap{\sigma} P_g$
induces an inclusion
$$\left[{\cal A}(\det(P^c_1))\times
A^0(\Sigma^+(P^c_1))\right]^{SW^{{\germ c}_1}_{\Gamma(F_a^+)}}\subset
\left[{\cal A}(\bar\delta(P^u))\times
A^0(\Sigma^+(P^u))\right]^{SW^{\sigma}_a}$$
which is equivariant with respect to the actions of the two gauge groups.
\end{pr}
The fact that the Donaldson ($PU(2)$-) $SU(2)$-moduli space is contained in the
space of
($PU(2)$-) $SU(2)$-monopoles, and that
(twisted) abelian monopoles arise as
abelian reductions in the space of ($PU(2)$)
$SU(2)$-monopoles suggests that these moduli spaces can be used to prove the
equivalence between the two theories [OT5].
This idea can be applied to get information about the Donaldson invariants
associated with
larger symmetry groups $G$ by relating these invariants to Seiberg-Witten
type
invariants associated with smaller symmetry groups. In order to do this, one
has first to
study invariants associated to the moduli spaces of reducible solutions of all
possible
types in a suitable moduli space of $G$-monopoles.
\subsubsection{Moduli spaces of $G$-monopoles}
Let ${\cal A}$ be the configuration space
of one of the monopole equations $SW$ introduced in sect. 1.2.2: for the
equations
$SW^\sigma_\beta$ associated with a $Spin^G(4)$-structure
$\sigma:P^G \longrightarrow P_g$ in
$(X,g)$ and a section $\beta\in A^0({\rm ad}_+(P^G)\otimes z({\germ g}))$, the space
${\cal A}$ coincides with ${\cal A}(\delta(P^G))\times
A^0(\Sigma^+(P^G))$; in the
case of $PU(2)$-monopole equations $SW^\sigma_a$ associated to a
$Spin^{U(2)}(4)$-structure $\sigma:P^u\longrightarrow P_g$ and an abelian connection
$a\in{\cal
A}(\det(P^u))$ the configuration space is ${\cal A}(\bar\delta(P^u))\times
A^0(\Sigma^+(P^u))$. In this section, we denote by ${\cal G}$ the gauge group
corresponding
to the monopole equation $SW$, i.e. ${\cal G}={\cal G}(P^G)$ if
$SW=SW^\sigma_\beta$ and ${\cal G}={\cal G}_0(P^u)$ in the $PU(2)$-case $SW=
SW^\sigma_a$. The Lie algebra $Lie({\cal G})$ of ${\cal G}$ is
$\Gamma(X,{\scriptscriptstyle|}\hskip -4pt{\g}(P^G))$
in the
first case and $\Gamma(X,{\scriptscriptstyle|}\hskip -4pt{\g}_0(P^u))$ in the second.
The corresponding moduli space of $G$-monopoles is defined as a
topological space by
$${\cal M}:=\qmod{{\cal A}^{SW}}{{\cal G}}\ .
$$
There is a standard way of describing the local structure of ${\cal M}$,
which was
extensively described in the cases $G=S^1$, $G=U(2)$ in [OT1] and in the
case $G=SU(2)$
(which is similar to the $PU(2)$-case) in [OT5] (see [DK], [K], [LT], [M]
for the
instanton case and for the classical case of holomorphic bundles).
We explain briefly the general strategy:
Let $p=(A,\Psi)\in {\cal A}^{SW}$. The infinitesimal action of
$Lie({\cal G})$
and the
differential of $SW$ at $p$ define an ``elliptic deformation complex''
$$0\longrightarrow C^0_p \textmap{D^0_p} C^1_p\textmap{D^1_p}C^2_p \longrightarrow 0
\eqno{({\cal C}_p)}
$$
where:\\
$C^0_p= Lie({\cal G})=\Gamma(X,{\germ g}(P^G))$ ( or $\Gamma(X,{\germ g}_0(P^u))$ in the
$PU(2)$-case),\\ \\
$C^1_p= T_p({\cal A})=A^1({\scriptscriptstyle|}\hskip -4pt{\g}(P^G))\oplus A^0(\Sigma^{+}(P^G))$ (or
$A^1({\scriptscriptstyle|}\hskip -4pt{\g}_0(P^u))\oplus A^0(\Sigma^{+}(P^u))$ in the
$PU(2)$-case),\\ \\
$C^2_p=A^0({\rm ad}_+(P^G)\otimes{\scriptscriptstyle|}\hskip -4pt{\g}(P^G))\oplus A^0(\Sigma^{-}(P^G))$ (or
$A^0({\rm ad}_+(P^u)\otimes{\scriptscriptstyle|}\hskip -4pt{\g}_0(P^u))\oplus A^0(\Sigma^{-}(P^u))$ in the
$PU(2)$-case),\\ \\
$D_p^0(f):=f^{\#}_p=(-d_A f, f\Psi)$, \\ \\
$D_p^1(\alpha,\psi):=d_pSW(\alpha,\psi)=\left(\Gamma(d_A^+\alpha)-m
(\psi,\Psi)-m
(\Psi,\psi),\gamma(\alpha)\Psi+{\raisebox{.17ex}{$\not$}}\hskip -0.4mm{D}_A(\psi)\right)\ .$
Here $m$ is the
sesquilinear map associated with the quadratic map $\mu_{0,{\cal G}}$ (or
$\mu_{0,0}$ in the
$PU(2)$-case).
The index $\chi$ of this elliptic complex is called the \underbar{expected}
\underbar{dimension} of the moduli space and can be easily computed by
the Atiyah-Singer index theorem [LMi] in terms of characteristic classes of
$X$ and vector bundles associated with $P^G$.
We give the result in the case of the $PU(2)$-monopole equations:
$$\chi(SW^\sigma_a)=\frac{1}{2}\left(-3 p_1(\bar\delta(P^u))+
c_1(\det(P^u))^2\right)-
\frac{1}{2}(3e(X)+4\sigma(X))
$$
The same methods as in [OT5] give:
\begin{pr}\hfill{\break}
1. The stabilizer ${\cal G}_p$ of $p$ is a finite dimensional Lie group
isomorphic to a
subgroup of $G$ which acts in a natural way on the harmonic spaces
${\Bbb H}^i({\cal C}_p)$,
$i=0,\ 1,\ 2$.\\
2. There exists a neighbourhood $V_p$ of $[p]$ in ${\cal M}$, a ${\cal
G}_p$-invariant
neighbourhood $U_p$ of
$0$ in ${\Bbb H}^1({\cal C}_p)$, a ${\cal G}_p$-equivariant real analytic map
$K_p:U_p\longrightarrow
{\Bbb H}^2({\cal C}_p)$ with $K_p(0)=0$, $dK_p(0)=0$ and a homeomorphism:
$$V_p\simeq \qmod{Z(K_p)}{{\cal G}_p}\ .
$$
\end{pr}
The homeomorphisms in the proposition above define a structure of a smooth
manifold of dimension $\chi$ in the open set
$${\cal M}_{reg}=\{[p]\in{\cal M}|\ {\cal G}_p=\{1\},\ {\Bbb H}^2({\cal
C}_p)=0\}$$
of regular points, and a structure of a real analytic orbifold in the open
set of points
with finite stabilizers.
Note that the stabilizer of a solution of the form $(A,0)$ always contains
$\{\pm {\rm id}\}$,
hence ${\cal M}$ has at least ${\Bbb Z}_2$-orbifold singularities in the
Donaldson points (see Remark 1.2.8).
As in the instanton case, the moduli space ${\cal M}$ is in general
non-compact. The
construction of an Uhlenbeck-type compactification is treated in [T1],
[T2].
\section{$PU(2)$-Monopoles and stable oriented pairs}
In this section we show that the moduli spaces of $PU(2)$-monopoles on a
compact
K\"ahler surface have a natural complex geometric description in terms of
stable
oriented pairs. We first explain briefly, following [OT5], the concept of
an oriented pair and indicate how moduli spaces of simple oriented
pairs are constructed. Next we restrict ourselves to the rank-2 case and we
introduce the concept of stable oriented pair; the stability property we
need [OT5]
does \underbar{not} depend on a parameter and is an open property. An
algebraic
geometric approach can be found in [OST].
In section 2.2 we give a complex geometric description of the moduli
spaces of
irreducible $PU(2)$-monopoles on a K\"ahler surface in terms of moduli
spaces of
stable oriented pairs. This description is used to give an explicit
description of
a moduli space of $PU(2)$-monopoles on ${\Bbb P}^2$.
\subsection{Simple, strongly simple and stable oriented pairs}
Let $(X,g)$ be a compact K\"ahler manifold of dimension $n$, $E$ a
differentiable
vector bundle of rank
$r$ on
$X$, and ${\cal L}=(L,\bar\partial_{\cal L})$ a fixed holomorphic structure
in the
determinant line bundle $L:=\det E$. We recall (see [OT5]) the following
fundamental definition:
\begin{dt} An oriented pair of type $(E,{\cal L})$ is a pair $({\cal
E},\varphi)$,
where
${\cal E}$ is a holomorphic structure in $E$ such that $\det{\cal E}={\cal
L}$, and
$\varphi\in H^0({\cal E})$. Two oriented pairs $({\cal E}_1,\varphi_1)$, $({\cal
E}_2,\varphi_2)$ of type $(E,{\cal L})$ are called isomorphic if they are
congruent
modulo the natural action of the group $\Gamma(X,SL(E))$ of differentiable
automorphisms of $E$ of determinant 1.
\end{dt}
Therefore we fix the underlying ${\cal C}^{\infty}$-bundle and the
holomorphic determinant line bundle (not only its isomorphism type!) of the
holomorphic bundles we consider.
An oriented pair $p=({\cal E},\varphi)$ is called \underbar{simple} if its
stabilizer
$\Gamma(X,SL(E))_p$ is contained in the center ${\Bbb Z}_r{\rm id}_E$ of
$\Gamma(X,SL(E))$, and is
called
\underbar{strongly} \underbar{simple} if its stabilizer is trivial.
The first property has an equivalent infinitesimal formulation: the pair
$({\cal
E},\varphi)$ is simple if and only if any trace-free holomorphic
endomorphism of
${\cal E}$ with
$f(\varphi)=0$ vanishes.
In [OT5] it was shown that
\begin{pr} There exists a (possibly non-Hausdorff) complex analytic
orbifold ${\cal M}^s(E,{\cal L} )$ parameterizing isomorphism classes of
simple oriented pairs of type
$(E,{\cal L})$. The open subset ${\cal M}^{ss}(E,{\cal L})\subset {\cal
M}^{s}(E,{\cal L})$ consisting of strongly simple pairs is a complex
analytic space, and the points in ${\cal M}^s(E,{\cal L})\setminus{\cal
M}^{ss}(E,{\cal L})$ have neighbourhoods modeled on ${\Bbb Z}/r$-quotients.
\end{pr}
If ${\cal E}$ is a holomorphic bundle, we denote by ${\cal S}({\cal E})$
the set of
reflexive subsheaves ${\cal F}\subset{\cal E}$ with $0<{\rm rk}({\cal
F})<{\rm rk}({\cal E})$. Once we have fixed a section $\varphi\in H^0({\cal
E})$, we
put
$${\cal S}_\varphi({\cal E}):=\{{\cal F}\in{\cal S}({\cal E})|\
\varphi\in H^0(X,{\cal F})\} \ .$$
We recall (see [B]) that ${\cal E}$ is called $\varphi$-\underbar{stable} if
$$\max (\mu_g({\cal E}),\sup\limits_{{\cal F}'\in{\cal S} ({\cal E})}
\mu_g({\cal F}'))<
\inf\limits_{{\cal F}\in {\cal S}_\varphi({\cal E})} \mu_g(\qmod{{\cal
E}}{{\cal F} })\ ,$$
where for a nontrivial torsion free coherent sheaf ${\cal F}$, $\mu_g({\cal
F})$
denotes its slope with respect to the K\"ahler metric $g$. If the real number
$\lambda$ belongs to the interval $\left(\max (\mu_g({\cal E}),\sup
\limits_{{\cal
F}'\in{\cal S} ({\cal E})}
\mu_g({\cal F}')),
\inf\limits_{{\cal F}\in {\cal S}_\varphi({\cal E})} \mu_g(\qmod{{\cal
E}}{{\cal F} })\right)$, the pair $({\cal E},\varphi)$ is called
$\lambda$-stable.
If ${\cal
M}$ is a holomorphic line bundle and $\varphi\in H^0({\cal M})\setminus\{0\}$,
then $({\cal M},\varphi)$ is $\lambda$-stable iff $\mu_g({\cal M})<\lambda$.
The correct definition of the stability property for oriented
pairs of arbitrary rank is a delicate point [OST]. The definition must
agree in the
algebraic-projective case with the corresponding GIT-stability condition.
On the
other hand, in the case $r=2$ the definition simplifies considerably and
this case is
completely sufficient for our purposes. Therefore from now on we assume $r=2$,
and we recall from [OT5] the following
\begin{dt} \hfill{\break}
An oriented pair $({\cal E},\varphi)$ of type $({ E},{\cal L})$
is called \underbar{stable} if one of the following conditions holds:\\
I. \ ${\cal E}$ is
$\varphi$-stable, \\
II. $\varphi\ne 0$ and ${\cal E}$ splits as a direct sum of line bundles
${\cal E}={\cal E}_1\oplus{\cal E}_2$, such that \hspace*{5mm} $\varphi\in
H^0({\cal E}_1)$ and the pair $({\cal E}_1,\varphi)$ is $\mu_g({ E})$-stable.\\
A holomorphic pair $({\cal E},\varphi)$ of type $({ E},{\cal L})$
is called \underbar{polystable} if it is stable, or $\varphi=0$ and
${\cal E}$
is a polystable bundle.
\end{dt}
\begin{re} An oriented pair $({\cal E},\varphi)$ of type $(E,{\cal L})$ with
$\varphi\ne 0$ is stable iff $\mu_g({\cal
O}_X(D_\varphi))<\mu_g(E)$, where $D_\varphi$ is the divisorial component of the
vanishing locus $Z(\varphi)$. An oriented pair of the form $({\cal E},0)$
is stable iff the
holomorphic bundle ${\cal E}$ is stable.
\end{re}
\subsection{The projective vortex equation and stability of oriented pairs}
The stability property for holomorphic bundles has a well-known differential
geometric characterization: a holomorphic bundle is stable if and only if
it is simple and admits a Hermite-Einstein metric (see for instance [DK], [LT]).
Similarly, a holomorphic pair $({\cal E},\varphi)$ is $\lambda$-stable if and
only if it is simple
and ${\cal E}$ admits a Hermitian metric satisfying the vortex equation
associated
with the constant
$t=\frac{4\pi\lambda}{Vol_g(X)}$ [B]. All these important results are infinite
dimensional extensions of the {\it metric characterization of stability}
(see [MFK],
[DK]).
The same approach gives in the case of oriented pairs the following
differential
geometric interpretation of stability [OT5]:
Let $E$ be a differentiable rank 2 vector bundle over a compact K\"ahler
manifold $(X,g)$, ${\cal L}$ a holomorphic structure in $L:=\det(E)$ and $l$ a
fixed Hermitian metric in $L$.
%
\begin{thry} A holomorphic pair $({\cal E},\varphi)$ of type
$(E,{\cal L})$ with
${\rm rk}({\cal E})=2$ is polystable iff ${\cal E}$ admits a
Hermitian metric $h$ with $\det h=l$ which solves the following
\underbar{projective} \underbar{vortex} \underbar{equation}:
$$i\Lambda_g F_h^0 +\frac{1}{2}(\varphi\bar\varphi^h)_0=0\ .\eqno{(V)}$$
If $({\cal E},\varphi)$ is stable, then the
metric $h$ is unique.
\end{thry}
\begin{re} With an appropriate definition of (poly)stability of oriented
pairs [OST], the
theorem holds for arbitrary rank $r$.
\end{re}
Denote by $\lambda\in{\cal A}(L)$ the Chern connection of ${\cal L}$
associated with the metric $l$. Let $\bar{\cal A}_{\bar\partial_\lambda}$ be the
space of semiconnections in $E$ which induce the fixed semiconnection
${\bar\partial_\lambda}$ in $L$.
Fix a Hermitian metric $H$ in $E$ with $\det H=l$ and denote by ${\cal
A}_\lambda$ the space of unitary connections in $E$ which induce the fixed
connection $\lambda$ in $L$. There is an obvious identification
${\cal A}_\lambda\textmap{\simeq}\bar{\cal A}_{\bar\partial_\lambda}$,
$C\longmapsto \bar\partial_C$ which endows the affine space ${\cal A}_\lambda$
with a complex structure compatible with the standard $L^2$ euclidean structure.
Therefore, after suitable Sobolev completions, the product ${\cal
A}_\lambda\times A^0(E)=\bar {\cal A}_{\bar\partial_\lambda}\times A^0(E)$
becomes a Hilbert K\"ahler manifold. Let ${\cal G}_0:=\Gamma(X,SU(E))$ be the
gauge group of unitary automorphisms of determinant 1 in
$(E,H)$ and let ${\cal G}_0^{\Bbb C}:=\Gamma(X,SL(E))$ be its complexification.
\begin{re} The map $m:{\cal A}_\lambda\times A^0(E)\longrightarrow A^0(su(E))$ defined by
$$m(C,\varphi)=\Lambda_g F_C^0 -\frac{i}{2}(\varphi\bar\varphi^H)_0
$$
is a moment map for the ${\cal G}_0$-action on the K\"ahler manifold ${\cal
A}_\lambda\times A^0(E)$.
\end{re}
If ${\cal E}$ is a holomorphic structure in $E$ with $\det{\cal E}={\cal L}$, we
denote by $C_{\cal E}\in {\cal A}_\lambda$ the Chern connection defined by
${\cal
E}$ and the fixed metric $H$.
The map $({\cal E},\varphi)\longmapsto (C_{\cal E},\varphi)$ identifies the set
of oriented pairs of type $(E,{\cal L})$ with the subspace $Z(j)$ of the
affine space
${\cal A}_\lambda\times A^0(E)$ which is cut-out by the integrability condition
$$j(C,\varphi):=(F^{02}_C,\bar\partial_C\varphi)=0
$$
\begin{dt} A pair $(C,\varphi)\in {\cal A}_\lambda\times A^0(E)$ will be called
\underbar{irreducible} if any $C$-parallel endomorphism $f\in A^0(su(E))$ with
$f(\varphi)=0$ vanishes.
\end{dt}
This notion of (ir)reducibility must not be confused with the one
introduced in
section 1.2.3, which depends on the choice of an admissible pair. For instance,
irreducible pairs can be abelian.
The theorem above can now be reformulated as follows:
\begin{pr} An oriented pair $({\cal E},\varphi)$ of type $(E,{\cal L})$ is
polystable
if and only if the complex orbit ${\cal G}_0^{\Bbb C}\cdot (C_{\cal E},\varphi)\subset
Z(j)$ intersects the vanishing locus $Z(m)$ of the moment map $m$. $({\cal
E},\varphi)$ is stable if and only if it is polystable and $(C_{\cal
E},\varphi)$ is
irreducible.
\end{pr}
It can be easily seen that the intersection $\left[{\cal G}_0^{\Bbb C}\cdot (C_{\cal
E},\varphi)\right]\cap Z(m)$ of a complex orbit with the vanishing locus of the
moment map is either empty or coincides with a
\underbar{real} orbit. Moreover, using the proposition above one can
prove that the
set $Z(j)^{st}$ of stable oriented pairs is an \underbar{open} subset of the set
$Z(j)^{s}$ of simple oriented pairs. The quotient $\qmod{Z(j)^s}{{\cal
G}_0^{{\Bbb C}}}$ can
be identified with the moduli space ${\cal M}^s(E,{\cal L})$ of simple
oriented pairs
of type $(E,{\cal L})$. The open subspace ${\cal M}^{st}(E,{\cal
L}):=\qmod{Z(j)^{st}}{{\cal G}_0^{{\Bbb C}}}\subset {\cal M}^s(E,{\cal L})$ will
be called
the moduli space of stable oriented pairs, and comes with a natural
structure of a
\underbar{Hausdorff} complex space.
The same methods as in [DK], [LT], [OT1] give finally the following
\begin{thry} The identification map $(C,\varphi)\longmapsto
(\bar\partial_C,\varphi)$ induces an isomorphism of real analytic spaces
$\qmod{Z(j,m)^{ir}}{{\cal G}_0}\textmap{\simeq}\qmod{Z(j)^{st}}{{\cal
G}_0^{{\Bbb C}}}={\cal M}^{st}(E,{\cal L})$, where
$Z(j,m)^{ir}$ denotes the locally closed subspace consisting of irreducible
oriented pairs solving the equations $j(C,\varphi)=0$, $m(C,\varphi)=0$.
\end{thry}
%
\subsection{Decoupling the $PU(2)$-monopole equations}
Let $(X,g)$ be a K\"ahler surface and let $P^{\rm can}\longrightarrow P_g$ be the
associated
\underbar{canonical} $Spin^c(4)$-\underbar{structure} whose spinor bundles are
$\Sigma^+=\Lambda^{00}\oplus\Lambda^{02}$, $\Sigma^-=\Lambda^{01}$. By
Propositions 1.1.11, 1.1.7 it follows that the data of a
$Spin^{U(2)}(4)$-structure
in $(X,g)$ is equivalent to the data of a Hermitian 2-bundle $E$. The bundles
associated with the
$Spin^{U(2)}(4)$-structure $\sigma:P^u\longrightarrow P_g$ corresponding to $E$ are:
$$\det(P^u)=\det E\otimes K_X ,\ \bar\delta(P^u)=\qmod{P_E}{S^1} , $$ \
$$\Sigma^{\pm}(P^u)=\Sigma^{\pm}\otimes
E^{\vee}\otimes\det(P^u)=\Sigma^{\pm}\otimes E\otimes K_X\ ;\ \
\Sigma^{+}(P^u)=E
\otimes K_X\oplus E\ .
$$
Suppose that $\det(P^u)\in NS(X)$ and fix an \underbar{integrable} connection
$a\in{\cal A}(\det(P^u))$. Denote by
$c\in {\cal A}(K_X)$ the Chern connection in $K_X$, by $\lambda:=a\otimes
\bar c$
the induced connection in $\det(E)=\det(P^u)\otimes \bar K_X$ and by ${\cal
L}$ the
corresponding holomorphic structure in this line bundle. Identify the
affine space
${\cal A}(\bar\delta(P^u))$ with ${\cal A}_{\lambda\otimes c^{\otimes
2}}(E\otimes K_X)$ and the space of spinors
$A^0(\Sigma^+(P^u))$ with the direct sum $A^0(E \otimes K_X)\oplus
A^0(E)=A^0(E \otimes K_X)\oplus
A^{02}(E \otimes K_X)$. The same computations as in Proposition 4.1 of [OT5]
give the following {\it decoupling theorem}:
\begin{thry} A pair
$$(C,\varphi+\alpha)\in {\cal A}_{\lambda\otimes c^{\otimes 2}}(E\otimes
K_X)\times\left(A^0(E\otimes K_X)\oplus A^{02}(E \otimes K_X)\right)$$
solves the $PU(2)$-monopole equations $SW^\sigma_a$ if and only if the
connection
$C$ is integrable and one of the following conditions is fulfilled:
$$1)\ \alpha=0,\ \bar\partial_C\varphi=0\ \ \hbox{and}\ \ i\Lambda_g
F_C^0+\frac{1}{2}(\varphi\bar\varphi)_0=0\ ,
$$
$$2)\ \varphi=0,\ \partial_C\alpha=0\ \ \hbox{and}\ \ i\Lambda_g
F_C^0-\frac{1}{2}*(\alpha\wedge\bar\alpha)_0=0\ .
$$
\end{thry}
Using Theorem 2.2.5 we get
\begin{re} The moduli space $({\cal M}^\sigma_a)_{\alpha=0}^{ir}$ of
irreducible
solutions of type 1) can be identified with the moduli space ${\cal
M}^{st}(E\otimes
K_X,{\cal L}\otimes{\cal K}_X^{\otimes 2})$.
The moduli space $({\cal M}^\sigma_a)^{ir}_{\varphi=0}$ of
irreducible solutions of type 2) can be identified with the moduli space
${\cal
M}^{st}(E^{\vee},{\cal L}^{\vee})$ via the map $(C,\alpha)\longmapsto (\bar
C\otimes c,\bar \alpha)$.
\end{re}
Concluding, we get the following simple description of the moduli space ${\cal
M}^\sigma_a$ in terms of moduli spaces of stable oriented pairs.
\begin{co} Suppose that the $Spin^{U(2)}(4)$-structure $\sigma:P^u\longrightarrow P_g$ is
associated to the pair $(P^{\rm can}\longrightarrow P_g,E)$, where $P^{\rm can}\longrightarrow P_g$
is the canonical $Spin^c(4)$-structure of the K\"ahler surface $(X,g)$ and
$E$ is a
Hermitian rank 2 bundle. Let $a\in{\cal A}(\det(P^u))$ be an integrable
connection and
${\cal L}$ the holomorphic structure in $\det E=\det(P^u)\otimes
K_X^{\vee}$ defined by
$a$ and the Chern connection in $K_X$. Then the moduli space
${\cal M}^\sigma_a$ decomposes as a union of two Zariski closed subspaces
$${\cal M}^\sigma_a=({\cal M}^\sigma_a)_{\alpha=0}\mathop{\bigcup}({\cal
M}^\sigma_a)_{\varphi=0}
$$
which intersect along the Donaldson moduli space ${\cal D}(\delta(P^u))
\subset{\cal M}^\sigma_a$ (see Remark 1.2.8). There are canonical real analytic
isomorphisms
$$({\cal M}^\sigma_a)_{\alpha=0}^{ir}\simeq{\cal M}^{st} (E\otimes K_X,{\cal
L}\otimes{\cal K}_X^{\otimes 2})\ ,\ \ ({\cal
M}^\sigma_a)^{ir}_{\varphi=0}\simeq {\cal
M}^{st}(E^{\vee},{\cal L}^{\vee})\ .
$$
\end{co}
Using Remark 1.2.7, we recover the main result (Theorem 7.3) of [OT5],
stated for
quaternionic monopoles.
\vspace{5mm}\\
{\bf Example:} (R. Plantiko) On ${\Bbb P}^2$ endowed with the standard Fubini-Study
metric $g$ consider the $Spin^{U(2)}(4)$-structure $P^u\longrightarrow P_g$ with
$c_1(\det(P^u))=4$,
$p_1(\bar\delta(P^u))=-3$. It is easy to see that this
$Spin^{U(2)}(4)$-structure is
associated with the pair $(P^{\rm can}\longrightarrow P_g,E)$, where $E$ is a
$U(2)$-bundle with $c_2(E)=13$, $c_1(E)=7$. Therefore $E\otimes K$ has
$c_1(E\otimes K)=1$, $c_2( E\otimes K)=1$.
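The Chern-class bookkeeping above, and the expected-dimension formula for $\chi(SW^\sigma_a)$ given earlier, can be checked numerically. The following snippet is ours, purely illustrative; the helper `twist_rank2` just encodes the Whitney formula for twisting a rank-2 bundle by a line bundle.

```python
# Sanity check (ours, illustrative) of the Chern-class arithmetic in the
# example and of the expected-dimension formula chi(SW^sigma_a).

def twist_rank2(c1_E, c2_E, c1_L):
    """Whitney formula for E (x) L with rank(E) = 2, L a line bundle:
    c1(E(x)L) = c1(E) + 2 c1(L),  c2(E(x)L) = c2(E) + c1(E) c1(L) + c1(L)^2."""
    return c1_E + 2 * c1_L, c2_E + c1_E * c1_L + c1_L ** 2

# On P^2: c1(K) = -3, and E has c1 = 7, c2 = 13 as in the example.
c1_EK, c2_EK = twist_rank2(7, 13, -3)
print(c1_EK, c2_EK)  # 1 1, as stated in the text

# chi = (1/2)(-3 p1 + c1(det P^u)^2) - (1/2)(3 e(X) + 4 sigma(X)),
# with p1 = -3, c1(det P^u) = 4, e(P^2) = 3, sigma(P^2) = 1:
chi = (-3 * (-3) + 4 ** 2) / 2 - (3 * 3 + 4 * 1) / 2
print(chi)  # 6.0
```

The value $\chi = 6$ agrees with the real dimension of the quotient ${\Bbb C}^3 / \{\pm{\rm id}\}$ obtained for this moduli space.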
Using Remark 2.1.4 it is easy to see
that any stable oriented pair $({\cal F},\varphi)$ of type $(E\otimes K,
{\cal O}(1))$
with
$\varphi\ne 0$ fits in an exact sequence of the form
$$
0\longrightarrow {\cal O} \textmap{\varphi} {\cal F}\longrightarrow {\cal O}(1)\otimes
J_{z_\varphi}\longrightarrow 0\ ,$$
where $z_\varphi\in{\Bbb P}^2$, $c\in{\Bbb C}$ and ${\cal F}={\cal T}_{{\Bbb P}^2}(-1)$ is the
unique stable bundle with
$c_1=c_2=1$. Moreover, two oriented pairs $({\cal F},\varphi)$, $({\cal
F},\varphi')$ define the same point in the moduli space of stable oriented
pairs of
type $(E\otimes K,{\cal O}(1))$ if and only if $\varphi'=\pm \varphi$.
Therefore
$${\cal M}^{st}(E\otimes K,{\cal
O}(1))=\qmod{H^0({\cal F})}{\pm {\rm id}}\simeq \qmod{{\Bbb C}^3}{\pm{\rm id}}$$
Studying the local models of the moduli space one can check that the above
identification is a complex analytic isomorphism.
On the other hand every polystable oriented pair of type $ (E\otimes
K,{\cal O}(1))$
is stable and there is no polystable oriented pair of type $(E^{\vee},{\cal
O}(-7))$.
This shows that
$${\cal M}^\sigma_a\simeq\qmod{{\Bbb C}^3}{\pm{\rm id}} \
$$
for every integrable connection $a\in{\cal A}(\det(P^u))$. The quotient
$\qmod{{\Bbb C}^3}{\pm{\rm id}}$ has a natural compactification ${\cal
C}:=\qmod{{\Bbb P}^3}{\langle\iota\rangle}$, where $\iota$ is the involution
$$[x_0,x_1,x_2,x_3]\longmapsto [x_0,-x_1,-x_2,-x_3]\ .$$
${\cal C}$ can be identified with the cone over the image of ${\Bbb P}^2$ under the
Veronese map
$v_2:{\Bbb P}^2\longrightarrow {\Bbb P}^5$. This compactification coincides with the {\it Uhlenbeck
compactification} of the moduli space [T1], [T2].
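The identification of ${\cal C}$ with the cone over the Veronese surface can be checked at the level of invariants. The snippet below (ours, purely illustrative) verifies that $\iota$ is an involution and counts the $\iota$-invariant quadratic monomials in $x_1,x_2,x_3$: together with the fixed coordinate $x_0$ they yield the six homogeneous coordinates of ${\Bbb P}^5$, the target of $v_2$.

```python
# Illustrative check (ours) of the involution iota and the invariant count.
from itertools import combinations_with_replacement

def iota(x):
    """The involution [x0, x1, x2, x3] -> [x0, -x1, -x2, -x3]."""
    x0, x1, x2, x3 = x
    return (x0, -x1, -x2, -x3)

p = (1, 2, -3, 5)
assert iota(iota(p)) == p  # iota^2 = id

# The iota-invariant quadrics x_i x_j, 1 <= i <= j <= 3, cut out the
# Veronese surface v_2(P^2) in P^5; x0 provides the cone direction.
quadrics = list(combinations_with_replacement((1, 2, 3), 2))
print(len(quadrics))  # 6
```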
Let now $\sigma':P'^u\longrightarrow P_g$ be the $Spin^{U(2)}(4)$-structure in ${\Bbb P}^2$ with
$\det(P'^u)=\det(P^u)$, $p_1(\delta(P'^u))=+1$. It is easy to see by the
same method
that ${\cal M}^{\sigma'}_a$ consists of only one point, which is the
{\it abelian}
solution associated with the {\it stable} oriented pair $({\cal O}\oplus{\cal
O}(1),{\rm id}_{\cal O})$. Via the isomorphism explained in Proposition 1.2.15,
${\cal
M}^{\sigma'}_a$ corresponds to the
moduli space of solutions of the (abelian) twisted Seiberg-Witten equations
associated with the canonical $Spin^c(4)$-structure and the positive
chamber (see
[OT6]). Therefore
\begin{pr} The Uhlenbeck compactification of the moduli space ${\cal
M}^\sigma_a$ can be
identified with the cone ${\cal C}$ over the image of ${\Bbb P}^2$ under the
Veronese
map $v_2$. The vertex of the cone corresponds to the unique Donaldson
point. The
base of the cone corresponds to the space
${\cal M}^{\sigma'}_a\times{\Bbb P}^2$ of ideal solutions concentrated in one
point. The
moduli space ${\cal M}^{\sigma'}_a$ consists of only one abelian point.
\end{pr}
\newpage
\centerline{\large{\bf References}}
\vspace{6 mm}
\parindent 0 cm
[AHS] Atiyah, M.; Hitchin, N. J.; Singer, I. M.: {\it Selfduality in
four-dimensional Riemannian geometry}, Proc. R. Soc. Lond. A 362, 425-461 (1978)
[B] Bradlow, S. B.: {\it Special metrics and stability for holomorphic
bundles with global sections}, J. Diff. Geom. 33, 169-214 (1991)
[D] Donaldson, S.: {\it Anti-self-dual Yang-Mills connections over
complex algebraic surfaces and stable vector bundles}, Proc. London Math.
Soc. 3, 1-26 (1985)
[DK] Donaldson, S.; Kronheimer, P.B.: {\it The Geometry of
four-manifolds}, Oxford Science Publications 1990
[FU] Freed D. S. ; Uhlenbeck, K.:
{\it Instantons and Four-Manifolds.}
Springer-Verlag 1984
[GS] Guillemin, V.; Sternberg, S.: {\it Birational equivalence in the
symplectic category}, Inv. math. 97, 485-522 (1989)
[HH] Hirzebruch, F.; Hopf, H.: {\it Felder von Fl\"achenelementen in
4-dimensiona\-len 4-Mannigfaltigkeiten}, Math. Ann. 136 (1958)
[H] Hitchin, N.: {\it Harmonic spinors}, Adv. in Math. 14, 1-55 (1974)
[HKLR] Hitchin, N.; Karlhede, A.; Lindstr\"om, U.; Ro\v cek, M.: {\it
Hyperk\"ahler
metrics and supersymmetry}, Commun.\ Math.\ Phys. (108), 535-589 (1987)
[K] Kobayashi, S.: {\it Differential geometry of complex vector bundles},
Princeton University Press 1987
[KM] Kronheimer, P.; Mrowka, T.: {\it The genus of embedded surfaces in
the projective plane}, Math. Res. Letters 1, 797-808 (1994)
[LL] Li, T.; Liu, A.: {\it General wall crossing formula}, Math. Res. Lett.
2,
797-810 (1995).
[LM] Labastida, J. M. F.; Marino, M.: {\it Non-abelian monopoles on
four manifolds}, Preprint,
Departamento de Fisica de Particulas, Santiago de Compostela, April
(1995)
[La] Larsen, R.: {\it Functional analysis, an introduction}, Marcel
Dekker, Inc., New York, 1973
[LMi] Lawson, H. B. Jr.; Michelsohn, M.-L.: {\it Spin Geometry}, Princeton
University Press, New
Jersey, 1989
[LT] L\"ubke, M.; Teleman, A.: {\it The Kobayashi-Hitchin
correspondence},
World Scientific Publishing Co. 1995
[M] Miyajima, K.: {\it Kuranishi families of
vector bundles and algebraic description of
the moduli space of Einstein-Hermitian
connections}, Publ. R.I.M.S. Kyoto Univ. 25,
301-320 (1989)
[MFK] Mumford, D.; Fogarty, J.; Kirwan, F.: {\it Geometric invariant
theory}, Springer Verlag,
1994
[OST] Okonek, Ch.; Schmitt, A.; Teleman, A.: {\it Master spaces for stable
pairs}, Preprint,
alg-geom/9607015
[OT1] Okonek, Ch.; Teleman, A.: {\it The Coupled Seiberg-Witten
Equations, Vortices, and Moduli Spaces of Stable Pairs}, Int. J. Math.
Vol. 6, No. 6, 893-910 (1995)
[OT2] Okonek, Ch.; Teleman, A.: {\it Les invariants de Seiberg-Witten
et la conjecture de Van De Ven}, Comptes Rendus Acad. Sci. Paris, t.
321, S\'erie I, 457-461 (1995)
[OT3] Okonek, Ch.; Teleman, A.: {\it Seiberg-Witten invariants and
rationality of complex surfaces}, Math. Z., to appear
[OT4] Okonek, Ch.; Teleman, A.: {\it Quaternionic monopoles}, Comptes
Rendus Acad. Sci. Paris, t. 321, S\'erie I, 601-606 (1995)
[OT5] Okonek, Ch.; Teleman, A.: {\it Quaternionic monopoles},
Commun.\ Math.\ Phys., Vol. 180, Nr. 2, 363-388, (1996)
[OT6] Okonek, Ch.; Teleman, A.: {\it Seiberg-Witten invariants for
manifolds with
$b_+=1$, and the universal wall crossing formula},
Int. J. Math., to appear
[PT1] Pidstrigach, V.; Tyurin, A.: {\it Invariants of the smooth
structure of an algebraic surface arising from the Dirac operator},
Russian Acad. Izv. Math., Vol. 40, No. 2, 267-351 (1993)
[PT2] Pidstrigach, V.; Tyurin, A.: {\it Localisation of the Donaldson
invariants along the
Seiberg-Witten classes}, Russian Acad. Izv. , to appear
[T1] Teleman, A. :{\it Non-abelian Seiberg-Witten theory},
Habilitationsschrift,
Universit\"at Z\"urich, 1996
[T2] Teleman, A. :{\it Moduli spaces of $PU(2)$-monopoles}, Preprint,
Universit\"at
Z\"urich, 1996
[W] Witten, E.: {\it Monopoles and four-manifolds}, Math. Res.
Letters 1, 769-796 (1994)
\vspace{0.3cm}\\
Author's address : %
Institut f\"ur Mathematik, Universit\"at Z\"urich, Winterthu\-rerstr. 190,
CH-8057 Z\"urich, {\bf e-mail}: teleman@math.unizh.ch\\
\hspace*{2.4cm} and Department of Mathematics, University of Bucharest.\\
\end{document}
\section{Introduction}
Low-dimensional materials are known to be very susceptible to
various instabilities, such as the formation of charge- or
spin-density waves. Probably the first one discussed is the
famous Peierls instability of one-dimensional metals: a lattice
distortion with the new lattice period $2 \pi / Q$, where the
wave vector $Q = 2 k_F$ (if there is one electron per site, the
lattice dimerizes). The lattice distortion opens a gap in
the electron spectrum at the Fermi surface, so that the energies
of all occupied electron states decrease, which drives
the transition. It is also known that this instability survives
when we include the strong on-site Coulomb repulsion between
electrons (the so-called Peierls-Hubbard model),
\vspace{-0.15cm}
\[
H = - \sum_{l,\sigma} \left( t_0 + \alpha
(u_l - u_{l+1}) \right) \left( c^{\dagger}_{l \sigma} c_{l+1
\sigma} + h. c. \right)
\]
\vspace{-0.35cm}
\begin{equation}
+ U \!\sum_l c^{\dagger}_{l \uparrow} c_{l \uparrow}
c^{\dagger}_{l \downarrow} c_{l \downarrow} +
\sum_l \!\left( \frac{P_l^2}{2M} +
\frac{K}{2}(u_{l+1}\!-\!u_{l})^2 \right)
\end{equation}
Here the first term describes the dependence of the electron
hopping integral $t_{l,l+1}$ on the change of the distance $u_l -
u_{l+1}$ between the neighbouring ions and the last term is the
lattice energy (which after quantization becomes $\sum_q \omega_q
b^{\dagger}_q b_q$). The dimensionless electron--lattice coupling
constant $\lambda = 4 \alpha^2 / (\pi t_0 K)$ determines the
magnitude of the lattice distortion and the energy gap.
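Since $\lambda$ is just an algebraic combination of the model parameters, it is trivial to evaluate; the sketch below uses purely illustrative numbers (ours, not fitted to any material), with $\alpha$ in eV/\AA, $t_0$ in eV and $K$ in eV/\AA$^2$.

```python
# Dimensionless electron-lattice coupling lambda = 4 alpha^2 / (pi t0 K).
from math import pi

def peierls_coupling(alpha, t0, K):
    """Return lambda for the Peierls-Hubbard model parameters."""
    return 4 * alpha ** 2 / (pi * t0 * K)

# Illustrative values only (not from the paper):
lam = peierls_coupling(alpha=4.1, t0=2.5, K=21.0)
print(round(lam, 3))  # 0.408
```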
When Coulomb repulsion is strong, $U \gg t_0$, and there is one
electron per site, we are in the limit of localized
electrons (Mott-Hubbard insulator) with effective
antiferromagnetic interaction (Spin-Peierls model),
\[
H_{eff} = \sum_l J_{l,l+1} \;{\bf S}_l \cdot {\bf S}_{l+1}
\]
\vspace{-0.3cm}
\begin{equation}
\label{Heff}
+ \sum_l \left( \frac{P_l^2}{2M} +
\frac{K}{2}(u_{l+1} - u_{l})^2 \right)\;\;,
\end{equation}
where the exchange constant $J_{l,l+1} = J_0 + \alpha^{\prime}
(u_{l} - u_{l+1})$, $J_0 = 4 t_0^2 / U$ and $\alpha^{\prime} = 8
t_0 \alpha / U$. The dependence of $J_{l,l+1}$ on the distance
between neighbouring spins again leads to an instability, as
a result of which the spin chain dimerizes. Physically it
corresponds to the formation of singlet dimers---the simplest
configuration in the valence bond picture. This transition,
known as the spin-Peierls (SP) transition, was extensively
studied theoretically \cite{1,2,3} and was previously observed
experimentally in a number of quasi-one-dimensional organic
compounds, such as TTF-CuBDT $(T_{SP} = 12 K)$ or TTF- AuBDT
$(T_{SP} = 2.1 K)$ \cite{4}.
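The strong-coupling reduction above can be summarized by the parameter map $(t_0,U,\alpha)\mapsto(J_0,\alpha')$. A minimal sketch with illustrative values (the function name and the numbers are ours):

```python
# Map Peierls-Hubbard parameters to the effective spin-Peierls ones,
# valid in the localized limit U >> t0:
#   J0 = 4 t0^2 / U,  alpha' = 8 t0 alpha / U.

def spin_peierls_params(t0, U, alpha):
    """Return (J0, alpha') of the effective spin-Peierls Hamiltonian."""
    return 4 * t0 ** 2 / U, 8 * t0 * alpha / U

# Illustrative values only: t0 = 0.5 eV, U = 4 eV, so U / t0 = 8 >> 1.
J0, alpha_p = spin_peierls_params(t0=0.5, U=4.0, alpha=1.0)
print(J0, alpha_p)  # 0.25 1.0
```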
Recently the first inorganic spin-Peierls material CuGeO$_3$ was
discovered \cite{5}. Since then much experimental data on
this material, both pure and doped (mostly with Zn and Si), has been
obtained. The spin chains in this compound are formed by CuO$_4$
plaquettes with a common edge (see Fig.~1). They apparently play
the main role in the spin-Peierls transition with $T_{SP} = 14
K$. However, as we will discuss below, the interchain
interaction is also very important here. The interchain coupling
is provided both by Ge ions (along $b$-axis of the crystal) and
by the well separated apex oxygens (along $a$-axis; direction of
the chains coincides with the $c$-axis of a crystal).
Experimentally it is established that the strongest anomalies in
CuGeO$_3$, both in the normal phase and at the spin-Peierls
transition, occur not along the $c$-axis, but, rather
unexpectedly, along the other two directions, the strongest one
found along the $b$-axis \cite{6,7}. For instance, the
anomalies in the thermal expansion coefficient and in
the magnetostriction along the $b$-axis are several times
stronger than along the direction of the chains \cite{7,8}.
Further interesting information is obtained in the studies of
doping dependence of various properties of CuGeO$_3$. It was
shown that the substitution of Cu by nonmagnetic Zn, as well as
Ge by Si, leads initially to a rather strong reduction of
$T_{SP}$ \cite{9,10}, which according to some recent data
\cite{11} flattens out at higher doping levels.
Simultaneously, antiferromagnetic order develops at lower
temperatures, often coexisting with the SP distortion
\cite{11,12}.
The aim of the present investigation is to provide a microscopic
picture of the properties of CuGeO$_3$, taking into account
realistic details of its structure. We will address below
several important issues:
\begin{itemize}
\item
The detailed description of the exchange interaction (Why is
nearest neighbour exchange antiferromagnetic?);
\item
The sensitivity of the exchange constants to different types of
distortion and the resulting from that microscopic picture of
the spin-Peierls transition, which may be called bond-bending
model (Why the anomalies are strongest in the perpendicular
directions? Why is SP transition observed in CuGeO$_3$ and not in
many other known quasi-one-dimensional magnets?);
\item
The nature of elementary excitations in spin-Peierls systems in
general (Are they the ordinary singlet-triplet excitations? How
are they influenced by the interchain interaction?);
\item
The mechanism by which doping affects the properties of SP
system (Why is the effect so strong? Why does the
system develop antiferromagnetic order upon doping?).
\end{itemize}
These questions are raised by the experimental observations, and
we hope that their clarification will help both to elucidate some
general features of SP transitions and to build the detailed
picture of this transition in CuGeO$_3$.
\section{Exchange Interaction in CuGeO$_3$. Role of Side-Groups
in Superexchange}
The first question we would like to address is: why is the
nearest-neighbour Cu-Cu exchange interaction antiferromagnetic
at all? The well-known Goodenough-Kanamori-Anderson rules state,
in particular, that the $90^{\circ}$-exchange between two
half-filled orbitals is ferromagnetic. In CuGeO$_3$ the Cu-O-Cu
angle $\theta$ in the superexchange path is $98^{\circ}$, which
is rather close to $90^{\circ}$. Usually, the antiferromagnetic
character of the exchange is attributed to this small $8^{\circ}$
difference. Our calculation \cite{13}, however, shows that it is not
enough: even in this realistic geometry the exchange constant for the
nearest neighbour (nn) spins for realistic values of parameters
(such as copper-oxygen overlap, magnitude of the Coulomb
interaction on copper and oxygen, Hund's rule intraatomic
exchange, etc) is still slightly ferromagnetic, $J^{c}_{nn} = -
0.6$meV.
To explain the observed values of $J^{c}_{nn}$ the idea was put
forward in \cite{13} that the antiferromagnetic coupling may be
enhanced by the initially ignored side-group effect (here Ge).
As is clear from Fig.1, there is a Ge ion attached to each
oxygen in the CuO$_2$ chain. The Coulomb interaction with Ge$^{4+}$
and the hybridization of the $2p_y$ orbital of oxygen with Ge (see
Fig.2) destroy the equivalence of $p_x$-orbitals (shaded) and
$p_y$-orbitals (empty) of oxygen, which for $90^{\circ}$-exchange
was responsible for the cancellation of the corresponding
antiferromagnetic contributions to superexchange. As a result, the
exchange with partial delocalization into Ge may become
antiferromagnetic even for $90^\circ$ geometry (for a detailed
discussion see \cite{13}). The calculation gives a reasonable
value for the nearest-neighbour exchange interaction: $J^{c}_{nn} =
11.6$meV (the experimental value is $9 \div 15$meV, depending on the
procedure of extraction of $J^{c}_{nn}$ from the experimental
data).
We also calculated other exchange constants using a similar
approach. For the interchain interaction along $b$ and $a$ axes
we obtained $J^{b}_{nn} = 0.7$meV and $J^{a}_{nn} = -3 \cdot
10^{-4}$meV, so that $J^{b}_{nn} / J^{c}_{nn} \approx 0.06$, and
$J^{a}_{nn} / J^{c}_{nn} \approx - 3 \cdot 10^{-5}$. The
experimental values are: $J^{b}_{nn} / J^{c}_{nn} \approx 0.1$ ,
$J^{a}_{nn} / J^{c}_{nn} \approx - 0.01$. Thus our theoretical
results are not so far from the experiment for the interchain
interaction in the $b$-direction and too small for the $a$-axis. We
note, however, that the ferromagnetic exchange in the $a$-direction
is in any event very weak, and a small variation of the parameters
used in our calculation can easily change this value quite
significantly.
More interesting is the situation with the next-nearest-neighbour
(nnn) interaction in the chain direction $J^{c}_{nnn}$. As is
clear from Figs.1 and 2, there is a relatively large overlap of
the $p_x$ orbitals on neighbouring plaquettes, which leads to a
rather strong antiferromagnetic nnn coupling. Our calculation
gives $\gamma = J^{c}_{nnn} / J^c_{nn} \approx 0.23 \div 0.3$.
From the fit of $\chi(T)$ curve Castilla {\em et al} \cite{15}
obtained the value $\gamma \approx 0.25$. Note also that a
sufficiently strong nnn interaction may lead to a singlet
formation and creation of a spin gap even without the
spin-lattice interaction. Such a state is an exact ground state
at the Majumdar-Ghosh point $\gamma = 0.5$ \cite{16}. The critical
value for appearance of a spin gap is $\gamma \approx 0.25$
\cite{15}. Thus, from both the fit to experimental data and our
calculations it appears that CuGeO$_3$ is rather close to the
critical point, so that one can conclude that both the
frustrating nnn interaction and the spin-lattice interaction
combine to explain the observed properties of CuGeO$_3$ (see also
\cite{8}).
Anticipating the discussion below, we consider here the
modification of the exchange constants caused by doping. In
particular, we calculated the change of $J^{c}_{nn}$ when Ge
ion attached to a bridging oxygen is substituted by Si. As
Si is smaller than Ge, one can expect two consequences.
First, it will pull closer the nearby chain oxygen, somewhat
reducing the corresponding Cu-O-Cu angle $\theta$. The second
effect is the reduced hybridization of $2p_y$ orbital of this
oxygen with Si. According to the above considerations (see also
\cite{13}) both these factors would diminish the
antiferromagnetic nn exchange. Our calculation shows \cite{17}
that for realistic values of parameters the resulting exchange
interaction becomes either very small or even weakly
ferromagnetic, $J^{c}_{nn} = 0\pm1$meV. Thus Si doping
effectively interrupts the chains, similarly to the effect of
substituting Cu by Zn. This result will be used
later in section 5.
\section{Bond-Bending Model of the Spin-Peierls Transition in
CuGeO$_3$}
We return to the discussion of the exchange interaction and its
dependence on the details of crystal structure of CuGeO$_3$. As
follows from the previous section, the largest exchange constant
$J^{c}_{nn}$ is very sensitive to both Cu-O-Cu angle $\theta$
and to the side group (here Ge). As to the second factor, one
has to take into account that, contrary to the simple model of
CuGeO$_3$ shown in Fig.2, in the real crystal structure the Ge ion
does not lie exactly in the plane of the CuO$_2$ chain: the angle
$\alpha$ between Ge and this plane is $\sim 160^{\circ}$.
The actual crystal structure may be schematically depicted
as in Fig.3, where the dashed lines represent CuO$_2$-chains.
One can easily understand that $J^{c}_{nn}$ is also very
sensitive to the Ge-(CuO$_2$) angle $\alpha$. The influence of Ge,
which according to the above consideration gives an
antiferromagnetic tendency, is the largest when $\alpha =
180^{\circ}$: just in this case the inequivalence of $2p_x$ and
$2p_y$ orbitals shown in Fig.2, which is crucial for this effect,
becomes the strongest. On the other hand, if, for instance,
$\alpha = 90^{\circ}$ ({\em i.e.} if Ge would sit exactly above
the oxygen) its interaction with $2p_x$ and $2p_y$ orbitals would
be the same and the whole effect of Ge on $J^{c}_{nn}$ would
disappear. Thus bending GeO-bonds with respect to CuO$_2$-plane
would change $J^{c}_{nn}$ (it becomes smaller when $\alpha$
decreases).
These simple considerations immediately allow one to understand
many, at first glance, strange properties of CuGeO$_3$ mentioned
in the introduction \cite{19}. Thus, {\em e.g.}, the compression
of CuGeO$_3$ along the $b$-direction would occur predominantly by
way of decreasing the Ge-(CuO$_2$) angle $\alpha$, while the
tetrahedral O-Ge-O angle $\phi$ is known to be quite rigid.
Such a ``hinge'' or ``scharnier'' model explains why the main
lattice anomalies are observed along the $b$-axis \cite{7} and
why the longitudinal mode parallel to $b$ is especially soft
\cite{6}. Within this model one can also naturally explain (even
quantitatively) the fact that the magnetostriction is also
strongest in the b-direction \cite{8}. If we assume
that the main changes in the lattice parameters occur only due to
bond bending ({\em i.e.} due to the change of angles, while bond
lengths remain fixed), we obtain the following
result for the uniaxial pressure dependence of $J \equiv
J^{c}_{nn}$ \cite{19}: $\delta J/\delta P_b =
-1.5$meV/GPa, which is close to the experimental result
$\delta J/\delta P_b = -1.7$meV/GPa \cite{8}.
We can also explain reasonably well the change of the
exchange coupling for other directions.
This picture can be also used to explain the spin-Peierls
transition itself. What occurs below $T_{SP}$, is mostly the
change of bond angles (``bond-bending''), which alternates along
the chains. Experimentally it was found \cite{20} that the
dimerization is accompanied by the alternation of Cu-O-Cu angles
$\theta$. In our model $J$ is also sensitive to the Ge-(CuO$_2$)
angle $\alpha$, and we speculated in Ref.14 that this angle, most
probably, also alternates in the spin-Peierls phase. Recently
this alternation was observed \cite{18}.
Consequently we have a rather coherent picture of the
microscopic changes in CuGeO$_3$, both above and below $T_{SP}$:
in the first approximation we may describe the main lattice
changes as occurring mostly due to the change of the ``soft'' bond
angles. The strongest effects for $T>T_{SP}$ are then expected along
the $b$-axis, which is consistent with the experiment. The same
bond-bending distortions seem also to be responsible for the
spin-Peierls transition itself, the difference with the normal phase
being the alternation of the corresponding angles in the
$c$-direction.
The bond-bending model allows one to explain another puzzle
related to spin-Peierls transitions (discussed already in
\cite{2}): up to now such transitions have been observed only in a
few of the many known quasi-one-dimensional antiferromagnets.
There might be several reasons for that.
The first one is that the
spin-Peierls phase in CuGeO$_3$ is, at least partially,
stabilized by the frustrating next-nearest neighbour interaction
$J^{c}_{nnn}$. The other factor is that the spin-Peierls
instability is greatly enhanced when the corresponding phonon
mode is soft enough \cite{2}. One can see it {\em e.g.} from the
expression for $T_{SP}$ \cite{3},
\[
T_{SP} = 0.8 \lambda^{\prime} J\;\;.
\]
The spin-phonon coupling constant is
\[
\lambda^{\prime} = \frac{{\alpha^{\prime}}^2}{JK} =
\frac{{\alpha^{\prime}}^2}{J M \omega_0^2}\;\;,
\]
where $\omega_0 = \sqrt{K / M}$ is the typical phonon
frequency.
There is, usually, a competition between the $3d$ magnetic
ordering and the spin-Peierls phase. Apparently, in most
quasi-one-dimensional compounds the $3d$ magnetic ordering wins,
and for the spin-Peierls transition to be realized a strong
spin-lattice coupling, {\em i.e.} rather soft phonons, is
necessary. Such soft phonon modes are known to exist in the
organic spin-Peierls compounds \cite{2}. In CuGeO$_3$ it can be
rather soft bond-bending phonons, especially the ones parallel to
the $b$-axis, which help to stabilize the spin-Peierls phase
relative to the $3d$ antiferromagnetic one. Nevertheless, a
relatively small doping is sufficient to make the
antiferromagnetic state more favourable, although some other
factors are also very important here, as will become clear in
the next section.
\section{Solitons and Strings in Spin-Peierls Systems}
Let us turn now to the second group of problems related to SP
systems, namely, the nature of elementary excitations. In the
simple picture mentioned in the introduction (and valid in the
strong coupling limit) the SP state consists of isolated dimers.
For the rigid dimerized lattice an excited state is a triplet
localized on one of the dimers and separated from the ground
state by an energy gap $J$. The interaction between the
neighbouring dimers gives a certain dispersion to this
excitation, transforming it into an object similar to a usual
magnon.
If, however, the lattice is allowed to adjust to a spin flip, the
localized triplet decays into a pair of
topological excitations. Such excitations (solitons or kinks)
are known to be the lowest energy excitations in electronic
Peierls insulators \cite{21}. The same is also true for
spin-Peierls systems. Indeed, there exist two degenerate ground
states in a SP chain: one with singlets formed on sites
\ldots(12)(34)(56)\ldots, and another of the type
\ldots(23)(45)(67)\ldots. One can characterize them by the phase
of the order parameter $\phi_n$, so that $\phi_n = 0$ in the
first state and $\phi_n = \pi$ in the second.
The soliton is an excited state,
in which the order parameter interpolates from $0$ to $\pi$ or
vice versa. In the strong coupling limit such a state looks like
\ldots(12)(34)$\uparrow$(67)(89)\ldots, see Fig.~4. Actually, the soliton
has a finite width, which (as the correlation length in the BCS
theory) has a form,
\begin{equation}
\xi_0 \left(= \frac{\hbar
v_F}{\Delta_0}\right) \sim \frac{ J}{ \Delta} a \sim
\frac{J}{E_s} a \;\;.
\end{equation}
Here the Fermi velocity
$v_F \sim J a / \hbar$ is the velocity of the spinless
Jordan-Wigner fermions, in terms of which the Hamiltonian (2) has
a form similar to the Hamiltonian of the electronic Peierls
system, $2 \Delta$ is the energy gap and $a$ is the lattice
constant, which below we will put equal to $1$. The excitation
energy of the SP soliton, $E_s$, can easily be determined for
the $XY$-model, {\em i.e.} if one ignores ${\bf S}^{z}_{l} \cdot
{\bf S}^z_{l+1}$ term in the Hamiltonian (\ref{Heff}). Then the
spin-Peierls Hamiltonian (\ref{Heff}) after the Jordan-Wigner
transformation acquires a form of the Su-Schrieffer-Heeger
Hamiltonian \cite{21} for electronic Peierls materials, in which
case $E_s = \frac{2}{\pi} \Delta$ \cite{21}. The omitted term
renormalizes the soliton energy, as well as the mean-field energy
gap $2 \Delta$, but these numerical changes do not play an
important role. One should also note that the kinks are mobile
excitations with the dispersion $\sim E_s$. In CuGeO$_3$ $\xi_0$
is estimated to be of the order of $8$ lattice spacings.
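As a rough numerical consistency check (a back-of-the-envelope sketch only; it combines $E_s = \frac{2}{\pi}\Delta$ with the mean-field relation $\Delta \sim \lambda^{\prime} J$, using the values $J \approx 100$K and $\lambda^{\prime} \approx 0.2$ quoted in section 4):

```python
import math

# Order-of-magnitude check of the soliton width, xi_0 ~ (J/E_s)*a,
# combining E_s = (2/pi)*Delta with the mean-field scale Delta ~ lambda'*J.
# J ~ 100 K and lambda' ~ 0.2 are the CuGeO3 values quoted in section 4.
J = 100.0                        # exchange constant (K)
lam = 0.2                        # spin-phonon coupling lambda'
Delta = lam * J                  # gap scale, the full gap being 2*Delta
E_s = (2.0 / math.pi) * Delta    # XY-limit soliton energy
xi0 = J / E_s                    # soliton width in units of the lattice constant a

print(round(xi0, 1))   # ~7.9, consistent with the ~8 lattice spacings quoted
```

The result is of course only as good as the order-of-magnitude relations that enter it.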
From Fig.4 it is clear that a soliton in an SP system corresponds to
one unpaired spin. Thus, these elementary excitations carry spin
$\frac{1}{2}$ rather than $1$, in contrast to the singlet-triplet excitations.
Of course, for fixed boundary conditions the solitons (excited
{\em e.g.} thermally or optically) always appear in pairs.
So far we considered the excitations in an isolated SP chain.
Now we want to include the effects of the interchain interaction.
Due to this interaction (mediated, for instance, by
three-dimensional phonons) SP distortions of neighbouring chains
would prefer to be phase coherent, {\em e.g.} in phase. When a
kink-antikink pair of size $r$ is created in one of the chains,
the phase of the distortion between the kink and antikink is
opposite to the initial one as well as to those on neighbouring
chains, which would cost an energy $E(r) = Z \sigma r$, where
$\sigma$ is the effective interaction between the Peierls phases
on different chains per one link and $Z$ is the number of
neighbouring chains (See Fig.5a). Therefore, in the presence of
the interchain interaction the soliton-antisoliton pair forms a
string and $Z \sigma$ may be called the string tension. The
linear potential of the string confines the soliton motion, {\em
i.e.} kink and antikink cannot move far from each other in an
ordered phase.
We can use this picture to estimate the value of the temperature
of the $3d$ SP transition. The concentration of thermally
excited kinks in an isolated chain is $n = \exp (- E_s / T)$ and
the average distance between them is ${\bar d(T)} = n^{-1} = \exp
(E_s / T)$. At the same time, the average distance between the
kinks connected by string is ${\bar l}(T) = T / (Z \sigma)$. The
three-dimensional phase transition (ordering of phases of the
lattice distortions of different chains) occurs when ${\bar l}(T)
\sim {\bar d(T)}$, {\em i.e.}
\begin{equation}
\frac{T_{SP}}{Z \sigma}
\sim \exp \left( \frac{E_s}{T_{SP}} \right) \;\;,
\end{equation}
or
\begin{equation}
\label{TSP}
T_{SP} \sim \frac{E_s}{\ln \frac{E_s}{Z \sigma}}
\sim \frac{\lambda^{\prime} J}
{\ln \left(
\frac{\lambda^{\prime} J}{Z \sigma} \right)}
\end{equation} where
we use the relation $E_s \sim \Delta \sim \lambda^{\prime} J$
\cite{3}. In this picture at $T < T_{SP}$ the phases of SP
distortions of different chains are correlated and all solitons
are paired. At $T > T_{SP}$ local SP distortions
still exist in each chain, but there is no long range order.
Therefore, the SP transition in this picture is of a
``deconfinement'' type, which is somewhat similar to the
Kosterlitz-Thouless transition in $2d$-systems.
The approach described above is valid when the value of the
interchain interaction $\sigma$ is much smaller than $J$. Using
Eq.(\ref{TSP}) with $J = 100$K and $\lambda^{\prime} \sim 0.2$
\cite{5} we get $\sigma \sim 0.04J$, which in view of the
logarithmic dependence of $T_{SP}$ on $\sigma$ in (\ref{TSP}) is
just enough for applicability of the results presented above
(these are, of course, only an order of magnitude estimates).
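The estimate of $\sigma$ just quoted amounts to inverting the deconfinement condition $T_{SP} = Z\sigma \exp(E_s/T_{SP})$. The following minimal sketch reproduces the numbers (bisection on the monotone function $f(T) = T - Z\sigma\,e^{E_s/T}$; $E_s \sim \lambda^{\prime} J$ and all values are the rough estimates used above, with $Z\sigma$ treated as a single parameter):

```python
import math

def t_sp(E_s, Z_sigma, lo=1.0, hi=100.0):
    """Solve the deconfinement condition T = Z*sigma*exp(E_s/T) by
    bisection; f(T) = T - Z*sigma*exp(E_s/T) is monotone increasing."""
    f = lambda T: T - Z_sigma * math.exp(E_s / T)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E_s = 0.2 * 100.0                       # E_s ~ lambda' * J, in K
Z_sigma = 14.0 * math.exp(-E_s / 14.0)  # inverted: value reproducing T_SP = 14 K
print(round(Z_sigma, 2))                # ~3.4 K
print(round(t_sp(E_s, Z_sigma), 1))     # 14.0
```

With $T_{SP} = 14$K and $E_s = 20$K this gives $Z\sigma \approx 3.4$K, {\em i.e.} $\sigma$ of order a few percent of $J$, in line with the estimate above.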
\section{Solitons in Doped Systems}
As we have seen above, Zn and Si, the two most studied dopants of
CuGeO$_3$, lead to an effective interruption of the spin chains into
segments of finite length. The segments with an even number of Cu
ions can have a perfect SP ordering, while the odd segments
behave differently: one spin $\frac{1}{2}$ remains unpaired,
which means that the ground state of an odd segment contains a
soliton (similarly to what happens in the electronic Peierls
materials \cite{SU}). One can show that the soliton is repelled
by the ends of the segment, and in an isolated odd segment the
situation would look like in Fig.4: the soliton carrying spin
$\frac{1}{2}$ would prefer to stay in the middle of a segment.
This conclusion is in contrast with the usual assumption that the
magnetic moments induced by doping are localized near the
impurities.
The situation, however, changes when we take into account the
interchain interaction. As we have seen in the previous section,
moving a soliton along a chain costs an energy which grows
linearly with the distance. As is illustrated in Fig.5a, this
provides a force pulling the soliton back to the impurity. Thus
the soliton moves in a potential shown in Fig.5b: it is repelled
from the impurity with the potential $V_{imp}(r) \sim J \exp (- r /
\xi_0)$, while the interchain interaction gives the potential
$V_{conf}(r) \sim Z \sigma r$, providing the restoring force. As
a result, the soliton is located at a distance $\sim \xi_0$ from
the impurity, so that, in a sense, we return to the traditional
picture. One should keep in mind, however, that for a weak
interchain interaction the total potential $V_{imp} + V_{conf}$
is rather shallow, and at finite temperature the soliton can move
rather far from the impurity. It seems that it should be possible to
check this picture experimentally, {\em e.g.} by detailed NMR
study of doped SP compounds (cf. the results of M. Chiba {\em
et al}, this conference).
\section{Phase Diagram of Doped Spin-Peierls Systems}
One can use this picture to describe qualitatively
the dependence of the phase transition temperature $T_{SP}$ on
the concentration of dopants $x$. Similar to the treatment given
in section 4, we compare an average distance between the kink and
the nearest end of the segment ${\bar l}(T) \sim T / (Z \sigma)$
with the average length of the segment ${\bar d} \sim 1 / x$.
This gives,
\begin{equation}
\label{T(x)}
T_{SP}(x) \sim \frac{Z \sigma}{x} \;\;.
\end{equation}
This result has two limitations. At large $x$, when the average
length of a segment becomes of the order of the soliton size, $1 /
x \sim \xi_0$, there will be no ordering even at $T = 0$. Thus
$x \sim \xi_0^{-1}$ is an absolute limit beyond which the $3d$
ordering disappears. Using our estimate $\xi_0 \sim 8$, this
gives $x_{max} \sim 15 \%$. On the other hand, the result
(\ref{T(x)}) is also not valid at very small $x$. When an
average size of segment ${\bar d(x)} \sim 1 / x$ becomes
sufficiently large, the thermally induced solitons become as
important as the solitons induced by disorder. In this case the total
concentration of solitons is
\begin{equation}
n_{tot} = n_{imp} + n_{therm} = x + e^{- \frac{E_s}{T}}\;\;,
\end{equation}
and one should compare ${\bar l}(T)$ with
$n_{tot}^{-1}$. For $x = 0$ we return to
Eq.(\ref{TSP}), while for small $x$ we get,
\begin{equation}
\label{small}
T_{SP}(x) = T_{SP}(0) (1 - \alpha x)\;\;,
\end{equation}
where the coefficient $\alpha$ is
\begin{equation}
\alpha \sim \frac{E_s}
{Z \sigma \ln \left( \frac{E_s}{Z \sigma} \right)}.
\end{equation}
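Both limiting formulas follow from the single condition ${\bar l}(T_{SP}) \sim n_{tot}^{-1}$, {\em i.e.} $T_{SP}\,(x + e^{-E_s/T_{SP}}) = Z\sigma$, which is easy to solve numerically. A minimal sketch (the function name and the parameter values, taken as the rough estimates of section 4, are for illustration only):

```python
import math

def t_sp_doped(x, E_s, Z_sigma, lo=1e-6, hi=1e4):
    """T_SP(x) from comparing l_bar(T) = T/(Z*sigma) with 1/n_tot:
    solve T*(x + exp(-E_s/T)) = Z*sigma; the left side is monotone
    increasing in T, so bisection converges to the unique root."""
    g = lambda T: T * (x + math.exp(-E_s / T)) - Z_sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E_s, Z_sigma = 20.0, 3.36    # in K, the rough estimates of section 4
for x in (0.0, 0.02, 0.05):
    print(x, round(t_sp_doped(x, E_s, Z_sigma), 2))  # T_SP decreases with x
```

For $x = 0$ this reproduces Eq.~(\ref{TSP}), while for large $x$, when the thermal solitons are negligible, it crosses over to Eq.~(\ref{T(x)}).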
One can verify these estimates more rigorously by mapping the
spin-Peierls system onto an effective Ising model. Let us
associate the classical Ising variable $\tau = \pm 1$ with the
two possible types of SP ordering (phases $0$ and $\pi$), so that
the phase $0$ (left domain in Fig.4) corresponds to $\tau = + 1$,
while the phase $\pi$ (right domain in the same figure)
corresponds to $\tau = - 1$. In this language a soliton is
a domain wall in Ising variables. Since it costs an energy
$E_s$ to create a soliton, the Hamiltonian of the intrachain
interaction in the effective Ising model can be written as,
\begin{equation}
H_{intra} = - \frac{E_s}{2} \sum_{n,\alpha}
\left( \tau_{n,\alpha} \tau_{n+1,\alpha} - 1 \right) \;\;,
\end{equation}
(here $\alpha$ is the chain index and $n$ is the site number in
chain). Similarly, an interchain interaction in terms of Ising
variables has a form,
\begin{equation}
H_{inter} = - \frac{\sigma}{2} \sum_{n,\alpha} \tau_{n,\alpha}
\sum_{\delta} \tau_{n,\alpha+\delta}\;\;,
\end{equation}
where the summation over $\delta$ goes over neighbouring chains.
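One can check directly that these two terms reproduce the string picture of section 4. A minimal sketch (a small array of chains with open boundary conditions; all numbers are illustrative):

```python
def ising_energy(tau, E_s, sigma):
    """Energy H_intra + H_inter of the effective Ising model:
    tau is a list of chains, each a list of +/-1 (open b.c.)."""
    intra = -0.5 * E_s * sum(tau[a][n] * tau[a][n + 1] - 1.0
                             for a in range(len(tau))
                             for n in range(len(tau[a]) - 1))
    inter = -0.5 * sigma * sum(tau[a][n] * tau[a + 1][n]
                               for a in range(len(tau) - 1)
                               for n in range(len(tau[a])))
    return intra + inter

E_s, sigma, L, r = 20.0, 1.0, 40, 5
ground = [[1.0] * L for _ in range(3)]
excited = [row[:] for row in ground]
for n in range(10, 10 + r):
    excited[1][n] = -1.0      # kink-antikink pair of size r in the middle chain
dE = ising_energy(excited, E_s, sigma) - ising_energy(ground, E_s, sigma)
print(dE)   # 2*E_s + Z*sigma*r with Z = 2 here: 50.0
```

Flipping a segment of length $r$ costs $2E_s$ for the two domain walls plus $Z\sigma r$ for the string, with $Z = 2$ neighbouring chains in this geometry.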
One can also introduce impurities in this effective Ising model.
The detailed treatment of this model will be given in a
separate publication \cite{22}. Here we limit ourselves to
presenting in Fig.6 the results of the numerical solution of the
equation for the transition temperature for several values of the
interchain interaction $\sigma$. The value of $J$ was adjusted to
make the transition temperature equal to $14$K for each value of
$\sigma$. The values of $\sigma$ (from the top curve to the bottom
one) are (in K) 3; 2; 1; 0.5; 0.1. We see that the behaviour of
$T_{SP}(x)$ agrees with Eq.~(\ref{T(x)}) at large $x$ and with
Eq.~(\ref{small}) at small $x$, and for values of the parameter
$\sigma$ not much different from the estimates made in section 4 one
can obtain a reasonable form of the phase diagram for CuGeO$_3$.
(One should also take into account that each Ge is coupled to two
chains, so that its substitution by Si introduces two
interruptions in exchange interaction, whereas Zn interrupts only
one chain.)
As follows from our picture, each soliton introduced by doping
carries an uncompensated spin $\frac{1}{2}$. One can easily show
that in the vicinity of the domain wall where the SP order
parameter is small there exist antiferromagnetic spin
correlations (see Fig.7). Both these correlations and the SP
distortion change on a length scale $\xi_0$.
Antiferromagnetic correlations on neighbouring kinks may overlap,
which could, in principle, lead to the long-range antiferromagnetic
ordering. Thus it is possible to obtain a regime in which
the SP and antiferromagnetic orderings coexist. To study this
question in detail one must also take into account the
interchain exchange interaction. This question is now under
investigation \cite{23}.
\section{Concluding Remarks}
To summarize we have a rather coherent picture of the main
properties of the SP system CuGeO$_3$. The treatment given in
the first part of this paper allows one to explain many of the
features of this compound, which, at first glance, look rather
puzzling, such as the strong anomalies observed in the direction
perpendicular to chains rather than parallel to them.
Furthermore we showed how the local geometry and the side-groups
(Ge, Si) lead to a rather detailed microscopic
picture of the distortions in CuGeO$_3$ both above and below
$T_{SP}$. These results are largely specific for this particular
compound, although some of the conclusions ({\em e.g.} the role
of side groups in superexchange and the importance of the soft
bending modes) are of a more general nature.
The results of the second part of the paper, though inspired by
the experiments on CuGeO$_3$, have a general character, {\em
e.g.} the conclusions about the domain wall structure of the
elementary excitations, confinement of solitons caused by the
interchain interaction, disorder-induced solitons \cite{MFK},
etc. At the same time, this general treatment provides a
reasonable explanation of the suppression of $T_{SP}$ by doping
and allows one to describe, at least qualitatively, the phase diagram
of doped CuGeO$_3$.
We are grateful to J.~Knoester, A.~Lande, O.~Sushkov and
G.~Sawatzky for useful comments. D.~Kh. is especially grateful
to B. B\"uchner for extremely useful discussions and for
informing him of many experimental results prior to publication.
This work was supported by the Dutch Foundation for Fundamental
Studies of Matter (FOM).
\section{Introduction}
Light-front (LF) coordinates are natural coordinates for describing
scattering processes that involve large momentum transfers ---
particularly deep inelastic scattering. This is because correlation
functions at high energies are often dominated by the free quark
singularities which are along light-like direction.
This is one of the main motivations for formulating field theories
in the LF framework \cite{mb:adv}. But light-front field theories have
another peculiar feature, namely naive reasoning suggests that
the vacuum of all LF Hamiltonians is equal to the Fock vacuum
\cite{ho:vac,mb:adv}. It thus {\it appears} as if LF Hamiltonians
(at least those without the so-called zero-modes, i.e. modes
with $k^+=0$)
cannot be able to describe theories where the vacuum has any nontrivial
features, such as QCD --- where chiral symmetry is believed to
be spontaneously broken. Even if one is only interested in parton
distributions, one might be worried about using a framework where
the vacuum is just empty space to describe a theory like QCD.
However, it is not quite so easy to dismiss LF field theory as
the following few examples show: One of the first field theories that
was completely solved in the LF formulation was $QCD_{1+1}(N_C\rightarrow
\infty)$ \cite{thooft}. 't Hooft's solution did not include any
zero-modes and therefore he had a trivial vacuum.
Nevertheless, his spectrum agreed perfectly well with the numerical
results from calculations based on equal time Hamiltonians \cite{wilets}.
Beyond that, application of current algebra sum rules to spectrum
and decay constants obtained from the LF calculation formally gave
nonzero values for the quark condensates that also agreed with numerical results at equal time \cite{zhit}.
This peculiar result could be understood by defining
LF field theory through a limiting procedure, which showed that some
observables (here: spectrum and decay constants) have a smooth and continuous
LF limit, while others (here quark condensates) have a discontinuous LF limit.
Other examples have been studied, in which it was still possible
to demonstrate equivalence between LF results and equal time results
nonperturbatively, provided the LF Hamiltonian was appropriately renormalized
\cite{mbsg,fr:eps,mb:parity}. Even though these examples are just 1+1
dimensional field theories, it is generally believed among the optimists in
the field \cite{all:lftd,dgr:elfe,mb:adv} that it should be possible
in 3+1 dimensional field theories as well to achieve equivalence between
spectra of LF Hamiltonian and equal time Hamiltonians by appropriate
renormalization. However, no nontrivial examples (examples that
go beyond mean-field calculations) supporting such a belief have
existed so far.
In this paper, we will give a 3+1 dimensional toy model that can
be solved both in a conventional framework (here by solving the
Schwinger-Dyson equations) as well as in the LF framework.
We will unashamedly omit zero modes as explicit degrees of freedom
throughout the calculation. Nevertheless, we are able to show
that appropriate counter-terms to the LF Hamiltonian
are sufficient to demonstrate equivalence of the spectrum and other
physical properties between the two frameworks.
\section{A Simple Toy Model}
The model that we are going to investigate consists of fermions
with some ``color'' degrees of freedom (fundamental representation)
coupled to the transverse component of a vector field, which also
carries ``color'' (adjoint representation). The vector field does not
self-interact.\footnote{Note that, for finite $N_C$, box diagrams with
fermions would induce four boson counter-terms, which we will ignore
here since we will consider the model only in the large $N_C$ limit.}
Furthermore, we will focus on the limit of an infinite number
of colors, $N_C\rightarrow \infty$ ($g$ fixed), which will render the model solvable in the Schwinger-Dyson approach:
\begin{equation}
{\cal L} = \bar{\psi}\left( i \partial\!\!\!\!\!\!\not \;\; - m -
\frac{g}{\sqrt{N_C}}{\vec \gamma}_\perp {\vec A}_\perp \right)\psi - \frac{1}{2}
\mbox{tr} \!\!\left( {\vec A}_\perp \Box {\vec A}_\perp +
\lambda^2 {\vec A}_\perp^2\right).
\end{equation}
By the ``$\perp$ component'' we mean here the $x$ and $y$ components.
Furthermore we will impose a transverse momentum cutoff on the fields
and we will consider the model at fixed cutoff.
Also, even though we are interested in the chiral limit of this
model, we will keep a finite quark mass since the LF formulation
has notorious difficulties in the strict $m=0$ case.
Those difficulties can be avoided if one takes $m>0$ and
considers the limit $m\rightarrow 0$.
Even though certain elements of the model bear some resemblance
to terms that appear in the QCD Lagrangian, the model seems
a rather bizarre construction. However, there is a reason for
this: what we are interested in is an LF investigation of a model
that exhibits spontaneous breakdown of chiral symmetry. Furthermore,
we wanted to be able to perform a ``reference calculation'' in a
conventional (non-LF) framework. In the large $N_C$ limit, the rainbow
approximation for the fermion self-energy becomes exact, which allows
us to solve the model exactly in the Schwinger-Dyson
approach. The vector coupling of the bosons to the fermions
was chosen because it is chirally invariant and because a similar
coupling occurs in regular QCD. The restriction to the $\perp$
component of the fields avoids interactions involving
``bad currents''.
Finally, using a transverse momentum cutoff both in the
Schwinger-Dyson approach and in the LF calculation should allow us
to directly compare the two formulations.
\subsection{Schwinger-Dyson Solution}
Because the above toy model lacks full covariance
(there is no symmetry relating longitudinal and transverse coordinates)
the full fermion propagator is of the form
\begin{equation}
S_F(p^\mu) = \not \! \! p_L S_L({\vec p}_L^2,{\vec p}_\perp^2)
+ \not \! \! \;p_\perp S_\perp({\vec p}_L^2,{\vec p}_\perp^2)+S_0({\vec p}_L^2,{\vec p}_\perp^2),
\end{equation}
where $\not \! \! k_L \equiv k_0\gamma^0 + k_3 \gamma^3$ and
$\not \! \! k_\perp \equiv k_1\gamma^1 + k_2 \gamma^2$.
On very general grounds, it should always be possible to write
down a spectral representation for $S_F$\footnote{What we need
is that the Green's functions are analytic except for poles
and that the location of the poles are consistent with
longitudinal boost invariance (which is manifest in our model).
The fact that the model is not invariant under transformations
which mix $p_L$ and $p_\perp$ does not prevent us from writing
down a spectral representation for the dependence on $p_L$.
}
\begin{equation}
S_i({\vec p}_L^2,{\vec p}_\perp^2) = \int_0^\infty dM^2 \frac{\rho_i(M^2,{\vec p}_\perp^2)}
{{\vec p}_L^2-M^2+i\varepsilon},
\label{eq:sansatz}
\end{equation}
where $i=L,\perp,0$.
Note that this spectral representation differs from what one
usually writes down in a covariant theory, namely
$S=\int_0^\infty d\tilde{M}^2 \tilde{\rho}(\tilde{M}^2)/({\vec p}_L^2-{\vec p}_\perp^2-\tilde{M}^2)$, i.e.
with ${\vec p}_\perp^2$ in the denominator, since we are not
assuming full covariance here. The covariant form is a special case of
Eq. (\ref{eq:sansatz}) with $\rho(M^2,{\vec p}_\perp^2)=
\int_0^\infty d\tilde{M}^2 \tilde{\rho}
(\tilde{M}^2)\delta(M^2-\tilde{M}^2-{\vec p}_\perp^2)$.
Using the above ansatz (\ref{eq:sansatz})
for the spectral densities, one finds for the
self-energy
\begin{eqnarray}
\Sigma(p^\mu) &\equiv& ig^2 \int \frac{d^4k}{(2\pi )^4} {\vec \gamma}_\perp
S_F(p^\mu-k^\mu){\vec \gamma}_\perp \frac{1}{k^2-\lambda^2+i\varepsilon}
\nonumber\\
&=& \not \! \! p_L\Sigma_L({\vec p}_L^2,{\vec p}_\perp^2) +
\Sigma_0({\vec p}_L^2,{\vec p}_\perp^2),
\label{eq:sd1}
\end{eqnarray}
where
\begin{eqnarray}
\Sigma_L({\vec p}_L^2,{\vec p}_\perp^2) &=& g^2 \!\!\int_0^\infty \!\!\!\!\!dM^2 \!\int_0^1\!\!\!dx
\!\!\!\int \!\frac{d^2k_\perp}{8\pi^3}
\frac{(1-x)\rho_L(M^2, ({\vec p}-{\vec k})_\perp^2)}{D}
\nonumber\\
\Sigma_0({\vec p}_L^2,{\vec p}_\perp^2) &=& -g^2 \!\!\int_0^\infty \!\!\!\!\!dM^2 \!\int_0^1\!\!\!dx
\!\int \!\frac{d^2k_\perp}{8\pi^3}
\frac{\rho_0(M^2, ({\vec p}-{\vec k})_\perp^2)}{D}.
\nonumber\\
\label{eq:sd2}
\end{eqnarray}
and
\begin{equation}
D=x(1-x){\vec p}_L^2 - xM^2
-(1-x)\left({\vec k}_\perp^2+\lambda^2\right)
\end{equation}
Note that $\Sigma_\perp$ vanishes, since $\sum_{i=1,2} \gamma_i \gamma_j
\gamma_i=0$ for $j=1,2$.
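For completeness (this small step is left implicit here), the identity follows from the Clifford algebra $\{\gamma_i,\gamma_j\}=2g_{ij}$ with $g_{11}=g_{22}=-1$; for example, for $j=1$,

```latex
\sum_{i=1,2}\gamma_i\gamma_1\gamma_i
  = \gamma_1\gamma_1\gamma_1 + \gamma_2\gamma_1\gamma_2
  = -\gamma_1 - \gamma_1\gamma_2\gamma_2
  = -\gamma_1 + \gamma_1 = 0,
```

and similarly for $j=2$.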
Self-consistency then requires that
\begin{equation}
S_F = \frac{1}{\not \! \! p_L\left[1-\Sigma_L({\vec p}_L^2,{\vec p}_\perp^2) \right]
+ \not \! \! p_\perp - \left[m+\Sigma_0({\vec p}_L^2,{\vec p}_\perp^2)\right]}
\label{eq:sd3}
\end{equation}
In the above equations we have been sloppy about cutoffs in order
to keep the equations simple, but this can be easily remedied by
multiplying each integral by a cutoff on the fermion momentum, such as
$\Theta\left(\Lambda^2_\perp-({\vec p}-{\vec k})_\perp^2\right)$.
In principle, the set of equations
[Eqs. (\ref{eq:sd1}),(\ref{eq:sd2}),(\ref{eq:sd3})]
can now be used to determine the spectrum of the model.
But we are not going to do this here since we are more interested
in the LF solution to the model. However, we would still like
to point out that, for large enough $g$, one obtains a self-consistent
numerical solution to the Euclidean version of the model which
has a non-vanishing scalar piece --- even for vanishing current
quark mass $m$, i.e. chiral symmetry is spontaneously
broken and a dynamical mass is generated for the fermion in this model.
\subsection{LF Solution}
A typical framework that people use when solving LF quantized
field theories is discrete light-cone quantization (DLCQ)
\cite{pa:dlcq}. Since it is hard to take full advantage of the
large $N_C$ limit in DLCQ, we prefer to use a Green's function
framework based on a 4 component formulation of the model.
In a LF formulation of the model, the fermion propagator
(to distinguish the notation from the one above, we denote
the fermion propagator by $G$ here) should be of the form
\footnote{Note that in a LF formulation, $G_+$ and $G_-$ are
not necessarily the same.}
\begin{eqnarray}
G(p^\mu) &=& \gamma^+ p^-G_+(2p^+p^-,{\vec p}_\perp^2)
+\gamma^- p^+G_-(2p^+p^-,{\vec p}_\perp^2)
\nonumber\\
& &+ \not \!\!p_\perp G_\perp(2p^+p^-,{\vec p}_\perp^2)+
G_0(2p^+p^-,{\vec p}_\perp^2).
\label{eq:lf1}
\end{eqnarray}
Again we can write down spectral representations
\begin{eqnarray}
G_i(2p^+p^-,{\vec p}_\perp^2) = \int_0^\infty dM^2
\frac{\rho_i^{LF}(M^2,{\vec p}_\perp^2)}
{2p^+p^--M^2+i\varepsilon},
\label{eq:speclf}
\end{eqnarray}
where $i=+,-,\perp,0$. This requires some explanation:
On the LF, one might be tempted to allow for two terms
in the spectral decomposition of the term proportional to
$\gamma^+$, namely
\begin{equation}
tr(\gamma^-G)\propto
\int_0^\infty dM^2
\frac{p^-\rho_a(M^2,{\vec p}_\perp^2)+\frac{1}{p^+}\rho_b(M^2,{\vec p}_\perp^2)}
{2p^+p^--M^2+i\varepsilon}.
\label{eq:rhoab}
\end{equation}
However, upon writing
\begin{equation}
\frac{1}{p^+}=\frac{1}{p^+M^2}\left(M^2-2p^+p^-\right)+\frac{2p^-}{M^2}
\end{equation}
one can cast Eq. (\ref{eq:rhoab}) into the form
\begin{eqnarray}
tr(\gamma^-G)&\propto&
\int_0^\infty dM^2p^-
\frac{\rho_a(M^2,{\vec p}_\perp^2)+\frac{2}{M^2}\rho_b(M^2,{\vec p}_\perp^2)
}{2p^+p^--M^2+i\varepsilon}
\nonumber\\
& &-\frac{1}{p^+}\int_0^\infty dM^2
\frac{\rho_b(M^2,{\vec p}_\perp^2)}{M^2},
\label{eq:rhoaab}
\end{eqnarray}
which is of the form in Eq. (\ref{eq:speclf}) plus an energy independent
term. The presence of such an additional energy independent
term would spoil the high energy behavior of the model \cite{brazil}:
In a LF Hamiltonian, not all coupling constants are arbitrary.
In many examples, 3-point couplings and the 4-point couplings
must be related to one another so that the high energy behavior
of scattering via the 4-point interaction and via the iterated
3-point interaction cancel \cite{brazil}. If one does not
guarantee such a cancellation then the high-energy behavior of the
LF formulation differs from the high-energy behavior in covariant
field theory and in addition one often also gets a spectrum that is unbounded
from below. In Eq. (\ref{eq:rhoaab}), the energy independent
constant appears if the coupling constants of the ``instantaneous
fermion exchange'' interaction in the LF Hamiltonian and the
boson-fermion vertex are not properly balanced.
In the following we will assume that one has started with an
ansatz for the LF Hamiltonian with the proper high-energy behavior,
i.e. we will assume that there is no such energy independent
piece in Eq. (\ref{eq:rhoaab}).
The LF analog of the self-energy equation is obtained by
starting from an expression similar to Eq.(\ref{eq:sd2}) and
integrating over $k^-$. One obtains
\begin{equation}
\Sigma^{LF} = \gamma^+\Sigma_+^{LF}+\gamma^-\Sigma_-^{LF}
+\Sigma_0^{LF},
\end{equation}
where
\begin{eqnarray}
\!\Sigma_+^{LF}\!(p) \!&=&\! g^2 \!\!\!\int_0^\infty \!\!\!\!\!\!dM^2
\!\!\!\int_0^{p^+}\!\!\!\!\!\!\!dk^+
\!\!\!\!\int \!\!\frac{d^2k_\perp}{16\pi^3}
\frac{\!\!\left(\!p^-\!\!-\frac{\lambda^2+{\vec k}_\perp^2}
{2k^+}\!\right)\!\rho_+^{LF}(M^2\!\!,
({\vec p}-{\vec k})_\perp^2)}{k^+(p^+-k^+)D^{LF}}
\nonumber\\
& &+ CT
\nonumber\\
\!\Sigma_-^{LF}\!(p) \!&=&\! g^2 \!\!\!\int_0^\infty \!\!\!\!\!\!dM^2 \!\!\!\int_0^{p^+}\!\!\!\!\!\!\!dk^+
\!\!\!\!\int \!\!\frac{d^2k_\perp}{16\pi^3}
\frac{\left(p^+-k^+\right)\rho_-^{LF}(M^2, ({\vec p}-{\vec k})_\perp^2)}{k^+(p^+-k^+)D^{LF}}
\nonumber\\
\!\Sigma_0^{LF}\!(p) \!&=&\! -g^2 \!\!\!\int_0^\infty \!\!\!\!\!\!dM^2 \!\!\!\int_0^{p^+}\!\!\!\!\!\!\!dk^+
\!\!\!\!\int \!\!\frac{d^2k_\perp}{16\pi^3}
\frac{\rho_0^{LF}(M^2, ({\vec p}-{\vec k})_\perp^2)}{k^+(p^+-k^+)D^{LF}}.
\label{eq:lf2}
\end{eqnarray}
where
\begin{equation}
D^{LF}=p^- - \frac{M^2}{2(p^+-k^+)} - \frac{\lambda^2+{\vec k}_\perp^2}{2k^+}
\end{equation}
and CT is an energy ($p^-$)
independent counter-term. The determination of this counter-term, such
that one obtains a complete equivalence with the Schwinger-Dyson
approach, is in fact the main achievement of this paper.
First we want to make sure that the counter-term renders the self-energy
finite. This can be achieved by performing a ``zero-energy subtraction''
with a free propagator,
analogous to adding self-induced inertias to a LF Hamiltonian, yielding
\begin{equation}
CT= g^2 \int_0^{p^+}\!\!\!\!\!\!\!dk^+
\!\!\!\!\int \!\!\frac{d^2k_\perp}{16\pi^3}
\frac{\!\!\frac{\lambda^2+{\vec k}_\perp^2}{2k^+}\!}{k^+(p^+-k^+)D_0^{LF}}
+\frac{\Delta m^2_{ZM}}{2p^+},
\label{eq:ct1}
\end{equation}
where
\begin{equation}
D_0^{LF}= - \frac{M_0^2+({\vec p}-{\vec k})_\perp^2}{2(p^+-k^+)} - \frac{\lambda^2+{\vec k}_\perp^2}{2k^+}
\end{equation}
and where we denoted the finite piece by $\Delta m^2_{ZM}$ (for {\it zero-mode}), since
we suspect that it arises from the dynamics of the zero-modes.
$M_0^2$ is an arbitrary scale parameter. We will construct
the finite piece ($\Delta m^2_{ZM}$) so that there is no
dependence on $M_0^2$ left in $CT$ in the end.
At this point, only the infinite part of $CT$ is unique \cite{brazil}, since it
is needed to cancel the infinity in the $k^+$ integral
in Eq. (\ref{eq:lf2}), while the
finite (w.r.t. the $k^+$ integral) piece (i.e. $\Delta m^2_{ZM}$) seems arbitrary.\footnote{Note that what we called the ``finite piece''
w.r.t. the $k^+$ integral is still divergent when one integrates over
$d^2k_\perp$ without a cutoff!}
Below we will show that it is not arbitrary and only
a specific choice for $\Delta m^2_{ZM}$
leads to agreement between the SD and the LF approach.
Note that the equation for the self-energy can also be written in the
form
\begin{eqnarray}
\!\Sigma_+^{LF}\!(p) \!&=&\! g^2
\!\!\!\int_0^{p^+}\!\!\!\!\frac{dk^+}{k^+}
\!\!\!\!\int \!\!\frac{d^2k_\perp}{8\pi^3}
p^-_F
G_+\left(2p^+_Fp^-_F,{\vec p}_{\perp F}^2\right)
+ CT
\nonumber\\
\!\Sigma_-^{LF}\!(p) \!&=&\! g^2
\!\!\!\int_0^{p^+}\!\!\!\!\frac{dk^+}{k^+}
\!\!\!\!\int \!\!\frac{d^2k_\perp}{8\pi^3}
p^+_F
G_-\left(2p^+_Fp^-_F,{\vec p}_{\perp F}^2\right)
\nonumber\\
\!\Sigma_0^{LF}\!(p) \!&=&\! -g^2
\!\!\!\int_0^{p^+}\!\!\!\!\frac{dk^+}{k^+}
\!\!\!\!\int \!\!\frac{d^2k_\perp}{8\pi^3}
G_0\left(2p^+_Fp^-_F,{\vec p}_{\perp F}^2\right),
\label{eq:lf2b}
\end{eqnarray}
where
\begin{eqnarray}
p^+_F&\equiv& p^+-k^+ \nonumber\\
p^-_F&\equiv& p^--\frac{\lambda^2+{\vec k}_\perp^2}{2k^+}\nonumber\\
{\vec p}_{\perp F} &\equiv&{\vec p}_\perp-{\vec k}_\perp
\end{eqnarray}
One can prove this by simply comparing expressions!
Bypassing the use of the spectral function greatly simplifies
the numerical determination of the Green's function in a self-consistent
procedure.
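As an illustration of what such a self-consistent procedure looks like in practice, consider the following sketch. This is a hypothetical toy analog, not the model of the text: a one-dimensional Euclidean scalar self-energy equation with an assumed boson kernel, iterated to a fixed point on a momentum grid without ever introducing a spectral function.

```python
import numpy as np

# Hypothetical toy sketch (NOT the model in the text): a one-dimensional
# Euclidean scalar analog of a self-consistent self-energy equation,
#   Sigma(p) = (g^2 / 2 pi) * Int dk  G(k) / ((p - k)^2 + lam2),
#   G(k)     = 1 / (k^2 + m2 + Sigma(k)),
# solved by direct fixed-point iteration on a momentum grid, i.e. by
# bypassing the spectral function altogether.

def solve_self_consistent(g2=0.5, m2=1.0, lam2=1.0,
                          n=201, pmax=10.0, tol=1e-12, max_iter=500):
    p = np.linspace(-pmax, pmax, n)
    dp = p[1] - p[0]
    # kernel[i, j] = 1 / ((p_i - p_j)^2 + lam2)
    kernel = 1.0 / ((p[:, None] - p[None, :])**2 + lam2)
    sigma = np.zeros(n)                 # start from the free propagator
    for _ in range(max_iter):
        G = 1.0 / (p**2 + m2 + sigma)   # current dressed propagator
        sigma_new = (g2 / (2.0 * np.pi)) * dp * (kernel @ G)
        if np.max(np.abs(sigma_new - sigma)) < tol:
            return p, sigma_new
        sigma = sigma_new
    raise RuntimeError("fixed-point iteration did not converge")

p, sigma = solve_self_consistent()
```

For weak coupling the map is a contraction and the iteration converges rapidly; the same structure (discretized momentum sums, iterate Green's function and self-energy until stable) is what a numerical treatment of Eq. (\ref{eq:lf2b}) amounts to.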
\subsection{DLCQ solution}
There are reasons why one might be sceptical about the
Green's function approach to the LF formulation of the model:
First we used a four-component formulation which resembles
a covariant calculation. Furthermore, we introduced spectral
representations for the Green's functions and assumed certain
properties [Eq.(\ref{eq:speclf})].
Since we were initially also sceptical, we performed the following
calculation: First we formulated the above model as a Hamiltonian
DLCQ problem \cite{pa:dlcq} with anti-periodic boundary conditions
for the fermions and periodic boundary conditions for the bosons in the
longitudinal direction. Zero modes ($k^+=0$) were omitted.
This is a standard procedure and we will not give
any details here. The only nontrivial step was the choice of
the kinetic energy for the fermion, which we took, using
Eq. (\ref{eq:ct1}), to be
\begin{equation}
T=\sum_{{\vec p}_\perp}\sum_{p^+=1,3,..}^{\infty} T(p)\left(b^\dagger_pb_p
+d^\dagger_pd_p\right),
\end{equation}
with
\begin{eqnarray}
T(p)&=&\frac{m^2+{\vec p}_\perp^2+\Delta m^2_{ZM}}{p^+} \\
& &+ \sum_{{\vec q}_\perp} \sum_{q^+=1,3,..}^{p^+-2}
\frac{1}{ {q^+}^2(p^+-q^+)}\frac{m^2+{\vec q}_\perp^2}{\frac{\lambda^2+({\vec p}_\perp-{\vec q}_\perp)^2}{p^+-q^+}
+\frac{m^2+{\vec q}_\perp^2}{q^+}}
\nonumber
\end{eqnarray}
(some cutoff, such as a sharp momentum cutoff, is implicitly assumed).
Having obtained the eigenvalues of the DLCQ Hamiltonian,
we then determined the Green's function self-consistently,
by iteratively solving Eq. (\ref{eq:lf2b}),
using the same cutoffs as in the DLCQ calculation:
the same transverse momentum cutoff and discrete $k^+$ summations instead
of the integrals.
The result was that the invariant mass at the first pole of the
self-consistently determined Green's function coincides to at least
10 significant digits (!) with the invariant mass of the physical fermion
as determined from the DLCQ diagonalization.
This result was independent of the cutoff --- as long as the same
cutoff was used in both the Green's function and the DLCQ approach.
This proves that the self-consistent Green's function calculation and
the DLCQ calculation are in fact completely equivalent for our toy model. This is a very useful result, since it allows us to formally
perform the continuum limit (replace sums by integrals)
--- a step that is clearly impossible for a DLCQ calculation.
\subsection{Comparing the LF and SD solutions}
Having established the equivalence between the Green's function method and
the DLCQ approach, we can now proceed to compare the Green's function
approach (in the continuum) with the Schwinger-Dyson approach.
Motivated by considerations in Ref.\cite{mb:adv}, we make the
following ansatz for $\Delta m^2_{ZM}$:
\begin{equation}
\Delta m^2_{ZM} = g^2\int_0^\infty \!\!\!\!dM^2 \!\!\int
\!\!\frac{d^2k_\perp}{8\pi^3}
\rho_+^{LF}(M^2,{\vec p}_{F\perp}^2) \ln \frac{M^2}{M_0^2+{\vec p}_{F\perp}^2}.
\label{eq:zm}
\end{equation}
The motivation for this particular ansatz becomes obvious once
we rewrite the expression for $\Sigma_+^{LF}$:
For this purpose, we first note that
\begin{eqnarray}
\frac{p^--\frac{\lambda^2+{\vec k}_\perp^2}{2k^+}}{k^+(p^+-k^+)D^{LF}}
&+&
\frac{\frac{\lambda^2+{\vec k}_\perp^2}{2k^+}}{k^+(p^+-k^+)D_0^{LF}}
\\
= \frac{p^-\frac{p^+-k^+}{p^+}}{k^+(p^+-k^+)D^{LF}}
&-& \frac{1}{p^+}\frac{\partial}{\partial k^+} \ln \left[\frac{D^{LF}}{D_0^{LF}}\right].
\nonumber
\end{eqnarray}
Together with the normalization condition\\
$\int_0^\infty dM^2
\rho_+^{LF}(M^2,{\vec k}_\perp^2)=1$, this implies
\begin{eqnarray}
\!\Sigma_+^{LF}\!(p) \!&=&\! g^2\frac{p^-}{p^+} \!\!\!\int_0^\infty \!\!\!\!\!\!dM^2
\!\!\!\int_0^{p^+}\!\!\!\!\!\!\!dk^+
\!\!\!\!\int \!\!\frac{d^2k_\perp}{16\pi^3}
\frac{\!\!\left(p^+-k^+\right)\!\rho_+^{LF}(M^2\!\!, ({\vec p}-{\vec k})_\perp^2)}{k^+(p^+-k^+)D^{LF}}
\nonumber\\
& &-\frac{g^2}{2p^+} \int_0^\infty \!\!\!\!\!\!dM^2 \!\!\!\int \!\!\frac{d^2k_\perp}{8\pi^3}
\rho_+^{LF}(M^2,{\vec p}_{F\perp}^2)\ln \frac{M^2}{M_0^2+{\vec p}_{\perp F}^2 } \nonumber\\
& &+\frac{\Delta m^2_{ZM}}{2p^+}
\nonumber\\
&=&\! g^2\frac{p^-}{p^+} \!\!\!\int_0^\infty \!\!\!\!\!\!dM^2
\!\!\!\int_0^{p^+}\!\!\!\!\!\!\!dk^+
\!\!\!\!\int \!\!\frac{d^2k_\perp}{16\pi^3}
\frac{\!\!\left(p^+-k^+\right)\!\rho_+^{LF}(M^2\!\!, ({\vec p}-{\vec k})_\perp^2)}{k^+(p^+-k^+)D^{LF}},\nonumber\\
\end{eqnarray}
where we used our particular ansatz for $\Delta m^2_{ZM}$ [Eq. (\ref{eq:zm})].
Thus, for our particular choice for the finite piece of the kinetic
energy counter-term, the expressions for $\Sigma_+^{LF}$ and $\Sigma_-^{LF}$
are almost the same --- the only difference being the replacement of
$\rho_+^{LF}$ with $\rho_-^{LF}$ and an overall factor of $p^-/p^+$.
Furthermore, and this is the most important result of this paper,
a direct comparison (taking $x=k^+/p^+$) shows that the same spectral
densities that provide a self-consistent solution to the SD
equations (\ref{eq:sd2}) also yield a self-consistent solution to the
LF equations, provided one chooses
\begin{eqnarray}
\rho_+^{LF}(M^2,{\vec k}_\perp^2) &=&\rho_-^{LF}(M^2,{\vec k}_\perp^2)
=\rho_L(M^2,{\vec k}_\perp^2)\nonumber\\
\rho_0^{LF}(M^2,{\vec k}_\perp^2) &=&\rho_0(M^2,{\vec k}_\perp^2).
\end{eqnarray}
In particular, the physical masses
of all states (in the sector with fermion number one)
must be the same in the SD and the LF framework.
In the formal considerations above, we found it convenient to
express $\Delta m^2_{ZM}$ in terms of the spectral density.
However, this is not really necessary since one can express
it directly in terms of the Green's function
\begin{eqnarray}
\Delta m^2_{ZM}&=&g^2p^+\!\!\!\!\int_{-\infty}^0\!\!\!\!\!\!dp^-\!\!\!\!
\left.\int \!\!\frac{d^2p_\perp}{
4\pi^3} \!\!\right[
G_+(2p^+p^-,{\vec p}_\perp^2)
\label{eq:dmgreen}
\\
& &\quad \quad \quad \quad \quad \quad \quad \quad - \left.\frac{1}
{2p^+p^--{\vec p}_\perp^2-M_0^2}\right] .
\nonumber
\end{eqnarray}
Analogously, one can also perform a ``zero-energy subtraction'' in
Eq. (\ref{eq:lf2b}) with the full Green's function, i.e.
by choosing
\begin{equation}
CT=-g^2
\!\!\!\int_0^{p^+}\!\!\!\!\frac{dk^+}{k^+}
\!\!\!\!\int \!\!\frac{d^2k_\perp}{8\pi^3}
\tilde{p}^-_F
G_+\left(2p^+_F\tilde{p}^-_F,{\vec p}_{\perp F}^2\right),
\label{eq:ctilde}
\end{equation}
with $\tilde{p}^-_F=-(\lambda^2+{\vec k}_\perp^2)/2k^+$.
This expression turns out to be very useful when constructing the
self-consistent Green's function solution.
We used both ans\"atze [Eqs. (\ref{eq:dmgreen}) and
(\ref{eq:ctilde})] to determine the physical masses of the
dressed fermion. In both cases, numerical agreement with
the solution to the Euclidean SD equations was obtained.
Note that, in a canonical
LF calculation (e.g. using DLCQ) one should avoid expressions
involving $G_+$, since it is the propagator for the unphysical (``bad'')
component of the fermion field that gets eliminated by solving
the constraint equation.
However, since the model that we considered has an underlying Lagrangian
which is parity invariant, one can use $G_+=G_-$ for the self-consistent
solution and still use Eq. (\ref{eq:dmgreen}) or
Eq. (\ref{eq:ctilde}) but with $G_+$ replaced by $G_-$.
\section{Summary}
We studied a simple 3+1 dimensional model with ``QCD-inspired''
degrees of freedom which exhibits spontaneous breakdown of chiral
symmetry. The methods that we used were the Schwinger-Dyson
approach, a LF Green's function approach and DLCQ.
The LF Green's function approach was used to ``bridge'' between
the SD and DLCQ formulations in the following sense:
On the one hand, we showed analytically that the LF Green's function
solution to the model is equivalent to the SD approach.
On the other hand, we verified numerically that, upon discretizing the
momentum integrals that appear in the LF Green's function approach, one
obtains agreement between the LF Green's function approach and DLCQ.
Hence we have shown that the SD solution and the DLCQ solution are equivalent.
This remarkable result implies that even though the LF calculation was done
without explicit zero-mode
degrees of freedom, its solution contains the
same physics as the solution to the SD equation --- including dynamical
mass generation for the fermions.
However, we have also shown that the equivalence between the LF approaches and the
SD approach only holds for a very particular choice of the fermion kinetic
mass counter-term in the light-front framework.
Our calculation also showed that
the current quark mass of the SD calculation is to be identified
with the ``vertex mass'' in the LF calculation --- provided the same
cutoffs are being used in both calculations. This result makes
sense, considering that both the current quark mass and the LF vertex
mass are the only parameters that break chiral symmetry explicitly.
The mass generation for the fermion in the chiral limit of the
LF calculation occurs through the kinetic mass counter-term (which
does not break chiral symmetry) \cite{all:lftd}.\footnote{We should add
that the ``kinetic mass counter-term'' did depend on the transverse
momentum of the fermion for most cutoffs other than a transverse momentum
cutoff on the fermion.}
Our results contradict Ref. \cite{hari}, where it was suggested {\it ad hoc}
that the renormalized vertex mass remains finite in the chiral
limit to account for spontaneous breaking of chiral symmetry.
Our work presents an explicit 3+1 dimensional example showing that there is no
conflict between chiral symmetry breaking and trivial LF vacua
provided the renormalization is properly done.
In our formal considerations, we related the crucial finite piece
of the kinetic mass counter-term to the spectral density.
Several alternative determinations (which might be more suitable for a
practical calculation) are conceivable:
parity invariance for physical observables \cite{mb:parity},
more input (renormalization conditions) such as fitting the fermion
or ``pion'' mass.
However, one must be careful with this result in the following sense: although we have provided an explicit example which shows
that, even in a 3+1 dimensional model with
$\chi SB$ for $m\rightarrow 0$, LF Hamiltonians without explicit
zero-modes can give the right physics, we are still far from
understanding whether this is possible in full QCD and how
complicated the effective LF Hamiltonian for full QCD needs to be.
More work is necessary to answer these questions.
As an extension of this work we had planned to study the pion in the
chiral limit of a 1+1
dimensional version of this model using the LF framework.\footnote{Even in 1+1 dimensions, one expects a massless boson in the chiral limit because
of $N_C\rightarrow \infty$. One can show this in the SD formalism
since the solution for the self-energy equation for the fermion
also solves the Bethe-Salpeter equation for the pseudoscalar
bound state with zero mass.}
We were not able to derive an analog of the Green's function equations
for the pion, so we had to resort to a brute force DLCQ calculation.
Numerical convergence, which was acceptable for
the fermion, was very poor for the pion in the chiral limit, and
we were thus not able to demonstrate that it emerges naturally as a massless
particle. Nevertheless, we expect that other numerical techniques,
which treat the end point behavior of the LF wavefunctions more carefully
than DLCQ, should yield a massless pion for this model.
\acknowledgements
M.B. would like to acknowledge Michael Frank and Craig Roberts for very
helpful discussions on the Schwinger-Dyson solution to the model.
We thank Dave Robertson for critically reading and commenting on
a preliminary version of this paper.
This work was supported by the D.O.E. under contract DE-FG03-96ER40965
and in part by TJNAF.
\section{Introduction}
The question of the age of the bulk of the stars in elliptical galaxies
is still subject to a large debate. Opposed to the classical picture in
which Ellipticals are basically inhabited by $\sim$ 15 Gyr old stellar
populations (see e.g. Renzini 1986), O'Connell (1986), amongst others,
proposed that a substantial component of stars as young as 5 Gyr has to
be present to account for the observed spectral energy distribution of
Ellipticals. In more recent years much observational evidence has been
found to support the notion of elliptical galaxy formation at high redshift,
including the tightness of the
colour$-$central velocity dispersion ($\sigma$) relation found for Ellipticals
in Virgo and Coma by Bower, Lucey $\&$ Ellis (1992); the thinness of the
fundamental plane (Renzini $\&$ Ciotti 1993) for the Ellipticals in the
same two clusters; the modest passive evolution measured for cluster
Ellipticals at intermediate redshifts (Franx $\&$ van Dokkum 1996,
Bender, Ziegler $\&$ Bruzual 1996);
the negligible luminosity evolution observed for the
red galaxies (Lilly et al. 1995) and for early
type galaxies in clusters in the redshift range z $<$ 1 (Dickinson 1996);
the detection
of bright red galaxies at redshifts as large as 1.2 (Dickinson 1996).
On the other hand, hints for a continuous formation of Ellipticals
in a wide redshift range have also been found: the relatively
large \mbox{H$\beta$~} values measured in a sample of nearby Ellipticals, which
could indicate a prolonged star formation activity
in these galaxies, up to $\sim$ 2 Gyr ago (Gonzalez 1993, Faber et al. 1995);
the apparent paucity of high luminosity Ellipticals at z$\simeq$ 1 compared
to now (Kauffmann, Charlot $\&$ White 1996).
\par
Two competing scenarios have been proposed also for the process leading to
the formation of the bulk of the stars in
Ellipticals: early merging of lumps containing gas and stars
(e.g. Bender, Burstein $\&$ Faber 1993), in which some dissipation
plays a role in establishing the chemical structure of the outcoming
galaxy; and the merging of
early formed stellar systems, occurring in a wide redshift range, and
preferentially at late epochs, following the
hierarchical formation of structures (Kauffmann, White $\&$ Guiderdoni 1993).
\par
In order to help understand when and how elliptical galaxies formed, I have
computed synthetic spectral indices for stellar populations with a metallicity
spread, and compared them to the corresponding observations in the nuclei
of Ellipticals.
The study of line strengths in the spectra of early type galaxies has
proved to be a powerful tool for investigating the age and the metallicity
of these systems (Faber et al. 1995; Fisher, Franx and Illingworth 1995;
Buzzoni 1995b and references therein).
With few exceptions (Vazdekis et al. 1996, Bressan, Chiosi $\&$ Tantalo 1996),
most of
the authors have interpreted the observed line strengths through comparisons
with theoretical models constructed for single age, single
metallicity stellar populations (SSPs). The major results of these
studies can be summarized as follows:
\par\noindent
i) the \mbox{Mg$_2$~} indices measured in elliptical galaxies are consistent with
the notion that these systems are inhabited by old stellar populations,
the differences in \mbox{Mg$_2$~} tracing differences in average metallicity
(Buzzoni, Gariboldi $\&$ Mantegazza 1992). However, the difficulty in
determining age and metallicity separately (Renzini 1986) considerably
weakens this simple picture (Worthey 1994);
\par\noindent
ii) the \mbox{H$\beta$~} line strength offers an opportunity to break the
age$-$metallicity degeneracy, if this index is
measuring the temperature of turn-off stars (see Faber et al. 1995).
In this view, the data derived for a sample of nearby Elliptical galaxies
indicate that the ages of the stellar populations in their nuclei
span a wide range, the weakest \mbox{Mg$_2$~} galaxies being the youngest
(Gonzalez 1993, Faber et al. 1995);
\par\noindent
iii) the Magnesium to Iron abundance ratio in the nuclei of the highest
\mbox{Mg$_2$~} Ellipticals
is likely to be larger than solar (Gorgas, Efstathiou $\&$ Arag\'on Salamanca
1990; Worthey, Faber $\&$ Gonzalez 1992, hereinafter WFG; Davies, Sadler
$\&$ Peletier 1993).
Weiss, Peletier and Matteucci (1995) estimate [Mg/Fe] ranging from 0.3
to 0.7 dex within the brightest ellipticals.
\par
Real galaxies host composite stellar
populations, with a spread in the major parameters like age and metallicity.
This may apply also to the nuclei of galaxies, where typically
$\sim$10$^7$ \mbox{L$_{\odot}$} are sampled, corresponding to $\sim$100 bright globular
clusters.
The existence of a substantial metallicity spread in elliptical
galaxies and bulges is supported by direct observations. For example,
the Colour-Magnitude
diagram of a field in M32 (Freedman 1989) shows a wide red
giant branch, corresponding to stars spanning a metallicity range of
0.6 dex approximately. The K-giants in the galactic bulge
have metallicities ranging from $\sim$ 0.1\mbox{Z$_{\odot}$} to
$\sim$ 5\mbox{Z$_{\odot}$} (Rich 1988).
Finally, the mere evidence for abundance gradients in elliptical galaxies,
as inferred from line strengths gradients,
indicates that a metallicity spread is present in these systems.
This last argument applies to galaxies as a whole: whether or not their
nuclei
host stellar populations with a spread in metal content depends on the
modalities of the galaxy formation. However, due to projection effects,
a substantial fraction of the light measured in the nuclei of Ellipticals comes
from regions located outside the three dimensional core. This
fraction can e.g. amount to $\sim$ 50 $\%$ for King models (Binney $\&$
Tremaine 1987). Therefore a pure radial metallicity gradient
translates into a metallicity spread in the stellar population
contributing to the light measured in the galactic nuclei.
\par
In this paper, the effect of a metallicity spread on
the integrated indices is investigated, viewing a given galaxy
(or a portion of it) as the sum of SSPs. These models are
compared to the relevant observations to derive conclusions on the stellar
content of the nuclear regions of early type galaxies and inferences on their
formation process. Section 2 describes how the models are computed, and
the results are presented in Section 3. In Section 4 the
models are compared to the observational data, and in Section 5 the
implications for the formation of elliptical galaxies are discussed.
The main conclusions are summarized in Section 6.
\section{Computational Procedure}
The spectral indices considered in the present work are defined as follows
(Burstein et al. 1984):
\begin{equation}
\mbox{Mg$_2$~} = - 2.5~{\rm Log}~\frac{F{_{\rm l}}(\mbox{Mg$_2$~})}
{F{_{\rm c}}(\mbox{Mg$_2$~})} \label{eq:Mgd}
\end{equation}
\begin{equation}
\mbox{H$\beta$~} = \Delta_{\beta} \times [1 -
\frac{F{_{\rm l}}(\mbox{H$\beta$~})}{F{_{\rm c}}(\mbox{H$\beta$~})}] \label{eq:Hbd}
\end{equation}
\begin{equation}
\mbox{Fe52~} = \Delta_{\rm Fe} \times [1 -
\frac{F{_{\rm l}}(\mbox{Fe52~})}{F{_{\rm c}}(\mbox{Fe52~})}] \label{eq:F2d}
\end{equation}
\begin{equation}
\mbox{Fe53~} = \Delta_{\rm Fe} \times [1 -
\frac{F{_{\rm l}}(\mbox{Fe53~})}{F{_{\rm c}}(\mbox{Fe53~})}] \label{eq:F3d}
\end{equation}
where the various $F_{\rm l}$ denote the fluxes measured within the spectral
windows of the different lines, centered at $\lambda \simeq$ 5175, 4863,
5267 and 5334 \AA~ for the \mbox{Mg$_2$~}, \mbox{H$\beta$~}, \mbox{Fe52~} and \mbox{Fe53~} features,
respectively.
$F_{\rm c}$ are the pseudocontinuum fluxes measured at the line location, as
interpolated from the fluxes measured in adjacent windows, and
$\Delta_{\beta}$, $\Delta_{\rm Fe}$ are the wavelength widths of the
windows in which the
\mbox{H$\beta$~} and Iron indices are measured.\par
The above definitions apply to the spectra of single stars, of SSPs and of
collections of SSPs, inserting the appropriate values for the fluxes.
Therefore, for a collection of N simple stellar populations,
each contributing a fraction $\Phi_{\rm S}$
to the total bolometric flux $F_{\rm bol}$ of the
composite stellar population, the integrated indices are given by
equations (\ref{eq:Mgd}) to (\ref{eq:F3d}) with
\begin{equation}
F_{\rm l} = F_{\rm bol} \times \sum_{S=1}^{N}~(\frac{F_{\rm l}}
{F_{\rm bol}})_ {\rm S}~\Phi_{\rm S} \label{eq:Fld}
\end{equation}
\begin{equation}
F_{\rm c} = F_{\rm bol} \times \sum_{S=1}^{N}~(\frac{F_{\rm c}}
{F_{\rm bol}})_{\rm S}~\Phi_{\rm S} \label{eq:Fcd}
\end{equation}
where the subscript S refers to the single SSP.
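In practice this amounts to flux-averaging, rather than index-averaging, the SSP contributions. A minimal sketch of Eqs. (\ref{eq:Mgd}), (\ref{eq:Fld}) and (\ref{eq:Fcd}) for the \mbox{Mg$_2$~} index follows; the flux ratios used in it are made-up illustrative numbers, not output of the SSP models discussed below.

```python
import numpy as np

# Sketch of Eqs. (1), (5), (6): the integrated Mg2 index of a collection
# of SSPs. Inputs are each SSP's line and pseudocontinuum fluxes
# normalized to its bolometric flux, and the fractional bolometric
# contributions Phi_S. All numbers here are illustrative only.

def composite_mg2(fl_over_fbol, fc_over_fbol, phi):
    phi = np.asarray(phi, dtype=float)
    F_l = np.sum(np.asarray(fl_over_fbol, dtype=float) * phi)  # Eq. (5) / F_bol
    F_c = np.sum(np.asarray(fc_over_fbol, dtype=float) * phi)  # Eq. (6) / F_bol
    return -2.5 * np.log10(F_l / F_c)                          # Eq. (1)

# A single SSP trivially recovers its own index:
single = composite_mg2([0.80], [1.00], [1.0])   # = -2.5 log10(0.8)

# Two SSPs: the composite index is the flux-weighted (not the
# index-weighted) combination of the individual indices.
mixed = composite_mg2([0.80, 0.90], [1.00, 1.00], [0.5, 0.5])
```

Note that $F_{\rm bol}$ cancels in the ratio $F_{\rm l}/F_{\rm c}$, which is why only the normalized flux ratios of each SSP are needed.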
The spectral energy distribution of an SSP, and particularly the various
flux ratios, are controlled by a number of parameters,
including the metallicity ($Z$), age ($t$), helium content ($Y$) and
the elemental
abundances of the population. Thus, the integrated indices of a collection of
SSPs depend on how the fractional bolometric flux
$\Phi_{\rm S}$ is distributed over the range covered by all the relevant
parameters. The problem of deriving these parameters from the observed
line strengths can be simplified by considering various suitable indices,
each controlled by different parameters. For example, Gonzalez
(1993) uses a combination of Magnesium and Iron line strengths
(mostly sensitive to $Z$) and the \mbox{H$\beta$~} index
(mostly sensitive to $t$) to determine age
and metallicity of a sample of early type galaxies.
Still, in order to map the integrated indices of collections of SSPs
into their fundamental properties one needs to account for
the presence of a possible spread in the parameters which control the
various line strengths. \par
One possible approach
to this problem consists in computing models for the chemical evolution of
galaxies, which automatically yield the distribution of the SSPs over the
fundamental parameters (e.g. Vazdekis et al. 1996, Tantalo et al. 1996).
The output of these
models, though, depends on the specific ingredients used,
like the adopted star formation rate, initial mass function, ratio of
dark to luminous matter, nucleosynthesis, criteria for the establishment of
a galactic wind, etc. A different approach consists in exploring
the dependence of the various indices on the presence of a spread in the
populations parameters adopting a physically motivated function
$\Phi_{\rm S}$, and using relations (\ref{eq:Mgd}) to (\ref{eq:F3d}).
This approach, which has the advantage of allowing
an easy exploration of the parameter space by simply changing the
$\Phi_{\rm S}$ functions, will be adopted here.
Also, I will restrict myself to considering collections of SSPs all with
the same age, but a substantial spread in metallicity.
The results are then meant to describe the effects of the presence of
a metallicity spread in a stellar population formed within
a short time scale, so that the integrated indices are not
appreciably influenced by age differences in the individual components.
\subsection{Line strengths for simple stellar populations}
In order to compute the integrated indices of composite stellar
populations one has to know the
$F_{\rm l}$/$F_{\rm bol}$ and $F_{\rm c}$/$F_{\rm bol}$ ratios
of the SSPs as functions of $Z$ and $t$.
Available SSP models in the literature tabulate bolometric corrections, colours
and spectral indices for different ages and metallicities.
I then write:
\begin{equation}
\frac {F_{\rm l}}{F_{\rm bol}} = (\frac{F_{\rm l}}{F_{\rm c}}) \times
(\frac{F_{\rm c}}{F_{\rm bol}}) \label{eq:Ide}
\end{equation}
for each SSP, and approximate $F_{\rm c}$ with the flux in the V band, for
\mbox{Mg$_2$~}, \mbox{Fe52~} and \mbox{Fe53~}, and with the B band flux for \mbox{H$\beta$~}.
Although the pseudocontinuum fluxes do not correspond precisely to the flux in
the V or the B band, the main conclusions of this paper are not affected
by this approximation. For example, the integrated \mbox{Mg$_2$~} indices computed
with $F_{\rm c}$ = $F_{\rm B}$ differ
from those computed with $F_{\rm c}$ = $F_{\rm V}$ by
less than 2 percent.
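In code, eq. (\ref{eq:Ide}) is just a product of two flux ratios. A minimal sketch follows, in which a magnitude-like index and $BC_{\rm V}$ are converted to flux ratios with the same exponents used in eqs. (\ref{eq:mgcom}) to (\ref{eq:Fthcom}) below; the numerical values are placeholders, not tabulated SSP quantities.

```python
def line_to_bol(mg2, bc_v):
    """F_l/F_bol = (F_l/F_c) * (F_c/F_bol), with F_c approximated by the
    V-band flux.  For a magnitude-like index such as Mg2,
    F_l/F_c = 10**(-0.4 * mg2); the V-to-bolometric ratio is written as
    10**(0.4 * bc_v), the sign convention used in the integrated-index
    formulas of Sect. 3."""
    return 10 ** (-0.4 * mg2) * 10 ** (0.4 * bc_v)

# Placeholder values for an old, roughly solar-metallicity SSP (assumed,
# not taken from the B or W tables).
ratio = line_to_bol(mg2=0.30, bc_v=-0.90)
```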
\par
Two sets of SSP models are used here:
Buzzoni's models (Buzzoni et al. 1992; Buzzoni,
Mantegazza $\&$ Gariboldi, 1994; Buzzoni 1995a),
and Worthey's models (Worthey 1994), hereinafter referred to as B and W
respectively. The metallicity range covered by
W models goes from 0.01\mbox{Z$_{\odot}$} to $\sim$3\mbox{Z$_{\odot}$}~
(\mbox{Z$_{\odot}$} $\simeq$ 0.017), while B models encompass a larger $Z$
range, from $\sim$ 0.006\mbox{Z$_{\odot}$} to $\sim$ 6\mbox{Z$_{\odot}$}. Each metallicity
is characterized by one value for the helium abundance:
Worthey (1994) assumes $Y = 0.228 + 2.7 Z$, while in B models
$Y$ increases less
steeply with $Z$, according to $Y \simeq 0.23 + Z$. Besides, the isochrones
used to construct the SSPs have solar abundance ratios. This corresponds to
specifying the chemical trajectory followed in the evolution of the composite
stellar population.
\par
The two sets of models present systematic differences in the bolometric
output, broad band colours and line strengths (Worthey 1994,
Buzzoni 1995b). At least part of these differences can be ascribed to
the different choices of the $\Delta Y$/$\Delta Z$ parameter (Renzini 1995)
and to
the different fitting functions (i.e. the dependence of individual
stellar indices on effective temperature, gravity and metallicity)
adopted in the computations.
\subsection{The metallicity distribution functions}
The contribution of a given SSP to the bolometric
light of a composite stellar population of total mass $M_{\rm T}$ and total
luminosity $L_{\rm T}$ can be written as
\begin{equation}
\Phi_{\rm S} = \frac {L_{\rm S}}{L_{\rm T}} =
(\frac {L}{M})_{\rm S}~\frac {M_{\rm S}}{M_{\rm T}}~
\frac {M_{\rm T}}{L_{\rm T}}. \label{eq:Moverl}
\end{equation}
The distribution function $\Phi_{\rm S}$ is then proportional to the mass
distribution over the metallicity range, through the inverse of the
$(M/L)_{\rm S}$ ratio, which depends on the metallicity and helium content
(Renzini 1995).
\par
The closed box, simple model for the chemical evolution of galaxies
predicts that the distribution of the stellar mass over the total
metallicity follows the relation
$f~(Z) \propto e^{-Z/y}$ (Tinsley 1980), where {\it y} is the
stellar yield.
According to this model, most stars are formed at the lowest
metallicities. More sophisticated models for the chemical evolution of
elliptical galaxies, which take into account the occurrence of galactic
winds (Arimoto $\&$ Yoshii, 1987; Matteucci
$\&$ Tornamb\'e, 1987), also predict the existence of
a substantial metallicity spread in the stellar content of elliptical galaxies.
In these models, the more massive the galaxy, the later the galactic wind
sets in, and further chemical enrichment is achieved. Correspondingly,
as the galactic mass increases, the metallicity distributions get skewed
towards higher $Z$ values. Nevertheless, a substantial fraction of low $Z$
stars is always present, and
indeed Arimoto $\&$ Yoshii's metallicity distributions (by number)
for model
Ellipticals with mass ranging from 4 $\times$ 10$^9$ to 10$^{12}$
M$_{\odot}$ are well described by a closed box model relation:
\begin{equation}
f(Z) = \frac{{\rm exp}(-Z/y)}{\int_{Z_{\rm m}}^{Z_{\rm M}}
{{\rm exp}(-Z/y)~dZ}} \label{eq:Foz}
\end{equation}
with a minimum metallicity $Z_{\rm m} \simeq$ 0.01\mbox{Z$_{\odot}$}, a maximum
metallicity $Z_{\rm M}$ increasing from $\sim$ 2\mbox{Z$_{\odot}$} to $\sim$ 6\mbox{Z$_{\odot}$},
and {\it y} varying from 2\mbox{Z$_{\odot}$} to 3\mbox{Z$_{\odot}$}. So,
the metallicity range spanned by the stars
in these model ellipticals goes from $\sim$ 2.3 to $\sim 2.8$ dex.
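Since the normalized distribution of eq. (\ref{eq:Foz}) integrates analytically, the mass fractions implied by a given parameter choice are easy to check. A minimal sketch follows; the specific parameter values are illustrative picks from the ranges quoted above.

```python
import math

Z_SUN = 0.017  # solar metallicity, as quoted above

def closed_box_fraction(z_lo, z_hi, z_m, z_max, y):
    """Mass fraction of stars with z_lo < Z < z_hi for the normalized
    closed-box distribution f(Z) ~ exp(-Z/y) adopted above; the
    integrals are analytic (antiderivative -y * exp(-Z/y))."""
    def cum(a, b):
        return y * (math.exp(-a / y) - math.exp(-b / y))
    return cum(z_lo, z_hi) / cum(z_m, z_max)

# Illustrative parameters within the quoted ranges:
# Z_m = 0.1 Z_sun, Z_M = 6 Z_sun, yield y = 2 Z_sun.
frac_subsolar = closed_box_fraction(0.1 * Z_SUN, Z_SUN,
                                    0.1 * Z_SUN, 6 * Z_SUN, y=2 * Z_SUN)
```

With these parameters the sub-solar mass fraction comes out just below 0.4, consistent with the figure quoted in Sect. 3.1 for the $0.1\mbox{Z$_{\odot}$} \le Z \le 6\mbox{Z$_{\odot}$}$ distribution.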
\par
Relation (\ref{eq:Foz}) finds also observational support from the direct
determination of the metallicity distribution of K-giants in the bulge
of our galaxy, which appears to follow a closed box model relation with
$Z_{\rm M}$ $\sim$ 5\mbox{Z$_{\odot}$} and {\it y} $\simeq$ 2\mbox{Z$_{\odot}$} (Rich 1988). McWilliam
and Rich (1994) revised the Rich (1988) metallicity scale for [Fe/H] towards
values lower by $\sim$ 0.3 dex, but find a [Mg/Fe] overabundance
of the same amount, and state that
the distribution of [(Fe+Mg)/H] in the bulge stars may agree with
the Rich (1988) [Fe/H] distribution.
\par
I then adopt eq. (\ref{eq:Foz}) to describe
the $M_{\rm S}$/$M_{\rm T}$ distribution over the metallicity
for the composite stellar populations, and
explore the effect of different values for the three parameters
$Z_{\rm M}$, $Z_{\rm m}$ and {\it y}. For a fixed (low) $Z_{\rm m}$,
increasing values of
$Z_{\rm M}$ (and yield) are meant to describe stellar populations produced in
an environment in which the chemical processing is terminated at progressively
higher levels of completion. These sequences of models, then, conform to
the predictions of chemical
evolution models with galactic winds for increasing galactic mass.
Different values of $Z_{\rm m}$, instead, characterize different degrees of
pre-enrichment of the gas.
\par
To derive the $\Phi_{\rm S}$ distribution the behaviour
of the ($M/L$) ratio for the SSPs with increasing metallicity and helium
content needs to be specified. In B models, which are characterized by a
low $\Delta Y$/$\Delta Z$ parameter, $M/L$ increases with $Z$, going from
2.2 to 4.5 for $Z$ ranging from 0.01\mbox{Z$_{\odot}$} to 3\mbox{Z$_{\odot}$}, for the
15 Gyr old
SSPs. In W 17 Gyr old models, the mass to (bolometric) luminosity ratio
increases
mildly with metallicity up to a maximum value of 4.2 reached at
0.5\mbox{Z$_{\odot}$}, and it decreases afterwards, down to 3.3 at 3\mbox{Z$_{\odot}$}.
While the different behaviour reflects the different
$\Delta Y$/$\Delta Z$ (Renzini 1995), these $M/L$ values are not directly
comparable,
the total mass $M$ being computed with different prescriptions
(Worthey 1994). Besides this, neither of the two $M/L$ values corresponds
to what should be inserted in eq. (\ref{eq:Moverl}):
Buzzoni's $M$ values do not take into account the mass locked into
stellar remnants; Worthey's $M$ values do not include the
remnant masses for stars born with $M >$2 \mbox{M$_{\odot}$}, but do include
both the remnant and the returned mass for stars born with mass between
the turn-off mass and 2\mbox{M$_{\odot}$}. I then chose to
neglect the dependence of $(M/L)_{\rm S}$ on the metallicity of the SSPs,
and adopt $\Phi_{\rm S} \propto f(Z)$.
If $M/L$ increases with metallicity, as in B models, this approximation
leads to an overestimate of the contribution of the high metallicity SSPs
in the integrated indices for the composite stellar populations. If, on
the contrary, the $M/L$ given in W models are more appropriate,
the contribution of the high $Z$ populations will be underestimated.
Notice, however, that in W models
$M/L$ varies by only a factor of 1.2 for $Z$ varying from 0.01\mbox{Z$_{\odot}$} to 3\mbox{Z$_{\odot}$}.
In the following, composite stellar populations with this $\Phi_{\rm S}$
distribution function will be briefly referred to as CSPs.
\section {Results of the computations}
Following the prescriptions described in Sect. 2, the integrated
spectral indices are given by the following relations:
\begin{equation}
\mbox{Mg$_2$~} = - 2.5~{\rm Log}~\frac {\int_{Z_{\rm m}}^{Z_{\rm M}}{dZ~~10^
{-0.4~({\rm Mg}_{2}^{\rm S}- BC_{\rm V}^{\rm S})}~e^{-Z/y}}}
{\int_{Z_{\rm m}}^{Z_{\rm M}}{dZ~~
10^{0.4~BC_{\rm V}^{S}}~e^{-Z/y}}} \label{eq:mgcom}
\end{equation}
\begin{equation}
\mbox{H$\beta$~} = \frac {\int_{Z_{\rm m}}^{Z_{\rm M}}{dZ~~\mbox{H$\beta$~}^{\rm S}~10^
{0.4~BC_{\rm B}^{\rm S}}~e^{-Z/y}}}{\int_{Z_{\rm m}}^{Z_{\rm M}}{dZ~~
10^{0.4~BC_{\rm B}^{\rm S}}~e^{-Z/y}}} \label{eq:Hbcom}
\end{equation}
\begin{equation}
\mbox{Fe52~} = \frac {\int_{Z_{\rm m}}^{Z_{\rm M}}{dZ~~\mbox{Fe52~}^{\rm S}~10^
{0.4~BC_{\rm V}^{\rm S}}~e^{-Z/y}}}{\int_{Z_{\rm m}}^{Z_{\rm M}}{dZ~~
10^{0.4~BC_{\rm V}^{\rm S}}~e^{-Z/y}}} \label{eq:Ftwcom}
\end{equation}
\begin{equation}
\mbox{Fe53~} = \frac {\int_{Z_{\rm m}}^{Z_{\rm M}}{dZ~~\mbox{Fe53~}^{\rm S}~10^
{0.4~BC_{\rm V}^{\rm S}}~e^{-Z/y}}}{\int_{Z_{\rm m}}^{Z_{\rm M}}{dZ~~
10^{0.4~BC_{\rm V}^{\rm S}}~e^{-Z/y}}} \label{eq:Fthcom}
\end{equation}
where $BC_{\rm B}$ and $BC_{\rm V}$ denote the bolometric corrections to the
B and
the V band magnitudes, respectively, and the S index denotes the
SSP quantities, which depend on
metallicity and age. One can immediately notice that, since high $Z$
populations
yield a relatively low contribution in the B and V bands, the low metallicity
component of a composite stellar population will tend to dominate the
integrated indices.
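As a concrete sketch of how eq. (\ref{eq:mgcom}) is evaluated, the fragment below discretizes the integrals with the trapezoidal rule. The {\tt mg2\_ssp} and {\tt bc\_v\_ssp} functions are hypothetical smooth stand-ins for the tabulated SSP values (their slopes are assumptions of this sketch, not the published B or W grids).

```python
import math

Z_SUN = 0.017

# Hypothetical smooth stand-ins for the tabulated SSP quantities; the
# slopes below are assumptions of this sketch, not the B or W grid values.
def mg2_ssp(z):
    return 0.30 + 0.10 * math.log10(z / Z_SUN)

def bc_v_ssp(z):
    # decreasing with Z, so that high-Z SSPs contribute less in the V band
    return -0.80 - 0.30 * math.log10(z / Z_SUN)

def integrated_mg2(z_m, z_max, y, n=2000):
    """Trapezoidal evaluation of the integrated Mg2 formula: the e^{-Z/y}
    mass weights are multiplied by the V-band luminosity factor
    10^{0.4 BC_V}, and fluxes (not indices) are averaged."""
    num = den = 0.0
    dz = (z_max - z_m) / n
    for i in range(n + 1):
        z = z_m + i * dz
        w = math.exp(-z / y) * 10 ** (0.4 * bc_v_ssp(z))
        if i == 0 or i == n:       # trapezoid end-point weights
            w *= 0.5
        num += w * 10 ** (-0.4 * mg2_ssp(z))
        den += w
    return -2.5 * math.log10(num / den)

mg2_csp = integrated_mg2(0.01 * Z_SUN, 3 * Z_SUN, y=2 * Z_SUN)
```

By construction the CSP index lies between the SSP indices at $Z_{\rm m}$ and $Z_{\rm M}$, and the combined $e^{-Z/y}$ and luminosity weighting keeps it well below the index of the most metal-rich SSP in the mix.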
\par
The behaviour of these indices as functions of the average metallicity of
a composite stellar population has been explored by considering different
values for $Z_{\rm m}$ and $Z_{\rm M}$.
In order to characterize each distribution in terms of one parameter,
an {\it average metallicity} has been computed, defined by the following
relation:
\begin{equation}
[<{\rm Fe/H}>] = {\rm Log}~\frac {\int_{Z_{\rm m}}^{Z_{\rm M}}{dZ~~
\frac{(Z/X)^{\rm S}}{(Z/X)_{\odot}}
e^{-Z/y}}}{\int_{Z_{\rm m}}^{Z_{\rm M}}{dZ~~e^{-Z/y}}}. \label{eq:fehc}
\end{equation}
Other definitions of {\it average metallicity} can be found in the
literature (see e.g. Arimoto $\&$ Yoshii 1987). In this respect, it should be
noticed that the quantity
$<[{\rm Fe/H}]>$ differs systematically from [$<$Fe/H$>$] given by eq.
(\ref{eq:fehc})
because the former assigns more weight to the low metallicity tail
of the distribution. The difference between the two quantities amounts to
$\sim$ 0.2 dex for the widest $Z$ distribution considered here ($0.01\mbox{Z$_{\odot}$} <
Z < 6\mbox{Z$_{\odot}$}$). The choice of using eq. (\ref{eq:fehc})
is motivated by the fact that this parameter better
describes the mass fraction of metals in the CSP, for the given
$f(Z)$ distribution function.
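A similar trapezoidal sketch evaluates eq. (\ref{eq:fehc}). Here $X = 1 - Y - Z$, and the B-model helium law $Y \simeq 0.23 + Z$ is assumed for the $Z/X$ conversion; the choice of helium law shifts the result only slightly.

```python
import math

Z_SUN = 0.017

def mean_feh(z_m, z_max, y, n=4000):
    """Trapezoidal evaluation of the average-metallicity definition:
    the log of the e^{-Z/y}-weighted mean of (Z/X) / (Z/X)_sun.
    X = 1 - Y - Z, with the assumed helium law Y = 0.23 + Z."""
    def z_over_x(z):
        return z / (1.0 - (0.23 + z) - z)
    num = den = 0.0
    dz = (z_max - z_m) / n
    for i in range(n + 1):
        z = z_m + i * dz
        w = math.exp(-z / y) * (0.5 if i in (0, n) else 1.0)
        num += w * z_over_x(z) / z_over_x(Z_SUN)
        den += w
    return math.log10(num / den)

# the (Z_m, Z_M, y) = (0.01, 5, 2) x Z_sun distribution discussed in Sect. 3.1
feh = mean_feh(0.01 * Z_SUN, 5 * Z_SUN, y=2 * Z_SUN)
```

For this parameter set the toy computation lands close to the [$<$Fe/H$>$] = 0.22 quoted in Sect. 3.1.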
\begin{figure*}[tb]
\vspace {17cm}
\special{psfile=f1.ps angle=0. hoffset=-30. vscale=100. hscale=100. voffset=-145.}
\caption[]{Model line strengths using Buzzoni's 15 Gyr old
SSPs, as functions of metallicity. The thick lines are the loci of
pure SSP models.
The thin lines display models for CSPs for various values of
the $Z_{\rm m}$ parameter. The following cases are shown:
$Z_{\rm m}$/\mbox{Z$_{\odot}$}= (0.01,0.1,0.3,0.5,1,1.5,2), and the filled squares mark the
values for SSPs with $Z = Z_{\rm m}$.
Along each line, $Z_{\rm M}$ varies from $Z_{\rm m}$
up to $Z=0.1$. The filled circles mark CSP models with
$Z_{\rm m}$ = 0.1\mbox{Z$_{\odot}$}, and $Z_{\rm M}$ = (0.5,1,1.5,2,4,6)$\times$\mbox{Z$_{\odot}$}.
The effect of different values of the parameter {\it y} is also shown.
Finally, the dot-dashed lines show
the typical ranges spanned by observational data for the nuclei of elliptical
galaxies.}
\end{figure*}
\subsection {Indices versus metallicity using B models}
I will now describe the results of the integration of equations
(\ref{eq:mgcom}) to
(\ref{eq:Fthcom}) for the different $Z$ distributions, using B
and W sets of SSP models. Among the various models by Buzzoni
I have considered those computed with Salpeter IMF
and red horizontal branches. Analogous assumptions characterize Worthey's
models.
\par
Figure 1 displays the integrated indices obtained using B SSPs at
15 Gyr, which are shown as thick lines in the four panels.
The thin lines illustrate the effect of the presence of a metallicity
spread (see caption); the lowest line corresponds to
$Z_{\rm m}$ = 0.01\mbox{Z$_{\odot}$}, and shows the
expected behaviour of the indices for galaxies of increasing mass, in the
frame of the wind models for the chemical evolution of galaxies. The
different lines show the results
obtained with different $Z_{\rm m}$,
describing the effect of assuming different
degrees of pre-enrichment of the gas.
\par
The loci described by the CSPs
for increasing average metallicity are shallower than the
pure SSP relations: this is due to the fact that the higher $Z$ populations
contribute less than the lower metallicity ones in the optical bands, where
the considered spectral indices are measured. As a consequence, the
\mbox{Mg$_2$~} and Iron indices of these CSPs never reach
values as high as those characteristic of the highest $Z$ SSPs, unless the
metallicity spread is extremely small.
Had I used the B-band flux as a measure of the
pseudocontinuum for calculating the integrated \mbox{Mg$_2$~} index, the effect would
be stronger, since the high metallicity populations would receive even
less weight.
Taking into account the $(M/L)_{\rm S}$ ratio dependence on $Z$ as given in B
models, when computing $\Phi_{\rm S}$, would also strengthen the
difference between SSP and CSP models.
\par
At any given average metallicity, the \mbox{Mg$_2$~} and Iron indices for
CSPs are weaker (and \mbox{H$\beta$~} is stronger) than the corresponding
values for SSPs.
For example, a value of [$<$Fe/H$>$] = 0.22 can be obtained with a
composite stellar population with parameters ($Z_{\rm m}$,$Z_{\rm M}$,$y$)=
(0.01,5,2)$\times$\mbox{Z$_{\odot}$}. The differences between
the line strengths of such a composite population and those of the SSP
with the same [Fe/H] amount to
$\Delta$ \mbox{Mg$_2$~} $\simeq -$0.05, $\Delta$ \mbox{H$\beta$~} $\simeq$ 0.18,
$\Delta$ \mbox{Fe52~} $\simeq -$0.5 and $\Delta$ \mbox{Fe53~} $\simeq -$0.4. These
results
refer to the $\it y$ = 2$\times$\mbox{Z$_{\odot}$} case. Adopting $\it y$ = 4$\times$\mbox{Z$_{\odot}$}
(dotted lines), so as to enhance the fraction of the high $Z$ component,
the differences are only slightly smaller.
This effect is particularly important when dealing with
the highest metallicity galaxies.
\par
In much the same way, {\bf at any given value of the considered spectral
index, the average metallicity of a composite stellar population
is higher than the metallicity of the SSP which has the same index}.
This is illustrated in Figure 2, where I plot, as a function of \mbox{Mg$_2$~},
the difference ($\Delta$[Fe/H]) between the average metallicity of the
CSPs and that of SSP models with the same value of the \mbox{Mg$_2$~}
index.
Along any line, the metallicity distributions have the same $Z_{\rm m}$
and increasing $Z_{\rm M}$, as in Figure 1. Figure 2 shows that
the metallicity inferred from a given \mbox{Mg$_2$~} value using SSP models is
lower than what would be derived using CSP models, the difference being larger
the wider the metallicity distribution. The lower weight received by the high
$Z$ populations in eq. (\ref{eq:mgcom}) causes the rapid growth
of $\Delta$ [Fe/H] as the high \mbox{Mg$_2$~} ends of the curves are approached:
high \mbox{Mg$_2$~} line strengths are obtained only with very large $Z_{\rm M}$ values.
This is not a small effect: at \mbox{Mg$_2$~} = 0.26 the difference in
the metallicities amounts to 0.3 dex, for a metallicity distribution
extending down to $Z_{\rm m}$ = 0.01\mbox{Z$_{\odot}$}.
\begin{figure}[htb]
\vspace {9cm}
\special{psfile=f2.ps angle=0. hoffset=-35. vscale=95. hscale=95. voffset=-140.}
\caption[]{The difference between the average metallicity of CSPs and the
metallicity of SSPs having the same \mbox{Mg$_2$~} index, as a function of the
index itself, for Buzzoni's 15 Gyr old models.
The different lines correspond to CSPs with different
values for the $Z_{\rm m}$ parameter: (0.01,0.1,0.3,0.5,1)$\times$\mbox{Z$_{\odot}$}.
As in Fig. 1, along the lines $Z_{\rm M}$ increases up to $Z=0.1$.
The minimum in the line corresponding to $Z_{\rm m}=0.01\mbox{Z$_{\odot}$}$
is an artifact of the
linear interpolation between SSP models used in the computation.
The effect of adopting different {\it y} values is shown.}
\end{figure}
Even adopting a large {\it y}, the effect is still important;
it becomes small only when the width of the metallicity distribution is
reduced (i.e. when $Z_{\rm m}$ is increased), as expected. It follows that
the calibration of \mbox{Mg$_2$~} in terms of metallicity via the comparison with
theoretical values for SSPs is affected by a systematic effect, which can
be considerably large, if a CSP spanning a
substantial metallicity range is present.
\par
The dot-dashed lines in Figure 1 show the typical range spanned by the
observational data (e.g. Davies et al. 1987; WFG;
Carollo, Danziger $\&$ Buson 1993).
It appears that galaxies with (\mbox{Mg$_2$~},\mbox{Fe52~},\mbox{Fe53~}) up to $\sim$
(0.26,3.,2.6) can be interpreted as hosting composite stellar populations with
a metallicity distribution as in the closed box model. For these
galaxies, higher metallic
line strengths correspond to larger values for $Z_{\rm M}$, and because
of the Mg$-\sigma$ relation (Bender, Burstein $\&$ Faber 1993), to more
massive objects. It should be noticed, however, that
it is not possible to constrain the metallicity distribution from the
integrated indices: the same value can be obtained with different metallicity
ranges for the CSP, not to mention different distribution functions.
Nevertheless, it seems that
the observed metallic line strengths in the nuclei of the less luminous
Ellipticals are consistent with the theoretical expectations from
models for the chemical evolution which include the occurrence
of galactic winds.
\par
On the contrary, galaxy centers with metallic indices in excess of the
values quoted above are not reproduced by these models (see also Casuso et al.
1996, Vazdekis et al. 1996), and appear instead to require some degree of
pre-enrichment. The highest \mbox{Mg$_2$~} are
barely accounted for by pure SSP models, and the strongest Iron
indices are not reproduced by the CSP model with
$0.1\mbox{Z$_{\odot}$} \le Z \le 6\mbox{Z$_{\odot}$}$,
in which the fraction of stars with sub$-$solar metallicity
is less than 0.4. As can be seen in Figure 1, adopting a yield as high as
4\mbox{Z$_{\odot}$} does not alter these conclusions.
\par
The \mbox{H$\beta$~} values measured in the highest metallicity Ellipticals
support this same picture: closed box models with
$0.01\mbox{Z$_{\odot}$} \le Z \le 6\mbox{Z$_{\odot}$}$
have \mbox{H$\beta$~} $\simeq$ 1.7, while galaxies with \mbox{H$\beta$~} indices weaker than that
are observed. Besides, the models which account for the weakest metallic
indices, with $0.01\mbox{Z$_{\odot}$} \le Z \le 0.5\mbox{Z$_{\odot}$}$, are characterized by
\mbox{H$\beta$~} $\simeq$ 2, while galaxies with \mbox{H$\beta$~} as large as 2.5 are
observed. Since \mbox{H$\beta$~} is highly sensitive to age, one can interpret the
data as evidence for a younger age of the low metallicity
Ellipticals (Gonzalez 1993). Notice however that the high $Z$
objects do require an old age. I will further discuss this point later.
\subsection {Indices versus metallicity using W models}
Figure 3 displays the models for composite stellar populations obtained using
Worthey's set of SSP models, with an age of 12 Gyr.
The legend is the same as in Figure 1.
The highest metallicity considered in the $Z$ distributions
is now $\simeq$ 3\mbox{Z$_{\odot}$}.
The qualitative effects of the presence of a metallicity
spread in the stellar population are the same as already discussed for
B models, and the indications derived from the comparison with the
observations are very similar. The quantitative effects are also close
to those derived for B models: the model with
$0.01\mbox{Z$_{\odot}$} \le Z \le 3\mbox{Z$_{\odot}$}$ has (\mbox{Mg$_2$~},\mbox{Fe52~},\mbox{Fe53~},\mbox{H$\beta$~}) $\simeq$
(0.23,2.8,2.5,1.7) for {\it y} = 2\mbox{Z$_{\odot}$}, its average metallicity
is [$<$Fe/H$>$] = 0.11, while a pure SSP with the same \mbox{Mg$_2$~} has
[Fe/H]=$-$0.16. Compared to Buzzoni's, W SSP models have
a steeper dependence of \mbox{Mg$_2$~} on [Fe/H] (see also WFG),
higher \mbox{Fe53~} and lower \mbox{H$\beta$~} over all the metallicity range.
On the other hand for increasing [Fe/H], the bolometric corrections to the
V and B bands decrease more rapidly in Worthey's models, leading to
a relatively lower contribution of the high $Z$ SSPs in
equations (\ref{eq:mgcom}) to (\ref{eq:Fthcom}).
The two effects conspire to yield the same quantitative results on the
integrated indices in the CSPs.
\begin{figure*}[tb]
\vspace {17cm}
\special{psfile=f3.ps angle=0. hoffset=-30. vscale=100. hscale=100. voffset=-145.}
\caption[]{The same as Figure 1, but for Worthey's 12 Gyr old SSPs.}
\end{figure*}
\section {Comparison of CSP models with the observations}
I will now compare in more detail the predictions of the CSP models
illustrated in the previous section to the observations of the line strengths
in the nuclei of Ellipticals. I concentrate on some particular aspects
which can be crucial for understanding the modalities of galaxy formation.
\subsection{The strong \mbox{Mg$_2$~} of Giant Ellipticals}
As shown in the previous subsection, a closed box model for the chemical
enrichment in which the gas is initially at $Z \simeq$ 0 fails to produce
a stellar population with \mbox{Mg$_2$~} as high as observed in the nuclei of
the brightest Ellipticals. One way to solve the
problem is to assume that the initial metallicity in the gas is larger
than 0. To investigate this in more detail, I have computed a sequence of
models characterized by a fixed $Z_{\rm M}$ = 3\mbox{Z$_{\odot}$} and $Z_{\rm m}$
decreasing from $Z_{\rm M}$ to 0.05\mbox{Z$_{\odot}$}. The results are shown in
Figure 4 for W 17 Gyr and B 15 Gyr models, where
the histogram shows the \mbox{Mg$_2$~} distribution of the Ellipticals in the
Davies et al. (1987) sample, which includes 469 objects, $\sim$ 93 $\%$ of
which have \mbox{Mg$_2$~} in excess of 0.22.
\begin{figure}[htb]
\vspace {10cm}
\special{psfile=f4.ps angle=0. hoffset=-25. vscale=98. hscale=98. voffset=-140.}
\caption[]{Dependence of \mbox{Mg$_2$~} on the parameter $Z_{\rm m}$: the curves
show the
loci of CSP models characterized by a maximum value of the metallicity
$Z_{\rm M}$ = 3\mbox{Z$_{\odot}$}, while $Z_{\rm m}$ is made to decrease. Solid and
dotted lines
correspond to {\it y}/\mbox{Z$_{\odot}$} = 2 and 3 respectively. The curves labelled
W have been computed with Worthey's 17 Gyr old models; those labelled
B with Buzzoni's 15 Gyr old ones. Also plotted is the distribution of
the \mbox{Mg$_2$~} observed values for early type galaxies from Davies et al.
(1987). Objects with \mbox{Mg$_2$~} in excess of 0.3 require $Z_{\rm m}$ greater
than $\simeq$ 0.5\mbox{Z$_{\odot}$} for both sets of SSP models.}
\end{figure}
It appears that galaxies with \mbox{Mg$_2$~} higher than 0.3 (more than 40 $\%$ of
the total sample) require, in their nuclei, $Z_{\rm m}$ larger than
0.5\mbox{Z$_{\odot}$}~for both W and B models.
Integrated \mbox{Mg$_2$~} indices in excess of 0.32 are obtained
with $Z_{\rm m}$ larger than $\sim$\mbox{Z$_{\odot}$}, for W, or than
$\sim$1.5\mbox{Z$_{\odot}$} for B SSPs, and only the models computed
with Worthey's SSPs account for the largest \mbox{Mg$_2$~}. Notice that galaxies with
nuclear \mbox{Mg$_2$~} as large as 0.4
have been observed, and that the adoption of a higher value for the yield
(dotted lines) does not solve the problem. \par
The difference in the results obtained using the two sets of models
is due to the different ages of the SSPs and to the
different values of \mbox{Mg$_2$~} at the various metallicities.
Had I used Buzzoni's SSPs with $Z$ in excess of 3\mbox{Z$_{\odot}$}, integrated \mbox{Mg$_2$~}
larger than
0.32 would have required \mbox{Z$_{\odot}$} $\le Z \le$ 6\mbox{Z$_{\odot}$}. On the other hand,
using Worthey's 12 Gyr SSPs, the integrated \mbox{Mg$_2$~} for a metallicity
distribution with 2\mbox{Z$_{\odot}$} $\le Z \le$ 3\mbox{Z$_{\odot}$} is $\simeq$0.33.
Therefore, these two sets of SSPs require that {\bf the nuclei of the strongest
\mbox{Mg$_2$~} ellipticals host both old and high metallicity SSPs, with a narrow
metallicity dispersion}. The older the population, the larger the allowed
metallicity range, but the presence of a significant component of
stars with $Z \le$\mbox{Z$_{\odot}$} in galaxies with \mbox{Mg$_2$~} in excess of $\sim$ 0.32
seems unlikely, due to the strong effect it would have on the integrated
index.
\par
Weiss et al. (1995) derive metallic
line strengths for SSP models with total metallicity larger than solar, and
different elemental ratios. For solar abundance ratios, their \mbox{Mg$_2$~} indices
happen to be in close agreement with those of B and W SSPs
at \mbox{Z$_{\odot}$}, but are instead systematically larger at supersolar metallicities.
For example their 15 Gyr SSP with $Z = 0.04$ has \mbox{Mg$_2$~} = 0.4, $\sim$ 0.07 dex
higher than B 15 Gyr model. Such high values would
relieve the problem of reproducing the data for the most
luminous ellipticals. However, it is likely that a closed box metallicity
distribution with $Z_{\rm m}$ $\simeq$ 0 would still predict too low
\mbox{Mg$_2$~} values,
due to the higher contribution of the low $Z$ component to the optical light.
For example, enhancing by 0.1 dex the \mbox{Mg$_2$~} indices in W 17 Gyr old
SSPs at $Z >$\mbox{Z$_{\odot}$}, I obtain \mbox{Mg$_2$~} = 0.27 for a 17 Gyr old CSP with
0.01\mbox{Z$_{\odot}$} $\le Z \le$ 3\mbox{Z$_{\odot}$}. The analogous
experiment with B 15 Gyr models yields \mbox{Mg$_2$~} = 0.30. Therefore, the need for
a small metallicity dispersion in the nuclei of the strongest \mbox{Mg$_2$~} Ellipticals
is a robust result.
\begin{figure*}[tb]
\vspace {17cm}
\special{psfile=f5.ps angle=0. hoffset=-30. vscale=100. hscale=100. voffset=-145.}
\caption[]{Comparison between model predictions and observations of the
nuclear line strengths. The data points
(shown as small symbols) are from Worthey, Faber $\&$ Gonzalez, 1992
(crosses) and Gonzalez, 1993 (squares). The error bars quoted by the
authors are shown in the two left panels, the smallest being relative
to Gonzalez data.
SSP models by Worthey and Buzzoni are displayed as dotted lines. The large
symbols show the line strengths for a selection of CSP models:
$Z_{\rm m}$=0.01\mbox{Z$_{\odot}$} (triangle), $Z_{\rm m}$=0.1\mbox{Z$_{\odot}$} (square),
$Z_{\rm m}$=0.5\mbox{Z$_{\odot}$}
(pentagon) and $Z_{\rm m}$=\mbox{Z$_{\odot}$} (octagon), all having $Z_{\rm M}$ = 3\mbox{Z$_{\odot}$}
and {\it y} = 3\mbox{Z$_{\odot}$}. The sequence of CSP models obtained with
$Z_{\rm m}$ = 0.01\mbox{Z$_{\odot}$} and increasing $Z_{\rm M}$ is shown as a solid line.
Notice that the closed box simple model fails to account for the high
\mbox{Mg$_2$~} and Iron line strengths shown by most of the data points.
The skeletal symbols show the indices of SSPs having [Fe/H]
equal to the average metallicity of those CSP models shown as polygons with the
same number of vertices. For example, the cross shows the SSP line strengths
for a metallicity equal to the average [Fe/H] of the CSP shown as a square.}
\end{figure*}
\subsection{Magnesium to Iron overabundance}
The plot of the iron line strengths as functions of \mbox{Mg$_2$~} allows one to
check the internal consistency of the indications derived from the different
indices on the composition of the stellar population.
In Figure 5, a selection of the models described in the previous
section (large open symbols, see captions) is compared to the data
from WFG and Gonzalez (1993)(small symbols),
relative to the nuclear values of the indices.
The solid lines show the locus described in this diagram from the
sequence of CSP models with $Z_{\rm m}$ = 0.01\mbox{Z$_{\odot}$} and $Z_{\rm M}$ increasing
up to 3\mbox{Z$_{\odot}$}. It can be seen that
in these diagrams SSPs add as vectors: the effect
of a $Z$ distribution is that of shifting the model along the SSP line
(WFG), to
an intermediate position between those corresponding to SSP models
with $Z = Z_{\rm m}$ and $Z = Z_{\rm M}$.
Besides, the line strengths of SSP models are
stronger than those of CSP models
with the same (average) metallicity: this can be seen in Figure 5
comparing the position of the skeletal symbols with that of the
corresponding polygons. Once again, this reflects the larger weight
of the low metallicity component on the CSP line strengths.
\par
It appears that the average location
of galaxies with \mbox{Mg$_2$~}$\le$0.3 is well reproduced by the models, better
with Worthey's than with Buzzoni's SSPs. However, CSP models with
$Z_{\rm m}$ = 0.01\mbox{Z$_{\odot}$} barely reach \mbox{Mg$_2$~} $\sim$ 0.26, \mbox{Fe52~} $\sim$ 2.9,
\mbox{Fe53~} $\sim$ 2.7,
encompassing the range occupied by the weakest \mbox{Mg$_2$~} objects.
Galaxies with stronger metallic nuclear indices require some degree of
pre-enrichment in their centers, within this class of CSP models.
Notice that this
is needed to account for both \mbox{Mg$_2$~} and Iron line strengths.
\par
Those objects characterized by \mbox{Mg$_2$~} larger than approximately 0.3 (mostly
represented in Gonzalez's sample) depart from the general (\mbox{Fe52~},\mbox{Fe53~}) $-$
\mbox{Mg$_2$~} relation, exhibiting lower Iron indices than the model predictions,
at the same magnesium index. This has been interpreted as evidence for a Mg/Fe
abundance ratio larger than solar in these (most luminous) ellipticals.
According to the current view of how the chemical enrichment proceeds in
galaxies, Magnesium is mostly produced by
massive stars exploding as Type II SNe, while a substantial fraction of
the Iron is provided by Type Ia SNe. Thus, a Mg to Fe overabundance can
be obtained either by increasing the relative number of Type II to Type Ia
events (e.g. with a flatter IMF), or by stopping the SF process
at early times, i.e. before a substantial amount of pollution by Type Ia
events has taken place (WFG, Davies et al. 1993, Matteucci 1994).
Both scenarios predict that all the $\alpha$ elements, mainly produced in
massive stars, are overabundant with respect to Iron. It is then
reasonable to assume that the [Mg/Fe] enhancement actually traces
an $\alpha$ element overabundance (or an Fe underabundance) with
respect to the solar ratios. In the following section I will derive
a simple rule which enables one to estimate the effect of an $\alpha$ element
overabundance on the \mbox{Mg$_2$~} and Iron line strengths.
\subsection{The effect of $\alpha$ element enhancement on the metallic
line strengths}
The metallic line strengths of SSPs are sensitive to the metallicity
through two effects: one connected to the change of the overall shape of the
isochrone, and the other to the dependence on the metal abundance of the
specific feature in the individual stars which populate the isochrone,
described by the fitting functions.
Both effects are such that, when the metal abundance is enhanced, the metallic
features get stronger, but they operate in different ways. I now try to
estimate how the shape of the isochrone on one side, and the fitting
functions on the other, vary in response to an $\alpha$ element
overabundance.
\par
The \mbox{Mg$_2$~} and \mbox{Fe52~} line strengths for SSPs are given by:
\begin{equation}
(\mbox{Mg$_2$~})^{\rm S} = - 2.5~{\rm Log}~\frac{\int_{iso} {n(x)~{\it f}_{\rm c}(x)~
10^{-0.4~Mg_{2}(x)}
~dx}}{\int_{iso} {n(x)~{\it f}_{\rm c}(x)~dx}} \label{eq:mgssp}
\end{equation}
\begin{equation}
(\mbox{Fe52~})^{\rm S} = \frac{\int_{iso} {n(x)~{\it f}_{\rm c}(x)~Fe52(x)~dx}}
{\int_{iso} {n(x)~{\it f}_{\rm c}(x)~dx}} \label{eq:fessp}
\end{equation}
where $x$ describes the (Log g, Log T$_{\rm e}$) values along the isochrone,
$n(x)$ and $f_{\rm c}(x)$ are the number of stars and the continuum flux
in the relevant wavelength bands, and $Mg_2(x)$, $Fe52(x)$ are the fitting
functions, all quantities being computed at the point {\it x}.
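In discretized form, eqs. (\ref{eq:mgssp}) and (\ref{eq:fessp}) reduce to continuum-flux-weighted sums over the points sampling the isochrone. The sketch below uses a made-up four-point isochrone (all array values are placeholders, not real stellar counts, fluxes or fitting-function values); it only illustrates that the magnitude-like \mbox{Mg$_2$~} is averaged in flux, while \mbox{Fe52~} is averaged linearly.

```python
import math

# Toy isochrone sampling: all values below are placeholders, not real
# stellar counts, continuum fluxes or fitting-function values.
n_x    = [100.0, 40.0, 5.0, 1.0]   # stars per bin (main sequence .. RGB tip)
f_c    = [0.2, 0.5, 3.0, 40.0]     # continuum flux per star (arbitrary units)
mg2_x  = [0.10, 0.15, 0.28, 0.35]  # Mg2 fitting-function values
fe52_x = [1.0, 1.5, 2.8, 3.5]      # Fe52 fitting-function values

weights = [n * f for n, f in zip(n_x, f_c)]
wsum = sum(weights)

# magnitude-like index: average the fluxes 10^{-0.4 Mg2}, then back to mag
mg2_ssp = -2.5 * math.log10(
    sum(w * 10 ** (-0.4 * m) for w, m in zip(weights, mg2_x)) / wsum)

# equivalent-width-like index: plain flux-weighted average
fe52_ssp = sum(w * f for w, f in zip(weights, fe52_x)) / wsum
```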
\par
For a given age, $n(x),f_{\rm c}(x)$ and the fitting functions depend on
the total
metallicity $Z$ and on the fractions $\zeta_{\rm i}$ = $X_{\rm i}/Z$ of all the
different elements ($X_{\rm i}$ being the abundance by mass of the
i-th element).
The effect of changing the fractions $\zeta_{\rm i}$ (at fixed total
metallicity $Z$) from solar ratios
to $\alpha$ enhanced ratios on the shape of the isochrone is likely to
be small. This has been shown to be valid (at low metallicities) for the
turn-off region by Chaboyer, Sarajedini $\&$ Demarque (1992) and
Salaris, Chieffi $\&$ Straniero (1993).
Considering the RGB, its location essentially depends on the abundance of
heavy elements with low ionization potentials (Renzini 1977), which,
being the main electron donors, control the H$^-$ dominated opacity.
The RGB temperature is then controlled by the total abundance of these
elements, namely Mg, Si, S, Ca and Fe (Salaris et al. 1993).
The $\alpha$ enhanced mixtures predicted by detailed models for the
chemical evolution of galaxies (Matteucci 1992) are indeed characterized by
a similar value of the sum of the $\zeta_i$ fractions of
Mg, Si, S, Ca and Fe. For example, it varies from 0.2, at solar ratios,
to 0.18 for an $\alpha$ enhancement of $\sim$ 0.4 dex. This means that
a mixture with $Z=0.02$ and solar abundance ratios has the same
abundance (by mass) of electron donors as a mixture with
$Z$ = 0.022 and an average enhancement of all the $\alpha$ elements of
0.4 dex. It is then reasonable
to assume that the shape of the isochrone is mainly controlled by
the total metallicity, and that variations of the $\zeta_i$ fractions have
only a marginal effect, provided that all the $\alpha$ elements are enhanced.
Actually, compared to the solar ratios mixtures, the $\alpha$ enhanced
ones in Matteucci (1992) models
have a slightly lower fraction of electron donors, implying slightly
warmer RGBs and, all else being fixed, lower metallic line
strengths. The SSP models computed for solar ratios and
$\alpha$ enhanced mixtures by Weiss et al. (1995) indeed
support this conclusion (compare their model 7 to 7H).
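The electron-donor equivalence quoted above can be checked with a line of arithmetic, using the donor fractions 0.20 (solar ratios) and 0.18 ($\alpha$ enhanced by $\sim$ 0.4 dex) cited in the text from the Matteucci (1992) mixtures:

```python
# Abundance by mass of the electron donors (Mg, Si, S, Ca, Fe):
# X(donors) = Z * zeta(donors), with the zeta values quoted in the text.
solar    = 0.020 * 0.20   # Z = 0.02, solar abundance ratios
enhanced = 0.022 * 0.18   # Z = 0.022, alpha enhanced by ~0.4 dex
```

The two mixtures have the same donor abundance to within about one per cent, which is why the isochrone shape is expected to depend mainly on the total metallicity.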
\begin{figure}[htb]
\vspace {10cm}
\special{psfile=f6.ps angle=0. hoffset=-35. vscale=100. hscale=100. voffset=-150.}
\caption[]{The effect of an $\alpha$ elements enhancement on the \mbox{Mg$_2$~} and
\mbox{Fe52~} line strengths of W models with an age of 17 Gyr.
The symbols are the same as in Fig. 5.
The arrows show how the indices change for a progressively higher
[Mg/Fe], according to the simple scaling described in the text.
It appears that the highest \mbox{Mg$_2$~} galaxies require both a larger
[Mg/Fe] ratio and a larger total metallicity.}
\end{figure}
I now turn to consider the metallicity dependence of the fitting
functions. Buzzoni's $Mg_2(x)$ and $Fe52(x)$ are expressed as the
sum of two terms, one depending only on metallicity and the other
on gravity and temperature. With this functional form, the \mbox{Mg$_2$~} and \mbox{Fe52~} indices for SSPs
can be written as:
\begin{equation}
(\mbox{Mg$_2$~})^{\rm S} = \Theta_{\rm Mg}([{\rm Fe/H}]) + G_{\rm iso} \label{eq:Mgs1}
\end{equation}
\begin{equation}
(\mbox{Fe52~})^{\rm S} = \Theta_{\rm Fe}([{\rm Fe/H}]) +
G^{\prime}_{\rm iso} \label{eq:Fes1}
\end{equation}
in which $\Theta_{\rm Mg}$ and $\Theta_{\rm Fe}$ are exactly the
dependences on [Fe/H]
of the $Mg_2(x)$ and $Fe52(x)$ fitting functions, while the $G$ functions
depend on the shape of the isochrone, and on how the stars distribute
along it. For B models, $\Theta_{\rm Fe}$ = 1.15[Fe/H] and
$\Theta_{\rm Mg}$ = 0.05[Fe/H] $\simeq$ 0.05[Mg/H], since Buzzoni's
fitting functions have been
constructed using a sample of stars with likely solar abundance ratios.
Assuming that the $G$ functions only depend on the total metallicity $Z$, and
that the fitting functions only depend on the abundance of the element
contributing to the feature, for B models I can write:
\begin{equation}
(\mbox{Mg$_2$~})^{\rm S} = 0.05~{\rm Log} \zeta_{\rm Mg} + h(Z) \label{eq:Mgs2}
\end{equation}
\begin{equation}
(\mbox{Fe52~})^{\rm S} = 1.15~{\rm Log} \zeta_{\rm Fe} +
h^{\prime}(Z) \label{eq:Fes2}
\end{equation}
where all the quantities dependent on the total metallicity are described by
$h$ and $h^{\prime}$, and $\zeta_{\rm Mg}$, $\zeta_{\rm Fe}$ are the
contributions to the
total $Z$ from Magnesium and Iron, respectively.
It is now easy to derive the difference between the indices for two SSPs
with the same total metallicity (and age), but different $\alpha$ enhancements:
\begin{equation}
\Delta(\mbox{Mg$_2$~}) = a \times {\rm Log}~\zeta_{\rm Mg}^{(1)}/
\zeta_{\rm Mg}^{(2)} \label{eq:Deltaam}
\end{equation}
\begin{equation}
\Delta(\mbox{Fe52~}) = b \times {\rm Log}~\zeta_{\rm Fe}^{(1)}/
\zeta_{\rm Fe}^{(2)} \label{eq:Deltaaf}
\end{equation}
with $a = 0.05$ and $b = 1.15$ for B models.\par
Gorgas et al. (1993) fitting functions, used in Worthey's models, do not
allow the separation of the metallicity dependence from that on gravity
and effective temperature. Thus, equations (\ref{eq:Mgs1}) and (\ref{eq:Fes1})
are not strictly applicable. However, for bright RGB stars which dominate
the \mbox{Mg$_2$~} index, Gorgas et al. fitting functions scale according to
$\Theta_{\rm Mg}$ $\sim$ 0.19[Fe/H] (WFG).
Most of the contribution to the
\mbox{Fe52~} index comes instead from lower luminosity RGB stars (Buzzoni 1995a),
for which $\Theta_{\rm Fe}$ $\sim$ 1.5[Fe/H] seems a fair approximation.
Therefore eq. (\ref{eq:Deltaam}) and (\ref{eq:Deltaaf}) can be used to
estimate the effect of an $\alpha$ overabundance on W SSP models, with
$a = 0.19$ and $b = 1.5$, since solar elemental ratios are likely to
characterize also the stars in the Gorgas et al. sample.
\par
To summarize, the line strengths of SSPs with $\alpha$ elements enhancement
can be scaled from those of SSPs with the same age and total metallicity $Z$
(but solar elemental ratios) using relations (\ref{eq:Deltaam}),
(\ref{eq:Deltaaf}) provided that:\par\noindent
i) the shape of the isochrone (at given age and $Z$), and the
distribution of stars along it, are the same;\par\noindent
ii) the dependence of the fitting functions on the metallicity is linear in
[M/H] and can be separated from the other dependences;
\par\noindent
iii) a given index depends only on the abundance of the element which gives
rise to the considered feature. \par
These three requirements are never strictly true; nevertheless
within the current understanding they seem to be valid approximations.
Chemical evolution models by Matteucci (1992) with an $\alpha$
elements enhanced mixture with [O/Fe] = 0.45 are characterized by
$\zeta_{\rm Mg}$/$\zeta_{\rm Mg,\odot}$ = 1.19 and
$\zeta_{\rm Fe}$/$\zeta_{\rm Fe,\odot}$ = 0.48. It
follows that, for this kind of $\alpha$ enhancement, one expects that
at each metallicity point \mbox{Mg$_2$~} increases by $\sim$ 0.004
and \mbox{Fe52~} decreases by $\sim$ 0.37 for B models. For W models
the expected differences are larger: \mbox{Mg$_2$~} increases by $\sim$ 0.015
and \mbox{Fe52~} decreases by $\sim$ 0.48.
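The index shifts just quoted follow directly from relations (\ref{eq:Deltaam}) and (\ref{eq:Deltaaf}) with the coefficients and $\zeta$ ratios given in the text; a minimal sketch of the computation:

```python
import math

def delta_index(coeff, zeta_ratio):
    """Shift of a line index between two SSPs with the same age and
    total Z but different elemental ratios:
    Delta = coeff * Log10(zeta^(1) / zeta^(2))."""
    return coeff * math.log10(zeta_ratio)

# Matteucci (1992) alpha-enhanced mixture ([O/Fe] = 0.45),
# relative to solar ratios:
r_mg, r_fe = 1.19, 0.48

# B models (Buzzoni's fitting functions): a = 0.05, b = 1.15
d_mg2_B  = delta_index(0.05, r_mg)   # ~ +0.004
d_fe52_B = delta_index(1.15, r_fe)   # ~ -0.37

# W models (Gorgas et al. fitting functions): a = 0.19, b = 1.5
d_mg2_W  = delta_index(0.19, r_mg)   # ~ +0.015
d_fe52_W = delta_index(1.5,  r_fe)   # ~ -0.48
```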
The results of the application of these scaling relations are
shown in Figure 6, for W 17 Gyr old SSPs.
One can see that, as long as [Mg/Fe] is fixed, the theoretical indices
describe a locus parallel to the solar ratio sequence. Therefore, the
flattening of the \mbox{Fe52~} vs \mbox{Mg$_2$~} relation at the high metallicity end
can be obtained only assuming different [Mg/Fe] ratios in the different
galaxies. The average (flat) slope of the
data points in the \mbox{Mg$_2$~} $>$ 0.3 domain requires an [Mg/Fe] increasing for
increasing \mbox{Mg$_2$~}, up to an overabundance of $\sim$ 0.4 dex.
It is worth noticing that, in this simple scheme, \mbox{Mg$_2$~} still remains
an indicator of total metallicity: changing the [Mg/Fe] ratio at constant
$Z$ (i.e. enhancing $X_{\rm Mg}$ and decreasing $X_{\rm Fe}$ accordingly),
does not lead to a substantial increase of the \mbox{Mg$_2$~} index, while the effect on
the \mbox{Fe52~} index is more dramatic. As a result, the highest \mbox{Mg$_2$~} objects
are still accounted for by the highest total $Z$ populations.
Finally, I notice that, in spite of leading to higher \mbox{Mg$_2$~}
indices for a given $Z$, an [Mg/Fe] = 0.4 is not sufficient to account for
the strongest \mbox{Mg$_2$~} values observed, unless the nuclei of these galaxies are
inhabited by virtually pure SSPs of large total metallicity. Supersolar
CSPs barely reach \mbox{Mg$_2$~} = 0.34, and metallicity distributions
with $Z_{\rm m}$ = 0.5\mbox{Z$_{\odot}$} do not go beyond \mbox{Mg$_2$~} = 0.32. Therefore, the need
for a small metallicity dispersion in the nuclei of the most luminous
ellipticals still remains, even assuming an overabundance which accounts
for their relatively low \mbox{Fe52~} indices.
\begin{figure*}[tb]
\vspace {11.5cm}
\special{psfile=f7.ps angle=0. hoffset=-30. vscale=100. hscale=100. voffset=-145.}
\caption[]{Comparison between observations of
\mbox{H$\beta$~} and \mbox{Mg$_2$~} indices in the nuclei of Ellipticals and model
predictions using Worthey's (left panel) and Buzzoni's (right
panel) SSPs. The data points (small squares)
are from Gonzalez (1993) and the error bar quoted by the author is shown
in the left panel. The dotted lines connect SSP models of constant age
and different metallicities, each line labelled with its age in Gyr.
The dot-dashed lines connect constant metallicty SSPs with $Z$ = 3\mbox{Z$_{\odot}$}
(left panel), 1.7\mbox{Z$_{\odot}$} (right panel).
The solid lines are the loci described by single age CSPs, {\it y} =
3\mbox{Z$_{\odot}$}, $Z_{\rm m}$ =
0.01\mbox{Z$_{\odot}$}, and $Z_{\rm M}$ increasing up to 3\mbox{Z$_{\odot}$}, for the various ages.
The big symbols indicate the location in this diagram of CSPs with
fixed Z$_{\rm M}$ = 3\mbox{Z$_{\odot}$} and different Z$_{\rm m}$.
The encoding is the same as
in Figure 5, except for the following cases: the 3 Gyr old CSP shown as
a pentagon has $Z_{\rm m}$ = 0.6\mbox{Z$_{\odot}$} (instead of 0.5\mbox{Z$_{\odot}$}); the
8 Gyr old CSPs based on Buzzoni's models have been computed with an upper
metallicity cut off of $Z_{\rm M}$ $\simeq$ 1.7\mbox{Z$_{\odot}$} (instead of 3\mbox{Z$_{\odot}$}).
These different limits are due to the limited $Z$ range covered by the
SSP models at these ages. A constant shift of 0.015 (for
Worthey's models) and 0.004 (for Buzzoni's) has been applied to the
theoretical \mbox{Mg$_2$~} line strengths to account for the [Mg/Fe] overabundance.}
\end{figure*}
\subsection{The \mbox{H$\beta$~} index}
The \mbox{H$\beta$~} line strength is very sensitive to the temperature of
turn-off stars; thus plotting \mbox{H$\beta$~} versus an index which
is mainly controlled by $Z$ may allow one to estimate independently the age
and metallicity of a stellar system (see e.g. Gonzalez 1993). Adopting this
approach, Faber et al. (1995) suggest that the nuclei of elliptical galaxies
form an age sequence of high $Z$ objects,
rather than a metallicity sequence of old objects.
In their comparison, the effect of a non
zero [Mg/Fe] is taken into account by plotting \mbox{H$\beta$~} versus a newly
defined index ([MgFe]), equal to the geometric mean of Mgb and $<$Fe$>$.
This index is meant to trace the total metallicity $Z$ better, but there is no
guarantee that it actually does (see also Faber et al. 1995). I prefer to
use \mbox{Mg$_2$~} as metallicity indicator, and account for the Magnesium overabundance
with the simple scaling given by relation (\ref{eq:Deltaam}).
\par
Figure 7 shows the locus described by SSP and CSP models in the \mbox{H$\beta$~}
vs \mbox{Mg$_2$~} plane for various ages, together with Gonzalez (1993) data.
In order to mimic the effect of an $\alpha$ elements enhancement, a constant
shift has been applied to the SSPs \mbox{Mg$_2$~} values, which amounts to
0.015 dex for W and 0.004 dex for B models,
corresponding to a constant overabundance of [Mg/Fe] = 0.4. \par
The CSP models in Figure 7 appear to support Faber et al. (1995) conclusion,
and further show that a metallicity spread would substantially
worsen the agreement between the models and the observations,
at all metallicities. Actually, the dot dashed line in the
left panel of Figure 7, fairly fitting the observations, connects the
$Z=$ 3\mbox{Z$_{\odot}$}~SSPs in the W set of models. Since in this diagram SSPs
add as vectors, an internal age spread of $\sim$ 1-2 Gyr in the CSPs would
hardly affect the interpretation of the data, which seem to require
younger average ages for the lower metallicity objects.
However, since the low \mbox{Mg$_2$~}
galaxies are also the fainter Ellipticals in the local sample, a tight
mass$-$age relation would be implied, with the less massive Ellipticals being
(on the average) younger than the most massive ones.
\par
Basically the same conclusion holds when considering B models
(right panel): in spite of predicting \mbox{H$\beta$~} line strengths which are
systematically higher than Worthey's, at any age and \mbox{Mg$_2$~}, still
the 15 Gyr old locus is too shallow with respect to the data. In essence,
for old ages the model \mbox{H$\beta$~} index is too low at low metallicities, and
ages younger than 8 Gyr are required to fit the \mbox{H$\beta$~} values of the low
\mbox{Mg$_2$~} galaxies.
\par
The need for invoking an age difference in the galaxies of Gonzalez (1993)
sample is related to the mild dependence of the \mbox{H$\beta$~} line strength on
the metallicity, which reflects the dependence on $Z$
of the turn$-$off temperature. This is the case for SSP models
with Red Horizontal Branches (RHB). On the other hand, as is well known,
the HB stars in the galactic globular clusters become bluer
with decreasing metallicity, although the trend is not strictly
monotonic due to the {\it second parameter} problem (see e.g. Renzini 1977).
If the average temperature of HB stars increases for
decreasing metallicity, \mbox{H$\beta$~} will be more sensitive to $Z$ than estimated
in the SSP models considered until now. As a result, the presence of a low
metallicity tail in the stellar populations in Ellipticals
could affect the CSP \mbox{H$\beta$~} line strengths
appreciably, leading to higher equivalent widths the larger the
population in the low $Z$ tail.
According to Buzzoni et al. (1994), an Intermediate Horizontal Branch (IHB)
(corresponding to a temperature distribution peaked at Log T$_{\rm e}$ =
3.82, and a blue tail extending up to Log T$_{\rm e}$ = 4.05, such as e.g. in
the globular cluster M3) leads to \mbox{H$\beta$~} indices higher by
$\simeq$ 0.7 \AA, with respect to SSPs with RHB.
Thus, in order to estimate the impact of this effect on the \mbox{H$\beta$~}
line strength I have computed integrated indices for CSPs
using Buzzoni's 15 Gyr old models with an artificially increased
\mbox{H$\beta$~}. The enhancement is taken equal to 0.5\AA~ for all metallicities
less than $\sim$ 0.6\mbox{Z$_{\odot}$}, and linearly vanishing at \mbox{Z$_{\odot}$}.
The effect of the adoption of an IHB on the \mbox{Mg$_2$~} index has instead been
neglected, since it is estimated to lead to a decrease of only a few
10$^{-3}$ mag (Buzzoni et al. 1994). Notice that a
[Mg/Fe] = 0.4, which would correspond to an increase of \mbox{Mg$_2$~} by approximately
the same amount for B models, has not been taken into account in this
computation.
\par
The results are shown in Figure 8, where the locus described
by CSPs with various $Z_{\rm m}$ is displayed. Obviously,
those CSPs with
$Z_{\rm m}$ in excess of $\sim$ 0.5\mbox{Z$_{\odot}$} are not affected by these new
prescriptions, and their corresponding line indices lie on the
locus of RHB SSP models. On the contrary, CSPs with sufficiently
low $Z_{\rm m}$ have substantially higher \mbox{H$\beta$~}, for a given \mbox{Mg$_2$~},
and the data in this diagram could be interpreted as a sequence of
old stellar populations, with increasing average metallicity.
While the need for a small metallicity dispersion in the nuclei of
the most luminous Ellipticals discussed in the previous sections
leaves little room for a sizeable contribution of a low $Z$
component, the less luminous Ellipticals can host a substantial low $Z$,
IHB component in their nuclei.
This experiment shows that the strength of the \mbox{H$\beta$~}
line is very sensitive to the temperature distribution assumed
for HB stars. Actually, just comparing Fig. 8 to the right panel
of Fig. 7, it appears that the data can be equally interpreted as
an age sequence, at a constant high metallicity,
or as a sequence at constant old age of CSPs with decreasing metallicity
spread, corresponding to an increasing average metallicity.
\begin{figure}[htb]
\vspace {11cm}
\special{psfile=f8.ps angle=0. hoffset=-35. vscale=95. hscale=95. voffset=-140.}
\caption[]{Effect of the contribution of an IHB on the \mbox{H$\beta$~} index. SSP models
from Buzzoni with an age of 15 Gyr are shown as dotted lines for a RHB
and an IHB (see text). CSP model sequences computed with IHB SSPs are shown
as different lines, along which $Z_{\rm m}$ increases up to 3\mbox{Z$_{\odot}$}.
All the CSP
models have {\it y} = 3\mbox{Z$_{\odot}$}, while $Z_{\rm m}$/\mbox{Z$_{\odot}$}=0.01,0.1,0.3,0.5
for the solid, long dashed, short dashed and dot$-$dashed lines,
respectively. The last CSP model of each sequence is shown as a big symbol.
The big octagon, on the RHB line, shows the line strengths for the
CSP model with $Z_{\rm m}$ =\mbox{Z$_{\odot}$}, $Z_{\rm M}$ = 3\mbox{Z$_{\odot}$}, {\it y} = 3.
No shift to higher \mbox{Mg$_2$~} values to
account for the possible [Mg/Fe] enhancement has
been applied here.}
\end{figure}
\section {Discussion}
The numerical experiments performed in this paper illustrate how the
\mbox{Mg$_2$~}, \mbox{Fe52~}, \mbox{Fe53~} and \mbox{H$\beta$~} line strengths are affected by the
presence of a metallicity distribution shaped like the closed box
model predictions. I have explored systematically the results of
changing the minimum and maximum metallicity ($Z_{\rm m}$, $Z_{\rm M}$)
characterizing the chemical processing: the first parameter describes the
possibility of pre-enrichment of the gas in the closed box to different
degrees; the second, the occurrence of galactic winds, inhibiting further
chemical processing at various levels of completion.
I now summarize the major results, and derive
some hints on the stellar population inhabiting the central regions of
Ellipticals.
\subsection{The average metallicity in the nuclei of Ellipticals}
Due to its major contribution to the light in the optical bands, the
low metallicity component tends to dominate the integrated indices.
This implies that the metallic indices of composite stellar populations
are systematically weaker than those of simple stellar populations
with the same metallicity. Therefore, a given value of a metallic
line strength corresponds to CSPs with larger mass-averaged metallicities
than SSPs. The difference between the two metallicities depends on
the width of the $Z$ distribution in the CSP model, and it can be as
large as $\sim$ 0.3 dex.
As a consequence, a quantitative relation between
metallic line strength and average metallicity is subject to a substantial
uncertainty: it depends on the metallicity distribution, which
cannot be constrained, and on the SSP models, that may still be affected
by inadequacies. Indeed, some differences are present between the
various sets of models available in the literature (see also Charlot, Worthey
$\&$ Bressan 1996), which affect the relation between
integrated indices and the average metallicity of a given CSP.
This impacts on the calibration of the spectral indices of different
galaxies in terms of
their metallicity, as well as on the derivation of abundance gradients
from line strength gradients within a given galaxy.
\subsection{The metallicity spread in the nuclei of Ellipticals}
The systematic exploration of the influence of the $Z_{\rm m}$ and
$Z_{\rm M}$ parameters
shows that the sequence from low to high luminosity ellipticals (as far
as their central stellar population is concerned) can be
interpreted in various ways: as a sequence of virtually pure SSPs of
increasing metallicity; as a sequence of CSPs in which either the low
metallicity component becomes less and less important, or both the minimum
and the maximum metallicity increase. Chemical evolution models with
galactic winds rather predict
a metallicity sequence among Ellipticals characterized by an increasing
$Z_{\rm M}$ for increasing mass (and luminosity) of the
galaxy, with an important low metallicity component always present.
This class of models can account for objects with \mbox{Mg$_2$~}, \mbox{Fe52~} and \mbox{Fe53~} up
to $\sim$ 0.27, 3 and 2.7 respectively, while galaxies with higher values of
these indices cannot be reproduced.
Notice that $\sim$ 80 $\%$ of the objects in
the Davies et al. (1987) sample have \mbox{Mg$_2$~} $\ge$ 0.26.
Since these considerations
apply only to the nuclear indices, classical wind
models can still be adequate to describe the global metallicity distribution
in Ellipticals, but a mechanism should be found to
segregate the high metallicity component in the nuclei.\par
A similar problem has been found by Bressan, Chiosi $\&$ Fagotto (1994),
when comparing the spectral energy distribution of their model Ellipticals
with the observations. These authors pointed out that the
theoretical metallicity distribution was too heavily populated in
the low metallicity range, leading to an excess light between 2000 and
4000 \AA~with respect to the observed spectra. They noticed that a much
better fit was achieved if a minimum metallicity of $Z=0.008$ was assumed,
and concluded that the classical chemical evolution models for
elliptical galaxies, like those for the solar neighborhood, are affected by
the {\it G$-$dwarf problem}, i.e. the excess of low metallicity stars predicted
by the closed box model for the solar neighborhood. The classical
solutions to {\it cure} the G$-$dwarf
problem are (see Audouze $\&$ Tinsley 1976):\par\noindent
i) infall of metal free gas, so that the SFR exhibits a maximum
at some intermediate epoch, when a substantial enrichment has been already
accomplished;\par\noindent
ii) prompt initial enrichment (PIE), as explored here by varying
the $Z_{\rm m}$ parameter; \par\noindent
iii) adopting a SFR enhanced in high metallicity gas, in which
the stars are formed with a larger metallicity
than the average $Z$ of the interstellar medium. \par
A variation of the PIE model consists in assuming that the first
stellar generations are formed with a conveniently flat IMF (Vazdekis et al.
1996), so that they
contribute metals at early times, but not light at the present epoch.
The following
generations would instead form with a normal IMF.\par
All of these are in principle viable solutions; which, if any, applies
to the nuclei of Ellipticals remains to be seen. However, I
notice that the infall models predict for the most massive galaxies
\mbox{Mg$_2$~} indices not larger than 0.28 (Bressan et al. 1996):
still a low value with respect to the
observations in the nuclei of massive Ellipticals.
Solution iii) requires large inhomogeneities in the gas, and a variable
IMF seems a rather {\it ad hoc} solution. \par
A prompt initial enrichment
for the gas in the nuclei of Ellipticals is easily realized by relaxing the
hypothesis of instantaneous complete mixing of the gas, and allowing
enriched gas to sink to the center. This is indeed a natural result of
dissipational galaxy formation (Larson 1975).
During its formation process, a galaxy consists
of two components: one dissipationless (the newly formed stars), and
one dissipative (the gas). Once formed, the
stars stop participating to the general collapse, keeping thereafter
memory of their energy state at formation. The gas, instead, will
continue to flow towards the center, being progressively enriched as
SN explosions take place. Thus, the gas accumulated in the nuclear regions is
pre-enriched by stars which formed (and died) in the outer regions, and
will further form stars, which will then be the most metal rich in the
galaxy. The metal poor stars missing in the galaxies' nuclei should be
found in the outer regions. In other words, the different dissipation
properties of the stars
and the gas would lead to a chemical separation within the galaxy during its
formation, no matter whether the protogalaxy is a monolithically collapsing
cloud, or if it consists of gas rich merging lumps. Interestingly,
this out$-$inward formation would also allow for
peculiarities in the core kinematics, as observed in a substantial fraction
of galaxies (Bender et al. 1993), since the formation of the nucleus
would be partially decoupled from the formation of the rest of the galaxy.
\subsection{The elemental ratios in the nuclei of Ellipticals}
The Iron indices vs \mbox{Mg$_2$~} plot suggests the presence of a Magnesium
overabundance in the brightest Ellipticals (\mbox{Mg$_2$~} $>$ 0.3),
as pointed out by WFG comparing the data to SSP models. The presence
of a metallicity distribution does not alter this conclusion, since the CSP
models considered here
describe the same locus of SSPs in the Iron vs \mbox{Mg$_2$~} diagram. If the
Magnesium overabundance is tracing a true $\alpha$ elements overabundance,
it is possible to estimate the enhancement quantitatively using
a simple scaling of the SSP models constructed for solar abundance ratios.
This estimate however is very sensitive to the dependence of the \mbox{Mg$_2$~} index
on the Mg abundance, and of the Iron index on the Fe abundance.
Assuming that these dependences are the same as the metallicity dependence
of the relative fitting functions, for Worthey's models I have found that
the brightest ellipticals should be characterized by [Mg/Fe] ratios
ranging from 0 to 0.4 approximately, for increasing \mbox{Mg$_2$~}. It may
seem that a Magnesium overabundance would easily explain the strong \mbox{Mg$_2$~}
values
without invoking high total metallicities in the nuclei of the brightest
Ellipticals. However, [Mg/Fe]$>$0 means Mg enhancement together with Fe
depletion, with respect to the solar ratio. In the frame of an overall
$\alpha$ element overabundance, the chemical evolution
models actually predict a lower fraction of electron donors at fixed
total metallicity, as mentioned in Section 3.4.
Correspondingly, the temperature of the RGB is increased,
counterbalancing the increase of the \mbox{Mg$_2$~} index conveyed by a higher
Mg abundance. Thus, large metallicities and small metallicity dispersions
are still needed to account for the data.\par
According to current modelling an overabundance of
[Mg/Fe]=0.4 implies very short formation
timescales for the whole galaxy, since the Mg and Fe gradients, within the
errors, are the same (Fisher, Franx $\&$ Illingworth 1995).
It is interesting to see just how short:
Matteucci (1994) models
for the chemical evolution of Ellipticals predict that a solar [Mg/Fe] is
reached already at 0.3 Gyr. An overabundance implies formation timescales
shorter
than this, and the higher [Mg/Fe], the shorter the timescale required.
Indeed, in Matteucci (1994) {\it inverse wind models}, the galactic
wind occurs at only $\sim$ 0.15 Gyr in a 5 $\times$ 10$^{12}$ M$_\odot$ galaxy,
and yet the corresponding overabundance is not larger than 0.3. The formation
timescales inferred from a given [Mg/Fe] overabundance depend on the
adopted model for the Type Ia supernovae progenitors, and accordingly can
be considered quite uncertain. It seems however unlikely that a Mg to Fe
overabundance could be accomplished with formation time scales longer
than $\sim$ 1 Gyr. For example, Greggio (1996) finds that, following a
burst of Star Formation, $\simeq$ 50$\%$ of the total Iron from the
Type Ia SNe is released within a timescale ranging from 0.3 to 0.8 Gyr,
for a variety of SNIa possible progenitors.
\subsection{The \mbox{H$\beta$~} line strength as age indicator}
As for the Iron indices, the comparison of models to observations in the
\mbox{H$\beta$~} vs \mbox{Mg$_2$~} plot
does not change if a metallicity spread in the stellar populations is
taken into account. In SSPs with red horizontal
branches the only way to enhance \mbox{H$\beta$~} is by adopting a warmer turn$-$off,
that is younger ages. In this case, the constraint from the metallic
indices on the metallicity and metallicity dispersion discussed until now
is even stronger, since younger ages make \mbox{Mg$_2$~}, \mbox{Fe52~} and \mbox{Fe53~} weaker.
Extremely large metallicities should characterize the nuclei of all
Ellipticals in Gonzalez (1993) sample, with virtually no
metallicity dispersion.
If however the temperature distribution of
HB stars is wide enough, strong \mbox{H$\beta$~} indices can be obtained in
old stellar populations, due to the contribution of {\it warm} HB stars. In
this case, the \mbox{H$\beta$~} vs \mbox{Mg$_2$~} plot would trace the different proportions
of this stellar component. On the average, the effect is stronger
for the less metallic galaxies, if they host a composite
stellar population with a larger fraction of low $Z$ stars, and the data are
consistent with an old age for the stars in the
centers of this sample of Ellipticals. \par
The nuclei of
galaxies with \mbox{Mg$_2$~} in excess of $\sim$ 0.3 are however likely to host
only stars with metallicity larger than $\sim$ 0.5\mbox{Z$_{\odot}$}.
For these galaxies, there is little room for a warm
HB component, at least in the frame of the canonical stellar evolution,
and yet their \mbox{H$\beta$~} line strengths range from $\sim$ 1.4 to $\sim$ 1.8 \AA.
Adopting Worthey's 3\mbox{Z$_{\odot}$}~SSP models, this implies an age range from
$\sim$ 12 to $\sim$ 5 Gyr, if the stellar populations in the nuclei of
these galaxies are coeval. If, however, I consider a two component population,
one very old and one very young, relatively high \mbox{H$\beta$~} values can be
accomplished with a small contribution to the total light (and even smaller
to the total mass) from the young
component. For example, a combination of a 17 Gyr plus
a 1.5 Gyr old SSP, both with 3 times solar metallicity, contributing 80
$\%$ and 20 $\%$ of the light respectively, has \mbox{H$\beta$~} $\simeq$ 1.74 and
\mbox{Mg$_2$~} $\simeq$ 0.32. The contribution to the total mass of the young SSP
would amount to only $\sim$ 7 $\%$.
Adopting a solar metallicity for the young component,
one gets \mbox{H$\beta$~} $\simeq$ 1.8, \mbox{Mg$_2$~} $\simeq$ 0.32 with only 10$\%$ of the
light coming from the 1.5 Gyr old population, corresponding to a
$\sim$ 3$\%$ contribution to the total mass. Indeed, owing to the steep age
dependence of \mbox{H$\beta$~}, a small fraction of light from the young stellar
population is sufficient to enhance the \mbox{H$\beta$~} line strength.
If this was the case, the bulk of the population in the nuclei of
these galaxies would be truly old, with \mbox{H$\beta$~} tracing a relatively
inconspicuous recent Star Formation event.
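The two-component estimates above amount to a light-weighted combination of SSP indices. The sketch below illustrates the combination rules only; the SSP index values are hypothetical placeholders chosen for illustration, not numbers taken from Worthey's tables:

```python
import math

# Hypothetical SSP index values (placeholders, NOT from the paper's
# tables): an old 3 Zsun population plus a young burst component.
hbeta_old, hbeta_young = 1.45, 3.0    # Angstrom (equivalent widths)
mg2_old,   mg2_young   = 0.33, 0.28   # mag

f_young = 0.20            # fraction of the total light from the burst
f_old   = 1.0 - f_young

# Equivalent widths combine linearly with the light fractions...
hbeta = f_old * hbeta_old + f_young * hbeta_young

# ...while Mg2, a magnitude index, combines the flux deficits before
# taking the logarithm, as in eq. (mgssp).
mg2 = -2.5 * math.log10(f_old * 10 ** (-0.4 * mg2_old)
                        + f_young * 10 ** (-0.4 * mg2_young))
```

Because \mbox{H$\beta$~} is much stronger in the young component, even a 20 $\%$ light contribution raises the composite equivalent width appreciably, while the composite \mbox{Mg$_2$~} stays close to the old-population value.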
\section {Conclusions}
Using the SSP models currently available in the literature to construct
integrated indices for composite stellar populations with a metallicity
spread I have shown that the nuclei of the most luminous elliptical galaxies
should host stellar populations with:
\par\noindent
a) high total metallicity;
\par\noindent
b) a Magnesium overabundance with respect to Iron, with varying
degrees of the [Mg/Fe] ratio; \par\noindent
c) a small metallicity spread.\par
Condition a) is met by processing the gas through multiple
stellar generations, and condition b) requires that this processing occurs
within a short time scale. This inevitably means that during the
chemical enrichment the star formation rate was very high, implying
a correspondingly large SNII rate. In order to proceed with the chemical
processing, the gas had to be subject to confinement, that is it had to be
located within a deep potential well.
Condition c) is met if the gas turning into stars in the nuclei of
Ellipticals has been substantially pre$-$enriched, or if the maximum SFR was
achieved at some
late stage, when partial chemical processing had already been completed, like
in infall models. However, for any behaviour of the SFR with time,
as long as one considers a self-polluting gas mass, a low metallicity
component in the final stellar population is unavoidable: it is this
low metallicity component which provides the metals to build up the
high $Z$ stars. As a consequence, the extremely high \mbox{Mg$_2$~} indices would
rather favour the pre$-$enrichment alternative.
\par
These facts support the notion that
the gas out of which the nuclei of the most luminous ellipticals
formed was produced within the galaxy itself, and was not accreted from the
outside.
Several lines of evidence indicate that merging must have played a role in the
formation of these galaxies, including the relatively shallow metal
line gradients (e.g. Davies et al. 1993), and the
peculiar kinematics in the
nuclei of a substantial fraction of galaxies (see Bender 1996).
The indications from the analysis performed in this paper
suggest that the merging subunits should have been mostly gaseous, and
confined within the deep potential well in which the galaxy itself is
found.
The segregation of the high $Z$ component in the inner parts
could result from the gas participating in the general collapse more than
the stars, which could have formed within the merging subunits.
\par
As discussed in the previous section, the indications from the \mbox{H$\beta$~} line
strength are sufficiently ambiguous that the possibility that the bulk
of the stars in the nuclei of the brightest Ellipticals are indeed old
remains favoured.
Therefore, the formation of these galaxies, both their stellar component
and the potential wells within which the chemical processing could proceed
to high metallicities on short time scales, should have occurred at high
redshifts.
\par
For the lower luminosity Ellipticals the conclusions are more ambiguous,
as their nuclear line strengths are consistent with both wide and
narrow metallicity distributions. However, galaxies with central \mbox{Mg$_2$~}
in excess of $\sim$ 0.27 (approximately 70$\%$ of the Davies et al. sample)
are not accounted for by closed box models with $Z_{\rm m} \sim$ 0.
Therefore, for the majority of Ellipticals some chemical separation
should have taken place during their formation, although a substantial
low metallicity component could be present in their nuclei. If this were
the case, the \mbox{H$\beta$~}
line strength would trace the proportion of IHB stars produced by this
component of the composite stellar population in the different
galaxies, which would then be old. Finally, the Mg/Fe ratio in the
lower luminosity Ellipticals
is likely to be solar, suggesting longer timescales for
the formation of the bulk of their stars with respect to
the brighter Ellipticals.
\par
A final caveat concerns the reliability of the SSP models used for
the interpretation of the observational data. The differences among the
various
sets of models, the different fitting functions,
and the lack in the stellar data sets of a fair coverage
of the age, metallicity and [Mg/Fe] ratio cast some doubt on the
use of these models to derive quantitative information.
The data seem
to suggest that the stars in the nuclei of elliptical galaxies have
\mbox{Mg$_2$~} indices stronger than what can be reasonably inferred using these
SSP models. Indeed, \mbox{Mg$_2$~} values as large as 0.4
are difficult to account for even with the oldest and most metal rich SSPs
in Worthey's and Buzzoni's sets of models. Although this may be a
problem of just a few objects, it may suggest some inadequacies in the
SSP models. If the dependence of the \mbox{Mg$_2$~} index on metallicity is currently
underestimated at the high $Z$ end,
lower total metallicities, larger metallicity
dispersions, and lower [Mg/Fe] ratios would possibly be allowed for the
stellar populations in the nuclei of the brightest ellipticals. Nevertheless,
the presence of a small metallicity dispersion in the nuclei of giant
ellipticals and the need for old ages seem to be quite robust conclusions,
due to the larger contribution of the more metal$-$poor populations
to the optical light.
\section {Acknowledgements}
It is a pleasure to thank the whole staff at the Universitaets Sternwarte$-$
Muenchen for the kind and generous hospitality, and in particular the
extragalactic group, who made my research work especially
enjoyable through a cheerful environment and stimulating scientific discussions.
I am particularly grateful to Ralf Bender and Alvio Renzini for many
enlightening discussions on this work and careful reading of the manuscript.
The Alexander von Humboldt--Stiftung is acknowledged for support.
\section{Introduction}
In heavy ion collisions at very high energies, the matter created
differs qualitatively from what is traditionally studied in nuclear
and elementary particle physics. In the initial stages of the
collision, copious production of gluons and quarks in a large volume
leads to a rapid increase in the entropy, and the distinct possibility
of a new phase of matter characterized by deconfined degrees of
freedom. One therefore hopes that relativistic heavy ion experiments
can provide insight into the structure of the QCD vacuum,
deconfinement, and chiral symmetry restoration.
The hot transient system eventually evolves into a gas of hadrons at
high energy densities, whose properties may be studied theoretically
using, for example, hadronic cascades \cite{RQMD,ARC,WB}. In
principle, these models provide information on the early, dense phase
by tracing the evolution of the system from hadronization to
freeze--out. Of course, in ultrarelativistic heavy ion collisions,
most of the produced secondaries are pions. For example, in
central Au+Au collisions at center--of--mass energies of $200~{\rm
A~GeV}$, estimates from the FRITIOF event generator suggest that
$\sim 4000$ pions per isospin state might be produced. Further,
recent measurements \cite{zajc2} at lower energies and comparison to
simulations \cite{RQMD2} show that freeze--out source sizes probably
deviate quite drastically from a simple multiplicity scaling law:
present calculations indicate $10$--$20~{\rm fm}$ Au+Au source radii at
$\sqrt{s}=200~{\rm A\cdot GeV}$. In any event, these high energy
collisions might well create highly degenerate bose systems, and even
possibly Bose--Einstein condensates (BEC). Since practical
conclusions from dynamical simulations \cite{BDSW} depend
qualitatively on the effect of the medium on particle interactions
\cite{ACSW,BBDS}, one needs to better understand the properties of
such degenerate systems of pions within the environment of a
relativistic heavy ion collision.
Non--relativistically, the problem of interacting, degenerate
bose systems has been discussed extensively by several authors.
Evans and Imry \cite{Evans1} established the pairing theory of a
bose superfluid in analogy to the BCS theory of superconductivity.
For an attractive interaction, the resulting gap equation may have a
non--trivial solution. Further, though, there appears the possibility
of having a macroscopic occupation of the $k=0$ particle state when
the corresponding BCS quasiparticle energy vanishes. In turn, this
leads to a spectrum which is linear and gapless in the long
wavelength limit \cite{Evans1}. In a second paper, Evans
and Rashid \cite{Evans2} rederived the equations of
Ref.~\cite{Evans1} using the Hartree--Fock--Gorkov decoupling
method, and solved them for the case of superfluid helium.
This boson pairing theory has been generalized by
D\"orre {\it et al.} \cite{Haug}, who carried out a thermodynamic
variational calculation with a trial Hamiltonian containing a
c--number part. An extensive discussion on the boson pairing problem
is also given by Nozi{\`e}res and Saint James \cite{StJNoz}.
It has further been shown by Stoof \cite{Stoof1} and, independently,
Chanfray {\em et al.} \cite{CSW} that the critical temperature $T_{\rm
c}$ for the transition from the normal phase to the phase with a
non--vanishing gap (the Evans--Rashid transition) is given by a
``Thouless criterion'' \cite{Bogoliubov,Thouless} for the bosonic
$T$--matrix in the quasiparticle approximation, in analogy to the
fermion case. Moreover, it has been demonstrated that there exists a
second critical temperature $T_{\rm BEC}<T_{\rm c}$, where the
condition for the macroscopic occupation of the zero momentum mode of
Ref.~\cite{Evans1} is fulfilled \cite{Stoof1,CSW}.
The mechanism is the same as for
Bose--Einstein condensation in the ideal bose gas \cite{Stoof1}.
Here we wish to consider $\pi$--$\pi$ interactions in the presence of
a dense and hot pion gas along the lines of a previous approach
\cite{ACSW,CSW}.
We address the question of pion pair formation and
the pion dispersion relation in a thermal medium, first in a qualitative way
(section II), then in a more detailed numerical calculation with a
realistic two pion interaction (section III). As we shall
see in section IV, the in--medium $2\pi$ propagator exhibits a pole above a
certain critical temperature, signaling a possible instability
with respect to pion pair formation.
We conclude in section V with a discussion of
the effect in high energy heavy ion collisions.
The effects we present here require rather large phase space densities
for the pions, but are independent of whether full thermal
equilibration has been reached. Nonetheless, we choose to couch the
discussion in thermal language, both because it is convenient and
because the actual situation is probably not far removed from it.
Dynamical calculations \cite{WB,BDSW} show that a high degree of
thermal equilibration is quite reasonable.
Chemical equilibration, on the other hand, may well cease
at later stages of the system's
evolution and lead to a condensation of pions
in the adiabatic limit. Of course, the system actually expands
rather rapidly, but nonetheless large chemical potentials
($\mu \sim 130~{\rm MeV}$) may be built up by freezeout ($T \sim
100~{\rm MeV}$). One might thus expect large phase space occupation
numbers at low momenta, which drive the pion pair formation that we
discuss here.
\section{The Evans--Rashid transition in a hot pion gas}
In order to treat the gas of interacting pions we will use the
boson pairing theory of Evans {\it et al.} \cite{Evans1,Evans2}.
In analogy to the fermion (BCS) case, they obtain a system of
coupled equations for the gap energy and the density by linearizing
the equations of motion. The
usual Thouless criterion for fermions can be established
analogously for the bose system, and yields the critical temperature
below which the gap equation begins to exhibit non--trivial solutions.
However, in contrast to the fermion case, a second, lower, critical
temperature appears at which the quasiparticle energy vanishes at zero
momentum. This temperature is associated with the Bose--Einstein
condensation (BEC) of single bosons, in analogy to the ideal bose gas,
as discussed in Ref.~\cite{CSW}, and in detail for atomic cesium
in Ref.~\cite{Stoof1}. An interesting feature of the
formalism developed by Evans {\it et al.} \cite{Evans1} is that below
the second critical temperature the dispersion relation for the single
bosons is of the Bogoliubov form, {\it i.e.}, linear, or phonon--like,
for small momenta.
In this section, we illustrate these remarks
concerning the Evans--Rashid transition for a pion
gas in a qualitative way, returning to a more detailed numerical
calculation in section~III. While relativistic kinematics is taken
into account, corrections from backward diagrams are ignored. We shall
see in section~III that such an approximation is justified for the
physical regions in which a solution to the gap equation exists. For
clarity in this preliminary discussion, we shall also
generally neglect the $k$--dependence of the self--energy,
$\Sigma(k)$, and absorb it into the chemical potential.
The gap equation for the pion pairs is derived in the appendix,
using Gorkov decoupling:
\begin{equation}\label{pgap}
\Delta(k)\;=\; -{1\over 2}
\,\sum_{{\vec k}^\prime}\: V(k,k^\prime,E=2 \mu)\,
\frac{\Delta(k^\prime)}{2 E(k^\prime)} \,\coth{\frac{E(k^\prime)}{2T}}~ ~,
\end{equation}
where the quasiparticle energy is given by
\begin{eqnarray}\label{Ek}
E(k) \;=\; \sqrt{\epsilon(k)^2-|\Delta(k)|^2}~ ~ ~,
\end{eqnarray}
with $\epsilon(k) \equiv \omega(k)-\mu$, and where
$\omega(k)$ is the free pion dispersion. The $\coth$--factor
represents the medium effect for a thermalized pion gas at temperature
$T$ and chemical potential $\mu$, and $V(k,k^\prime,E)$ is the as yet
unspecified bare two--particle interaction. The corresponding pion
density is
\begin{equation}\label{pdens}
n\;=\;\sum_{{\vec k}^\prime} \: \bigg [ \frac{\epsilon(k^\prime)}
{2 E(k^\prime)}\,\coth{\frac{E(k^\prime)}{2T}} \:-\: \frac{1}{2}
\bigg ]~ ~.
\end{equation}
In spite of the formal similarities of Eq.~(\ref{pgap}) with the
corresponding fermionic gap equation, there are important differences:
For bosons, $|\Delta|^2$ is subtracted in $E(k)$ (for fermions it is
added), and the temperature factor is a hyperbolic cotangent rather than a
hyperbolic tangent.
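As a consistency check on these expressions, note that in the limit
$\Delta\rightarrow 0$ the integrand of Eq.~(\ref{pdens}) reduces to the
familiar Bose--Einstein occupation number, since
$\coth(x/2)/2 - 1/2 = 1/(e^x-1)$. A minimal numerical sketch of this
identity (Python; the values of $\epsilon$ and $T$, in MeV, are purely
illustrative):

```python
import math

def coth(x):
    return 1.0 / math.tanh(x)

def integrand_delta0(eps, T):
    # Integrand of the density equation for Delta = 0, where E = eps:
    # (eps / 2E) * coth(E / 2T) - 1/2
    E = eps
    return (eps / (2.0 * E)) * coth(E / (2.0 * T)) - 0.5

def bose(eps, T):
    # Bose--Einstein occupation number 1 / (exp(eps/T) - 1)
    return 1.0 / math.expm1(eps / T)

# The two expressions agree for any quasiparticle energy and temperature
for eps in (10.0, 50.0, 200.0):   # MeV (illustrative)
    for T in (80.0, 150.0):       # MeV (illustrative)
        assert abs(integrand_delta0(eps, T) - bose(eps, T)) < 1e-12
```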
We discuss the solution to the gap equation for decreasing
temperature, at a fixed value of the chemical potential $\mu$. The
possibility of a finite chemical potential in a pion gas has been
pointed out in the introduction. At very high temperatures, the gap
equation (\ref{pgap}) has only the trivial solution $\Delta\!=\!0$,
and Eq.~(\ref{pdens}) is the usual quasiparticle density. The
dispersion relation is also of the usual form
\begin{equation}\label{df}
\lim_{\Delta\rightarrow 0} E(k) = \epsilon(k)
=\omega(k)-\mu=\sqrt{k^2+m^2_{\pi}}-\mu~ ~.
\end{equation}
With decreasing temperature, however, a critical temperature $T_c^u$ is
reached, at which the gap equation (\ref{pgap}) first exhibits a
non--trivial solution $\Delta \neq 0$. The value of $T_c^u$ may be
found by linearizing the gap equation, {\it i.e.}, setting $E(k)
\approx \epsilon(k)$ in Eq.~(\ref{pgap}). We return to this point in
section~III, showing that the resulting equation for $T_c^u$
is identical to the condition for a pole in the two--pion $T$--matrix
at the particular energy $E\!=\!2\mu$ (for total momentum
$\vec{K}\!=\!0$). Thus we have a bosonic version of the well--known
Thouless criterion for the onset of superfluidity in fermion
systems with attractive interactions.
Below the critical temperature $T_c^u$ the order parameter $\Delta$
becomes finite, and the corresponding dispersion relation is now
given by Eq.~(\ref{Ek}).
As the temperature drops further, $|\Delta|$ increases to a point
where the condition $|\Delta(k=0)|=|m_{\pi}-\mu|$ is reached. This is
the maximum possible value of $|\Delta|$, since otherwise imaginary
quasiparticle energies result. It defines a second critical
temperature $T^\ell_c$, below which the occupation $n_0$ of the zero
momentum state becomes macroscopically large because
$E(k)\!\rightarrow\! 0$ for $k\!\rightarrow\! 0$ \cite{Evans1}. The
possibility of a macroscopic occupation of the $k=0$ mode below
$T^\ell_c$ follows from the pion density Eq.~(\ref{pdens}): for
$E(k=0)\!=\!0$, the $k\!=\!0$ contribution to the density must be treated
separately, as in the case of the ideal bose gas. A similar comment
applies to Eq.~(\ref{pgap}) for the gap, so that we obtain the
two inhomogeneous equations
\begin{eqnarray}
\Delta(k)&=& -\frac 12 \,V(k,0,2 \mu)\:n_0
\:-\:\frac 12\,\sum_{{\vec k}^\prime \neq 0}\:V(k,k^\prime,2 \mu)\,\frac
{\Delta(k^\prime)}{2 E(k^\prime)} \coth{\frac{E(k^\prime)}{2T}}~ ~,
\label{ggu} \\
n&=&n_0\:+\:\sum_{{\vec k}^\prime \neq 0}\:\bigg [
\frac{\epsilon(k')} {2 E(k')}\coth{\frac{E(k')}{2T}}\;-\;
\frac{1}{2}\bigg ]~ ~ ~. \label{dgu}
\end{eqnarray}
In contrast to the ideal bose case, the condensation of quasiparticles
happens at $\mu\!<\!m_\pi$, because of the finite value of the gap.
Below $T_c^\ell$ the dispersion relation is given by
\begin{eqnarray}\label{duu}
E(k)&=&\sqrt{\omega(k)^2-2 \omega(k) \mu+2
\mu m_{\pi}-m^2_{\pi}}~ ~, \nonumber \\
&\approx &\sqrt{2(m_\pi-\mu)\frac{k^2}{2m_{\pi}}\:+\:
\frac{\mu}{m_\pi} (\frac{k^2}{2m_{\pi}})^2}~ ~,\label{dgur}
\end{eqnarray}
in the small $k$, non--relativistic approximation.
Thus, instead of the usual $k^2$--behavior, the pion dispersion is
linear in the long wavelength limit.
Eq.~(\ref{dgur}) may be rewritten in the more usual form of the
well--known Bogoliubov dispersion relation
\cite{Bogoliubov} for a weakly interacting bose gas:
\begin{equation}\label{Bdr}
E(k)\;=\; \sqrt{|V(0,0,2 \mu)|\, n_0\:
\frac{k^2}{2m_{\pi}} \;+\; O(k^4)}~ ~ ~.
\end{equation}
Here, we have used $m_{\pi}-\mu= -V(0,0,2 \mu)\, n_0/2$, which follows from
Eq.~(\ref{ggu}) for sufficiently low temperatures.
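The limiting behaviour of Eq.~(\ref{duu}) is easy to verify numerically.
The sketch below (Python; $\mu=135~{\rm MeV}$ and $m_\pi=139.57~{\rm MeV}$
are illustrative values) checks that $E(0)=0$ and that $E(k)/k$ approaches
the constant sound speed $\sqrt{1-\mu/m_\pi}$ in the long wavelength limit:

```python
import math

m_pi = 139.57   # pion mass in MeV (illustrative)
mu   = 135.0    # chemical potential in MeV (illustrative)

def omega(k):
    return math.sqrt(k * k + m_pi * m_pi)

def E(k):
    # Dispersion relation below the lower critical temperature:
    # E(k)^2 = omega^2 - 2 omega mu + 2 mu m_pi - m_pi^2
    #        = k^2 [1 - 2 mu / (omega + m_pi)]
    w = omega(k)
    return math.sqrt(max(0.0, w * w - 2.0 * w * mu + 2.0 * mu * m_pi - m_pi * m_pi))

assert abs(E(0.0)) < 1e-9                 # gapless at k = 0
c_s = math.sqrt(1.0 - mu / m_pi)          # long-wavelength sound speed
assert abs(E(0.1) / 0.1 - c_s) < 1e-3     # linear (phonon-like) at small k
assert abs(E(500.0) / 500.0 - c_s) > 0.1  # but not at large k
```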
\section{Numerical results for the gap equation}
We now consider the qualitative discussion of the previous section in
more detail, by numerically solving our system of equations for a
realistic pion--pion interaction in the $\ell=I=0$ channel. We choose
a rank--2 separable $\pi$--$\pi$ interaction inspired by the linear
$\sigma$--model (see appendix) which possesses all the desired low
energy chiral properties, as is explicitly discussed in
Ref.~\cite{Drop}. For vanishing incoming total momentum, ${\vec
K}=0$, it reads (see Eq.~(\ref{InvMB}))
\begin{eqnarray}
\langle {\vec k}, -{\vec k} \mid V_{I=0}(E) \mid {\vec k}^\prime,
-{\vec k}^\prime
\rangle &=& \frac {v({\vec k})}{2\omega(k)}\: \frac
{M_\sigma^2-m_\pi^2} {f_\pi^2}\: \bigg [ 3\,\frac
{E^2-m_\pi^2}{E^2-M_\sigma^2} \:+\: \frac
{4\omega(k)\omega(k^\prime)
- 2m_\pi^2}{M_\sigma^2} \bigg ] \; \frac {v({\vec k}^\prime)}
{2\omega(k^\prime)}~ ~ ~,\nonumber \\
&=& \frac {1}{2\omega(k)} \; \langle k \mid {\cal M}_B(E) \mid
k^\prime \rangle \; \frac {1}{2\omega(k^\prime)}~ ~ ~, \label{0PV}
\end{eqnarray}
where, for later convenience, we have introduced the bare invariant
matrix
\begin{equation}
\langle k \mid {\cal M}_B(E) \mid k^\prime \rangle \;\equiv\;
\lambda_1(E)\:v_1(k)v_1(k^\prime) \;+\;
\lambda_2\:v_2(k)v_2(k^\prime) ~ ~ ~\label{1PV}
\end{equation}
with notation $v_1(k)\equiv v(k) = [1+(k/8m_\pi)^2]^{-1}$,
$v_2(k) = (\omega(k)/m_\pi) v(k)$, and
\begin{equation}
\lambda_1(E) \equiv \frac {M_\sigma^2-m_\pi^2}{f_\pi^2}
\; \bigg [ 3 \: \frac {E^2-m_\pi^2}{E^2-M_\sigma^2} \:-\: \frac
{2m_\pi^2}{M_\sigma^2} \bigg ]~ ~, ~ ~ ~ ~
\lambda_2 \equiv \frac {M_\sigma^2-m_\pi^2}{f_\pi^2}
\; \frac{4m_\pi^2}{M_\sigma^2} ~ ~ ~. \label{Vab}
\end{equation}
The form factor $v(k)$ and $\sigma$--mass $M_\sigma=1~{\rm GeV}$ are
fit to experimental phase shifts, as in Ref.~\cite{Drop}. For free
$\pi^+$--$\pi^-$ scattering this force yields, when used in the
$T$--matrix (see below), a scattering length which vanishes in the
chiral limit, as it should. This feature induces off--shell repulsion
below the $2\pi$--threshold in spite of the fact that the positive
$\delta^0_0$ phase shifts indicate attraction. It is remarkable that
the gap equation still shows a non--trivial solution, signaling pion
pair formation, as we will show later. It is evident that bound pair
formation, or even larger clusters of pions, can deeply influence the
dynamics of the pion gas.
In the sigma channel $(\ell=0,I=0)$
we rewrite the gap equation (\ref{pgap}) as
\begin{equation}\label{gaps1}
\Delta(k) \;=\; -\frac 12 \:\int \! \frac {d^3k^\prime}{(2\pi)^3}
\: \langle {\vec k}, -{\vec k} \mid V_{I=0}(E\!=\!2\mu) \mid
{\vec k}^\prime, -{\vec k}^\prime
\rangle \; \frac {\Delta(k^\prime)} {2E(k^\prime)}
\; \coth \frac {E(k^\prime)}{2T}~ ~ ~,\label{gapeqn}
\end{equation}
With this form of the interaction, solutions of this equation
may be written as
\begin{equation}
\Delta(k) \;=\; \frac {m_\pi}{\omega(k)} \: \bigg [ \delta_1
\,v_1(k) \:+\: \delta_2\, v_2(k) \bigg ]~ ~ ~,\label{SolnForm}
\end{equation}
and Eq.~(\ref{gapeqn}) reduces to two coupled non--linear equations
for the ``gap strengths'' $\delta_1$ and $\delta_2$. For a
non--trivial solution, one can show that $\delta_1 > - \delta_2 > 0$.
We also note that while $\lambda_2$ is always repulsive,
$\lambda_1(E)$ is attractive at $E\!=\!2\mu$ only if $\mid\! \mu\!
\mid\, >M_\sigma m_\pi/2\sqrt{3M_\sigma^2-2m_\pi^2} \sim 40~{\rm
MeV}$. This inequality is also the formal condition for a solution
to the gap equation to exist at some temperature. Intuitively, we
require at least some attraction because, as we shall see, a solution
to the gap equation is connected to the existence of a pole in the
$T$--matrix. We note that the repulsive part of the $\pi$--$\pi$
interaction Eq.~(\ref{0PV}) helps to avoid collapse. This is
different from our previous calculation \cite{CSW}, which was
performed with an entirely attractive interaction. The presence of
this repulsion is a consequence of chiral symmetry and PCAC
\cite{Drop}.
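The origin of the $\sim 40~{\rm MeV}$ threshold quoted above can be made
explicit: setting $\lambda_1(E\!=\!2\mu)=0$ in Eq.~(\ref{Vab}) yields
$\mu_c = M_\sigma m_\pi/2\sqrt{3M_\sigma^2-2m_\pi^2}$. A short numerical
check (Python; the values of $m_\pi$ and $f_\pi$ are illustrative, and
$f_\pi$ drops out of the sign condition):

```python
import math

m_pi  = 139.57    # MeV (illustrative)
M_sig = 1000.0    # sigma mass in MeV, as fit in the text
f_pi  = 93.0      # MeV (illustrative; irrelevant for the sign)

def lambda1(E):
    # First separable coupling strength of the sigma-model interaction
    pref = (M_sig**2 - m_pi**2) / f_pi**2
    return pref * (3.0 * (E**2 - m_pi**2) / (E**2 - M_sig**2)
                   - 2.0 * m_pi**2 / M_sig**2)

# lambda1(2 mu) vanishes at mu_c = m_pi M_sig / (2 sqrt(3 M_sig^2 - 2 m_pi^2))
mu_c = m_pi * M_sig / (2.0 * math.sqrt(3.0 * M_sig**2 - 2.0 * m_pi**2))
assert 40.0 < mu_c < 41.0                 # the ~40 MeV quoted in the text
assert abs(lambda1(2.0 * mu_c)) < 1e-9    # sign change exactly at mu_c
assert lambda1(2.0 * 45.0) < 0.0          # attractive above threshold
assert lambda1(2.0 * 35.0) > 0.0          # repulsive below threshold
```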
In the previous section, we introduced the critical temperatures
$T_c^u$, at which the gap vanishes, and $T_c^\ell$, where the gap has
reached its maximum value and quasiparticle condensation occurs.
Fig.~1 shows the numerical results for these temperatures
in the $\mu$--$T$ plane.
The $T_c^u$ (solid line) are obtained by linearizing Eq.~(\ref{gapeqn}):
\begin{equation}
\Delta(k) \;=\; -\frac 12 \:\int \! \frac {d^3k^\prime}{(2\pi)^3}
\: \langle {\vec k}, -{\vec k} \mid V_{I=0}(E\!=\!2\mu) \mid
{\vec k}^\prime, -{\vec k}^\prime
\rangle \; \frac {\Delta(k^\prime)} {2\epsilon(k^\prime)}
\; \coth \frac {\epsilon(k^\prime)}{2T_c^u}~ ~ ~,\label{thou1}
\end{equation}
while the $T_c^\ell$ (dashes) result when the gap
strength increases to a point where $E(k=0)=0$,
{\it i.e.}, $m_\pi-\mu = \delta_1+\delta_2$.
At high temperatures $T>T_c^u$ (region III),
the system is in the normal state with no gap, while below
the dashed line, $T<T_c^\ell$ (region I),
there is macroscopic occupation of the
$k=0$ mode. For $T_c^\ell<T<T_c^u$ (region II),
non--trivial gap solutions exist. Notice
that for physically realistic solutions ($T<200~{\rm MeV}$,
say) we have $\mu\; {\buildrel < \over \sim} \; m_\pi$, and $\omega-m_\pi
\ll m_\pi$, and, in hindsight, are justified in neglecting
relativistic corrections to the gap equation (see appendix).
Fig.~2 shows the gap strengths $\delta_1$ (solid line) and $-\delta_2$
(dashes) versus temperature for a fixed chemical potential
$\mu=135~{\rm MeV}$. Again, we see that at high
temperatures, in region~III, only the trivial solution
$\delta_1=\delta_2=0$ exists. As the temperature drops to
$T_c^u\sim 123~{\rm MeV}$, the order parameter $\Delta$ switches on,
and we have a transition to a paired state in region II (see
discussion below). Finally, at $T=T_c^\ell \sim 77~{\rm
MeV}$, the gap has reached its maximum value $\delta_1+\delta_2 = m_\pi
-\mu \sim 3~{\rm MeV}$ and quasiparticles condense in the
lowest energy mode in region I.
The change in the pion dispersion relation $E(k)$ is investigated
in Fig.~3 in the temperature range $T_c^\ell\le T\le T_c^u$, for a fixed
chemical potential of $\mu=135~{\rm MeV}$. At $T=T_c^u\sim 123~{\rm
MeV}$ (solid line), and above,
we simply have the normal--state pion dispersion
relation $\epsilon(k)= \omega(k)-\mu$. With decreasing temperature the
influence of the finite gap becomes visible at long wavelengths: The
dot--dashed line shows $E(k)$ for $T=115~{\rm MeV}$. A further drop in the
temperature to $T=T_c^\ell\sim 77~{\rm MeV}$ qualitatively changes
the character of the pion dispersion relation
to a linear, phonon--like dispersion at small $k$.
\section{The in--medium $\pi\pi$ scattering matrix}
We turn now to a discussion of the $T$--matrix ${\cal
M}_{I=0}(E,K)$ for a pion pair with total momentum $K = \mid\!{\vec
K}\!\mid$ with respect to a thermal medium. Writing the
on--shell $T$--matrix ({\it c.f.} Eq.~(\ref{A:MSol})) as
\begin{equation}
\langle k^* \mid {\cal M}_{I=0}(E,K) \mid k^* \rangle \;=\;
\sum_{i=1}^2 \: \lambda_i(s) \: v_i(k^*)\,\tau_i(k^*; s,K)~ ~ ~,
\label{MSol}
\end{equation}
where $k^{*\,2}=s/4-m_\pi^2$ and $s=E^2-{\vec K}^2$ is the square of
the total c.m. energy, the Lippmann--Schwinger equation
(\ref{A:Tmat2}) becomes a set of two linear equations for the functions
$\tau_i$:
\begin{equation}
\sum_{j=1}^2 \; \bigg [ \delta_{ij}\:-\; \lambda_j(s)\,g_{ij}(s,K)
\bigg ] \:\tau_j(k^*; s,K) \;=\; v_i(k^*)~ ~,~ ~ ~ ~i=1,2 \label{MME}
\end{equation}
with
\begin{equation}
g_{ij}(s,K) \;\equiv\; \frac 12 \: \int \frac{d^3k}{(2\pi)^3} \;
v_i(k) \: \frac {1}{\omega(k)} \, \frac {\langle 1+f_++f_- \rangle}
{s-4\omega_k^2 +i\eta} \: v_j(k)~ ~ ~.\label{Deffij}
\end{equation}
Here, $\langle 1+f_++f_- \rangle$ denotes an average over the angles
of the c.m. relative momentum of the pair. For thermal occupation
numbers it is given by Eq.~(\ref{A:Angle}); it reduces to unity in
free space, and to $\coth[(\omega(k)-\mu)/2T]$ for vanishing total
momentum ${\vec K}$. We note that Eq.~(\ref{MME}) does not
incorporate the non--linear effect of the gap.\label{Concern1}
The solid line in Fig.~4 shows $\mid\!{\cal M}_{I=0}\!\mid^2$ for free
space scattering. Compared to our previous calculation
\cite{ACSW}, the $T$--matrix is relatively flat above the resonance,
this being due to the repulsion in the interaction at high energies.
The short dashes give $\mid\!{\cal M}_{I=0}\!\mid^2$ in a thermal
bath of $T=100~{\rm MeV}$ and $\mu=135~{\rm MeV}$, for $K=0$. The
medium strongly suppresses the cross
section, an effect that also occurs in the $(I=1,\ell=1)$ $\rho$--channel
\cite{ACSW,BBDS}. At high c.m. energies, the phase space occupation
becomes negligible, and the cross section returns to its free space
value. The three remaining curves show results in the same thermal
bath, but for
$K=200~{\rm MeV}/c$ (long dashes), $1~{\rm GeV}/c$ (dot--dashed),
and $3~{\rm GeV}/c$ (dotted). As $K$ increases, the pair is boosted
more and more out of the occupied phase space of the medium, and the
cross section again returns to its free space value. We also see a
threshold behavior in Fig.~4: as $K$ becomes larger, a resonance
peak emerges from below the threshold which continues to shift up in
energy and strengthen until it coincides with the free scattering
peak. We shall see below that this is the continuation of an upward
shift of the Cooper pole in the $T$--matrix with decreasing phase space
occupation \cite{CSW}.\label{Concern2}
We consider now the existence of poles in the $T$--matrix, first
for the special case of zero total momentum, $K=0$ \cite{CSW}, and define
the determinant function
\begin{equation}
F_{\mu,T}(E) \;\equiv\; -\,
{\rm det} \:\bigg [ \delta_{ij}\:-\; \lambda_j(E)\,g_{ij}(E)
\bigg ]~ ~ ~. \label{DetFunc}
\end{equation}
This function is shown in Fig.~5 for five different temperatures (solid
lines) at a fixed pion chemical
potential of $135~{\rm MeV}$. The intersection of these curves with zero
(horizontal dashes) below $2m_\pi$ (the bound state
domain) gives the pole position. We see that a pole always occurs
provided the temperature lies above some critical
value $T_0^\ell \approx 47~{\rm MeV}$, for which the pole is at
threshold and ceases to exist.
This $T_0^\ell$ is close to
the lower critical temperature for the gap, $T_c^\ell$,
where the excitation spectrum vanishes at $k\!=\!0$ and quasiparticles
begin to condense as singles. Thus, the
bound state and gap solution disappear at a similar critical
temperature; differences are ascribable to the fact that we use free
quasiparticle energies in the $T$--matrix.\label{point1}
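The loop integrals $g_{ij}$ of Eq.~(\ref{Deffij}) that enter the determinant
function are straightforward to evaluate numerically at $K=0$. The sketch
below (Python; thermal-bath parameters and the quadrature cutoff are
illustrative) verifies that all matrix elements of $g$ are negative below
the two--pion threshold, where $s-4\omega_k^2<0$ for every $k$:

```python
import math

m_pi, mu, T = 139.57, 135.0, 100.0   # MeV (illustrative bath values)

def omega(k):
    return math.sqrt(k * k + m_pi * m_pi)

def v1(k):
    # Monopole form factor v(k) = [1 + (k / 8 m_pi)^2]^(-1), as in the text
    return 1.0 / (1.0 + (k / (8.0 * m_pi)) ** 2)

def v2(k):
    return omega(k) / m_pi * v1(k)

def g(i, j, E, kmax=4000.0, n=8000):
    # Loop integral g_ij(E, K=0) by trapezoidal quadrature:
    # (1/4 pi^2) int dk k^2 v_i v_j / omega * coth((omega-mu)/2T) / (E^2 - 4 omega^2)
    v = (v1, v2)
    h = kmax / n
    total = 0.0
    for m in range(1, n):          # integrand vanishes at k = 0
        k = m * h
        w = omega(k)
        coth = 1.0 / math.tanh((w - mu) / (2.0 * T))
        total += k * k * v[i](k) * v[j](k) / w * coth / (E * E - 4.0 * w * w)
    return total * h / (4.0 * math.pi ** 2)

# Below threshold, E < 2 m_pi, every matrix element of g is negative
E = 270.0  # MeV, below 2 m_pi ~ 279 MeV
for i in range(2):
    for j in range(2):
        assert g(i, j, E) < 0.0
```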
There is a second special temperature $T_0^u$, for which
a pole exists at $E=2\mu$ (see Fig.~5). It is identical
to the upper critical temperature $T_c^u$ at which the gap vanishes, as
may easily be seen by rewriting the $T$--matrix for $E$ near $2\mu$,
\begin{equation}
\langle {\vec k}, -{\vec k} \mid T_{I=0}(E) \mid {\vec k}^\prime, -{\vec
k}^\prime \rangle \;\equiv \; Z(k)\: \frac {1}{E-2\mu} \:
Z(k^\prime)~ ~ ~.\label{TnearPole}
\end{equation}
In the non--relativistic limit, $Z(k)$ follows as (see appendix)
\begin{eqnarray}
Z(k) &=& -\frac 12 \:\int \! \frac {d^3k^\prime}{(2\pi)^3} \: \langle {\vec
k}, -{\vec k} \mid V_{I=0}(E=2\mu) \mid {\vec k}^\prime, -{\vec k}^\prime
\rangle \; \frac {Z(k^\prime)} {2(\omega(k^\prime)-\mu)} \; \coth \frac
{\omega(k^\prime)-\mu}{2T_0^u}~ ~ ~,\label{ZEqn}
\end{eqnarray}
which is precisely the same condition as for $T_c^u$, Eq.~(\ref{thou1}).
The gap equation (\ref{gapeqn}) thus reduces to the $T$--matrix pole
condition at the particular energy $E=2\mu$.
In fermion systems, this is the well--known Thouless criterion
\cite{Thouless} for the onset of a phase transition to a pair
condensate. We note that the
Thouless criterion is only approximately valid if relativistic
corrections are included.
Several observations can be made. Firstly, one always obtains a pole
in the $T$--matrix
if the temperature lies above $T_0^\ell(\mu)$.
Thus, at fixed $\mu$, no matter how weak the
interaction strength is (provided it is attractive in the neighborhood
of the $2\pi$ threshold), one always obtains a pole for
sufficiently high temperatures (for fermions at a sufficiently
low temperature). In practice, $T_0^\ell$ (and $T_0^u$) will exceed sensible
values for pions as soon as $\mu$ drops below $\sim 130~{\rm MeV}$,
since they are decreasing functions of $\mu$. Secondly,
for a fixed interaction strength, the pole position
shifts downward with increasing temperature (for fermions: pole position
moves up with increasing temperature).
As a function of temperature, we therefore see a behavior for bosons
opposite to that for fermions.
The fact that increasing temperature reinforces the binding is
somewhat counterintuitive, but is an immediate consequence of the
coth--factor associated with bose statistics in Eq.~(\ref{A:Tmat2}).
Indeed, one realizes that the coth--factor increases with increasing
temperature and thus effectively enhances the two--body interaction.
We can therefore always find a bound state for arbitrarily small
attraction: it suffices to increase the temperature or, equivalently,
the density accordingly. This is opposite to the fermion case where
the corresponding tanh--factor suppresses the interaction with
increasing temperature. Therefore, in the fermion case, even at the
$T$--matrix level there exists a critical temperature where the Cooper
pole ceases to exist. For bosons, on the other hand, once one has
reached $T=T_0^\ell$, a bound state (here $E^{2\pi}_B < 2m_\pi$)
exists and the bound state energy simply continues to decrease as the
temperature increases. Of course, this becomes unphysical as soon as
the density of pairs becomes so large that the bound states start to
obstruct each other, and finally dissolve at an upper critical
temperature (Mott effect). Precisely this non--linear effect is very
efficiently taken care of in the gap equation. In spite of the fact
that we still have a coth--factor in the gap equation, there is now a
crucial difference: the argument of the coth--factor is the
quasiparticle energy, Eq.~(\ref{Ek}), (over $T$) and thus, due to the
presence of $-\Delta^2(k)$ in $E(k)$, the origin of the coth is
shifted to the right with respect to the $T$--matrix case. Now, as $T$
increases, the only way to keep the equality of the gap equation is
for $\Delta(k)$ to decrease -- this pushes the origin of the coth back
to the left, counterbalancing its increase due to the increasing
temperature. Of course, this only works until $\Delta=0$, {\it i.e.},
until the temperature has reached $T_c^u$. This is precisely the
temperature $T_0^u$ for which the bound state in the $T$--matrix
reaches an energy $E_B^{2\pi}=2(m_\pi-\mu)$. We therefore see that in
spite of the fact that the bosons prefer high phase space density, the
formation of bound states ceases to exist beyond a critical
temperature -- just as for fermions.
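The opposite roles of the statistical factors are easily made explicit
with a few lines of numerics (Python; the fixed excitation energy
$\epsilon=10~{\rm MeV}$ is an arbitrary illustrative choice):

```python
import math

eps = 10.0  # MeV, an arbitrary fixed excitation energy

def coth_factor(T):
    # Bosonic statistical factor in the gap / T-matrix equations
    return 1.0 / math.tanh(eps / (2.0 * T))

def tanh_factor(T):
    # Corresponding fermionic factor
    return math.tanh(eps / (2.0 * T))

temps = [50.0, 100.0, 150.0]  # MeV
b = [coth_factor(T) for T in temps]
f = [tanh_factor(T) for T in temps]

# The bosonic coth-factor grows with temperature, effectively enhancing
# the two-body interaction, while the fermionic tanh-factor is suppressed.
assert b[0] < b[1] < b[2]
assert f[0] > f[1] > f[2]
assert all(x > 1.0 for x in b) and all(x < 1.0 for x in f)
```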
Lastly, we return to the behavior of the pole for varying total
momentum $K$, and the threshold effect seen in Fig.~4. Since
$F_{\mu,T}(s,K)$ becomes complex above threshold, we show in Fig.~6
its magnitude for fixed $T=100~{\rm MeV}$ and $\mu=135~{\rm MeV}$, and
various values of $K$. As expected, for increasing $K$ ({\it i.e.},
decreasing phase space occupation felt by the two pions in question)
the pole (zero of $F$) moves up in energy until it disappears at some
critical momentum $100~{\rm MeV}/c < K_c < 250~{\rm MeV}/c$. For
$K>K_c$, the now non--zero minimum of the determinant function
continues to shift to higher energies, corresponding roughly to the
similar shift in the resonance peak in Fig.~4.
\section{Discussion and conclusions}
In the previous section, we investigated the effect of a thermal
medium on the pion dispersion relation at low momenta $k$. In
particular, one finds a critical temperature $T_c^\ell$ at which the
pion dispersion relation is linear (phonon-like) in $k$ for small $k$.
This result is independent of the details of the interaction and
characteristic of any Bose system (see Ref.~\cite{Evans2} for the case
of $^4$He).
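The onset of the phonon-like branch is easy to check numerically. The sketch below is not from the paper: it assumes the quasiparticle form $E^2(k)=(\omega(k)-\mu)^2-\Delta^2$ derived in the Appendix with a constant gap, and illustrative values for $\mu$; at the point where the gap closes at $k=0$, $E(k)/k$ is essentially constant at small $k$, i.e.\ the dispersion is linear.

```python
import numpy as np

M_PI = 139.57  # pion mass in MeV (illustrative)

def omega(k):
    return np.sqrt(M_PI**2 + k**2)

def E(k, mu, delta):
    # bosonic quasiparticle energy with the characteristic minus sign:
    # E^2 = (omega - mu)^2 - Delta^2
    return np.sqrt((omega(k) - mu)**2 - delta**2)

mu = 100.0
delta_c = omega(0.0) - mu        # gap in E closes at k = 0: E(0) = 0
k = np.array([1.0, 2.0, 4.0])    # small momenta, MeV
slopes = E(k, mu, delta_c) / k   # nearly constant  =>  E(k) ~ k (phonon-like)
free = E(k, mu, 0.0) / k         # free case: E(k)/k varies strongly with k
```

For $\Delta=0$ the ratio $E(k)/k$ changes by a factor of a few over the same momenta, while at the critical gap it is constant to better than a percent.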
Such a change in the pion dispersion relation at low temperatures
would influence the pion spectra at low momentum. For this to occur,
rather large medium phase space occupation numbers are required. In
particular, for a physically reasonable system with, say, $T <
200~{\rm MeV}$, this means that we require large chemical potentials.
In fact, dynamical calculations \cite{WB,BDSW} show that a buildup of
$\mu$ can indeed occur, provided that the scattering rate is
sufficiently large compared to the expansion rate and the inelastic
collisions have ceased to be a factor.
To demonstrate the possible effect in a qualitative way, consider
the pion transverse momentum spectrum for longitudinally boost invariant
expansion
\begin{equation}\label{C22}
\frac{d N}{m_t d m_t dy} \;=\;
(\pi R^2 \tau) \frac{m_{\pi}}{(2 \pi)^2}
\sum^{\infty}_{n=1}\:
\exp({\frac{n \mu}{T}})\, K_1(\frac{n m_t}{T}) ~ ~,
\end{equation}
where $K_1$ is a McDonald function, $m_t$ is the transverse mass,
and the normalization volume $\pi R^2 \tau$ is of the order $200$ to
$300~{\rm fm^3}$ \cite{Kataja}. At mid-rapidity, the transverse mass
coincides with the full energy of the pion, and we follow
Ref.~\cite{Chanfray1} in replacing $m_t$ by the in--medium pion
dispersion relation $E(k)+\mu$ derived in the previous section. Of
course, as remarked in Ref.~\cite{Chanfray1}, the use of this
procedure is rather tenuous since the system is by definition still
far from freeze--out. In a dynamical calculation, hard collisions
would re--thermalize the system at ever decreasing temperatures.
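The Bose sum above can be evaluated directly. The sketch below is illustrative only (the overall normalization $(\pi R^2\tau)\,m_\pi/(2\pi)^2$ is dropped, and the parameter values are assumptions); it uses the exponentially scaled McDonald function to avoid overflow in the sum, and shows how a large chemical potential enhances the low-$m_t$ region.

```python
import numpy as np
from scipy.special import kve  # scaled McDonald function: kve(1, x) = K_1(x) * exp(x)

M_PI = 139.57  # pion mass, MeV

def thermal_spectrum(mt, T, mu, n_max=400):
    """Bose sum of the spectrum formula, up to the overall normalization
    (pi R^2 tau) m_pi / (2 pi)^2.  mt, T, mu in MeV; requires mu < mt."""
    n = np.arange(1, n_max + 1)
    # exp(n mu / T) * K_1(n mt / T), rewritten in an overflow-safe scaled form
    return np.sum(np.exp(n * (mu - mt) / T) * kve(1, n * mt / T))

# a large chemical potential strongly enhances the low-m_t region,
# which is the origin of the "concave-up" shape discussed below
ratio = thermal_spectrum(M_PI + 1.0, 100.0, 135.0) / thermal_spectrum(M_PI + 1.0, 100.0, 0.0)
```

The spectrum falls monotonically with $m_t$, and the $\mu=135~{\rm MeV}$ curve exceeds the $\mu=0$ one by more than an order of magnitude near threshold.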
In Fig.~7, the thermal transverse momentum spectrum for pions with
$\mu=135~{\rm MeV}$ and $T=100~{\rm MeV} > T_c^\ell(\mu)$ is shown
with (solid line) and without (dashes) the effect of the gap energy.
Essentially, the presence of a large chemical potential gives the
spectrum the appearance of one for small--mass particles, and the gap
energy, which causes $E(k) \sim k$ for long wavelengths, strengthens
this effect. We would like to mention here again that the use of our
force, Eq.~(\ref{0PV}), which respects chiral symmetry constraints
\cite{Drop}, considerably reduces the effect of binding with respect
to a purely phenomenological interaction, fitted to the phase shifts
(see, for example, Ref.~\cite{Drop}). This stems from the fact that
the expression (\ref{0PV}) becomes repulsive sufficiently below the
$2m_\pi$ threshold. This is not the case for commonly employed
phenomenological forces \cite{Drop}. As a consequence, the effect we
see in Fig.~7 is relatively weak, but one should remember that the
force (\ref{0PV}) is by no means a definitive expression. It is well
known that in a many body system screening effects should be taken
into account. Whereas in a Fermi system this tends to weaken the
force, it is likely that screening strengthens it in Bose systems. In
this sense our investigation can only be considered schematic. A
quantitative answer to the question of bound state formation in a hot
pion gas is certainly very difficult to give. Qualitatively, the
curves in Fig.~7 agree with the trend in the pion data at SPS
\cite{NA35} to be ``concave--up,'' but this is mainly an effect from
the finite value of the chemical potential \cite{WB,Kataja}. While the
gap changes the spectrum by a factor of $\sim 3$ at $m_t-m_\pi \sim
0$, this region is not part of detector acceptances.
In summary, we have shown that finite temperature induces real poles
in the $2\pi$ propagator below the $2m_\pi$ threshold, even for
situations where there is no $2\pi$ bound state in free space
\cite{CSW}. The situation is analogous to the Cooper pole of fermion
systems, and we therefore studied the corresponding bosonic ``gap''
equation. This equation has non--trivial solutions in a certain domain
of the $\mu$--$T$ plane. Such a region always exists, even in the
limit of infinitesimally weak attraction. This is different from the
$T=0$ case discussed by Nozi{\`e}res and Saint James \cite{StJNoz},
where a nontrivial solution to the gap equation only exists when there
is a two boson bound state in free space. Our study has to be
considered preliminary. The final aim will be to obtain an equation of
state for a hot pion gas within a
Br\"uckner--Hartree--Fock--Bogoliubov approach. Also, the subtle
question of single boson versus pair condensation must be addressed
(see Ref.~\cite{StJNoz} and references therein). Furthermore, the fact
that we obtain two pion bound states in a pionic environment leads to
the speculation that higher clusters, such as four--pion bound states,
{\em etc.}, may also occur, and perhaps even be favored over pair
states. Such considerations, though interesting, are very difficult to
treat on a quantitative basis. However, substantial progress towards
the solution of four body equations has recently been made
\cite{PeterPriv}, and one may hope that investigations for this case
will be possible in the near future.
We are grateful to P.~Danielewicz for useful discussions.
This work was supported in part by the U.S. Department of Energy
under Grant No. DE-FG02-93ER40713.
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\section{Appendix: Derivation of $T$--matrix and gap equation}
This appendix is devoted to a derivation of the gap equation for a
bosonic system governed by a field--theoretic Hamiltonian. The basic
problem one has to deal with is the formal introduction of a chemical
potential for bosons, since the total boson number operator ({\it i.e.},
the number of $\pi^+$, $\pi^-$ and $\pi^0$ quanta) does not commute
with the Hamiltonian. Hence, a pion gas in full chemical equilibrium
at a typical temperature of $200~{\rm MeV}$ corresponds to zero chemical
potential. However, for a system lifetime on the order of tens of
fermi, the inelastic collision rate is negligible. Therefore,
provided the elastic collision rate is sufficiently large, a thermal
equilibrium with a finite chemical potential may well be reached.
Let us consider a pion system at temperature $T$.
Inspired by the
linear $\sigma$--model, with form factors fitted to the $\pi$--$\pi$
phase shifts, we take the Hamiltonian
\begin{equation}
H \;=\; H_0 \:+\: H_{\rm int}~ ~ ~,\label{A:H}
\end{equation}
where $H_0$ is the kinetic Hamiltonian for the $\pi$ and ``$\sigma$''
mesons
\begin{equation}
H_0 \;=\; \sum \,\omega_1 \, b_1^\dagger b_1 \:+\: \sum \,
\Omega_\alpha \, \sigma_\alpha^\dagger \sigma_\alpha~ ~ ~.\label{A:H0}
\end{equation}
The index ``1'' refers to the momentum and isospin of the pion, and
``$\alpha$'' to the momentum and identity of the heavy meson carrying
the interaction. The interaction Hamiltonian has the form
\begin{eqnarray}
H_{\rm int} &=& \frac 12 \, \sum\:
\bigg [ (\sigma_\alpha+\sigma_{-\alpha}^\dagger)\:
(b_1^\dagger b_2^\dagger+b_{-1}b_{-2}) \bigg ] \: \langle 12
\mid W \mid \alpha \rangle \nonumber \\
&+& \frac 14 \sum\: \bigg [b_1^\dagger b_2^\dagger b_3 b_4 \:+\: \frac 12
(b_{-1} b_{-2} b_3 b_4 + b_1^\dagger b_2^\dagger b_{-3}^\dagger
b_{-4}^\dagger)\bigg ] \: \langle 12 \mid V \mid 34 \rangle \label{A:Hint}
\end{eqnarray}
In the linear $\sigma$--model one has ($L^3$ is a normalization
volume)
\begin{eqnarray}
\langle 12 \mid W \mid \alpha \rangle &=& \bigg [ \frac {1} {
2\omega_1 L^3\, 2\omega_2 L^3\, 2\Omega_\alpha L^3} \bigg ]^{1/2} \:
v(k^*_{12}) \; (2\pi)^3\delta({\vec k}_1+{\vec k}_2-{\vec k}_\alpha)
\;\frac {M_\sigma^2-m_\pi^2}{f_\pi} \; \delta_{12}
\label{A:W}\\
\langle 12 \mid V \mid 34 \rangle &=& \bigg [ \frac {1} {
2\omega_1 L^3\, 2\omega_2 L^3\, 2\omega_3 L^3\,
2\omega_4 L^3} \bigg ]^{1/2} \: v(k^*_{12})\:v(k^*_{34})
\; (2\pi)^3\delta({\vec k}_1+{\vec k}_2-{\vec k}_3-{\vec k}_4)
\nonumber \\ &\times& \frac {M_\sigma^2-m_\pi^2}{f_\pi^2} \; \bigg [
\delta_{12}\,\delta_{34} \:+\: \delta_{13}\,\delta_{24} \, \frac
{2\omega_1\omega_3 -2{\vec k}_1\cdot{\vec k}_3 - m_\pi^2}{M_\sigma^2}
\:+\: \delta_{14}\,\delta_{23} \, \frac
{2\omega_1\omega_4 -2{\vec k}_1\cdot{\vec k}_4 -
m_\pi^2}{M_\sigma^2}\bigg ]~ ~ ~.
\label{A:V}
\end{eqnarray}
The form factor taken at c.m. momentum $k^*$ of the pion pair is fitted
to the experimental phase shifts, and $M_\sigma$ is the
$\sigma$--mass. The static quartic interaction contains the
$\pi^2\pi^2$ interaction of the $\sigma$--model, and the $t$ and $u$
channel $\sigma$--exchange terms. We neglect the $t$ and $u$
dependence of the denominator (see Ref.~\cite{Drop}),
since their effect is extremely small. Further, terms like $\sigma
b^\dagger b$ and $b^\dagger b b b$ have been dropped, since they are
not essential for our purpose.
{\large {\it The Dyson equation for the pion propagator}}
We now derive the equation of motion for the pion propagator
\begin{equation}
G_{1{\bar 1}}(t,t^\prime) \;=\; \bigg \langle -i T\bigg
( b_1(t) b_1^\dagger(t^\prime)\bigg )\bigg \rangle \label{A:G11}
\end{equation}
where the $b_1$ are normal Heisenberg operators
$b_1(t)=\exp(iHt)b_1(0)\exp(-iHt)$. In principle, the extension to finite
temperature requires a matrix formulation (real time formulation) or
Matsubara Green's function. However, for simplicity we consider here the
normal zero temperature $G$, and replace it by a thermal propagator at
the end. We have checked that the final result is not modified.
After standard manipulation, using the Hamiltonian (\ref{A:H}), we obtain
\begin{equation}
\bigg (i \frac {\partial}{\partial t} - \omega_1 \bigg ) G_{1{\bar
1}}(t,t^\prime) \;=\; \delta(t-t^\prime) \:+\: \int
dt^{\prime\prime} \: \Sigma_1(t,t^{\prime\prime})\: G_{1{\bar
1}}(t^{\prime\prime}, t^\prime)~ ~ ~,\label{A:EoMG}
\end{equation}
with $\Sigma_1(t,t^\prime)=\Sigma_1^S(t,t^\prime)+
\Sigma_1^D(t,t^\prime)$. The static part of the mass operator is
\begin{equation}
\Sigma_1^S(t,t^\prime) \;=\; \sum_2\: \langle b_2^\dagger b_2 \rangle \:
\langle 12 \mid V \mid 12 \rangle \: \delta(t-t^\prime)~ ~
~,\label{A:MS}
\end{equation}
while the dynamical part is given by
\begin{equation}
\Sigma_1^D(t,t^\prime) \;=\; \bigg \langle -iT\bigg ( [H_{\rm int},b_1](t)\:
[H_{\rm int},b_1]^\dagger(t^\prime) \bigg ) \bigg \rangle~ ~ ~.\label{A:MD}
\end{equation}
Making a standard factorization approximation, we obtain
\begin{eqnarray}
\Sigma_1^D(t,t^\prime) &=& i\sum_2 G_{{\bar 2}2}(t,t^\prime)\nonumber \\
&\times& \bigg\langle -iT\bigg ( \bigg [\langle 12 \mid W\mid\alpha\rangle
\:(\sigma_\alpha+ \sigma_{-\alpha}^\dagger)(t) \:+\: \frac 12 \langle 12
\mid V \mid 34\rangle \:(b_3b_4+b_{-3}^\dagger b_{-4}^\dagger)(t)\bigg ],
\nonumber \\
&\mbox{}& \bigg[(\sigma_{\alpha^\prime}^\dagger+
\sigma_{-\alpha^\prime}^\dagger)(t^\prime)\:
\langle \alpha^\prime \mid W\mid 12\rangle \:+\: \frac 12
(b^\dagger_{3^\prime} b^\dagger_{4^\prime} + b_{-3^\prime}
b_{-4^\prime})(t^\prime)\:\langle 3^\prime 4^\prime \mid V \mid 12
\rangle \bigg ] \bigg )\bigg\rangle~ ~ ~,\label{A:MD2}
\end{eqnarray}
with $G_{{\bar 2}2}(t,t^\prime) = \langle -i T
(b^\dagger_2(t),b_2(t^\prime)) \rangle$, and there is an implicit
summation over repeated indices.
{\large {\it Extraction of the condensates}}
In the above expression the operator $b_3^\dagger b_4^\dagger$
connects states with $N$ particles to states with $N+2$
particles. Among these states those with excitation energy $2\mu$ play
a prominent role (Cooper poles). To separate the influence of these
states, we split the fluctuating part of the operator from the
condensate
\begin{equation}
b_3^\dagger b_4^\dagger(t) \;=\; \langle b_3^\dagger b_4^\dagger(t)
\rangle \:+\: :b_3^\dagger b_4^\dagger(t):~ ~ ~ ~. \label{A:Split}
\end{equation}
The time evolution is
\begin{equation}
\langle b_3^\dagger b_4^\dagger(t)\rangle \;=\; \langle b_3^\dagger
b_{-3}^\dagger\rangle \; {\rm e}^{i2\mu t} \;
\delta_{3,-4}~ ~ ~,\label{A:timeEv}
\end{equation}
where $\langle b_3^\dagger b_{-3}^\dagger \rangle$ is the usual time
independent pion density. Similarly, we obtain
\begin{equation}
b_3 b_4(t) \;=\; \langle b_3 b_{-3} \rangle \; {\rm e}^{-i2\mu t} \;
\delta_{3,-4} \:+\: :b_3 b_4(t):~ ~ ~ ~. \label{A:Split2}
\end{equation}
We now extract the condensate part of the $\sigma$--field operator from
the fluctuating part:
\begin{equation}
\sigma_\alpha(t) \;=\; \langle \sigma_\alpha(t) \rangle \:+\:
s_\alpha(t)~ ~ ~. \label{A:SigCon}
\end{equation}
The equation of motion gives
\begin{equation}
i \frac{\partial}{\partial t} \langle \sigma_\alpha(t) \rangle \;=\;
\Omega_\alpha \langle \sigma_\alpha(t) \rangle \:+\: \frac 12
\langle (b_1b_2 + b_{-1}^\dagger b_{-2}^\dagger) \rangle \:
\langle\alpha\mid W\mid 12\rangle~ ~ ~.\label{A:SigEoM}
\end{equation}
We look for a solution of the form
\begin{equation}
\langle \sigma_\alpha(t) \rangle \;=\; ( A\, {\rm e}^{-i2\mu t} \:+\:
B\,{\rm e}^{i2\mu t} )\: \delta_{\alpha 0}~ ~ ~.\label{A:Sol}
\end{equation}
$A$ and $B$ are straightforwardly obtained from the equation of motion:
\begin{equation}
\langle \sigma_0(t) \rangle \;=\; -\frac 12\: \frac {\langle b_1 b_{-1}
\rangle \langle 0\mid W\mid 1-1\rangle}{M_\sigma-2\mu} \:
{\rm e}^{-i2\mu t}
\;-\;\frac 12\: \frac {\langle b_1^\dagger b_{-1}^\dagger
\rangle \langle 0\mid W\mid 1-1\rangle}{M_\sigma+2\mu} \:
{\rm e}^{i2\mu t}~ ~ ~.\label{A:Sol2}
\end{equation}
In the expression of the dynamical mass operator one can extract a
Cooper pole part, where only the condensates occur. The remaining part
involves only the fluctuating pieces. Grouping the latter with the
static mass operator, we can write
\begin{equation}
\Sigma_1(t,t^\prime) \;=\; \Sigma_{1C}(t,t^\prime) \:+\:
\Sigma_{1H}(t,t^\prime)~ ~ ~,\label{A:MC-H}
\end{equation}
where $\Sigma_{1H}(t,t^\prime)$ is the normal Hartree mass operator which
depends on the full in--medium $T$--matrix:
\begin{equation}
\Sigma_{1H}(t,t^\prime) \;=\; i \sum_2 \: G_{{\bar 2}2}(t,t^\prime) \:
\langle 12 \mid T(t,t^\prime) \mid 12 \rangle~ ~ ~.\label{A:MtoT}
\end{equation}
with
\begin{eqnarray}
\langle 12 \mid T(t,t^\prime) \mid 34 \rangle
&=& \langle 12 \mid V \mid 34 \rangle \: \delta(t-t^\prime)\nonumber\\
&+& \bigg\langle -iT\bigg ( \bigg [\langle 12 \mid W\mid\alpha\rangle
\:(s_\alpha+ s_{-\alpha}^\dagger)(t) \:+\: \frac 12 \langle 12
\mid V \mid 56\rangle \: (:b_5b_6+b_{-5}^\dagger b_{-6}^\dagger :)(t)\bigg ],
\nonumber \\
&\mbox{}& \bigg[(s_{\alpha^\prime}^\dagger+
s_{-\alpha^\prime}^\dagger)(t^\prime)\:
\langle \alpha^\prime \mid W\mid 34\rangle \:+\: \frac 12
(:b^\dagger_{5^\prime} b^\dagger_{6^\prime} + b_{-5^\prime}
b_{-6^\prime}:)(t^\prime)\:\langle 5^\prime 6^\prime \mid V \mid 34
\rangle \bigg ] \bigg )\bigg\rangle~ ~ ~\label{A:TMat1}
\end{eqnarray}
Using the Dyson equation for the $b$ and $s$ operators, it is a purely
technical matter to show that this scattering amplitude satisfies a
Lippmann--Schwinger equation. In energy space, and in the $I=0$
channel, it reads:
\begin{equation}
\langle 12 \mid T_{I=0}(E) \mid 34 \rangle \;=\; \langle 12 \mid
V_{I=0}(E) \mid 34 \rangle \;+\; \frac 12\, \langle 12 \mid
V_{I=0}(E) \mid 56 \rangle \: G_{2\pi}^{56}(E) \: \langle 56 \mid
T_{I=0}(E) \mid 34 \rangle~ ~ ~,\label{A:LSE}
\end{equation}
where $G_{2\pi}(E)$ is the in--medium $2\pi$ propagator
\begin{equation}
G_{2\pi}^{56}(E) \;=\; \bigg [ \frac {1} {E - (\omega_5+\omega_6)
+i\eta} \;-\; \frac {1} {E + (\omega_5+\omega_6) + i\eta}
\bigg ] \; (1\:+\:f_5\:+\:f_6)~ ~ ~,\label{A:2piProp}
\end{equation}
with thermal occupation numbers $f(k)=\{ \exp [(\omega(k)-\mu)/T] - 1 \}^{-1}$.
As mentioned above, we have checked that using the correct matrix
form of the two pion propagators instead of (\ref{A:2piProp}) yields
the same final result. In Eq.~(\ref{A:LSE}), $V_{I=0}(E)$ is the
effective $\pi$--$\pi$ potential in the $I=0$ channel which
incorporates all the tree level diagrams. For total incoming momentum
${\vec K} = {\vec k}_1+{\vec k}_2 = {\vec k}_3+{\vec k}_4$, it reads
\begin{equation}
\langle {\vec k}_1, {\vec k}_2 \mid V_{I=0}(E) \mid {\vec k}_3, {\vec
k}_4 \rangle \;=\;
\bigg ( \frac {1}{2\omega_1\, 2\omega_2\, 2\omega_3\, 2\omega_4}\bigg )^{1/2}
\: \langle {\vec k}_1, {\vec k}_2 \mid {\cal M}_B(E) \mid {\vec k}_3, {\vec
k}_4 \rangle~ ~ ~,\label{InvMB}
\end{equation}
where the bare invariant interaction ${\cal M}_B$ is
\begin{equation}
\langle {\vec k}_1, {\vec k}_2 \mid {\cal M}_B(E) \mid {\vec k}_3, {\vec
k}_4 \rangle \; \equiv \;
\langle k_{12}^* \mid {\cal M}_B(s) \mid k_{34}^*\rangle
\;=\; \sum_{i=1}^{2}\: \lambda_i(s)\: v_i(k_{12}^*)\,v_i(k_{34}^*)
~ ~ ~,\label{A:0PV}
\end{equation}
with
\begin{eqnarray}
v_1(k) &=& v(k) \equiv [1+(k/8m_\pi)^2]^{-1}~ ~, ~ ~ ~ ~ ~
v_2(k) \;=\; \frac{\omega(k)}{m_\pi} \:v(k)~ ~, \label{A:FormFactor}\\
\lambda_1(s) &=& \frac {M_\sigma^2-m_\pi^2}{f_\pi^2}
\; \bigg [ 3 \: \frac {s-m_\pi^2}{s-M_\sigma^2} \:-\: \frac
{2m_\pi^2}{M_\sigma^2} \bigg ]~ ~, ~ ~ ~ ~ ~
\lambda_2 \;=\; \frac {M_\sigma^2-m_\pi^2}{f_\pi^2}
\; \frac{4m_\pi^2}{M_\sigma^2} ~ ~ ~. \label{A:Vab}
\end{eqnarray}
In these equations
$s=E^2-{\vec K}^2$ is the square of the total c.m. energy, and the
$k_{ij}^*$ are the magnitudes of the relative 3--momenta in the c.m.
frame
\begin{eqnarray}
\omega_{ij}^{*\,2} &=& m_\pi^2 \:+\: {\vec k}_{ij}^{*\,2} \;=\; \frac 14
\bigg [ (\omega_i + \omega_j)^2 \:-\: {\vec K}^2 \bigg ]~ ~ ~, ~
i,j=1,2~{\rm or}~3,4. \nonumber
\end{eqnarray}
The form factor $v(k)$, Eq.~(\ref{A:FormFactor}),
and $\sigma$--mass $M_\sigma = 1~{\rm GeV}$
have been fitted to the experimental phase shifts.
The Lippmann--Schwinger equation for the invariant $T$--matrix,
${\cal M}_{I=0}$, may finally be rewritten in a form suitable for
practical purposes:
\begin{eqnarray}
\langle k_{12}^* \mid {\cal M}_{I=0}(E,K) \mid k_{34}^* \rangle &=&
\langle k_{12}^*\mid {\cal M}_B(s)\mid k_{34}^*\rangle\nonumber\\
&+& \frac 12 \,\int \! \frac {d^3k_{56}^*}{(2\pi)^3} \;
\langle k_{12}^* \mid {\cal M}_B(s) \mid k_{56}^*\rangle \;
\frac {1}{\omega_{56}^*} \: \frac {\langle 1+f_++f_-\rangle}
{s-4\omega_{56}^{*\,2}+i\eta} \;
\langle k_{56}^* \mid {\cal M}_{I=0}(E,K)
\mid k_{34}^* \rangle~ ~ ~.\label{A:Tmat2}
\end{eqnarray}
In the special case of a single fireball of temperature $T$ and
chemical potential $\mu$, the angle average factor is given by
\begin{equation}
\langle 1+f_++f_-\rangle \;=\; \frac {T}{\gamma \beta k^*_{56}} \: \ln
\frac {\sinh [\{\gamma(\omega^*_{56}+\beta k^*_{56})-\mu\}/2T]}
{\sinh [\{\gamma(\omega^*_{56}-\beta k^*_{56})-\mu\}/2T]}~ ~ ~,
\label{A:Angle}
\end{equation}
where $\beta$ and $\gamma$ are the velocity and gamma--factor of the
pair with respect to the bath. This factor reduces to
$\coth [(\omega_{56}^*-\mu)/2T]$ for vanishing incoming total momentum ${\vec K}$.
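As a numerical cross-check (a sketch with illustrative parameter values, not part of the derivation), one can verify that the angle-averaged enhancement factor indeed approaches its $\coth$ limit as the pair velocity $\beta\to 0$:

```python
import numpy as np

def angle_avg(omega, k, mu, T, beta):
    """Angle-averaged enhancement <1 + f_+ + f_-> for a pair moving with
    velocity beta (Lorentz factor gamma) relative to the heat bath."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    num = np.sinh((gamma * (omega + beta * k) - mu) / (2.0 * T))
    den = np.sinh((gamma * (omega - beta * k) - mu) / (2.0 * T))
    return T / (gamma * beta * k) * np.log(num / den)

# K -> 0 limit should reproduce coth[(omega* - mu)/2T]  (all values in MeV)
w, k, mu, T = 300.0, 100.0, 135.0, 100.0
static = 1.0 / np.tanh((w - mu) / (2.0 * T))
moving = angle_avg(w, k, mu, T, beta=1e-4)
```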
Eq.~(\ref{A:Tmat2}) is solved by a separable ansatz
\begin{equation}
\langle k_{12}^* \mid {\cal M}_{I=0}(E,K) \mid k_{34}^* \rangle \;=\;
\sum_{i=1}^2 \: \lambda_i(s) \: v_i(k_{12}^*)\,\tau_i(k^*_{34};s, {\vec
K})~ ~ ~, \label{A:MSol}
\end{equation}
where the functions $\tau_i$ obey the coupled set of equations
\begin{equation}
\sum_{j=1}^2 \; \bigg [ \delta_{ij}\:-\; \lambda_j(s)\,g_{ij}(s,{\vec
K}) \bigg ] \:
\tau_j(k; s,K) \;=\; v_i(k)~ ~,~ ~ ~ ~ ~i=1,2 \label{A:MME}
\end{equation}
with
\begin{equation}
g_{ij}(s,K) \;\equiv\; \frac 12 \: \int \frac{d^3k}{(2\pi)^3} \;
v_i(k) \: \frac {1}{\omega(k)} \, \frac {\langle 1+f_++f_- \rangle}
{s-4\omega_k^2 +i\eta} \: v_j(k)~ ~ ~.\label{A:Deffij}
\end{equation}
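A schematic numerical evaluation of $g_{ij}$ and of the determinant function $\det[\delta_{ij}-\lambda_j g_{ij}]$, whose zero locates the pair pole, is sketched below for ${\vec K}=0$ and $s$ below the $2m_\pi$ threshold, where the integrand is real and the enhancement factor reduces to $\coth[(\omega-\mu)/2T]$. The quadrature, the momentum cutoff, and the value $f_\pi=93~{\rm MeV}$ are our assumptions; $M_\sigma=1~{\rm GeV}$ and the form factors $v_i(k)$ follow the expressions above.

```python
import numpy as np

M_PI, M_SIG, F_PI = 139.57, 1000.0, 93.0  # MeV; f_pi is an assumed input

def lam(s):
    # couplings lambda_1(s), lambda_2 of the separable bare interaction
    c = (M_SIG**2 - M_PI**2) / F_PI**2
    return np.array([c * (3 * (s - M_PI**2) / (s - M_SIG**2) - 2 * M_PI**2 / M_SIG**2),
                     c * 4 * M_PI**2 / M_SIG**2])

def g_matrix(s, T, mu, kmax=3000.0, n=6000):
    """g_ij at total momentum K = 0, below the 2 m_pi threshold (s < 4 m_pi^2)."""
    k = np.linspace(1e-3, kmax, n); dk = k[1] - k[0]
    w = np.sqrt(M_PI**2 + k**2)
    enh = 1.0 / np.tanh((w - mu) / (2 * T))   # coth = 1 + 2 f_BE
    v1 = 1.0 / (1.0 + (k / (8 * M_PI))**2)
    v = np.array([v1, (w / M_PI) * v1])
    weight = 0.5 * k**2 / (2 * np.pi**2) * enh / (w * (s - 4 * w**2))
    return np.array([[np.sum(v[i] * weight * v[j]) * dk for j in range(2)]
                     for i in range(2)])

def F_det(s, T, mu):
    g = g_matrix(s, T, mu)
    return np.linalg.det(np.eye(2) - g * lam(s)[None, :])  # [delta_ij - lambda_j g_ij]

g0 = g_matrix(72900.0, 100.0, 135.0)   # s = (270 MeV)^2, below 4 m_pi^2
F0 = F_det(72900.0, 100.0, 135.0)
```

Scanning $s$ (or $E$) for a sign change of `F_det` reproduces the pole search behind Figs.~5 and 6; above threshold a principal-value treatment of the propagator would be needed.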
{\large {\it The gap equation}}
To obtain the Cooper piece of the mass operator we must simply
replace $\sigma$ and $b b$ by $\langle \sigma \rangle$ and
$\langle b b\rangle$. According to the previous result, we find after
some straightforward algebra, and noting that the index $2$ is
necessarily $-1$,
\begin{equation}
\Sigma_{1C}(t,t^\prime) \;=\; -i\,G_{-{\bar 1},-1}(t,t^\prime) \: F_1^2 \:
\bigg ( {\rm e}^{-i2\mu(t-t^\prime)} \:+\: {\rm e}^{i2\mu(t-t^\prime)}
\bigg )~ ~ ~.\label{A:M1C}
\end{equation}
The important point is that $F_1$ involves the $I=0,\ell=0$ energy
dependent $\pi$--$\pi$ potential at $E = 2\mu$:
\begin{equation}
F_1 \;=\; -\frac 12 \: \int \! \frac {d^3k_2}{(2\pi)^3} \: \langle {\vec
k}_1, -{\vec k}_1 \mid V_{I=0}(E=2\mu) \mid {\vec k}_2, -{\vec k}_2
\rangle \; \langle b_2 b_{-2} \rangle~ ~ ~.\label{A:F1}
\end{equation}
In energy space, $\Sigma_{1C}(\omega)$ is
\begin{eqnarray}
\Sigma_{1C}(\omega) &=& F_1^2 \:
\int \! d\tau \: {\rm e}^{i\omega\tau} \: \{
\theta(\tau)\, \langle b_1^\dagger(\tau), b_1(0) \rangle \:+\:
\theta(-\tau) \, \langle b_1(0), b_1^\dagger(\tau) \rangle \}
\nonumber \\
&\mbox{ }& \times ({\rm e}^{-i2\mu\tau} \:+\: {\rm e}^{i2\mu\tau}
)~ ~ ~. \label{A:M1CE}
\end{eqnarray}
Taking for $b^\dagger_1(\tau)$ the bare time evolution
$b_1^\dagger(\tau) = b_1^\dagger\, {\rm e}^{i\omega_1\tau}$, and keeping only the
real part, we finally find
\begin{equation}
\Sigma_{1C}(\omega) \;=\; -F_1^2 \: \bigg ( \frac {1}{\omega+\omega_1-2\mu}
\:+\: \frac {1}{\omega+\omega_1+2\mu} \bigg )~ ~ ~.\label{A:M1CEF}
\end{equation}
The first term is the usual non--relativistic result, and the second
one corresponds to a relativistic correction.
Reinserting the result (\ref{A:M1CEF}) into the Dyson equation for the
pion propagator, and ignoring the Hartree correction, we find that the
pole of the pion propagator is the solution of
\begin{equation}
(\omega-\mu)^2 \;=\; (\omega_1-\mu)^2 \:-\: F_1^2 \:\bigg [ 1 \: +\:
\frac {(\omega-\mu)+(\omega_1-\mu)}{(\omega-\mu)+(\omega_1-\mu)+4\mu}
\bigg ]~ ~ ~.\label{A:Soln}
\end{equation}
The second term in the square brackets represents a relativistic
correction to the standard dispersion relation, since for a typical
non--relativistic situation one has
\begin{equation}\nonumber
\mu\; {\buildrel < \over \sim} \; m_\pi \mbox{,~ ~ ~ ~} \omega-m_\pi
\ll m_\pi \mbox{, ~ ~and~ ~ ~} \omega_1 - m_\pi \ll m_\pi~ ~ ~.
\end{equation}
Calling
\begin{equation}
\Delta_1^2 \;=\; F_1^2\: \bigg [ 1 \: +\:
\frac {E_1+(\omega_1-\mu)}{E_1+(\omega_1-\mu)+4\mu}
\bigg ]~ ~ ~,\label{A:DDef}
\end{equation}
the quartic equation can be approximated by a quadratic one in terms
of $\omega-\mu$:
\begin{equation}
E_1^2 \;=\; (\omega_1-\mu)^2 \:-\: \Delta_1^2~ ~ ~,\label{A:E1Def}
\end{equation}
with a gap equation following from Eq.~(\ref{A:F1}) and (\ref{A:M1CEF})
\begin{equation}
\Delta_1 \;=\; -\frac 12 \:\int \! \frac {d^3k_2}{(2\pi)^3} \: \langle {\vec
k}_1, -{\vec k}_1 \mid V_{I=0}(E=2\mu) \mid {\vec k}_2, -{\vec k}_2
\rangle \; \frac {\Delta_2} {2E_2} \; \coth \frac {E_2}{2T} \; \bigg [
1 \:+\: \frac {E_1+(\omega_1-\mu)}{E_1+(\omega_1-\mu)+4\mu} \bigg ]^{1/2}~
~ ~,\label{A:GapEqn}
\end{equation}
which is the standard gap equation with a relativistic correction.
The presence of the factor $1/2$ is somewhat unconventional, but is
simply related to the fact that the matrix element of the interaction
incorporates the exchange term. Note that the factor $1/4$ in front of
the quartic term of the interaction Hamiltonian Eq.~(\ref{A:Hint}) has
the same origin.
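The mechanism described in the main text -- the factor $\coth(E/2T)/2E$ grows both with $T$ and with $\Delta$ (through the $-\Delta^2$ in $E$), so the equality in the gap equation can only be maintained by lowering $\Delta$ as $T$ rises -- can be illustrated with a toy kernel. Everything below is schematic and not the paper's full $V_{I=0}$: a separable gap profile $\Delta(k)=\Delta_0\,v(k)$ is assumed, the relativistic bracket is set to one, and the coupling strength is left out.

```python
import numpy as np

M_PI = 139.57  # MeV

def gap_kernel(delta0, T, mu, kmax=2000.0, n=4000):
    """Right-hand side of the gap equation per unit Delta_0, for a toy
    separable profile Delta(k) = Delta0 * v(k); coupling constant omitted."""
    k = np.linspace(1e-3, kmax, n); dk = k[1] - k[0]
    w = np.sqrt(M_PI**2 + k**2)
    v = 1.0 / (1.0 + (k / (8 * M_PI))**2)
    E = np.sqrt((w - mu)**2 - (delta0 * v)**2)   # -Delta^2 shifts the origin
    coth = 1.0 / np.tanh(E / (2 * T))            # of the coth to the right
    return 0.5 * np.sum(k**2 / (2 * np.pi**2) * v**2 * coth / (2 * E)) * dk

mu = 100.0
k_small = gap_kernel(1e-3, 50.0, mu)
k_gap   = gap_kernel(30.0, 50.0, mu)    # Delta0 < omega(0) - mu = 39.57 MeV
k_hotT  = gap_kernel(1e-3, 100.0, mu)
```

Both `k_gap` and `k_hotT` exceed `k_small`: raising $T$ inflates the right-hand side, and the only way to restore the equality at fixed coupling is to shrink $\Delta_0$.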
To calculate the occupation number, we note, using the explicit
form of $G_{1{\bar 1}}$, that the bare vacuum is the vacuum of
quasi--particle operators $B_1$, such that
\begin{equation}
b_1 \;=\; \bigg [ \frac {\omega_1-\mu}{2E_1} \:+\: \frac 12 \bigg
]^{1/2}\: B_1 \;+\; \bigg [ \frac {\omega_1-\mu}{2E_1} \:-\: \frac 12
\bigg ]^{1/2}\: B_{-1}^\dagger~ ~ ~.\label{A:btoB}
\end{equation}
Using $\langle B^\dagger B \rangle = [\exp(E_1/T) - 1 ]^{-1}$, it
follows that
\begin{eqnarray}
\langle b_1^\dagger b_1 \rangle &=& \frac {\omega_1-\mu}{2E_1}\:
\coth \frac {E_1}{2T} \;-\; \frac 12 \nonumber \\
\langle b_1 b_{-1} \rangle &=& \frac{\Delta_1}{2E_1} \; \coth
\frac {E_1}{2T}~ ~ ~.\label{A:Dens}
\end{eqnarray}
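A quick numerical sanity check of the transformation and the densities (with illustrative parameter values, not taken from the paper): the coefficients above satisfy $u^2-v^2=1$, as required for the $B$ operators to obey bosonic commutation relations, and both densities come out positive.

```python
import numpy as np

M_PI = 139.57  # MeV

def quasiparticles(k, mu, delta, T):
    """Bogoliubov coefficients of the b -> B transformation and the
    resulting normal and anomalous densities."""
    w = np.sqrt(M_PI**2 + k**2)
    E = np.sqrt((w - mu)**2 - delta**2)
    u = np.sqrt((w - mu) / (2.0 * E) + 0.5)
    v = np.sqrt((w - mu) / (2.0 * E) - 0.5)
    coth = 1.0 / np.tanh(E / (2.0 * T))
    n_b = (w - mu) / (2.0 * E) * coth - 0.5   # <b^dagger b>
    pair = delta / (2.0 * E) * coth           # <b b_{-}>
    return u, v, n_b, pair

# illustrative values: k, mu, Delta, T in MeV, with Delta < omega(k) - mu
u, v, n_b, pair = quasiparticles(k=50.0, mu=100.0, delta=20.0, T=50.0)
```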
{\large {\it The Thouless Criterion}}
We may also obtain the condition for having a pole in the $T$--matrix
at $E=2\mu$. For $E$ near $2\mu$, we write
\begin{equation}
\langle {\vec k}_1, -{\vec k}_1 \mid T_{I=0}(E) \mid {\vec k}_2, -{\vec
k}_2 \rangle \;\equiv \; Z_1\: \frac {1}{E-2\mu} \: Z_2~ ~
~.\label{A:TnearPole}
\end{equation}
Multiplying (\ref{A:Tmat2}) by $E-2\mu$ and taking the limit $E \to 2 \mu$,
one obtains an equation for $Z_1$
\begin{eqnarray}
Z_1 &=& -\frac 12 \:\int \! \frac {d^3k_2}{(2\pi)^3} \: \langle {\vec
k}_1, -{\vec k}_1 \mid V_{I=0}(E=2\mu) \mid {\vec k}_2, -{\vec k}_2
\rangle \; \frac {Z_2} {2(\omega_2-\mu)} \; \coth \frac
{\omega_2-\mu}{2T} \nonumber \\ &\mbox{}&~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
~ ~ ~ \times \bigg [
1 \:+\: \frac {E_2+(\omega_2-\mu)}{E_2+(\omega_2-\mu)+4\mu} \bigg ]~
~ ~.\label{A:ZEqn}
\end{eqnarray}
In the non--relativistic limit, {\it i.e.}, neglecting the last term
in the square brackets, this equation coincides with the linearized
form of the (non--relativistic) gap equation. This is just the
Thouless criterion: the gap equation begins to exhibit non--trivial
solutions at the point where the $T$--matrix has a pole at zero energy,
$\langle H-\mu N\rangle = E -2\mu =0$. We see that the
Thouless criterion is only approximately valid if relativistic
corrections are included.
\newpage
\section{Introduction}
In \cite{Unf} free equations of spin 0 and spin 1/2 matter fields
in 2+1--dimensional anti-de Sitter (AdS) space were reformulated
in the form of certain covariant constancy
conditions (``unfolded form''). Being equivalent to the standard
one, such a formulation is useful at least in two respects. It
leads to a simple construction of a general solution of the free
equations and gives important hints how to describe
non-linear dynamics exhibiting infinite-dimensional higher-spin
symmetries. In \cite{Unf} it was also observed that the
proposed construction admits a natural realization in terms of
the Heisenberg-Weyl oscillator algebra for the case of massless
fields. Based on this realization, non-linear dynamics of
massless matter fields interacting through higher-spin gauge
fields was then formulated in \cite{Eq} in all orders in
interactions.
In the present paper we address the question of how one can
extend the oscillator realization of
the massless equations of \cite{Unf} to the case of
an arbitrary mass of matter fields. We show that
the relevant algebraic construction is provided
by the deformed oscillator algebra suggested in \cite{Quant}
with the deformation parameter related to the parameter of mass.
In a future publication by two of the authors \cite{Fut}
the results of this paper will be used
for the analysis of non-linear dynamics of matter fields
in 2+1 dimensions, interacting through higher-spin gauge fields.
The 2+1 dimensional model considered in this
paper can be regarded as a toy model exhibiting some of the general properties
of physically more important higher-spin gauge
theories
in higher dimensions $d\geq 4$.
\section{Preliminaries}
We describe the 2+1 dimensional
AdS space in terms of
the Lorentz connection one-form
$\omega^{\alpha\beta}=dx^\nu
\omega_\nu{}^{\alpha\beta}(x)$
and dreibein one-form
$h^{\alpha\beta}=
dx^\nu h_\nu{}^{\alpha\beta} (x)$.
Here
$x^\nu$ are
space-time coordinates
$(\nu =0,1,2)$
and
$\alpha,\beta,\ldots =1,2$ are spinor indices, which are
raised and lowered with the aid of the symplectic form
$\epsilon_{\alpha\beta}=-\epsilon_{\beta\alpha}$,
$A^{\alpha}=\epsilon^{\alpha\beta}A_{\beta}$,
$A_{\alpha}=A^{\beta}\epsilon_{\beta\alpha}$,
$\epsilon_{12}=\epsilon^{12}=1$.
The AdS geometry can be
described by the equations
\begin{equation}
\label{d omega}
d\omega_{\alpha\beta}=\omega_{\alpha\gamma}\wedge\omega_\beta{}^\gamma+
\lambda^2h_{\alpha\gamma}\wedge h_\beta{}^\gamma\,,
\end{equation}
\begin{equation}
\label{dh}
dh_{\alpha\beta}=\omega_{\alpha\gamma}\wedge h_\beta{}^\gamma+
\omega_{\beta\gamma}\wedge h_\alpha{}^\gamma \, ,
\end{equation}
which have a form of zero-curvature conditions for the
$o(2,2)\sim sp(2)\oplus sp(2)$ Yang-Mills field strengths.
Here $\omega_{\alpha\beta}$ and $h_{\alpha\beta}$ are
symmetric in $\alpha$ and $\beta$.
For the space-time geometric interpretation of these equations
one has to assume that the dreibein $h_\nu{}^{\alpha\beta}$ is
a non-degenerate $3\times 3$ matrix.
Then
(\ref{dh}) reduces to the zero-torsion condition which expresses
Lorentz connection via dreibein $h_\nu{}^{\alpha\beta}$ and (\ref{d omega})
implies that the Riemann tensor 2-form $R_{\alpha\beta}=
d\omega_{\alpha\beta}-\omega_{\alpha\gamma}\wedge\omega_\beta{}^\gamma$
acquires the AdS form
\begin{equation}
\label{R}
R_{\alpha\beta}=
\lambda^2h_{\alpha\gamma}\wedge h_\beta{}^\gamma
\end{equation}
with
$\lambda^{-1}$ identified with the AdS radius.
In \cite{Unf} it was shown that
one can reformulate free field equations for matter fields in
2+1 dimensions
in terms of the generating function
$C(y|x)$
\begin{equation}
\label{C0}
C(y|x)=
\sum_{n=0}^\infty \frac1{n!}C_{\alpha_1 \ldots\alpha_n}(x) y^{\alpha_1}\ldots
y^{\alpha_n}\,
\end{equation}
in the following ``unfolded'' form
\begin{equation}
\label{DC mod}
DC=h^{\alpha\beta} \left[a(N)
\frac{\partial}{\partial y^\alpha }\frac{\partial}{\partial y^\beta}+
b(N) y_\alpha\frac{\partial}{\partial y^\beta}+ e(N)
y_\alpha y_\beta \right]C \, ,
\end{equation}
where $D$ is the Lorentz covariant differential
\begin{equation}
\label{lorcov}
D=d-\omega^{\alpha\beta}y_\alpha \frac{\partial}{\partial y^\beta}\,
\end{equation}
and $N$ is the Euler operator
\begin{equation}
N\equiv y^\alpha\frac{\partial}{\partial y^\alpha} \, .
\end{equation}
The integrability conditions of the equations (\ref{DC mod})
(i.e. the consistency with $d^2 =0$) require
the functions $a,b$ and $e$ to satisfy the following restrictions \cite{Unf}
\begin{equation}
\label{consist}
\alpha(n)=0 \quad\mbox{for}\quad n\ge 0\,,\qquad
\beta(n)=0 \quad\mbox{for}\quad n\ge 1\,,\qquad
\gamma(n)=0 \quad\mbox{for}\quad n\ge 2\,,
\end{equation}
where
\begin{equation}
\alpha(N)=a(N)\left[(N+4)b(N+2)-Nb(N)\right]\,,
\end{equation}
\begin{equation}
\gamma(N)=e(N)\left[(N+2)b(N)-(N-2)b(N-2)\right]\,,
\end{equation}
\begin{equation}
\beta(N)=(N+3)a(N)e(N+2)-(N-1)e(N)a(N-2)+b^2(N)-\lambda^2\, .
\end{equation}
It was shown in \cite{Unf} that, under the condition that
$a(n)\ne 0$ $\forall$ $n\ge 0$, and up to the freedom of
field redefinitions $C\rightarrow \tilde{C} =\varphi (N) C$,
$\varphi(n) \neq 0 \quad \forall n\in {\bf Z}^{+}$,
there exist two one--parameter
classes of independent solutions of~(\ref{consist}),
$$
a(n)=1\,,\qquad b(n)=0\,, \qquad e(n)=\frac14\lambda^2-\frac{M^2}
{2(n+1)(n-1)}\, ,\qquad
n\,\mbox{--even}\,,
$$
\begin{equation}
\label{cob}
a(n)=b(n)=e(n)=0\,,\qquad n\,\mbox{--odd}\,,
\end{equation}
and
$$
a(n)=b(n)=e(n)=0\,,\qquad n\,\mbox{--even}\,,
$$
\begin{equation}
\label{cof}
a(n)=1\,,\qquad b(n)=\frac{\sqrt2M}{n(n+2)}\,,\qquad
e(n)=\frac14\lambda^2-\frac{M^2}{2n^2}\, ,\qquad
n\,\mbox{--odd}\,,
\end{equation}
with an arbitrary parameter $M$.
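The two branches can be checked symbolically against the consistency conditions: substituting (\ref{cob}) and (\ref{cof}) into $\alpha$, $\beta$ and $\gamma$ (with the overall factors $a(N)$ and $e(N)$ pulled out of the brackets, since $a=1$ on each branch) gives identically zero. A short sympy verification, not part of the paper:

```python
import sympy as sp

n, lam, M = sp.symbols('n lambda M', positive=True)

# even (bosonic) branch: a = 1, b = 0, e(n) = lambda^2/4 - M^2/(2 (n+1)(n-1))
e_even = lam**2/4 - M**2/(2*(n + 1)*(n - 1))
beta_even = (n + 3)*e_even.subs(n, n + 2) - (n - 1)*e_even - lam**2

# odd (fermionic) branch: a = 1, b(n) = sqrt(2) M/(n (n+2)),
# e(n) = lambda^2/4 - M^2/(2 n^2)
b_odd = sp.sqrt(2)*M/(n*(n + 2))
e_odd = lam**2/4 - M**2/(2*n**2)
alpha_odd = (n + 4)*b_odd.subs(n, n + 2) - n*b_odd          # bracket of alpha(N)
beta_odd = (n + 3)*e_odd.subs(n, n + 2) - (n - 1)*e_odd + b_odd**2 - lam**2
gamma_odd = (n + 2)*b_odd - (n - 2)*b_odd.subs(n, n - 2)    # bracket of gamma(N)

residuals = [sp.simplify(x) for x in (beta_even, alpha_odd, beta_odd, gamma_odd)]
```

All four residuals simplify to zero for arbitrary $\lambda$ and $M$.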
As a result, the system (\ref{DC mod})
reduces to two independent infinite chains of equations
for bosons and fermions described by multispinors with even and odd number
of indices, respectively.
To elucidate the physical content of these equations
one has to identify the
lowest components of the expansion (\ref{C0}), $C(x)$ and $C_\alpha (x)$,
with the physical spin-0 boson and spin 1/2 fermion matter fields,
respectively, and to check,
first, that the system (\ref{DC mod}) amounts to the physical massive
Klein-Gordon and Dirac equations,
\begin{equation}
\label{M K-G}
\Box C=\left(\frac32\lambda^2-M^2\right)C\,,
\end{equation}
\begin{equation}
\label{D}
h^\nu{}_\alpha{}^\beta D_{\nu}C_\beta=\frac M{\sqrt2} C_\alpha \,,
\end{equation}
and, second, that
all other equations in (\ref{DC mod}) express
the higher multispinors via higher derivatives of
the matter fields $C$ and $C_{\alpha}$, imposing no additional
constraints on the latter. Note that the
D'Alembertian is defined as usual
\begin{equation}
\label{dal}
\Box =D^{\mu}D_{\mu} \,,
\end{equation}
where $D_{\mu}$ is
a full background covariant derivative
involving the zero-torsion Christoffel connection defined
through the metric postulate $D_{\mu}h_{\nu}^{\alpha\beta}=0$.
The inverse dreibein $h^\nu{}_{\alpha\beta}$ is defined as
in \cite{Unf},
\begin{equation}
h_\nu{}^{\alpha\beta}h^\nu{}_{\gamma\delta}=
\frac12(\delta^\alpha_\gamma\delta^\beta_\delta+\delta^\alpha_
\delta\delta^\beta_\gamma)\,.
\end{equation}
Note also that the indices $\mu$, $\nu$ are raised and lowered
by the metric tensor
$$
g_{\mu\nu}=h_\mu{}^{\alpha\beta}h_\nu{}_{\alpha\beta} \,.
$$
As emphasized in \cite{Unf}, the equations (\ref{DC mod})
provide a particular example of covariant constantness conditions
\begin{equation}
\label{dC}
dC_i =A_i{}^j C_j
\end{equation}
with the gauge fields $A_i{}^j =A^a(T_a)_i{}^j$
obeying the zero-curvature conditions
\begin{equation}
\label{dA}
dA^a=U^a_{bc}A^b \wedge A^c \,,
\end{equation}
where $U^a_{bc}$ are structure coefficients of the Lie (super)algebra
which gives rise to the gauge fields $A^a$ (cf (1), (2)).
Then the requirement that the integrability conditions
for (\ref{dC}) hold is equivalent to requiring that
$(T_a)_i{}^j$ form a matrix representation of the gauge algebra.
Thus, the problem consists of finding an appropriate representation of
the space-time symmetry group which leads to correct field equations.
As a result, after the equations are rewritten in this ``unfolded form'',
one can write down their general solution in a pure gauge form
$A(x)=-g^{-1}(x) dg(x)$, $C(x)=T(g^{-1})(x) C_0$, where $C_0$ is
an arbitrary $x$-independent element of the representation space.
This general solution has the structure of a covariantized Taylor-type
expansion \cite{Unf}. For the problem under consideration the relevant
(infinite-dimensional) representation of the AdS algebra is characterized
by the coefficients (\ref{cob}) and (\ref{cof}).
\section{Operator Realization for Arbitrary Mass}
Let us now describe an operator algebra that leads automatically
to the correct massive field equations of the form
(\ref{DC mod}).
Following \cite{Quant} we introduce oscillators obeying the commutation
relations
\begin{equation}
\label{y mod}
[\hat{y}_\alpha,\hat{y}_\beta]=2i\epsilon_{\alpha\beta}(1+\nu k)\, ,
\end{equation}
where $\alpha ,\beta =1,2$, $k$ is the Klein operator anticommuting with
$\hat{y}_\alpha$,
\begin{equation}
\label{k}
k\hat{y}_\alpha=-\hat{y}_\alpha k\, , \qquad k^2 =1
\end{equation}
and $\nu$ is a free parameter.
The main property of these oscillators is that
the bilinears
\begin{equation}
\label{Q}
T_{\alpha\beta} =\frac{1}{4i} \{\hat{y}_\alpha ,\hat{y}_\beta\}
\end{equation}
fulfill the standard $sp(2)$ commutation relations
\begin{equation}
\label{sp(2) com}
[T_{\alpha\beta},T_{\gamma\delta}]=
\epsilon_{\alpha\gamma}T_{\beta\delta}+
\epsilon_{\beta\delta}T_{\alpha\gamma}+
\epsilon_{\alpha\delta}T_{\beta\gamma}+
\epsilon_{\beta\gamma}T_{\alpha\delta}
\end{equation}
as well as
\begin{equation}
\label{oscom}
[T_{\alpha\beta} ,\hat{y}_{\gamma}]=
\epsilon_{\alpha\gamma}\hat{y}_{\beta}+
\epsilon_{\beta\gamma}\hat{y}_{\alpha}\,
\end{equation}
for any $\nu$.
Note that a specific realization of oscillators of this kind
was considered by Wigner
\cite{Wig}, who asked whether it is possible to
modify the oscillator commutation relations in such a way that
the relation $[H, a_\pm ]=\pm a_\pm $ remains valid.
This relation is a particular case of (\ref{oscom}) with
$H=T_{12}$ and $a_\pm =y_{1,2}$.
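These relations are easy to probe numerically in the standard Fock-type realization of the deformed oscillators, $a|n\rangle=\sqrt{\mu_n}|n-1\rangle$, $a^\dagger|n\rangle=\sqrt{\mu_{n+1}}|n+1\rangle$ with $\mu_n=n+\nu(1-(-1)^n)/2$ and $k=(-1)^N$ (a sketch in Python/NumPy; the value $\nu=0.3$ and the cutoff dimension are arbitrary sample choices, and rows touched by the truncation are excluded from the checks):

```python
import numpy as np

nu, dim = 0.3, 12                    # sample deformation parameter, Fock cutoff
ns = np.arange(dim)
mu = ns + nu*(1 - (-1.0)**ns)/2      # mu_n = n + nu*(1 - (-1)^n)/2

a  = np.diag(np.sqrt(mu[1:]), 1)     # lowering operator on the truncated Fock space
ad = a.T                             # raising operator
k  = np.diag((-1.0)**ns)             # Klein operator k = (-1)^N
H  = (a @ ad + ad @ a)/2             # H = {a, a^dagger}/2

inner = np.s_[:dim - 1, :dim - 1]    # drop the truncation boundary

# deformed commutator [a, a^dagger] = 1 + nu*k, with k a = -a k and k^2 = 1
assert np.allclose((a @ ad - ad @ a)[inner], (np.eye(dim) + nu*k)[inner])
assert np.allclose(k @ a, -a @ k)
assert np.allclose(k @ k, np.eye(dim))

# Wigner's relation survives the deformation: [H, a] = -a
assert np.allclose((H @ a - a @ H)[inner], (-a)[inner])
```

The last check illustrates the point made above: the relation $[H,a_\pm]=\pm a_\pm$ holds for any $\nu$, even though the commutator $[a,a^\dagger]$ itself is deformed.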
The property
(\ref{sp(2) com})
allows us to realize the $o(2,2)$ gravitational
fields as
\begin{equation}
\label{W}
W_{gr} (x)= \omega +\lambda h ;\qquad
\omega\equiv\frac1{8i}\omega^{\alpha\beta}\{\hat{y}_\alpha,
\hat{y}_\beta\} \, ,
\quad h\equiv\frac1{8i}h^{\alpha\beta}\{\hat{y}_\alpha,\hat{y}_\beta\}
\psi \, ,
\end{equation}
where $\psi$ is an additional central involutive element,
\begin{equation}
\psi^2=1\,,\qquad [\psi,\hat{y}_{\alpha}]=0\,,\qquad
[\psi,k]=0\,,
\end{equation}
which is introduced to describe the 3d AdS algebra
$o(2,2)\sim sp(2)\oplus sp(2)$ spanned by the generators
\begin{equation}
\label{al}
L_{\alpha\beta}=\frac1{4i}\{\hat{y}_\alpha,
\hat{y}_\beta\}\,,\qquad \,
P_{\alpha\beta}=\frac1{4i}\{\hat{y}_\alpha,
\hat{y}_\beta\}\psi\,.
\end{equation}
Now the equations (\ref{d omega}) and (\ref{dh})
describing the vacuum anti-de Sitter geometry acquire the
form
\begin{equation}
\label{va}
dW_{gr} =W_{gr} \wedge W_{gr}\, .
\end{equation}
Let us introduce the operator-valued generating function $C(\hat{y},k,\psi|x)$
\begin{equation}
\label{hatC}
C(\hat{y},k,\psi|x)=\sum_{A,B=0,1}
\sum_{n=0}^\infty \frac 1{n!} \lambda^{-[\frac n2]}
C^{AB}_{\alpha_1 \ldots\alpha_n}(x) k^A
\psi^B\hat{y}^{\alpha_1}\ldots \hat{y}^{\alpha_n}\, ,
\end{equation}
where $C^{AB}_{\alpha_1 \ldots\alpha_n}$ are totally symmetric tensors
(which implies the Weyl ordering with respect to $\hat{y}_{\alpha}$).
It is easy to see that the following two types of equations
\begin{equation}
\label{aux}
DC=\lambda[h,C] \, ,
\end{equation}
and
\begin{equation}
\label{D hatC}
DC=\lambda\{h,C\} \, ,
\end{equation}
where
\begin{equation}
DC\equiv dC-[\omega,C] \,
\end{equation}
are consistent (i.e. the integrability conditions are satisfied
as a consequence of the vacuum conditions (\ref{va})). Indeed,
(\ref{aux}) corresponds to the adjoint action of the
space-time algebra (\ref{al}) on the algebra of modified
oscillators. The equations (\ref{D hatC}) correspond to another
representation of the space-time symmetry which we call twisted
representation. The fact that one can replace the commutator
by the anticommutator in the term proportional to dreibein is a simple
consequence of the property that AdS algebra possesses an involutive
automorphism changing the sign of the AdS translations.
In the particular realization used here it is induced by the
automorphism $\psi\to -\psi$.
There is an important difference between these two representations.
The first one involving the commutator decomposes into an infinite
direct sum of finite-dimensional representations of the space-time symmetry
algebra. Moreover, because of the property
(\ref{oscom}) this representation is $\nu$-independent and therefore is
equivalent to the representation with $\nu=0$ which was shown
in \cite{Unf} to describe
an infinite set of auxiliary (topological) fields. The twisted representation
on the other hand is just the infinite-dimensional representation
needed for the description of
matter fields (in what follows we will use the symbol $C$ only for the twisted
representation).
To see this one has to carry out a component analysis
of the equations
(\ref{D hatC}) which consists of some operator reorderings bringing
all terms into the Weyl ordered form with respect to
$\hat{y}_\alpha$.
As a result one finds that
(\ref{D hatC})
takes the form of the
equation (\ref{DC mod}) with the following
values of the coefficients $a(n)$, $b(n)$ and $e(n)$ :
\begin{eqnarray}
\label{a}
\lefteqn{a(n)=\frac{i\lambda}2 \left[1+\nu k\frac{1+(-1)^n}{(n+2)^2-1}
\right.} \nonumber\\
& & \left.{}-\frac{\nu^2}{(n+2)^2((n+2)^2-1)} \left((n+2)^2-
\frac{1-(-1)^n}2 \right)\right]\,,
\end{eqnarray}
\begin{equation}
\label{b}
b(n)=-\nu k\lambda\,\frac{1-(-1)^n}{2n(n+2)}\,,
\end{equation}
\begin{equation}
\label{e}
e(n)=-\frac{i\lambda}2\, .
\end{equation}
As expected, these expressions satisfy the conditions~(\ref{consist}).
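This can be verified directly; e.g., the following symbolic sketch (Python/{\tt sympy}, with the Klein operator $k$ and the sign $(-1)^n$ replaced by their eigenvalues $\pm 1$ on each fixed-parity branch) confirms $\beta(n)=0$ for all four branches:

```python
import sympy as sp

n, lam, nu = sp.symbols('n lambda nu')
I = sp.I

for k in (1, -1):                 # eigenvalues of the Klein operator
    for s in (1, -1):             # s stands for (-1)^n on a fixed-parity branch
        a = lambda m: I*lam/2*(1 + nu*k*(1 + s)/((m + 2)**2 - 1)
                               - nu**2*((m + 2)**2 - (1 - s)//2)
                                 /((m + 2)**2*((m + 2)**2 - 1)))
        b = lambda m: -nu*k*lam*(1 - s)/(2*m*(m + 2))
        e = lambda m: -I*lam/2
        beta = ((n + 3)*a(n)*e(n + 2) - (n - 1)*e(n)*a(n - 2)
                + b(n)**2 - lam**2)
        assert sp.simplify(beta) == 0
```

Since $n$ and $n\pm 2$ have the same parity, fixing $s=(-1)^n$ on each branch is consistent throughout the expression for $\beta(n)$.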
Now let us recall that, due to the presence of the Klein operator
$k$, we have a doubled number of fields compared to the analysis at the
beginning of this section. One can project out the irreducible
subsets with the aid of the two projectors $P_\pm$,
\begin{equation}
C_\pm\equiv P_\pm C\, ,\qquad P_\pm\equiv\frac{1\pm k}2\, .
\end{equation}
As a result we get the following component form of eq.~(\ref{DC mod})
with the coefficients (\ref{a})-(\ref{e}),
\begin{equation}
\label{chainbos+-}
DC^{\pm}_{\alpha(n)}=\frac i2\left[\left(1-\frac{\nu(\nu\mp2)}{(n+1)(n+3)}
\right) h^{\beta\gamma}C^{\pm}_{\beta\gamma\alpha(n)}-
\lambda^2n(n-1)h_{\alpha\alpha}C^{\pm}_{\alpha(n-2)}\right]
\end{equation}
for even $n$, and
\begin{eqnarray}
\label{chainferm+-}
DC^{\pm}_{\alpha(n)} & = & \frac i2\left(1-\frac{\nu^2}{(n+2)^2}\right)
h^{\beta\gamma}C^{\pm}_{\beta\gamma\alpha(n)} \pm
\frac {\nu\lambda}{n+2}h_{\alpha}{}^{\beta}C^{\pm}_{\beta\alpha(n-1)}
\nonumber\\
& & {}-\frac i2 \lambda^2n(n-1)h_{\alpha\alpha}C^{\pm}_{\alpha(n-2)}
\end{eqnarray}
for odd $n$. Here we use the notation
$C_{\alpha(n)}=C_{\alpha_1,\dots,\alpha_n}$ and assume the full
symmetrization of the indices denoted by $\alpha$.
As was shown in \cite{Unf}, the D'Alembertian
corresponding to eq.~(\ref{DC mod}) has the following form
\begin{eqnarray}
\label{D'Al}
\Box C & = & \Biggl[(N+3)(N+2)a(N)e(N+2) \nonumber\\
& & {}+N(N-1)e(N)a(N-2)-\frac12N(N+2)b^2(N)\Biggr]C\, .
\end{eqnarray}
Insertion of~(\ref{a})-(\ref{e}) into~(\ref{D'Al}) yields
\begin{equation}
\label{L M}
\Box C_\pm =\left[\lambda^2\frac{N(N+2)}2+\lambda^2\frac32-
M^2_\pm \right]C_\pm\,,
\end{equation}
with
\begin{equation}
\label{M}
M^2_\pm =\lambda^2\frac{\nu(\nu\mp 2)}2\, ,\qquad n\mbox{ -even,}
\end{equation}
\begin{equation}
\label{M f}
M^2_\pm =\lambda^2\frac{\nu^2}2\, ,\qquad n\mbox{ -odd.}
\end{equation}
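This is again a purely algebraic statement; a symbolic sketch (Python/{\tt sympy}, with $k$ and $(-1)^n$ replaced by eigenvalues $\pm 1$) substitutes (\ref{a})-(\ref{e}) into (\ref{D'Al}) and recovers (\ref{L M}) with the mass values (\ref{M}), (\ref{M f}):

```python
import sympy as sp

n, lam, nu = sp.symbols('n lambda nu')
I = sp.I

for k in (1, -1):                 # k = +1 / -1 projects onto C_+ / C_-
    for s in (1, -1):             # s stands for (-1)^n on a fixed-parity branch
        a = lambda m: I*lam/2*(1 + nu*k*(1 + s)/((m + 2)**2 - 1)
                               - nu**2*((m + 2)**2 - (1 - s)//2)
                                 /((m + 2)**2*((m + 2)**2 - 1)))
        b = lambda m: -nu*k*lam*(1 - s)/(2*m*(m + 2))
        e = lambda m: -I*lam/2
        # right-hand side of the D'Alembertian formula
        box = ((n + 3)*(n + 2)*a(n)*e(n + 2) + n*(n - 1)*e(n)*a(n - 2)
               - sp.Rational(1, 2)*n*(n + 2)*b(n)**2)
        # M^2 = lam^2 nu(nu -/+ 2)/2 for even n, lam^2 nu^2/2 for odd n
        M2 = lam**2*nu*(nu - 2*k)/2 if s == 1 else lam**2*nu**2/2
        expected = lam**2*n*(n + 2)/2 + sp.Rational(3, 2)*lam**2 - M2
        assert sp.simplify(box - expected) == 0
```

The $\nu$-dependence collapses into the constant shift $M^2_\pm$, while the universal $\lambda^2 N(N+2)/2$ term is $\nu$-independent, as stated in (\ref{L M}).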
Thus, it is shown that the
modification (\ref{y mod}) allows one to
describe matter fields
\footnote{Let us remind the reader that the physical matter
field components are singled out by the conditions $NC_\pm=0$
in the bosonic sector and $NC_\pm=C_\pm$ in the fermionic sector.}
with an arbitrary mass parameter related to $\nu$.
This construction generalizes in a natural way the realization
of equations for massless matter fields in terms of
the ordinary ($\nu=0$) oscillators proposed
in \cite{Unf}. An important comment, however, is that
this construction
does not necessarily lead to non-vanishing coefficients $a(n)$.
Consider, for example,
expression~(\ref{a}) for the bosonic part of
$C_{+}$, i.e., set $k=1$, $n=2m$ for some integer $m$,
\begin{equation}
\label{a1}
a(2m)=\frac{i\lambda}2 \left[1-\frac{\nu(\nu-2)}{(2m+1)(2m+3)}\right]\, .
\end{equation}
We observe that $a(2m)=0$ at $\nu=\pm 2(m+1)+1$. It is not difficult to see
that some of the coefficients $a(n)$ vanish if and only if
$\nu=2l+1$ for some integer $l$.
This conclusion is in agreement with the results
of~\cite{Quant} where it was shown that for these values
of $\nu$ the enveloping algebra of the relations (\ref{y mod}),
$Aq(2;\nu |{\bf C})$, possesses ideals.
Thus, strictly speaking, for $\nu =2l+1$
the system of equations derived from the operator realization
(\ref{D hatC}) is different from that considered in \cite{Unf}.
The specific features of the degenerate systems with
$\nu=2l+1$ will be discussed in Section 5.
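The location of the degenerate points follows from solving the quadratic condition $a(2m)=0$ in $\nu$; a one-line symbolic check (Python/{\tt sympy} sketch) of expression (\ref{a1}):

```python
import sympy as sp

m, nu = sp.symbols('m nu')
a1 = 1 - nu*(nu - 2)/((2*m + 1)*(2*m + 3))   # bracket of eq. (a1)

# the bracket vanishes precisely at nu = 2(m+1)+1 and nu = -2(m+1)+1
assert sp.simplify(a1.subs(nu, 2*m + 3)) == 0
assert sp.simplify(a1.subs(nu, -(2*m + 1))) == 0
# ... and these exhaust the roots: the numerator is quadratic in nu
assert sp.degree(sp.together(a1).as_numer_denom()[0], nu) == 2
```

Both roots are odd integers, in agreement with the statement that degeneracy occurs exactly at odd $\nu$.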
In \cite{BWV} it was shown that the algebra $Aq(2,\nu )$
is isomorphic to the factor algebra $U(osp(1,2))/I(C_2 -\nu^2 )$,
where $U(osp(1,2))$ is the enveloping algebra
of $osp(1,2)$, while $I(C_2 -\nu^2 )$ is the ideal
spanned by all elements of the form
$$
(C_2-\nu^2)\, x\,, \qquad \forall x\in U(osp(1,2)) \,,
$$
where $C_2$ is the quadratic Casimir operator of $osp(1,2)$.
{}From this observation it follows in particular that
the oscillator realization described above
is explicitly supersymmetric. In fact it is $N=2$ supersymmetric \cite{BWV}
with the generators of $osp(2,2)$ of the form
$$
T_{\alpha\beta}=\frac1{4i}\{\hat{y}_\alpha,\hat{y}_\beta \}\,,\quad
Q_\alpha =\hat{y}_\alpha\,,\quad
S_\alpha =\hat{y}_\alpha k\,,\quad
J=k+\nu \,.
$$
This observation guarantees that the system of equations
under consideration possesses $N=2$ global supersymmetry.
It is this $N=2$ supersymmetry which leads to a
doubled number of boson and fermion fields in the model.
\section{Bosonic Case and U(o(2,1))}
In the purely bosonic case one can proceed
in terms of bosonic operators,
avoiding the doubling of fields caused by supersymmetry.
To this end, let us use the orthogonal realization of the AdS algebra
$o(2,2)\sim o(2,1)\oplus o(2,1)$.
Let $T_a$ be the generators of $o(2,1)$,
\begin{equation}
\label{comr}
[T_a,T_b]=\epsilon_{ab}{}^c T_c \,,
\end{equation}
where $\epsilon_{abc}$ is a totally antisymmetric 3d tensor,
$\epsilon_{012}=1$, and Latin indices are raised and lowered by
the Killing metrics of $o(2,1)$,
$$
A^a=\eta^{ab}A_b\,,\qquad \eta={\rm diag}(1,-1,-1) \,.
$$
Let the background gravitational field have a form
\begin{equation}
\label{W T}
W_\mu=\omega_\mu{}^a T_a +\tilde\lambda\psi h_\mu{}^a T_a\,,
\end{equation}
where $\psi$ is a central involutive element,
\begin{equation}
\psi^2=1,\qquad [\psi, T_a]=0\,,
\end{equation}
and let $W$ obey the zero-curvature conditions (\ref{va}).
Note, that the inverse dreibein $h^\mu{}_a$
is normalized so that
\begin{equation}
h_\mu{}^a h^\mu{}^b=\eta^{ab} \,.
\end{equation}
Let $T_a$ be restricted by the following additional condition on the
quadratic Casimir operator
\begin{equation}
\label{tr}
C_2\equiv T_a T^a=\frac18\left(\frac32-\frac{M^2}{\tilde\lambda^2}\right)\,.
\end{equation}
We introduce the dynamical 0-form
$C$ as a function of $T_a$ and $\psi$
\begin{equation}
\label{CT}
C=\sum_{n=0}^\infty\sum_{A=0,1}\frac1{n!}\psi^A C_A{}^{a_1\ldots a_n}(x)
T_{a_1}\ldots T_{a_n}\, ,
\end{equation}
where $C_A{}^{a_1\ldots a_n}$ are totally symmetric traceless tensors.
Equivalently one can say that $C$ takes values in
the algebra $A_M \oplus A_M$ where
$A_M = U(o(2,1))/I_{(C_2-\frac18(\frac32-\frac{M^2}{\tilde\lambda^2}))}$.
Here $U(o(2,1))$ is the enveloping algebra for
the relations (\ref{comr}) and
$I_{(C_2-\frac18(\frac32-\frac{M^2}{\tilde\lambda^2}))}$ is the ideal
spanned by all elements of the form
$$
\left[C_2-\frac18\left(\frac32-\frac{M^2}{\tilde\lambda^2}\right)\right]\,x
\,,\qquad \forall x\in U(o(2,1)) \,.
$$
We can then write down the equation analogous to~(\ref{D hatC})
in the form
\begin{equation}
\label{DC T}
D_{\mu}C=\tilde\lambda\psi h_{\mu}{}^a\{T_a,C\}\, ,
\end{equation}
where
\begin{equation}
D_{\mu}C=\partial_{\mu}C-\omega_{\mu}{}^a[T_a,C]\, .
\end{equation}
Acting on both sides of eq.~(\ref{DC T}) by
the full covariant derivative $D^{\mu}$,
defined through the metric postulate
$D_{\mu}(h^a_{\nu}T_a)=0$
under the condition that the
Christoffel connection is symmetric, one can derive
\begin{equation}
\Box C_n=\frac12\tilde\lambda^2\left[2n(n+1)+\frac32-
\frac{M^2}{\tilde\lambda^2} \right]C_n \,,
\end{equation}
where $C_n$ denotes the $n$-th power
monomial in (\ref{CT}). We see that this result coincides with
(\ref{L M}) at $N=2n$ and
\begin{equation}
\label{ll}
\lambda^2=\frac12\tilde\lambda^2 \,.
\end{equation}
Also one can check that the zero-curvature conditions
for the gauge fields (\ref{W}) and (\ref{W T})
are equivalent to each other provided that (\ref{ll}) is true.
The explicit relationships are
$$
\omega_\mu{}^{\alpha\beta}=-\frac12\omega_\mu{}^a\sigma_a^{\alpha\beta}\,,\quad
h_\mu{}^{\alpha\beta}=-\frac1{\sqrt2}h_\mu{}^a\sigma_a^{\alpha\beta}\,,\quad
T_a=-\frac1{16i}\sigma_a^{\alpha\beta}\{\hat{y}_\alpha,\hat{y}_\beta\}\,,
$$
where $\sigma_a^{\alpha\beta}=(I,\sigma_1,\sigma_3)$,
$\sigma_1\,,\sigma_3$ are symmetric Pauli matrices.
One can also check that, as expected, eq.~(\ref{DC T}) possesses
the same degenerate points in $M$ as eq.~(\ref{DC mod}) does
according to~(\ref{a1}).
\section{Degenerate Points}
In this section we discuss briefly the specificities of
the equation~(\ref{D hatC}) at singular points in $\nu$.
Let us substitute the expansion~(\ref{hatC}) into~(\ref{DC mod})
with the coefficients defined by~(\ref{a})-(\ref{e})
and project (\ref{DC mod}) to the subspace of bosons $C_{+}$
by setting $k=1$ and $n$ to be even. Then we get in the component form
\begin{equation}
\label{chain}
DC_{\alpha(n)}=\frac i2\left[\left(1-\frac{\nu(\nu-2)}{(n+1)(n+3)}
\right) h^{\beta\gamma}C_{\beta\gamma\alpha(n)}-
\lambda^2n(n-1)h_{\alpha\alpha}C_{\alpha(n-2)}\right] \,.
\end{equation}
In the general case (i.e., $\nu\ne 2l+1$ with integer $l$)
this chain of equations starts from the scalar component
and is equivalent to the dynamical equation~(\ref{M K-G})
with $M^2=\lambda^2\frac{\nu(\nu-2)}2$ supplemented either by
relations expressing highest multispinors via highest derivatives
of $C$ or identities which express the fact that higher derivatives
are symmetric.
At $\nu=2l+1$ the first term on the r.h.s.
of~(\ref{chain}) vanishes for $n=2(\pm l-1)$.
Since $n$ is non-negative let us choose for definiteness
a solution with $n=2(l-1)$, $l>0$.
One observes that the rank-$2l$ component is no longer
expressed by~(\ref{chain}) via derivatives of the scalar $C$,
thus becoming an independent dynamical variable.
Instead, the equation (\ref{chain}) tells us that
(appropriately AdS covariantized) $l$-th derivative of the scalar field
$C$ vanishes. As a result, at degenerate points the system of equations
(\ref{chain}) acquires a non-decomposable triangle-type form with a
finite subsystem of equations for the set of
multispinors $C_{\alpha (2n)},$ $n<l$
and an infinite system of equations
for the dynamical field $C_{\alpha (2l)}$ and higher multispinors,
which contains (derivatives of)
the original field $C$ as a sort of sources on the right hand side.
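The rank at which the chain breaks can be confirmed by a short symbolic computation (a Python/{\tt sympy} sketch of the coefficient appearing in eq.~(\ref{chain})):

```python
import sympy as sp

l, n = sp.symbols('l n')
nu = 2*l + 1                                   # degenerate value of nu
coeff = 1 - nu*(nu - 2)/((n + 1)*(n + 3))      # coefficient in eq. (chain)

# the first term of (chain) vanishes exactly at rank n = 2(l - 1) ...
assert sp.simplify(coeff.subs(n, 2*(l - 1))) == 0
# ... but not, e.g., at the next even rank n = 2l
assert sp.simplify(coeff.subs(n, 2*l)) != 0
```

At $n=2(l-1)$ one has $(n+1)(n+3)=(2l-1)(2l+1)=\nu(\nu-2)$, so the bracket vanishes identically, which is precisely the triangular degeneration described above.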
The subsystem for lower multispinors describes a system analogous
to that of topological fields (\ref{aux}) which can contain at most a finite
number of degrees of freedom. In fact this system should be
dynamically trivial by the unitarity requirements (there are no finite-dimensional
unitary representations of the space-time symmetry groups)
\footnote{The only exception is when the degeneracy
takes place on the lowest level
and the representation turns out to be trivial
(constant).}.
Physically,
this is equivalent to imposing appropriate boundary conditions at infinity
which must kill these degrees of freedom because, having only a finite number
of non-vanishing derivatives, these fields have a polynomial growth
at the space-time infinity (except for a case of a constant field $C$).
Thus one can factor out the decoupling lowest components arriving
at the system of equations which starts from the field $C_{\alpha (2l)}$.
These systems are dynamically non-trivial and correspond to
certain gauge systems. For example, one can show that the first degenerate
point $\nu=3$ just corresponds to 3d electrodynamics.
To see this one can introduce a two-form
\begin{equation}
F=h^{\alpha}{}_{\gamma}\wedge h^{\gamma\beta}
C_{\alpha\beta}\,
\end{equation}
and verify that the infinite part of the system (\ref{chain}) with
$n\ge2$ (i.e. with the scalar field factored out) is equivalent to the
Maxwell equations
\begin{equation}
dF=0\,,\qquad d\,{}^* F=0\,
\end{equation}
supplemented with an infinite chain of higher Bianchi identities
(here ${}^* F$ denotes a form dual to $F$).
Note that, for our normalization of the mass,
electrodynamics turns out to be
massive with the mass $M^2=\frac32\lambda^2$ which
vanishes in the flat limit $\lambda\to 0 $.
A more detailed analysis of this formulation of
electrodynamics and its counterparts corresponding to
higher degenerate points will be given in \cite{Fut}.
Now let us note that there exists an alternative formulation of
the dynamics of matter fields which is equivalent to the original one
of \cite{Unf} for all $\nu$ and is based on the co-twisted representation
$\tilde C$. Namely, let us introduce a non-degenerate invariant
form
\begin{equation}
\langle C,\tilde C \rangle = \int d^3x \sum_{n=0}^\infty
\frac 1{(2n)!}C_{\alpha(2n)}\tilde C^{\alpha(2n)}\,
\end{equation}
confining ourselves for simplicity to the purely bosonic case
in the sector $C_{+}$.
The covariant differential corresponding to the
twisted representation $C$ of $o(2,2)$ has the form
\begin{equation}
{\cal D}C= dC-[\omega,C]-\lambda\{h,C\}\, ,
\end{equation}
so that eq.~(\ref{D hatC}) acquires the form
${\cal D}C=0$.
The covariant derivative in the co-twisted representation can be obtained
from the invariance condition
\begin{equation}
\langle C,{\cal D}\tilde C \rangle =-\langle {\cal D}C,\tilde C \rangle
\,.
\end{equation}
It has the following explicit form
\begin{eqnarray}
\lefteqn{{\cal D}\tilde C^{\alpha(n)}=d\tilde C^{\alpha(n)}-
n\omega^{\alpha}{}_{\beta}\tilde C^{\beta\alpha(n-1)} }\nonumber\\
& & {}-\frac i2\left[h_{\beta\gamma}\tilde C^{\beta\gamma\alpha(n)}-
\lambda^2n(n-1)\left(1-\frac{\nu(\nu-2)}{(n-1)(n+1)}\right)
h^{\alpha\alpha}\tilde C^{\alpha(n-2)}\right] \,.
\end{eqnarray}
As a result the equation for $\tilde C$ analogous
to~(\ref{chain}) reads
\begin{equation}
\label{co-chain}
D\tilde C^{\alpha(n)}=\frac i2\left[
h_{\beta\gamma}\tilde C^{\beta\gamma\alpha(n)}-
\lambda^2n(n-1)\left(1-\frac{\nu(\nu-2)}{(n-1)(n+1)}\right)
h^{\alpha\alpha}\tilde C^{\alpha(n-2)}\right] \,.
\end{equation}
We see that now the term containing a higher multispinor
appears with a unit coefficient, while the coefficient in front of the
lower multispinor sometimes vanishes.
The equations (\ref{co-chain}) identically
coincide with the equations derived in \cite{Unf},
which are reproduced in Section 2 of this paper.
Let us note that the twisted and co-twisted representations are equivalent for
all $\nu \neq 2l+1$ because the algebra of deformed oscillators
possesses an invariant quadratic form which is non-degenerate
for all $\nu \neq 2l+1$ \cite{Quant}.
For $\nu = 2l+1$ this is not the case
any longer since the invariant quadratic form degenerates and therefore
twisted and co-twisted representations turn out to be formally inequivalent.
Two questions are now in order. First, what is the physical difference
between the equations corresponding to the twisted and co-twisted representations
at the degenerate points, and, second, which of these two representations can be used
in an interacting theory. These issues will be considered in more detail
in \cite{Fut}. Here we just mention that at the free field level the two
formulations are still physically equivalent and in fact turn out to be dual to each other.
For example for the case of electrodynamics the scalar field component $C$
in the co-twisted representation can be interpreted as
a magnetic potential such that ${}^*F = dC$. A non-trivial question
then is whether such a formulation can be extended
to any consistent local interacting theory. Naively one can expect that the formulation
in terms of the twisted representation has a better chance of being extended beyond the
linear problem. It will be shown in \cite{Fut} that this is indeed the case.
\section{Conclusion}
In this paper we suggested a simple algebraic
method of formulating free field equations
for massive spin-0 and spin 1/2 matter fields
in 2+1 dimensional AdS space in the form of covariant
constantness conditions for certain infinite-dimensional
representations of the space-time symmetry group.
An important advantage of this formulation is that it
allows one to describe in a simple way a structure of the global
higher-spin symmetries. These symmetries are described by the
parameters which take values in the infinite-dimensional
algebra of functions of all generating elements $y_\alpha$, $k$
and $\psi$, i.e.
$\varepsilon=\varepsilon(y_\alpha,k,\psi |x)$. The full
transformation law has the form
\begin{equation}
\label{trans}
\delta C = \varepsilon C - C\tilde\varepsilon\,,
\end{equation}
where
\begin{equation}
\tilde\varepsilon(y_\alpha,k,\psi |x)=\varepsilon(y_\alpha,k,-\psi |x)
\end{equation}
and the dependence of
$\varepsilon $ on $x$ is fixed by the equation
\begin{equation}
d \varepsilon=W_{gr} \varepsilon - \varepsilon W_{gr} \,,
\end{equation}
which is integrable as a consequence of the zero-curvature
conditions (\ref{va}) and
therefore admits a unique solution in terms of an arbitrary
function $\varepsilon_0 (y_\alpha,k,\psi)=\varepsilon
(y_\alpha,k,\psi|x_0)$ for an arbitrary point of space-time
$x_0$. It is obvious that the equations (\ref{D hatC})
are indeed invariant with respect to the transformations
(\ref{trans}).
Explicit knowledge of the structure of the global higher-spin symmetry
is one of the main results obtained in this paper. In \cite{Fut}
it will serve as a starting point for the analysis of higher-spin
interactions of matter fields in 2+1 dimension. An interesting feature of higher-spin
symmetries demonstrated in this paper
is that their form depends on a particular dynamical system under
consideration. Indeed, the higher-spin algebras with
different $M^2 (\nu )$ are pairwise non-isomorphic.
This is obvious from the identification of the higher-spin symmetries with
certain factor-algebras of
the enveloping algebras of space-time symmetry algebras
along the lines of Section 4. Ordinary space-time symmetries, in their turn,
can be identified with (maximal)
finite-dimensional subalgebras of the higher-spin algebras which do not
depend on the dynamical parameters like $\nu$ (cf (\ref{y mod})).
The infinite-dimensional algebras isomorphic to
those considered in Section 4 were originally introduced
in \cite{BBS,H}
as candidates for 3d bosonic higher-spin algebras, while the
superalgebras of deformed oscillators described in Section 3
were suggested in \cite{Quant} as candidates for 3d higher-spin
superalgebras. Using all these algebras
and the definition of supertrace given in \cite{Quant}
it was possible to write
a Chern-Simons action for the 3d higher-spin gauge fields which
are all dynamically trivial in the absence of matter fields
(in a topologically trivial situation). Originally this was done
by Blencowe \cite{bl} for the case of the Heisenberg algebra (i.e. $\nu =0$).
It was not clear, however, what the physical meaning is of the
ambiguity in a continuous parameter like $\nu$ parametrizing
pairwise non-isomorphic 3d higher-spin algebras. In this paper
we have shown that different symmetries are realized on different
matter multiplets, thus concluding that higher-spin symmetries
turn out to be dependent on a particular physical model
under consideration.
\section*{Acknowledgements}
The research described in this article
was supported in part by the European Community
Commission under the contract INTAS, Grant No.94-2317 and by the
Russian Foundation for Basic Research, Grant No.96-01-01144.
\section{Introduction.}
The derivation of the thermal properties of a black hole is typically
carried out in the context of the
Euclidean path integral approach, as initiated by
Gibbons and Hawking \cite{GibHawk77}. In this language, which is
manifestly geometrical,
the black hole partition function is identified with the Euclidean path
integral, with the integration over four-metrics playing the role of the
configurational sum, and the Hamiltonian, a function of the metric and
curvature, taken to be that of the gravitational system.
A black hole of mass $M$
has a temperature $T_{\infty} ={\hbar}/{(8 \pi M)}$ measured at large
distances from the hole. Following and extending this approach, York
demonstrated that the canonical ensemble with elements of radius $r$ and
temperature $T(r)$ for hot gravity with black holes is well-defined
\cite{York86}.
That is, one treats a collection of spherical cavities of radius $r$
with a temperature $T$ at $r$. These cavities may contain either, no black
hole, or one of two physically distinct black holes
depending on the value of the product
$rT$. In the case when the two distinct solutions pertains, only one of
them will correspond to a thermodynamically stable black hole. This
ensemble resolves a number of difficulties in assessing the physical
significance of the classical black hole action in its
contribution to the Euclidean
path integral.
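In conventional units the temperature reads $T_{\infty}=\hbar c^3/(8\pi G M k_B)$, and its extreme smallness for astrophysical masses is easy to quantify; a short numerical sketch (Python, with standard SI constant values quoted to a few digits):

```python
import math

# physical constants (SI units, approximate standard values)
hbar = 1.054572e-34      # J s
c    = 2.997925e8        # m / s
G    = 6.674e-11         # m^3 kg^-1 s^-2
kB   = 1.380649e-23      # J / K
Msun = 1.989e30          # kg, solar mass

def hawking_temperature(M_kg):
    """T_infinity = hbar c^3 / (8 pi G M k_B) for a Schwarzschild hole of mass M (kg)."""
    return hbar*c**3/(8*math.pi*G*M_kg*kB)

T = hawking_temperature(Msun)
assert 6.0e-8 < T < 6.4e-8   # a solar-mass hole sits at roughly 6e-8 K
```

The inverse proportionality $T_\infty\propto 1/M$ is the root of the negative heat capacity that makes the finite cavity, with its local temperature $T(r)$, essential for a well-defined canonical ensemble.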
However, the reasons for considering the implantation
of a black hole in a finite cavity, or ``box'', go well beyond the resolution
of these initial difficulties and in fact the spatial cut-off provided by the
cavity has been recognized as being crucial for making sense of black hole
thermodynamics in general, quite independent of the path integral
approach. For example, when one comes to consider the back-reaction of the
radiation on the spacetime geometry of the black hole, the system comprised
of black hole plus radiation must be put into a finite cavity, lest the
radiation in a spatial region that is too large collapses onto the hole,
thereby producing a larger one \cite{York85}.
Related to this (but much more
general) is the fact that the usual thermodynamic limit in which one invokes
the infinite system volume limit does not exist for equilibrium,
self-gravitating systems
at a finite temperature. This follows since the system is unstable to
gravitational collapse, or recollapse, if a black hole is already present.
This, in practice, presents no problem since
physically, one only requires that the system
can in {\em principle} become
sufficiently large so that fluctuations become negligible. This peculiarity
of gravitational thermodynamics will play an important role in the present
paper.
While the Euclidean path integral approach is well-defined and allows one
to obtain the same value of the entropy as required by the thermodynamic
analogy, the Bekenstein-Hawking entropy, it does not shed any light on the
so-called dynamic origin of the entropy, nor does it explain the
``microphysics'' giving rise
to the macroscopic thermodynamic functions, such
as the internal energy, heat capacity, the equation-of-state, etc., that
characterize the black hole. This state of affairs has spawned numerous
efforts to understand the dynamical, or statistical mechanical, origins of
black hole thermodynamics with particular emphasis paid to an explanation
of the dynamical origin of
black hole entropy as for example in Refs. \cite{York83} and
\cite{Zurek}-\cite{Barvinsky}.
In contrast to on-going efforts devoted to identifying the black hole's
fundamental degrees of
freedom, we wish to take a model-oriented approach and promote
a phenomenological {\em analogy}
between black holes and liquids. The analogy will be established at the level
of thermodynamics.
In the present paper, we seek what might
be termed an effective {\em atomic} picture of black hole
thermodynamics. By this we mean that it may be possible to reproduce (some)
black hole thermodynamics in terms of microscopic properties of
an interacting fluid or
gas. The components of this analogue fluid are massive point particles
interacting mutually via a pairwise additive potential. If such a
correspondence is possible, we will have effected a mapping between the
inherently geometrical degrees of freedom part and parcel of the Euclidean
approach, and the ``atomic'' variables actually
appearing in standard partition
functions for fluids. The geometric quantities include
metric,
curvature, manifolds and boundaries. The so-called atomic
quantities include the
particles, their momenta and positions, and their interaction
potentials. The correspondence is established via the black hole's
thermodynamic functions, as derived from the standard Euclidean
path integral approach, using these as given input. The task is then to
characterize a liquid or (dense) gas whose microscopic properties (as encoded
for example by the
potential, pressure, pair-correlation function)
can be ``tuned'', or adjusted suitably, so as
to reproduce mathematically the same set of black hole state functions.
If the program so described is successful, then one has in effect, replaced
integration over metrics by an integration over a multi-particle classical
phase space while the gravitational action is replaced by a particle
Hamiltonian, containing a (non-relativistic) kinetic energy and potential
energy term.
In the next Section we review and comment on the essential features of
black hole thermodynamics as derived from the Euclidean path integral in
the saddle point approximation. The black hole energy, entropy, equation of
state and compressibility are displayed and their qualitative features are
revealed through various limits and graphical representations. The way in which
we establish a connection between liquids and black holes is taken up in
Section III. The key in building a ``liquid'' model
is provided by the fundamental
equations employed in the study of the atomic dynamics
of simple liquids and the
atomic picture of the thermodynamics of the liquid state. These fundamental
relations
equate the macroscopic (thermodynamic) to the microscopic (internal structure,
potential energy) properties of fluids. These relations are derived as rigorous
consequences of statistical mechanics applied to fluids and (dense) gases.
A particular type of fluid is singled
out the moment we identify the macroscopic variables of the fluid
with those of the
black hole. The points of contact between the black hole and fluid are set up
via their respective internal energies and compressibilities. The analog fluid is
identified to the extent that we can write down its pair-potential and
two-body correlation function.
The necessarily bounded spatial extent of the black hole ensemble is crucial
in allowing us to solve for the liquid's microscopic parameters exactly.
These are calculated in closed form as well as presented graphically.
The ultimate purpose of establishing such a mapping is the double benefit
to be gained in being able to relate black hole
physics to the molecular dynamics of
fluids. Recent work of a similar spirit includes the possible
correspondence between black holes and quantized vortices
in superfluids \cite{Volovik}
and a connection between fluid surface tension and black hole
entropy \cite{Callaway}.
A summary is given in Section IV. Absolute units $(G=c=\hbar=k_B=1)$ are used
throughout except where restoration of conventional units may be helpful.
\section{Black hole thermodynamics in brief.}
In deriving gravitational thermodynamics from an Euclidean path integral
\begin{equation}
Z(\beta) = \int d{\mu}[g,\phi]\, e^{-I[g,\phi]} = e^{-\beta F},
\end{equation}
one expects the dominant contribution to the canonical partition function to
come from those metrics $g$ and matter
fields $\phi$ that are near a background metric
$g^{(0)}$ and background field $\phi^{(0)}$, respectively. These background fields are obtained from
solutions of the classical field equations. The classical contribution
to $Z$ is obtained by evaluating $Z$ at
the point $(g^{(0)}, \phi^{(0)})$ in which
case one obtains the familiar relation
\begin{equation}
\beta F = I[g^{(0)}, \phi^{(0)}] = I[g^{(0)}],
\end{equation}
where we have taken $\phi^{(0)} = 0$ in the last equality. This provides
the free energy $F$ of the gravitational system
in the saddle-point approximation.
The action $I$ is the first-order Euclidean Einstein action including a
subtraction term necessary to avoid runaway solutions.
The action
appropriate to a black hole in a spherical cavity of radius $r$ is given by
\cite{York86}
\begin{equation}
I = I_1 - I_{subtract},
\end{equation}
where
\begin{equation}
I_1 = -\frac{1}{16 \pi}\int^{r}_{2M} \int^{\beta_*}_0 d^4x \, \sqrt{g}\, R
+ \frac{1}{8 \pi} \oint_{S^1 \times S^2} d^3x \, \sqrt{\gamma}\, Tr({\cal K}).
\end{equation}
The action receives in general both volume and boundary contributions.
The Euclidean four-space metric is
\begin{equation}
g_{\mu \nu} = {\rm diag} \left((1-\frac{2M}{r}),(1-\frac{2M}{r})^{-1},
r^2, r^2 \sin^2 \theta \right).
\end{equation}
For this metric, the volume contribution vanishes identically.
The boundary at $r = const.$ is the product of $S^1 \times S^2$ of the
periodically identified Euclidean time with the two-sphere of area
$A = 4\pi r^2$. The period of Euclidean time, identified with the
$S^1$-coordinate, is
$\beta_* = 8\pi M$. The trace of the boundary extrinsic curvature is
denoted by $Tr({\cal K})$ and $\gamma$ is the induced 3-metric on the
boundary. Finally, $I_{subtract}$ is $I_1$ evaluated on a flat spacetime
having the same boundary $S^1 \times S^2$.
It is important to remember that for the canonical ensemble
the mass parameter $M$ appearing in
these formulae is not simply a
constant but is instead a specific function of
the cavity radius $r \geq 0$ and the cavity wall
temperature $T(r)\geq 0$ \cite{York85,York86}. This can
be verified by inverting the expression for the blue-shifted temperature in
equation
(11) below and solving for $M = M(r,T)$. The relation so obtained is a
cubic equation in $M$.
When $rT < \sqrt{27}/(8 \pi)=(rT)_{min} \approx 0.207 $, there are no real
solutions of this equation.
On the other hand, when $rT \geq
(rT)_{min} $, there exist two real and non--negative
branches given by
\begin{eqnarray}
M_2(r,T) &=& \frac{r}{6} \left[1 + 2 \cos (\frac{\alpha}{3}) \right],\\
M_1(r,T) &=& \frac{r}{6} \left[1 - 2 \cos (\frac{\alpha + \pi}{3}) \right],\\
\cos (\alpha) & = & 1 - \frac{27}{32 \pi^2 r^2 T^2}, \\
0 & \leq & \alpha \leq \pi.
\end{eqnarray}
This shows that the Schwarzschild mass is in fact
double-valued in the canonical ensemble.
One has that $M_2 \geq M_1$, with equality holding at $rT = (rT)_{min}$.
The heavier mass branch, $M_2$, is the thermodynamically
stable solution because it leads to the lowest free energy, Eq. (2), and is the
one we shall be considering in the remainder of this work.
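The double-valued mass function is easy to check numerically. The following
Python sketch (our own illustrative code, not part of the derivation; the
function names are ours, and absolute units $G=c=\hbar=k_B=1$ are assumed)
inverts the blue-shift relation, Eq. (11) below, and verifies that both
branches reproduce the prescribed wall temperature:

```python
import math

RT_MIN = math.sqrt(27) / (8 * math.pi)   # (rT)_min ~ 0.2067

def mass_branches(r, T):
    """Solve the cubic obtained from beta = 8*pi*M*sqrt(1 - 2M/r)
    for M; returns (M1, M2), or None below (rT)_min."""
    x = r * T
    if x < RT_MIN:
        return None  # no real solutions in this regime
    cos_alpha = 1 - 27 / (32 * math.pi**2 * x**2)
    alpha = math.acos(max(-1.0, min(1.0, cos_alpha)))
    M2 = (r / 6) * (1 + 2 * math.cos(alpha / 3))
    M1 = (r / 6) * (1 - 2 * math.cos((alpha + math.pi) / 3))
    return M1, M2

def wall_temperature(M, r):
    """Blue-shifted cavity-wall temperature T = 1/beta, Eq. (11)."""
    return 1.0 / (8 * math.pi * M * math.sqrt(1 - 2 * M / r))
```

One also confirms that the stable branch satisfies $r/3 \leq M_2 \leq r/2$,
in accord with the horizon/photon-orbit bounds discussed below.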
Calculating the action $I$ from (3) and (4) yields $(I_{subtract} = -\beta r)$
\begin{equation}
I = 12 \pi M^2 - 8 \pi M r + \beta r,
\end{equation}
where
\begin{equation}
\beta = T^{-1}(r) = 8\pi M \left( 1 - \frac{2M}{r} \right)^{1/2},
\end{equation}
is the inverse local temperature; it also equals the proper length of the $S^1$
component of the boundary.
Employing $I$ and the saddle-point approximation $\beta F = I$, it is a
straightforward exercise to calculate the thermodynamic state functions
associated with a black hole in the canonical ensemble. In so doing, it is
useful to note the following two identities
\begin{eqnarray}
\left( \frac{\partial M}{\partial r} \right)_{\beta} & = & -
\frac{ \frac{M^2}{r^2} } { (1 - \frac{3M}{r}) }, \\
\left( \frac{\partial M}{\partial \beta} \right)_{A} & = & \frac{1}{8 \pi}
\frac{ (1 - \frac{2M}{r})^{1/2} }{ (1 - \frac{3M}{r}) },
\end{eqnarray}
which may be deduced from the expression for the inverse
local temperature (11).
The black hole's internal, or thermal, energy is
\begin{equation}
E = -\left( \frac{\partial \ln Z}{\partial \beta} \right)_{A} =
\left( \frac{\partial I}{\partial \beta} \right)_{A} = r -
r \left(1 - \frac{2M}{r} \right)^{1/2}.
\end{equation}
The entropy $S$ is
\begin{equation}
S = \beta \left( \frac{\partial I}{\partial \beta} \right)_{A} - I =
4\pi M^2 ,
\end{equation}
while the surface pressure $\sigma$ is
\begin{equation}
\sigma = -\left( \frac{\partial F}{\partial A} \right)_{T} =
\frac{1}{8\pi r} \left[ \frac{\left(1 - \frac{M}{r}\right)}
{\left(1 - \frac{2M}{r}\right)^{1/2}}
- 1 \right].
\end{equation}
Another quantity of special interest in what is to follow, is the black hole
isothermal compressibility, $\kappa_T (A)$, which again can be calculated
using the standard prescription,
\begin{eqnarray}
\kappa_T (A)& = & -\frac{1}{A} \left( \frac{\partial A}{\partial \sigma}
\right)_T = 16\pi r \left(\frac{r}{M}\right)^3
\left(1 - \frac{3M}{r} \right)\left(1-\frac{2M}{r}\right)^{3/2}
/ \nonumber \\
\Bigl\{ 1 & + & \left( \frac{r}{M} \right)^3 \left( \frac{3M}{r}-1 \right)
[ \left(1-\frac{2M}{r}\right)^{3/2} - 1
+ \frac{3M}{r} - \frac{3M^2}{r^2} ] \Bigr\}.
\end{eqnarray}
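The state functions (14)-(17) are straightforward to evaluate on the stable
branch. The sketch below (ours, for illustration only; $G=c=\hbar=k_B=1$) is
convenient for reproducing the limiting behaviors and figures discussed next;
for instance, at large $rT$ it recovers $E \to r - 1/(4\pi T)$,
$\sigma \to T/4 - 1/(8\pi r)$ and $\kappa_T \to -16\pi r$:

```python
import math

def stable_mass(r, T):
    # heavier branch M2 of the cubic blue-shift relation
    cos_a = 1 - 27 / (32 * math.pi**2 * (r * T)**2)
    return (r / 6) * (1 + 2 * math.cos(math.acos(cos_a) / 3))

def state_functions(r, T):
    """Internal energy, entropy, surface pressure and isothermal
    compressibility of Eqs. (14)-(17)."""
    M = stable_mass(r, T)
    u = M / r                 # always between 1/3 and 1/2
    f = 1 - 2 * u             # metric function at the cavity wall
    E = r - r * math.sqrt(f)
    S = 4 * math.pi * M**2
    sigma = (1 / (8 * math.pi * r)) * ((1 - u) / math.sqrt(f) - 1)
    num = 16 * math.pi * r * (1 - 3 * u) * f**1.5 / u**3
    den = 1 + ((3 * u - 1) / u**3) * (f**1.5 - 1 + 3 * u - 3 * u**2)
    return E, S, sigma, num / den
```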
Although at face
value these functions appear to have a complicated dependence on $r$ and $T$,
they are actually quite simple, owing to their dependence on the
slowly varying ratio $M/r$. To gain some insight into the
behavior of these functions, it
is useful to examine some limiting cases, namely when $ (i)\, rT \rightarrow
\infty$ and when $(ii)\, rT \rightarrow (rT)_{min}$. The limit $(i)$ is understood to mean either that $r \rightarrow \infty$ or $T \rightarrow \infty$, or
both, simultaneously. The second limit $(ii)$ actually
defines a hyperbola in the
$r-T$ plane along which the two independent limits
$r \rightarrow 0 \,(T \rightarrow \infty)$
or $r \rightarrow \infty \, (T \rightarrow 0)$ can be taken.
The mass function takes the
form $M(r,T) \approx \frac{r}{2}(1 - \frac{1}{16\pi^2 r^2T^2})$, and
$M(r,T) = \frac{r}{3}$, respectively. Physically, these limits indicate that
for all allowed values of $r$ and $T$, the cavity wall radius always lies
between the black hole's event horizon $(r = 2M)$ and the unstable circular
photon orbit $(r = 3M)$.
The behavior of the black hole
internal energy with respect to these limits is
\begin{equation}
(i) \qquad E \rightarrow r - \frac{1}{(4\pi T)},
\end{equation}
and
\begin{equation}
(ii) \qquad E = \frac{\sqrt{3} -1}{\sqrt{3}} r.
\end{equation}
$E$ is essentially a positive linear function of $r$, depending only
very weakly on the temperature for large values of $rT$. For $rT = (rT)_{min}$,
$E$ is strictly linear in $r$, or inversely proportional to $T$,
since of course, in
this latter limit, $r \sim 1/T$. Note that on this hyperbola,
$E \rightarrow 0$ for
$r \rightarrow 0$ (or $T \rightarrow \infty$). The equation for the
surface pressure is also an equation of state for the black hole
pressure since it is expressed as a function of
the cavity radius $r$, which gives a measure of the system
size, and the boundary temperature $T$: $\sigma = \sigma(r,T)$.
Using the limiting forms of the mass function, one can show that
the asymptotic limit of the surface pressure is
\begin{equation}
(i) \qquad \sigma \rightarrow \frac{T}{4} - \frac{1}{8\pi r},
\end{equation}
for $rT \rightarrow \infty$, so that the pressure increases with the
temperature, depending only very weakly on the system size.
When evaluated along the limit hyperbola, one obtains
\begin{equation}
(ii) \qquad \sigma = \left( \frac{2\sqrt{3}}{3} -1 \right)
\frac{1}{8\pi r};
\end{equation}
in other words, in this regime, the pressure increases as the cavity
radius (or area) decreases. Because of the reciprocal relation between $r$ and
$T$ along the hyperbola, this is equivalent to increasing pressure with
increasing temperature. Such qualitative behavior is familiar from the
ideal gas. Finally, the limiting forms of the isothermal compressibility
are
\begin{equation}
(i) \qquad \kappa_T(A) \rightarrow -16\pi r < 0,
\end{equation}
and
\begin{equation}
(ii) \qquad \kappa_T(A) = 0.
\end{equation}
These two latter limits deserve some special comment. First, note that
the black hole isothermal compressibility is generally negative. This
is an unfamiliar property compared with
conventional thermodynamic systems. Indeed,
standard textbook arguments prove that $\kappa_T \geq 0$, irrespective of
the nature of the substance comprising the system. However, a key step
in those proofs assumes that quantities such as the temperature and pressure
are {\em intensive}, that is, independent of the size of the system and
constant throughout its interior. Such is most emphatically
$not$ the case for
gravitating systems such as black holes, where in fact, the temperature
and pressure are not intensive quantities
but are instead scale dependent.
An equilibrium self-gravitating
object does not have a spatially constant temperature. This is a
consequence of the principle of equivalence which implies that temperature
is red- or blue-shifted in the same manner as the frequency of photons in a
gravitational field. Secondly, for
values $rT = (rT)_{min}$, the compressibility vanishes identically.
This qualitative behavior is familiar from the classical picture of a solid
at $T=0$ (no density fluctuations $\Longrightarrow$ zero compressibility).
In the Figures 1-3, we have plotted $E,\sigma$ and $\kappa_T$ in the
$r-T$ plane subject to the condition $rT \geq (rT)_{min}$. As indicated in
Fig. 1, the black hole energy is a positive increasing function in $r$ and is
fairly insensitive to changes in the temperature for values of $T \geq 0.4$.
The flat region at zero level corresponds to the locus of excluded
points satisfying
$rT < (rT)_{min}$ and is therefore not to be considered as part of the graph
as such.
The cavity wall surface pressure, shown in Fig. 2, is a
positive increasing function of $T$ and varies slowly in $r$ for $r \geq 0.5$.
The flat null region represents the same locus of points
as in the prior graph. Finally,
the black hole isothermal compressibility is a
{\em negative definite function} for
all $rT > (rT)_{min}$, decreasing for increasing $r$ and relatively constant
with respect to changes in the temperature.
Other functions that may be calculated include the specific heats at
constant area and at constant pressure, respectively, as well as the
adiabatic compressibility, but these are of no direct interest for the
present consideration.
Finally, to complete this brief overview of black hole thermodynamics, we
need to identify the effective spatial dimension of the system. It will not
have escaped the reader's attention that the above functions have been
defined and calculated in terms of the cavity wall area $A$, rather than in
terms of the cavity volume. Spatial volume is not well defined in the
presence of a black hole, whereas the area is, and it is the latter which
provides the correct means for measuring the size of the system \cite{York86}.
That the wall area is the proper extensive variable to use is confirmed by
considering the black hole's thermodynamic identity. The explicit
calculation of the following partial derivatives
\begin{equation}
\left( \frac{\partial E}{\partial S} \right)_A = \frac{1}{8\pi M}
\left( \frac{\partial E}{\partial M} \right)_A = \frac{1}{8\pi M}\left(1 -
\frac{2M}{r}\right)^{-1/2} \equiv T,
\end{equation}
and
\begin{equation}
\left( \frac{\partial E}{\partial A} \right)_S = \frac{1}{8\pi r}
\left( \frac{\partial E}{\partial r} \right)_M = \frac{1}{8\pi r}
\left[ 1 -\frac{\left( 1 - \frac{M}{r} \right)}
{ \left(1 -\frac{2M}{r} \right)^{1/2} }
\right] \equiv -\sigma
\end{equation}
proves that
\begin{equation}
dE = T\, dS - \sigma \, dA
\end{equation}
is an exact differential. In other words,
the energy when expressed in terms of its proper
independent variables, $E=E(S,A)$, is integrable.
We remark that all the above functions may be considered as functions either
of the cavity radius or the wall area, since obviously, $r = \sqrt{A/(4\pi)}$
and $dA = 8\pi r dr$.
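As an independent numerical sanity check (again our own sketch, not the
paper's calculation), the exactness of the differential $dE = T\,dS - \sigma\,dA$
can be verified by finite differences along a constant-$T$ displacement of the
cavity wall, on the stable branch:

```python
import math

def EsA(r, T):
    # E(r,T), S(r,T), sigma(r,T) and A(r): Eqs. (6), (14)-(16)
    cos_a = 1 - 27 / (32 * math.pi**2 * (r * T)**2)
    M = (r / 6) * (1 + 2 * math.cos(math.acos(cos_a) / 3))
    f = math.sqrt(1 - 2 * M / r)
    E = r - r * f
    S = 4 * math.pi * M**2
    sigma = (1 / (8 * math.pi * r)) * ((1 - M / r) / f - 1)
    return E, S, sigma, 4 * math.pi * r**2

def first_law_residual(r, T, h=1e-6):
    """dE - T dS + sigma dA along a constant-T displacement of the wall;
    this vanishes (to truncation error) iff dE = T dS - sigma dA is exact."""
    Em, Sm, _, Am = EsA(r - h, T)
    Ep, Sp, _, Ap = EsA(r + h, T)
    _, _, sigma, _ = EsA(r, T)
    return (Ep - Em) - T * (Sp - Sm) + sigma * (Ap - Am)
```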
\section{A Liquid Model for Black Holes.}
The starting point for attempting to model black hole thermodynamics in
terms of liquids is the statistical mechanical treatment of fluids. There,
it is known how to relate the various macroscopic
thermodynamic properties of liquids (energy, pressure, temperature)
to the
internal, microscopic features such as the interaction/intermolecular
potential and the pair-correlation function. This latter function provides a
measure of the local structure of the fluid or gas.
The typical approach to the
study of the liquid state starts with (a perhaps imperfect) knowledge of the
interatomic force law and the measured short range order (obtained
experimentally via X-ray or neutron scattering experiments).
One then attempts to
infer the macroscopic or thermodynamic behavior of the liquid on the basis
of this microscopic information. By marked contrast, here we shall turn the
reasoning around and solve for the local ``structure'' and the ``interatomic''
potential of an analog (and possibly fictitious!)
fluid from explicit knowledge of black hole
thermodynamics. Two very important ingredients which will allow this
inverted procedure to be carried out in closed form are (i) the fact that
the thermal ensemble of black holes is spatially bounded and (ii) the fact
that we know the spatial dimension of this ensemble to be $d=2$.
The model fluid we shall deduce will be described in classical
terms. Let us use our knowledge of the spatial dimensionality of the black
hole ensemble at the outset.
To make a thermodynamic correspondence between black hole and fluid
means we seek a two-dimensional fluid whose partition function
over the $4N$-dimensional phase space is given
by (restoring the dependence on $\hbar$)
\begin{equation}
Z_N(\beta) = \frac{1}{(2\pi \hbar )^{2N} N !}
\int d\{p_i\} \, \int d\{r_i\} \, e^{-{\cal E}/kT}
\end{equation}
and where the total, nonrelativistic energy of a system of $N$ interacting point
particles of mass $m$ in $d=2$ is
\begin{equation}
{\cal E}(\{p_i\}, \{r_i\}) = \sum_{i=1}^{N} \frac{p_i^2}{2m} +
U(\{r_i\}).
\end{equation}
Here, $U(\{r_i\})$ is the potential energy of the particle system
which we assume to be
pairwise additive:
\begin{equation}
U = \frac{1}{2} \sum^{N}_{i \neq j} \phi(|{\bf r}_i - {\bf r}_j|).
\end{equation}
This is always a reasonable assumption provided the fluid constituents have
no internal structure that couples to the potential.
{}From the theory of liquids, it is well known that an equation of state for the
isothermal compressibility $\kappa_T$ and the internal energy $E$
can be calculated
in terms of the
pair-potential $\phi$ and the pair-correlation function $g$
\cite{Goodstein}--\cite{Ishihara},
to wit (restoring the dependence on $k_B$),
\begin{equation}
(a): \qquad
\rho \, k_BT \kappa_T = \rho \int_{system} d^2{\bf \tilde r}\,
\left[ g(\tilde r) -1\right ] + 1,
\end{equation}
and
\begin{equation}
(b): \qquad
E = Nk_BT + \frac{1}{2}N \rho \, \int_{system} d^2{\bf \tilde r}\,\,
\phi(\tilde r) g(\tilde r),
\end{equation}
where $\rho = \frac{N}{A}$ is the two-dimensional particle density, $T$ is
the fluid temperature, and
the radial distribution function $g$
is defined via
\begin{equation}
\rho \, g(r) = \frac{1}{N}\langle \sum_{i \neq j} \delta ({\bf r} + {\bf r}_i
- {\bf r}_j ) \rangle,
\end{equation}
where the angular brackets denote the average computed using the grand
canonical ensemble.
Before we go on to use these relations, we remark that the
equation of state ($a$) is exact while the expression for the internal energy ($b$)
makes use of the pairwise summability of the total potential energy.
They are valid for any single-component,
monatomic system in
thermodynamic equilibrium (gas, liquid or solid)
whose energy is expressible in the form (28)
with a pairwise additive potential (29).
Although these expressions
are derived primarily for their application to the
liquid state, they can also be applied to the study of solids. The
only modification would be that $g$ and $\phi$ depend on the full vector
coordinate ${\bf r}$
(magnitude and direction). For liquids, however, the results are
isotropic so it is enough to write $g(r)$ and $\phi(r)$. For the
present consideration, modelling a liquid which is capable of
reproducing certain aspects of black hole thermodynamics is carried
out once we identify the $\kappa_T$ and $E$ in ($a$) and ($b$) with those
of the black hole.
The idea of representing a black hole at finite temperature
as a thermal fluid is novel and therefore deserves careful explanation.
A black hole is but one example of a thermodynamic system having, among other
things, a well-defined temperature, energy and compressibility. On the other
hand, any equilibrium many-body system with hamiltonian given in Eq.(28)
and Eq.(29) has a well-defined energy and compressibility which can be
calculated in terms of an associated $g$ and $\phi$.
When we {\it formally} identify
the $\kappa_T$ and $E$ appearing there with those belonging to the black
hole, we are simply demanding that these particular thermodynamic functions be
reproducible in terms of the internal variables of a certain classical
many-body system. This is not to say that these variables actually represent the true degrees of freedom of a black hole.
The identification must be carried out in a consistent way.
First, the temperature $T$ appearing in (30) and (31) is the uniform cavity
wall temperature. Since ($a$) and ($b$) are to describe a liquid, the temperature
of that liquid must be identified with this temperature: $T_{liquid} = T$.
Note that the temperature of the liquid is {\em intensive}. That is, the
temperature of the cavity wall of the black hole ensemble is
identified with the temperature of the bulk fluid.
Next, the density $\rho$ of the
fluid is simply the number of ``atoms'' per unit area of fluid. For the
black hole, both $E$ and $\kappa_T$ depend
explicitly on the cavity radius $r$, reflecting the
fact that the black hole ensemble is spatially finite. This means that the
integrations in ($a$) and ($b$) are to be carried out over
a fluid of bounded spatial extent.
The integrations over the $system$ are bounded.
Since the integrands are
functions only of $r$ and $T$, we can therefore write
\begin{equation}
\int_{system} d^2{\bf r} = \int_0^r { r}\,
d{ r}\int^{2\pi}_0
d\theta.
\end{equation}
It is natural to take the length scale of the liquid coincident with that of
the cavity containing the black hole; any other choice would introduce
a second, and arbitrary, length scale into the problem.
Taking $E$ and $\kappa_T$ from (14) and (17) as input, the
relations ($a$) and ($b$)
yield two equations in the two unknowns
$g$ and $\phi$.
We can easily solve for these microscopic functions in terms of
the macroscopic functions and their first derivatives. To do so, we
make use of (33) and differentiate both sides of the relations in
($a$) and ($b$) with respect to $r$. The results of this operation are that
\begin{equation}
\phi(r,T) g(r,T) = \frac{4r}{N^2}\left[\left(\frac{\partial E}{\partial r}
\right)_T
+ \frac{2}{r} \left( E - Nk_BT \right) \right],
\end{equation}
and
\begin{equation}
g(r,T) - 1 = \frac{2r}{N} \left[ k_BT \left(
\frac{\partial \rho \kappa_T}{\partial r} \right)_T + \frac{2}{r} \left(
\rho k_B T \kappa_T - 1 \right) \right].
\end{equation}
By explicit construction,
these give the pair correlation function and the inter-particle
potential of the model fluid whose energy and isothermal
compressibility are identical with those of the black hole.
Moreover, these two functions depend on the two \underline{independent}
variables $r$ and $T$. Since we can vary them independently, we have actually
obtained $\phi$ and $g$ as functions of their arguments for all $T \geq 0$ and $r \geq 0$,
subject only to the constraint that the product $rT$ always be greater than
or equal to $(rT)_{min}$.
By way of a
trivial but illustrative example, consider the
ideal gas in two-dimensions whose equation of
state is $pA = Nk_B T$. Then $E = Nk_BT$ and $\kappa_T
= -\frac{1}{A} (\frac{\partial p}{\partial A})_T =
1/(\rho k_BT)$.
Inserting these into the above relations
immediately yields $g(r) = 1$ and
$\phi(r) = 0$, which is also a solution of the pair of
equations (30) and (31).
As is to be expected, the ideal gas has no
structure (it is uniform: homogeneous and isotropic) and lacks
interatomic interactions (by definition). Therefore, any deviation in either
$g$ or $\phi$ with respect to these limits may be considered as a deviation
from an ideal gas.
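The ideal-gas check can also be carried out numerically. The sketch below
(ours; it assumes $\rho = N/A$ with $A = 4\pi r^2$ and $k_B = 1$) feeds
$E = Nk_BT$ and $\kappa_T = 1/(\rho k_BT)$ into the inverted relations and
recovers the structureless, non-interacting answer:

```python
import math

def g_and_phi_generic(E, kappa, r, T, N, h=1e-6):
    """Invert relations (a)/(b), i.e. Eqs. (34)-(35), for arbitrary input
    state functions E(r, T) and kappa_T(r, T).  Returns (g, phi*g)."""
    rho = lambda rr: N / (4 * math.pi * rr**2)
    rk = lambda rr: rho(rr) * kappa(rr, T)
    d_rk = (rk(r + h) - rk(r - h)) / (2 * h)       # d(rho*kappa_T)/dr
    g = 1 + (2 * r / N) * (T * d_rk + (2 / r) * (rho(r) * T * kappa(r, T) - 1))
    dE = (E(r + h, T) - E(r - h, T)) / (2 * h)
    phi_g = (4 * r / N**2) * (dE + (2 / r) * (E(r, T) - N * T))
    return g, phi_g

# two-dimensional ideal gas: E = N*k_B*T, kappa_T = 1/(rho*k_B*T)
N, T, r = 100, 0.7, 3.0
E_ig = lambda rr, TT: N * TT
kappa_ig = lambda rr, TT: (4 * math.pi * rr**2) / (N * TT)
g, phi_g = g_and_phi_generic(E_ig, kappa_ig, r, T, N)
# g -> 1 and phi*g -> 0: no structure, no interactions
```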
It is of interest to consider the limiting forms of the pair correlation
function and potential energy for the black hole; these may be deduced
easily from the associated
limits calculated above for $E$ and $\kappa_T$, Eqs.(18,19) and Eqs.(22,23).
When $rT >> (rT)_{min}$, the pair correlation function goes as
\begin{equation}
(i) \qquad g(r,T) \sim 1 - \frac{8k_B T}{r} - \frac{4}{N}.
\end{equation}
In particular, for fixed temperature, $g(r,T) \rightarrow 1 - \delta$, as
$r \rightarrow \infty$
where $\delta = 4/N$ is small for $N$ large.
In normal simple liquids, $g(r)$ has
the asymptotic limit $g(r) \rightarrow 1$ (compare to the ideal gas limit) and
deviations from this value represent molecular correlations
(or anti-correlations). When evaluated along the boundary
hyperbola $rT = (rT)_{min}$ we get,
\begin{equation}
(ii) \qquad g(r,T) = 1 - \frac{4}{N},
\end{equation}
a constant independent of $r$ and $T$. The corresponding
limits for the two-body potential energy may be worked out and yield
\begin{equation}
(i) \qquad \phi(r,T) \sim \frac{4r}{N^2}\left(3 - \frac{2Nk_BT}{r} \right)/
\left(1 - \frac{8k_B T}{r} - \frac{4}{N} \right).
\end{equation}
For fixed temperature, $\phi \sim r$. When $rT = (rT)_{min}$,
\begin{equation}
(ii) \qquad \phi(r,T) = \frac{4r}{N^2} \left[(3 - \sqrt{3}) - \frac{2Nk_BT}{r}
\right]/\left(1 - \frac{4}{N}\right).
\end{equation}
The black hole pair correlation function is calculated and presented in Fig. 4.
For fixed $T$ and small $r$, this function is negative, then increases,
becoming positive and approaches unity from below as $r \rightarrow \infty$.
This behavior is also revealed in the one dimensional plot of $g(r,T)$ for
the value $T = 0.5$ in Fig. 5. What can we make of this
behavior in $g$ and what physical interpretation can it admit? For this, let us
turn to the meaning of $g(r)$. Imagine we select a particular particle of the
fluid, whose average density is $\rho$,
and fix our origin at that point. Then, the number of particles $dN$
contained within the (two-dimensional) spherical shell of thickness
$dr$ centered at $r=0$ is
\begin{equation}
dN = \rho\, g(r) 2\pi r\, dr.
\end{equation}
Here we see that $g$ gives a measure of the deviation from perfect homogeneity
($g = 1$). Evidently, $ g < 0 \iff dN < 0$ in that shell. On the other hand, a
negative value for $dN$ is the signature for the phenomenon of
charge-screening, i.e., it indicates the presence of {\it holes} in the
neighborhood of our reference particle at $r=0$. Thus, it would appear that
the analogue fluid which could model some aspects of black hole thermodynamics
should have something to do with a charged fluid or plasma.
These latter systems are
defined as a collection of identical point charges, of equal
mass $m$ and charge $e$, embedded in a uniform background of charge (the dielectric) obeying
classical statistical mechanics. If one adds a given charge to the plasma, the
plasma density is locally depleted so as to neutralize the impurity charge.
This is the well-known phenomenon of charge screening. The depletion shows up
as an underdensity of particles (or an overdensity of holes), and is reflected
in a $g<0$ near the origin, that is, where the impurity charge is located.
Calculations of the pair-correlation function
for degenerate electron plasmas at metallic densities yield functions
exhibiting the same general qualitative features as those in Figures 4 and 5
\cite{March}. In addition, screening is known to be a characteristic property
of interactions like the electromagnetic interaction, where there exist two
species of charges; renormalization effects induce corrections that make the
$effective$ charge decrease with distance. This, in itself, is not surprising
because, as is well known, there exists an analog of a black hole with an
electromagnetic membrane; an analogy that in the literature is called the
``membrane paradigm''~\cite{thorneprice}. What we find here is a different
manifestation of this analogy, this time through the dynamical and statistical
properties of the liquid.
The weakly temperature dependent potential is scaled by $N^2$ and plotted
in Figure 6. Again, recall the physical part of the graph consists of those
points $r$ and $T$ satisfying $rT > (rT)_{min}$. The potential is seen to
be a positive increasing function of $r$. The limit calculated above in
Eq.(38) shows the growth is essentially linear. Apart from the ``glitch''
near $T \approx 0.2$ the potential is practically independent of the
temperature.
\section{Conclusions.}
It is worth emphasizing that the analogue fluid selected to account
for the black hole compressibility and internal energy was ``engineered''
at the fluid's atomic level. As there is no corresponding ``atomic''
level for the black hole, the bridge between the thermodynamics of the
black hole and liquid is established via thermodynamic state functions.
Surprisingly, only two state functions are needed in order to specify
completely the ``atomic potential'' and local structure of the analogue
fluid. However, as we have seen, we can only be sure that the fluid will reproduce
the correct compressibility and internal energy. That is, only
partial aspects of black hole thermodynamics will be reproducible, since,
evidently, there exist other state functions that characterize a
black hole, namely, its entropy, pressure, specific heats, etc., which must be calculable from a more complete ``microscopic'' description of the black hole (see below). The analogy with the liquid leads to a screening effect that can be understood in terms of a connection with the membrane paradigm.
The partial rendering of black hole thermodynamics in terms of atomic
fluid elements achieved here points to the possibility of directly effecting
a mapping between the black hole variables (mass $M$ and cavity radius
$r$, or cavity wall temperature $T(r)$) and the internal variables
of an analogue model which might serve to reproduce all of
black hole thermodynamics. Evidently, this would amount to a formal
correspondence at the level of the degrees-of-freedom and so bypass
the need to call into play the macroscopic state functions. A concrete
example of such a mapping between two entirely distinct systems
is that established recently between
a Newtonian cosmology of pointlike galaxies and spins in a
three-dimensional Ising model \cite{PM-etal}. The degree-of-freedom
mapping problem is well
worth pursuing as intriguing deep connections between gravitation,
thermodynamics and information theory have been hinted at recently
\cite{BrownYork}. Another hint is supplied by Wheeler's depiction of
the Bekenstein bit number as a ``covering'' of the event horizon by
a binary string code representing the information contained in the
black hole \cite{Wheeler1,Wheeler2}. It may well be possible to go beyond
these provocative hints and actually establish a rigorous connection
between black holes, computation, information theory and complexity.
We hope to report on these developments in a separate paper.
\section{Introduction}
The source GRS 1758--258 was discovered in the hard X--ray/soft
$\gamma$--ray energy range with the SIGMA/GRANAT coded mask
telescope (Sunyaev et al. 1991). GRS 1758--258 is of particular
interest since, together with the more famous source
1E 1740.7--2942, it is the only persistent hard X--ray emitter ($E>100$ keV) in
the vicinity of the Galactic Center (Goldwurm et al. 1994). Both
sources have peculiar radio counterparts with relativistic jets
(Mirabel et al. 1992a; Rodriguez, Mirabel \& Mart\'i 1992; Mirabel
1994) and might be related to the 511 keV line observed from
the Galactic Center direction (Bouchet et al. 1991). Despite the
precise localization obtained at radio wavelengths, an optical
counterpart of GRS 1758--258 has not been identified
(Mereghetti et al. 1992; Mereghetti, Belloni \& Goldwurm 1994a).
Simultaneous ROSAT and SIGMA observations, obtained in the
Spring of 1993, indicated the presence of a soft excess
(Mereghetti, Belloni \& Goldwurm 1994b). This spectral
component ($E<2$ keV) was weaker in 1990, when the hard X--ray
flux ($E>40$ keV) was in its highest observed state. On the basis of
its hard X--ray spectrum, GRS 1758--258 is generally considered
a black hole candidate (Tanaka \& Lewin 1995; Stella et al.
1995). The possible evidence for a soft spectral component
anticorrelated with the intensity of the hard ($>40$ keV) emission
supports this interpretation.
No detailed studies of GRS 1758--258 in the ``classical'' X--ray
range have been performed so far. Here we report the first
observations of this source obtained with an imaging instrument
in the $0.5-10$ keV energy range.
\section{Data Analysis and Results}
The observation of GRS 1758--258 took place between 1995
March 29 22:39 UT and March 30 15:38 UT. The ASCA satellite
(Tanaka, Inoue \& Holt 1994) provides simultaneous data in four
coaligned telescopes, equipped with two solid state detectors
(SIS0 and SIS1) and two gas scintillation proportional counters
(GIS2 and GIS3).
We applied stringent screening criteria to reject periods of high
background, and eliminated all the time intervals with the
bright Earth within 40 degrees of the pointing direction for the
SIS data (10 degrees for the GIS), resulting in the net exposure
times given in Table 1.
\begin{tabular}{|l|c|c|}
\hline
\multicolumn{3}{|c|}{TABLE 1}\\
\hline
\hline
&Exposure Time (s)&Count Rate (counts/s)\\
\hline
SIS0&9,471&5.310$\pm$0.024\\
SIS1&9,455&4.220$\pm$0.022\\
GIS2&12,717&4.507$\pm$0.022\\
GIS3&11,949&5.155$\pm$0.025\\
\hline
\end{tabular}
\subsection{GIS Data}
Figure 1 shows the image obtained with the GIS2 detector. Most
of the detector area is covered by stray light due to the bright
source GX5--1, located outside the field of view, at an off--axis
angle of about 40 arcmin. Fortunately, GRS 1758--258 lies in a
relatively unaffected region of the detector, which allows us to
estimate the contamination from GX5--1 as explained below.
The source counts were extracted from a circle of 6 arcmin
radius centered at the position of GRS 1758--258, and rebinned
in order to have a minimum of 25 counts in each energy
channel. Due to the present uncertainties in the ASCA response
at low energies, we only considered photons in the 0.8--10 keV
range. The background spectrum was extracted from the
corresponding regions of observations of empty fields provided
by the ASCA Guest Observer Facility. The contribution to the
background due to GX5--1 is mostly concentrated in a circular
segment with area $\sim$36 arcmin$^2$, indicated with A in Figure 1.
Its spectrum was estimated by the difference of regions A
and B, and added to the background. A similar procedure was
followed to extract the GIS3 net spectrum.
Using XSPEC (Version 9.0) we explored several spectral models
by simultaneously fitting the data sets of both GIS instruments.
The best fit was obtained with a power law with photon index
$1.66\pm 0.03$ and column density $N_H=(1.42\pm 0.04)\times 10^{22}$ cm$^{-2}$
(reduced $\chi^2= 1.013$ for 372 d.o.f., errors at 90\% confidence
intervals for a single interesting parameter). Other models based
on a single spectral component (e.g. blackbody, thermal
bremsstrahlung) gave unacceptable results, with the exception
of the Comptonized disk model of Sunyaev \& Titarchuk (1980).
However, in the latter case the limited energy range of the ASCA
data alone does not allow us to place interesting constraints on the
fit parameters.
The GIS instruments have a time resolution of 0.5 s or 62.5 ms,
according to the available telemetry rate. Most of our data had
the higher time resolution. Using a Fourier transform technique,
after correcting the photon arrival times to the solar system
barycenter, we performed a search for periodicities. No coherent
pulsations in the 0.125--1000 s period range were found. For the
hypothesis of a sinusoidal modulation we can set an upper limit
of $\sim$5\% to the pulsed fraction.
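As an illustration of this kind of timing search (a minimal sketch, not the pipeline actually used here), the snippet below runs a Rayleigh-statistic period scan over an invented list of barycentre-corrected photon arrival times; the event count, modulation fraction and period grid are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_power(times, period):
    # Z_1^2 Rayleigh statistic: ~chi^2 with 2 d.o.f. when no pulsation is present
    phase = 2.0 * np.pi * times / period
    n = times.size
    return (2.0 / n) * (np.cos(phase).sum() ** 2 + np.sin(phase).sum() ** 2)

# Invented event list: 20 000 photons over 10 ks with a 10% sinusoidal
# modulation at P = 12.5 s (illustrative values only).
T, n_phot, p_true, frac = 10_000.0, 20_000, 12.5, 0.10
t = rng.uniform(0.0, T, size=4 * n_phot)
accept = rng.uniform(size=t.size) < \
    (1.0 + frac * np.sin(2.0 * np.pi * t / p_true)) / (1.0 + frac)
t = np.sort(t[accept][:n_phot])

# Scan a grid of trial periods and take the strongest peak.
periods = np.linspace(10.0, 15.0, 2001)
power = np.array([rayleigh_power(t, p) for p in periods])
best_period = periods[np.argmax(power)]
```

A real pulsation shows up as a peak far above the chi-squared noise background; an upper limit on the pulsed fraction follows from the largest power found at any trial period.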
\subsection{SIS Data}
Both SIS instruments were operated in the single chip mode,
which gives a time resolution of 4 s and images of a square
$11\times11$ arcmin$^2$ region (Figure 2). Most of the SIS data (83\%)
were acquired in ``faint'' mode and then converted to ``bright''.
This allows us to minimize the errors due to the echo effects in the
analog electronics and to the uncertainties in the dark frame
value (Otani \& Dotani, 1994). The inclusion of the data directly
acquired in bright mode resulted in spectra of lower quality
(significant residuals in the 2--3 keV region). We therefore
concentrated the spectral analysis on the faint mode data. The
source counts (0.6--10 keV) were extracted from circles with a
radius of 3 arcmin, and the resulting energy spectra (1024 PI
energy channels) rebinned in order to have at least 25 counts in
each bin. We subtracted a background spectrum derived during
our observation from an apparently source-free region of the
CCDs (see Figure 2). This background is higher than that obtained
from the standard observations of empty fields, probably due to
the contamination from GX5--1. It contributes $\sim$4\% of the
extracted counts. We verified that the derived spectral
parameters do not change significantly if we use the blank sky
background file, or even if we completely neglect the
background subtraction.
By fitting together the data from the two SIS we obtained
results similar to those derived with the GIS instruments. In
particular, a power law spectrum gives photon index $\alpha=1.70\pm 0.03$
and $N_H=(1.55\pm 0.03) \times 10^{22}$ cm$^{-2}$, with a reduced $\chi^2$ of
1.031 (872 d.o.f.).
No prominent emission lines are visible in the spectrum of GRS
1758--258 (as already mentioned, some features in the region
around 2 keV are probably due to instrumental problems, they
appear stronger when the bright mode data and the
corresponding response matrix are used). Upper limits on the
possible presence of an iron line were computed by adding a
gaussian line centered at 6.4 keV to the best fit power law
model and varying its parameters (intensity and width) until an
unacceptable increase in the $\chi^2$ was obtained. The 95\% upper
limit on the equivalent width is $\sim$50 eV for a line width of
$\sigma=0.1$ keV and increases for wider lines (up to $\sim$110 eV for
$\sigma=0.5$ keV).
Also in the case of the SIS, a search for periodicities (limited to
periods greater than 8 s) resulted only in upper limits similar to
the GIS ones.
\section{Discussion}
The soft X--ray flux observed with ROSAT in 1993 (Mereghetti et
al. 1994b) was higher than that expected from the extrapolation
of the quasi--simultaneous SIGMA measurement ($E>40$ keV),
indicating the presence of a soft spectral component with power
law photon index $\sim$3 below $\sim$2 keV. Clearly, such a
steep, low--energy component is not visible in the present ASCA
data, which are well described by a single flat power law. The
corresponding flux of $4.8\times 10^{-10}$ ergs cm$^{-2}$ s$^{-1}$
(in the 1--10 keV
band, corrected for the absorption) is within the range of values
measured in March--April 1990 (Sunyaev et al. 1991), when the
source was in its highest observed state. This fact is consistent
with the presence of a prominent soft component only when the
hard X--ray flux is at a lower intensity level.
Though a single power law provides an acceptable fit to the
ASCA data, we also explored spectral models consisting of two
different components: a soft thermal emission plus a hard tail.
For instance, with a blackbody plus power law, we obtained a
good fit to both the SIS and GIS data with $kT\sim 0.4-0.5$ keV
and photon index $\sim 1.4-1.5$ ($\chi^2 \simeq 0.98$). Obviously
such a power law must steepen at higher energy to be
consistent with the SIGMA observations. In fact a Sunyaev--Titarchuk
Comptonization model can equally well fit the ASCA
hard tail and provide an adequate spectral steepening to match
the high energy data (see Figure 3). Good results were also
obtained when the soft thermal component was fitted with
models of emission from accretion disks (e.g. Makishima et al.
1986, Stella \& Rosner 1984). In all cases the total flux in the soft
component amounts only to a few percent of the overall
(0.1--300 keV) luminosity. However, the low observed flux, coupled
with the high accretion rates required by the fitted temperatures,
implies an implausibly large distance for GRS 1758--258 and/or
a very high inclination angle (note that there is no evidence so
far of eclipses or periodic absorption dips which could hint at a
high inclination system). A possible alternative solution is to
invoke a significant dilution of the optically thick soft
component by Comptonization in a hot corona. A very rough
estimate shows that, in order to effectively remove photons
from the thermal distribution, a scattering opacity of
$\tau_{es}\sim 2-5$ is required.
Our ASCA observation provides the most accurate measurement
of the absorption toward GRS 1758--258 obtained so far.
Obviously the derived value is slightly dependent on the
adopted spectral model. However, values within at most 10\% of
$1.5\times 10^{22}$ cm$^{-2}$ were obtained for all the models (one or two
components) fitting the data. This column density is consistent
with a distance of the order of the Galactic center and similar to
that of other sources in the galactic bulge (Kawai et al. 1988),
but definitely smaller than that observed with ASCA in 1E
1740.7--2942 (Churazov et al. 1996).
The information on the galactic column density, coupled to the
optical/IR data, can yield some constraints on the possible
companion star of GRS 1758--258 (see Chen, Gehrels \& Leventhal
1994). A candidate counterpart with $I\sim$19 and $K\sim$17
(Mereghetti et al. 1994a) lies within $\sim$2" of the best radio
position (Mirabel et al. 1992b). Other infrared sources present in
the X--ray error circle (10" radius) are fainter than $K\sim 17$
(Mirabel \& Duc 1992). Using an average relation between $N_H$
and optical reddening (Gorenstein 1975), we estimate a value of
$A_V\sim 7$, corresponding to less than one magnitude of
absorption in the K band (Cardelli, Clayton \& Mathis 1989).
Thus, for a distance of the order of 10 kpc, the K band absolute
magnitude must be fainter than $M_K\sim 1$. This limit clearly
rules out supergiant or giant companion stars, as well as main
sequence stars earlier than type A (Johnson 1966), thus
excluding the possibility that GRS 1758--258 is in a high mass
binary system.
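The photometric argument above can be reproduced numerically. The conversion factors below ($N_H \approx 2.2\times10^{21}\,A_V$ cm$^{-2}$ for a Gorenstein-type relation, $A_K \approx 0.11\,A_V$ for the Cardelli et al. extinction law) are the commonly quoted values and are my assumption for the exact coefficients used:

```python
import math

N_H = 1.5e22                              # cm^-2, fitted column density
A_V = N_H / 2.2e21                        # visual extinction -> ~7 mag
A_K = 0.11 * A_V                          # K-band extinction -> < 1 mag
dist_mod = 5.0 * math.log10(10_000 / 10)  # distance modulus for d = 10 kpc (= 15)
M_K = 17.0 - dist_mod - A_K               # absolute K mag of a K ~ 17 counterpart
```

With these numbers a $K\sim17$ star at 10 kpc indeed comes out fainter than $M_K\sim1$, as stated in the text.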
The flux of GRS 1758--258 measured with the SIS instruments
corresponds to a 1--10 keV luminosity of $4.5\times 10^{36}$
ergs s$^{-1}$ (for a distance of 10 kpc). A reanalysis of archival data from
TTM/MIR, XRT/Spacelab--2 and EXOSAT (Skinner 1991), showed
that GRS 1758--258 had a similar intensity also in 1985 and in
1989. An earlier discovery had been prevented only by
confusion problems with GX5--1, much brighter than
GRS 1758--258 below $\sim$20 keV. Subsequent hard X--ray observations
with SIGMA (Gilfanov et al. 1993, Goldwurm et al. 1994)
repeatedly detected GRS 1758--258 with a hard spectrum
extending up to $\sim$300 keV. It is therefore clear that GRS
1758--258, though variable by a factor of $\sim$10 on a
timescale of months, is not a transient source.
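As a sanity check on the quoted numbers, the flux-to-luminosity conversion is simply $L=4\pi d^2 F$. Applied to the GIS-band unabsorbed flux of $4.8\times10^{-10}$ ergs cm$^{-2}$ s$^{-1}$ from the previous section it gives a value slightly above the SIS-based $4.5\times10^{36}$ ergs s$^{-1}$, the small difference reflecting the slightly different fitted fluxes (my reading; the paper does not spell this out):

```python
import math

def luminosity(flux_cgs, d_kpc):
    # Isotropic luminosity L = 4 pi d^2 F; flux in erg cm^-2 s^-1.
    d_cm = d_kpc * 3.086e21   # 1 kpc in cm
    return 4.0 * math.pi * d_cm ** 2 * flux_cgs

L_1_10 = luminosity(4.8e-10, 10.0)   # ~5.7e36 erg/s at 10 kpc
```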
\section{Conclusions}
The ASCA satellite has provided the first detailed data on GRS
1758--258 in the 1--10 keV region, allowing us to minimize the
confusion problems caused by the vicinity of GX5--1, which
affected previous observations with non-imaging instruments.
The possible black hole nature of GRS 1758--258, inferred from
the high energy data (Sunyaev et al. 1991, Goldwurm et al.
1994), is supported by the ASCA results. The power law
spectrum, extending up to the hard X--ray domain is similar to
that of Cyg X--1 and other black hole candidates in their low (or
hard) state. Furthermore, our stringent limits on the presence of
periodic pulsations and accurate measurement of interstellar
absorption make the possibility of a neutron star accreting from
a massive companion very unlikely. The lack of iron emission
lines in the SIS data has to be confirmed by more stringent
upper limits to rule out, e.g., the presence of a reflection
component as proposed for Cyg X--1 (Done et al. 1992). For
comparison, the iron line recently observed with ASCA in
Cyg X--1 has an equivalent width of only 10--30 eV (Ebisawa et al.
1996).
The prominent soft excess observed with ROSAT in 1993, when
the hard X--ray flux was in a lower intensity state, was absent
during our observation. The source was in a hard spectral state,
with a possible soft component accounting for $\sim$5\% of the
total luminosity at most. A similar soft component
($kT\sim 0.14$ keV), but contributing a larger fraction of the
flux, has been observed in Cyg X--1 and attributed to emission
from the accretion disk (Balucinska--Church et al. 1995). If the
soft component in GRS 1758--258 originates from the disk,
strong dilution is required. An optically thick hot cloud
embedding the innermost part of the disk is an attractive
hypothesis. To test the viability of this model, a detailed fit to
simultaneous data over a broad energy range, as available, e.g.,
with SAX in the near future, is required.
\clearpage
\section{Introduction}
In 1992 de Vega and Woynarovich constructed the first example of a spin chain
with alternating spins of the values $s=\frac{1}{2}$ and $s=1$ \cite{devega}
on the basis of the well-known XXZ($\frac{1}{2}$) model. We call this model
XXZ($\frac{1}{2},1$).
Later on, many interesting generalizations were presented
\cite{aladim1,aladim2,martins}. After de Vega {\it et al}
\cite{devega1,devega2} we have studied the XXZ($\frac{1}{2},1$) model in two
subsequent publications \cite{meissner,doerfel}. In our last paper
\cite{doerfel} we determined the ground state for different values of the two
couplings $\bar{c}$ and $\tilde{c}$ (for the details see section 3 of that
paper). Disregarding two singular lines we have found four regions in the
($\bar{c},\tilde{c}$)-plane which can be divided into two classes. The division
is made with respect to the occurance of finite Fermi zones for Bethe ansatz
roots. The two regions with infinite fermi zones only are well studied
\cite{devega,meissner} in the framework of Bethe ansatz. On that basis we
consider the finite-size corrections for the ground state and its lowest
excitations using standard techniques \cite{eckle,woy,hamer}. It is remarkable
that they allow one to obtain an explicit answer only in the conformally invariant
cases, which are contained in the two regions considered. The results can
easily be compared with the predictions of conformal invariance.
The paper is organized as follows.
Definitions are reviewed in section 2. In section 3 we
calculate the finite-size corrections for both couplings negative. The same is
done in section 4 for positive couplings. Here it was necessary to set
$\bar{c}=\tilde{c}$ to obtain explicit answers. Section 5 contains the
interpretation of the results and our conclusions.
\section{Description of the model}
We consider the Hamiltonian of a spin chain of length $2N$ with $N$ even
\begin{equation}\label{ham}
{\cal H}(\gamma) = \bar{c} \bar{\cal H}(\gamma) + \tilde{c} \tilde{\cal H}
(\gamma).
\end{equation}
The two Hamiltonians can (implicitly) be found in \cite{devega}; they
both contain a two-site and a three-site coupling part. Their explicit
expressions are rather lengthy and do not provide any further insights. They
include a XXZ-type anisotropy parametrized by $e^{i \gamma}$, we restrict
ourselves to $0<\gamma<\pi/2$. The isotropic limit XXX($\frac{1}{2},1$) is
contained in \cite{aladim1}. The two real coupling constants $\bar{c}$ and
$\tilde{c}$ dominate the qualitative behaviour of the model. The interaction
favours antiparallel orientation of spins; for equal signs of the couplings
its character resembles that of the ordinary $XXZ$ model. A new kind of competition comes
in for different signs of the couplings, where the ground state is still a singlet
but has a much more involved structure.
The Bethe ansatz equations (BAE) determining the solution of the model are
\begin{equation}\label{bae}
\fl \left( \frac{\sinh(\lambda_j+i\frac{\gamma}{2})}{\sinh(\lambda_j-i
\frac{\gamma}{2})}
\frac{\sinh(\lambda_j+i\gamma)}{\sinh(\lambda_j-i\gamma)} \right)^N =
-\prod_{k=1}^{M}\frac{\sinh(\lambda_j-\lambda_k+i\gamma)}{\sinh(\lambda_j-
\lambda_k-i\gamma)},\qquad j=1\dots M.
\end{equation}
One can express energy, momentum and spin projection in terms of BAE roots
$\lambda_j$:
\begin{eqnarray}\label{en}
E = \bar{c} \bar{E} + \tilde{c} \tilde{E},
\nonumber\\
\bar{E} = - \sum_{j=1}^{M} \frac{2\sin\gamma}
{\cosh2\lambda_j - \cos\gamma},
\nonumber\\
\tilde{E} = - \sum_{j=1}^{M}
\frac{2\sin2\gamma}{\cosh2\lambda_j - \cos2\gamma},
\end{eqnarray}
\begin{equation}\label{mom}
P =\frac{i}{2}\sum_{j=1}^{M} \left\{ \ln \left(\frac{\sinh(\lambda_j+i\frac{
\gamma}{2})}{\sinh(\lambda_j-i\frac{\gamma}{2})} \right) +
\ln \left( \frac{\sinh(\lambda_j+i\gamma)}{\sinh(\lambda_j-i\gamma)} \right)
\right\},
\end{equation}
\begin{equation}\label{spin}
S_z = \frac{3N}{2} - M.
\end{equation}
We have defined energy and momentum to vanish for the ferromagnetic state.
The momentum operator was chosen to be half of the logarithm of the 2-site
shift operator \cite{aladim1} which is consistent with taking the length of the
system as $2N$ instead of $N$.
\section{Calculation of finite-size corrections for negative couplings}
In section 3 of paper \cite{doerfel} we have carried out a detailed analysis of
the thermodynamic Bethe ansatz equations (TBAE) at zero temperature and
obtained the ground state.
We found a large antiferromagnetic region in the ($\bar{c},\tilde{c}$)-plane
(depending on $\gamma$) where the ground state is formed by roots with
imaginary parts $\frac{\pi}{2}$, the so-called ($1,-$) strings. The Fourier
transform of their density is given by \cite{meissner}
\begin{equation}\label{(1,-)dens}
\hat{\rho}_0(p)=\frac{1+2\cosh(p\gamma/2)}{2\cosh(p(\pi-\gamma)/2)}.
\end{equation}
Depending on the signs of $\tilde{c}$ and $\bar{c}$ the region is described by
the connection of three parts:
\begin{enumerate}
\item[a)]
\begin{eqnarray*}
\tilde{c}\leq0,\bar{c}\leq0;
\end{eqnarray*}
\item[b)]
\begin{eqnarray*}
\tilde{c}<0,\bar{c}>0\\
\frac{\bar{c}}{|\tilde{c}|}\leq\frac{1}{2\cos\tilde{\gamma}} \qquad \mbox{for}
\qquad 0<\gamma\leq\frac{2\pi}{5}\\
\frac{\bar{c}}{|\tilde{c}|}\leq2\cos\tilde{\gamma} \qquad \mbox{for} \qquad
\frac{2\pi}{5}\leq\gamma<\frac{\pi}{2};
\end{eqnarray*}
\item[c)]
\begin{eqnarray}\label{phase}
\tilde{c}>0,\bar{c}<0\nonumber\\
\frac{|\bar{c}|}{\tilde{c}}\geq\frac{8\cos^3\tilde{\gamma}}{4\cos^2\tilde{
\gamma}-1} \qquad \mbox{for} \qquad 0<\gamma\leq\frac{\pi}{3}\nonumber\\
\frac{|\bar{c}|}{\tilde{c}}\geq\frac{2}{\cos\tilde{\gamma}} \qquad \mbox{for}
\qquad \frac{\pi}{3}\leq\gamma<\frac{\pi}{2}.
\end{eqnarray}
\end{enumerate}
Here for shortness we have introduced
\begin{equation}
\tilde{\gamma}=\frac{\pi\gamma}{2(\pi-\gamma)}.
\end{equation}
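As a quick consistency check (a numerical sketch only), the two boundary expressions in case b) coincide at the crossover point $\gamma=2\pi/5$ (where $\tilde{\gamma}=\pi/3$), and those in case c) at $\gamma=\pi/3$ (where $\tilde{\gamma}=\pi/4$):

```python
import math

def gamma_tilde(g):
    # \tilde{\gamma} = \pi\gamma / (2(\pi - \gamma))
    return math.pi * g / (2.0 * (math.pi - g))

# case b): the two boundary expressions meet at gamma = 2*pi/5
gt = gamma_tilde(2.0 * math.pi / 5.0)
b_low, b_high = 1.0 / (2.0 * math.cos(gt)), 2.0 * math.cos(gt)

# case c): the two boundary expressions meet at gamma = pi/3
gt = gamma_tilde(math.pi / 3.0)
c_low = 8.0 * math.cos(gt) ** 3 / (4.0 * math.cos(gt) ** 2 - 1.0)
c_high = 2.0 / math.cos(gt)
```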
We shall now calculate the finite-size corrections for the ground state and its
excitations. In paper \cite{meissner} the structure of excitations in the
framework of BAE roots was obtained for $\tilde{c}<0,\bar{c}<0$. Our results
immediately apply to the whole region (\ref{phase}), because we had to ensure
only that the ground state consists of ($1,-$) strings which follows from TBAE.
Because we are interested in the lowest excitations only, we disregard the
bound states \cite{meissner} and consider those excitations given by holes in
the ground-state distribution which are located right (or left) from the real
parts of all roots. The number of those holes we call $H^+$ ($H^-$). We follow
the standard techniques developed in \cite{eckle} and \cite{woy}.
For transparency we employ the notations of \cite{hamer} as much as possible.
We decompose
\begin{equation}
\sigma_N=\rho_0^{(1)} + \rho_0^{(2)} + \Delta\sigma_N
\end{equation}
where the upper index describes the two terms on the RHS of formula
(\ref{(1,-)dens}). The basic equations are then
\begin{eqnarray}\label{DE}
\fl
\frac{\Delta E_N}{2N}\equiv e_N = \bar{c} \pi \int_{-\infty}^{\infty} d\lambda
\rho_0^{(1)}(\lambda) \left\{ \frac{1}{N} \sum_k \delta(\lambda-\lambda_k)
- \sigma_N(\lambda) \right\}\nonumber\\
+ \tilde{c} \pi \int_{-\infty}^{\infty} d\lambda
\rho_0^{(2)}(\lambda) \left\{ \frac{1}{N} \sum_k \delta(\lambda-\lambda_k)
- \sigma_N(\lambda) \right\}
\end{eqnarray}
and
\begin{equation}\label{Dsigma}
\Delta\sigma_N (\lambda) = - \int_{-\infty}^{\infty} d\mu \bar{p}(\lambda-\mu)
\left\{ \frac{1}{N} \sum_k \delta(\lambda-\lambda_k) - \sigma_N(\lambda)
\right\}
\end{equation}
where the Fourier transform of the kernel $\bar{p}(\lambda)$ is given by
\begin{equation}\label{kernel}
\bar{P}(\omega) = - \frac{\sinh((\pi/2-\gamma)\omega)}{2\sinh(\omega\gamma/2)
\cosh(\omega(\pi/2-\gamma/2))}.
\end{equation}
The summation on the RHS of (\ref{DE}) and (\ref{Dsigma}) is carried out over
the real parts of all the roots (without the holes). Using the Euler--Maclaurin
formula, equations (\ref{DE}) and (\ref{Dsigma}) are rewritten as usual, e.g.
\begin{eqnarray}\label{Dsigma_rew}
\fl
\sigma_N(\lambda)-\rho_0^{(1)}(\lambda)-\rho_0^{(2)}(\lambda) =
\nonumber\\
\int_{\Lambda^+}^{\infty}d\mu\sigma_N(\mu)
\bar{p}(\lambda-\mu) - \frac{1}{2N}\bar{p}(\lambda-\Lambda^+) + \frac{1}{12N^2}
\frac{1}{\sigma_N(\Lambda^+)}\bar{p}(\lambda-\Lambda^+)
\nonumber\\
\left( + \int^{\Lambda^-}_{-\infty}d\mu\sigma_N(\mu)
\bar{p}(\lambda-\mu) - \frac{1}{2N}\bar{p}(\lambda-\Lambda^-) - \frac{1}{12N^2}
\frac{1}{\sigma_N(\Lambda^-)}\bar{p}(\lambda-\Lambda^-) \right)
\end{eqnarray}
Here $\Lambda^+$ ($\Lambda^-$) is the real part of the largest (smallest) root.
For $\lambda\geq\Lambda^+$ the part in round brackets can be omitted and
equation (\ref{Dsigma_rew}) after a shift converts into a standard Wiener-Hopf
problem to be solved. In the expression for $\Delta E_N$ both parts have to be
kept, so we need the solution for $\lambda\geq\Lambda^+$ and $\lambda\leq
\Lambda^-$, which are simply related (but not equal) by symmetry.
For the solution with $\lambda\geq\Lambda^+$ we define as usual
\begin{equation}\label{Xpm}
X_{\pm}(\omega)=\int_{-\infty}^{\infty} e^{i\omega\lambda}\sigma_N^{\pm}
(\lambda+\Lambda^+) d\lambda,
\end{equation}
\begin{eqnarray}\label{spm}
\sigma_N^{\pm}(\lambda+\Lambda^+)=\left\{
\begin{array}{ll}
\sigma_N(\lambda+\Lambda^+) & \mbox{for}\quad\lambda \quad{> \atop <} \quad 0\\
0 & \mbox{for}\quad\lambda \quad{< \atop >} \quad 0\\
\end{array}
\right. .
\end{eqnarray}
After Fourier transformation equation (\ref{Dsigma_rew}) takes the form
\begin{eqnarray}\label{e_ft}
X_-(\omega) + (1-\bar{P}(\omega))(X_+(\omega)-\bar{C}(\omega))=
\bar{F}_+(\omega)+\bar{F}_-(\omega)-\bar{C}(\omega)
\end{eqnarray}
where we have marked all given functions of our problem by a bar.
$\bar{F}_{\pm}(\omega)$ are defined as above using instead of $\sigma_N$ the
sum $\rho_0^{(1)}+\rho_0^{(2)}$. Further
\begin{equation}\label{bar_C}
\bar{C}(\omega)=\frac{1}{2N} + \frac{i\omega}{12 N^2 \sigma_N(\Lambda^+)}.
\end{equation}
Now we have to factorize the kernel
\begin{equation}\label{k}
[1-\bar{P}(\omega)] = \bar{G}_+(\omega)\bar{G}_-(\omega)
\end{equation}
with $\bar{G}_{\pm}(\omega)$ holomorphic and continuous in the upper and lower
half-plane respectively. Noticing that
\begin{equation}\label{new_kernel}
\bar{P}(\omega,\gamma)=K(\omega,\pi-\gamma)
\end{equation}
where $K$ is the analogous function in paper \cite{hamer} we take the
factorization from there.
\begin{eqnarray}\label{fac1}
\fl
\bar{G}_+(\omega)=\sqrt{2\gamma}\Gamma\left(1-\frac{i\omega}{2}\right)
e^{\bar{\psi}(\omega)}\left[ \Gamma\left(\frac{1}{2}-\frac{i(\pi-\gamma)\omega}
{2\pi} \right) \Gamma\left( \frac{1}{2}-\frac{i\gamma\omega}
{2\pi}\right) \right]^{-1} = \bar{G}_-(-\omega),
\end{eqnarray}
\begin{eqnarray}\label{fac2}
\bar{\psi}(\omega)=\frac{i\omega}{2}\left[\ln\left(\frac{\pi}{\gamma}\right)
- \frac{\pi-\gamma}{\pi} \ln\left(\frac{\pi-\gamma}{\gamma}\right)\right].
\end{eqnarray}
It is chosen to fulfill
\begin{eqnarray}\label{asy}
\bar{G}_+(\omega)\stackrel{|\omega|\to\infty}{\sim}1+\frac{\bar{g}_1}{\omega}
+\frac{\bar{g}_1^2}{2\omega^2} + {\cal O}\left(\frac{1}{\omega^3}\right)
\end{eqnarray}
where
\begin{equation}\label{g1}
\bar{g}_1=\frac{i}{12}\left(2+\frac{\pi}{\pi-\gamma}-\frac{2\pi}{\gamma}\right)
\end{equation}
After the necessary decomposition
\begin{equation}\label{dec}
\bar{G}_-(\omega)\bar{F}_+(\omega) = \bar{Q}_+(\omega)+\bar{Q}_-(\omega)
\end{equation}
equation (\ref{e_ft}) has the desired form
\begin{eqnarray}\label{form}
\fl
\frac{X_+(\omega)-\bar{C}(\omega)}{\bar{G}_+(\omega)}-\bar{Q}_+(\omega)=
\bar{Q}_-(\omega)-\bar{G}_-(\omega)\left[X_-(\omega)+\bar{C}(\omega)
-\bar{F}_-(\omega)\right] \equiv \bar{P}(\omega)
\end{eqnarray}
leading to an entire function $\bar{P}(\omega)$ given by its asymptotics.
\begin{eqnarray}\label{bar_p}
\bar{P}(\omega)=\frac{i\bar{g}_1}{12 N^2 \sigma_N(\Lambda^+)} - \frac{1}{2N}
-\frac{i\omega}{12 N^2 \sigma_N(\Lambda^+)}.
\end{eqnarray}
Equation (\ref{form}) yields the solution for $X_+(\omega)$:
\begin{eqnarray}\label{sol}
X_+(\omega)=\bar{C}(\omega)+\bar{G}_+(\omega)\left[\bar{P}(\omega)+\bar{Q}_+
(\omega)\right].
\end{eqnarray}
For our purposes it is sufficient to put
\begin{eqnarray}\label{f+}
\bar{F}_+(\omega)=\frac{e^{\pi\Lambda^+/(\pi-\gamma)}}{\pi-i\omega(\pi-\gamma)}
(1+2\cos\tilde{\gamma})
\end{eqnarray}
and hence
\begin{equation}\label{bar_q+}
\bar{Q}_+(\omega)=\frac{\bar{G}_+(i\pi/(\pi-\gamma))e^{-\pi\Lambda^+/
(\pi-\gamma)}(1+2\cos\tilde{\gamma})}{\pi-i\omega(\pi-\gamma)}.
\end{equation}
Next we must determine by normalization the value of the integral
\begin{eqnarray*}
\int_{\Lambda^+}^{\infty}\sigma_N(\lambda)d\lambda=z_N(\infty)-z_N(\Lambda^+).
\end{eqnarray*}
After a thorough analysis we found for our case that the relation
\begin{equation}\label{holes}
\frac{H}{2}=\frac{H^++H^-}{2}=\left[ \nu S + \frac{1}{2} \right]
\end{equation}
holds, where $\nu=\frac{\gamma}{\pi}$. Nevertheless we shall not claim that formula
(\ref{holes}) holds for all possible states \cite{woy}; in particular we expect effects
like those described in \cite{juettner} for higher excitations. Then we have
\begin{eqnarray}\label{z}
z_N(\pm\infty)-z_N(\Lambda^{\pm})&=&\pm\frac{1}{N}\left( \frac{1}{2}+\nu S_z
\pm \frac{H^+-H^-}{2} \right)\nonumber\\
&\equiv& \pm\frac{1}{N}\left( \frac{1}{2} \pm \Delta^{\pm} \right)
\end{eqnarray}
yielding the important equation
\begin{equation}\label{imp1}
\fl
\frac{\bar{G}_+(i\pi/(\pi-\gamma))e^{-\pi\Lambda^+/(\pi-\gamma)}
(1+2\cos\tilde{\gamma})}{\pi} = \frac{1}{2N} - \frac{i\bar{g}_1}{12N^2\sigma_N
(\Lambda^+)} + \frac{1}{N} \frac{1}{\sqrt{2\nu}}\Delta^+.
\end{equation}
The other normalization equation is obviously
\begin{equation}\label{imp2}
\fl
\sigma_N(\Lambda^+)=\frac{\bar{g}_1^2}{24N^2\sigma_N(\Lambda^+)} +
\frac{i\bar{g}_1}{2N} + \frac{\bar{G}_+(i\pi/(\pi-\gamma))}{\pi-\gamma}
e^{-\pi\Lambda^+/(\pi-\gamma)}(1+2\cos\tilde{\gamma}).
\end{equation}
Now we can proceed in the usual way keeping in mind the changes arising
especially from the last two equations.
\begin{eqnarray}\label{DE_1}
\fl
\frac{\Delta E_N}{2N}= -\frac{\pi}{\pi-\gamma}(\bar{c}+2\tilde{c}\cos
\tilde{\gamma}) \bar{G}_+\left(\frac{i\pi}{\pi-\gamma}\right)
\left[ \bar{P}\left(\frac{i\pi}{\pi-\gamma}\right) + \bar{Q}_+
\left(\frac{i\pi}{\pi-\gamma}\right) \right] e^{-\pi\Lambda^+/(\pi-\gamma)}
\nonumber\\
+ \left( \Lambda^+\leftrightarrow\Lambda^-\right).
\end{eqnarray}
After some algebra using equations (\ref{imp1}) and (\ref{imp2}) this turns out
as
\begin{eqnarray}\label{DE_2}
\fl
\frac{\Delta E_N}{2N}= -\frac{\pi^2}{\pi-\gamma}\frac{(\bar{c}+2\tilde{c}\cos
\tilde{\gamma})}{1+2\cos\tilde{\gamma}}\times\nonumber\\
\times
\left\{ \left( -\frac{1}{24N^2} + \frac{1}{4N^2\nu}(\Delta^+)^2 \right)
+ \left( -\frac{1}{24N^2} + \frac{1}{4N^2\nu}(\Delta^-)^2 \right)\right\}.
\end{eqnarray}
Finally, for further interpretation we put it in the form
\begin{eqnarray}\label{DE_3}
\fl
\frac{\Delta E_N}{2N}= -\frac{2\bar{c}+4\tilde{c}\cos\tilde{\gamma}}
{1+2\cos\tilde{\gamma}}\frac{\pi}{\pi-\gamma}
\left\{ -\frac{\pi}{6}\frac{1}{4N^2} + \frac{2\pi}{4N^2}\left(\frac{S_z^2\nu}
{2}+\frac{\Delta^2}{2\nu} \right) \right\}.
\end{eqnarray}
Here $\Delta=(H^+-H^-)/2$ is an integer.
The momentum correction $\Delta P_N$ is obtained from relation (\ref{DE}) after
substituting the hole energy
$\varepsilon_h=-2\bar{c}\pi\rho_0^{(1)}-2\tilde{c}\pi\rho_0^{(2)}$ by the hole
momentum $p_h(\lambda)=\frac{1}{2}\arctan\left(\sinh\frac{\pi\lambda}{\pi-\gamma}\right)+
\arctan\left(\sinh\left(\frac{\pi\lambda}{\pi-\gamma}\right)\Big/\cos\tilde{\gamma}\right)+$ const (see
\cite{doerfel}).
Comparing the asymptotics for large $\lambda$ of both $\varepsilon_h(\lambda)$
and $p_h(\lambda)$ gives the speed of sound and helps to shorten the
calculation of $\Delta P_N$. Therefore
\begin{equation}\label{DP}
\frac{\Delta P_N}{2 N} = \frac{\pi}{2}\left\{ \frac{1}{4 N^2 \nu}
\left[ (\Delta^-)^2 - (\Delta^+)^2 \right] \right\} + \mbox{const}.
\end{equation}
We are not interested in the constant term, which is some multiple of $\pi$.
Finally
\begin{equation}\label{DP_res}
\Delta P_N = -\frac{2\pi}{2N} S_z \Delta.
\end{equation}
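The step from (\ref{DP}) to (\ref{DP_res}) is immediate from the definitions in (\ref{z}), which give $\Delta^{+}=\Delta+\nu S_z$ and $\Delta^{-}=\Delta-\nu S_z$, so that
\begin{eqnarray*}
(\Delta^-)^2-(\Delta^+)^2 = -4\nu S_z \Delta, \qquad
\frac{\Delta P_N}{2N} = \frac{\pi}{2}\,\frac{-4\nu S_z \Delta}{4 N^2 \nu}
= -\frac{\pi S_z \Delta}{2 N^2}.
\end{eqnarray*}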
The interpretation of our result will be given in section 5. We stress once
more that to obtain equations (\ref{DE_3}) and (\ref{DP_res}) it was not
necessary to put $\bar{c}=\tilde{c}$. The coupling constants are only constrained
to stay in region (\ref{phase}).
\section{Calculation of finite-size corrections for positive couplings}
Now we consider the region $\bar{c}>0,\tilde{c}>0$ and rely on the analysis of
paper \cite{devega}.
The ground state is given by two densities, $\sigma_N^{(1/2)}(\lambda)$ for the
real roots and $\sigma_N^{(1)}(\lambda)$ for the real parts of the ($2,+$)
strings. One has
\begin{equation}\label{dens}
\sigma_{\infty}^{(1/2)}(\lambda) = \sigma_{\infty}^{(1)}(\lambda) =
\frac{1}{2\gamma\cosh(\pi\lambda/\gamma)} \equiv s(\lambda).
\end{equation}
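The Fourier-transform pair used here can be verified numerically; a minimal sketch, assuming the convention $\hat{s}(\omega)=\int e^{i\omega\lambda}s(\lambda)\,d\lambda = 1/(2\cosh(\omega\gamma/2))$ (this convention is my assumption, consistent with the kernels quoted in the text):

```python
import numpy as np

gamma = 1.0   # anisotropy parameter, 0 < gamma < pi/2 (arbitrary test value)
lam, dlam = np.linspace(-40.0, 40.0, 8001, retstep=True)
s = 1.0 / (2.0 * gamma * np.cosh(np.pi * lam / gamma))

def ft_error(omega):
    # s is even, so its Fourier transform reduces to a cosine transform;
    # the tails at |lambda| = 40 are negligibly small, so a plain Riemann
    # sum is accurate here.
    ft = np.sum(np.cos(omega * lam) * s) * dlam
    return abs(ft - 1.0 / (2.0 * np.cosh(omega * gamma / 2.0)))

max_err = max(ft_error(w) for w in (0.0, 0.7, 2.3))
```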
The physical excitations are holes in those distributions. As in section 3 we
consider only holes situated right (or left) from all roots. With the usual
technique and the results of \cite{devega} we have obtained after some lengthy
but straightforward calculations the basic system for the density corrections
\begin{eqnarray}\label{dens_corr}
\Delta\sigma_N^{(1/2)}(\lambda) =
&-&\int_{-\infty}^{\infty}d\mu\:s(\lambda-\mu)
\left\{\frac{1}{N}\sum_{j=1}^{M_1}\delta(\mu-\xi_j)-\sigma_N^{(1)}(\mu)\right\}
\nonumber\\
\Delta\sigma_N^{(1)}(\lambda) =
&-&\int_{-\infty}^{\infty}d\mu\:s(\lambda-\mu)\left\{ \frac{1}{N}\sum_{i=1}^
{M_{1/2}} \delta(\mu-\lambda_i) - \sigma_N^{(1/2)}(\mu) \right\}
\nonumber\\
&-&\int_{-\infty}^{\infty}d\mu\:r(\lambda-\mu)
\left\{\frac{1}{N}\sum_{j=1}^{M_1}\delta(\mu-\xi_j)-\sigma_N^{(1)}(\mu)\right\}
.
\end{eqnarray}
We have denoted the real roots by $\lambda_i$ (their number is $M_{1/2}$) and
the real parts of the strings by $\xi_j$ (number $M_1$). The function
$r(\lambda)$ is given via its Fourier transform
\begin{equation}\label{q}
R(\omega)=\frac{\sinh(\omega(\pi-3\gamma)/2)}{2\sinh(\omega(\pi-2\gamma)/2)
\cosh(\omega\gamma/2)}.
\end{equation}
The energy correction takes the form
\begin{eqnarray}\label{en_corr}
\fl
\frac{\Delta E_N}{2N} =
&-&\pi\bar{c} \int_{-\infty}^{\infty}d\lambda s(\lambda) \left\{ \frac{1}{N}
\sum_{i=1}^{M_{1/2}}\delta(\lambda-\lambda_i)-\sigma_N^{(1/2)}(\lambda)\right\}
\nonumber\\
\fl
&-&\pi\tilde{c} \int_{-\infty}^{\infty}d\lambda s(\lambda)\left\{\frac{1}{N}
\sum_{j=1}^{M_1}\delta(\lambda-\xi_j) - \sigma_N^{(1)}(\lambda) \right\}.
\end{eqnarray}
Once more we shall follow \cite{hamer}. The maximal (minimal) real roots we
call $\Lambda^{\pm}_{1/2}$ and for the strings we use $\Lambda^{\pm}_{1}$
respectively. Instead of one $C(\omega)$ we have now $C_1(\omega)$ and
$C_{1/2}(\omega)$ generalized in an obvious way. The same applies to $F(\omega)$.
The main mathematical problem is the factorization of a matrix kernel
\begin{eqnarray}\label{fact_mat}
\left( 1-K(\omega) \right)^{-1} = G_+(\omega) G_-(\omega) \qquad \mbox{with}
\nonumber\\
G_-(\omega)=G_+(-\omega)^T
\end{eqnarray}
(see \cite{devega1}) and
\begin{eqnarray}\label{k_mat}
K(\omega)=\left(
\begin{array}{cc}
0 & S(\omega) e^{-i\omega\left(\Lambda_1^+-\Lambda_{1/2}^+\right)} \\
S(\omega) e^{i\omega\left(\Lambda_1^+-\Lambda_{1/2}^+\right)} & R(\omega)
\end{array}
\right).
\end{eqnarray}
$G_+$ is now a matrix function and $G_+^T$ stands for its transpose. The
two component vector $Q_+(\omega)$ is (see equation (\ref{dec}))
\begin{eqnarray}\label{q+_vec}
Q_+(\omega)=\frac{G_+(i\pi/\gamma)^T}{\pi-i\omega\gamma} \left(
\begin{array}{c}
e^{-\pi\Lambda_{1/2}^+/\gamma} \\
e^{-\pi\Lambda_{1}^+/\gamma}
\end{array}
\right).
\end{eqnarray}
As usual we define the constant matrices $G_1$ and $G_2$ by
\begin{equation}\label{const_mat}
G_+(\omega) \stackrel{|\omega|\to\infty}{\longrightarrow}
1 + G_1 \frac{1}{\omega} + G_2 \frac{1}{\omega^2}
+ {\cal O}\left(\frac{1}{\omega^3}\right)
\end{equation}
and as before one has
\begin{equation}
G_2=\frac{1}{2}G_1^2.
\end{equation}
The two component vector $P(\omega)$ is then
\begin{eqnarray}\label{p_vec}
P(\omega) =
\left(
\begin{array}{c}
-\frac{1}{2N}-\frac{i\omega}{12N^2\sigma_N^{(1/2)}(\Lambda_{1/2}^+)}\\
-\frac{1}{2N} - \frac{i\omega}{12N^2\sigma_N^{(1)}(\Lambda_1^+)}
\end{array}
\right)
+ G_1
\left(
\begin{array}{c}
\frac{i}{12N^2\sigma_N^{(1/2)}(\Lambda_{1/2}^+)} \\
\frac{i}{12N^2\sigma_N^{(1)}(\Lambda_1^+)}
\end{array}
\right)
\end{eqnarray}
and therefore the shifted densities are expressed in the form
\begin{eqnarray}\label{f_dens}
\left(
\begin{array}{c}
X^+_{1/2}(\omega) \\
X^+_{1}(\omega)
\end{array}
\right) =
\left(
\begin{array}{c}
C_{1/2}(\omega) \\
C_{1}(\omega)
\end{array}
\right) + G_+(\omega)\left[ P(\omega)+Q_+(\omega) \right].
\end{eqnarray}
Now it is necessary to find the analogue of equation (\ref{z}) for the two
counting functions. Here one would have to distinguish several cases
depending on the fractions $\nu S_z / N$ or $2\nu S_z / N$. From our
experience we know that the result for the finite-size corrections does not
depend on those fractions, while relations like (\ref{holes}) obviously do.
Being interested only in the former, we shall proceed as simply as possible
and consider only the case of vanishing fractions.
\begin{equation}\label{z1}
z_N^{(1/2)}(\pm\infty)-z_N^{(1/2)}(\Lambda_{1/2}^{\pm})=
\pm\frac{1}{N}\left( \frac{1}{2} - \nu S_z + H_{1/2}^{\pm} \right),
\end{equation}
\begin{equation}\label{z2}
z_N^{(1)}(\pm\infty)-z_N^{(1)}(\Lambda_{1}^{\pm})=
\pm\frac{1}{N}\left( \frac{1}{2} - 2\nu S_z + H_{1}^{\pm} \right).
\end{equation}
Easy counting leads to expressions for the numbers of the holes
\begin{eqnarray}\label{h_num}
H_{1}=2 S_z
\nonumber\\
H_{1/2}=2 S_z + 2 M_1 -N.
\end{eqnarray}
We expect these numbers to be modified for non-vanishing fractions. We stress
that both numbers are even.
Equation (\ref{imp1}) is now more complicated
\begin{eqnarray}\label{imp1_new}
\fl
\frac{G_+(i\pi/\gamma)^T}{\pi} \left(
\begin{array}{c}
e^{-\pi\Lambda_{1/2}^+/\gamma} \\
e^{-\pi\Lambda_{1}^+/\gamma}
\end{array}
\right) = G_+^{-1}(0)B^+ +
\left(
\begin{array}{c}
\frac{1}{2N} \\
\frac{1}{2N}
\end{array}
\right) - i G_1
\left(
\begin{array}{c}
\frac{1}{12N^2\sigma_N^{(1/2)}(\Lambda_{1/2}^+)} \\
\frac{1}{12N^2\sigma_N^{(1)}(\Lambda_1^+)}
\end{array}
\right)
\end{eqnarray}
with the definitions
\begin{eqnarray}\label{b+-}
\fl
B^{\pm} =
\left(
\begin{array}{c}
B_1^{\pm} \\
B_2^{\pm}
\end{array}
\right) = \frac{1}{N}
\left(
\begin{array}{c}
-\nu S_z + H_{1/2}^{\pm} \\
-2\nu S_z + H_1^{\pm}
\end{array}
\right) =
\frac{1}{N}
\left(
\begin{array}{c}
S_z -\nu S_z + M_1 - \frac{N}{2} \pm \Delta^{(1/2)} \\
S_z -2\nu S_z \pm \Delta^{(1)}
\end{array}
\right)
\end{eqnarray}
and
\begin{equation}\label{delta}
\Delta^{(i)} = \frac{H_i^+-H_i^-}{2}.
\end{equation}
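As a consistency check, the second form of $B^{\pm}$ in equation (\ref{b+-})
follows from the hole numbers (\ref{h_num}) together with definition
(\ref{delta}). A minimal sympy sketch (the split
$H_i^{\pm}=H_i/2\pm\Delta^{(i)}$ is our assumption, consistent with
(\ref{delta})):

```python
import sympy as sp

S_z, nu, M_1, N, D12, D1 = sp.symbols('S_z nu M_1 N Delta_12 Delta_1')

# Hole numbers, equation (h_num)
H12 = 2*S_z + 2*M_1 - N
H1 = 2*S_z

# Assumed split H_i^+ = H_i/2 + Delta^{(i)}, consistent with (delta)
B1_plus = (-nu*S_z + (H12/2 + D12)) / N     # first form of B_1^+
B2_plus = (-2*nu*S_z + (H1/2 + D1)) / N     # first form of B_2^+

# Second form claimed in (b+-)
assert sp.simplify(B1_plus - (S_z - nu*S_z + M_1 - N/2 + D12)/N) == 0
assert sp.simplify(B2_plus - (S_z - 2*nu*S_z + D1)/N) == 0
```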
The other normalization condition is obviously
\begin{eqnarray}\label{imp2_new}
\fl
\left(
\begin{array}{c}
\sigma_N^{(1/2)}(\Lambda^+_{1/2}) \\
\sigma_N^{(1)}(\Lambda^+_{1})
\end{array}
\right) = \frac{G_1^2}{2}
\left(
\begin{array}{c}
\frac{1}{12N^2\sigma_N^{(1/2)}(\Lambda_{1/2}^+)} \\
\frac{1}{12N^2\sigma_N^{(1)}(\Lambda_1^+)}
\end{array}
\right) + iG_1
\left(
\begin{array}{c}
\frac{1}{2N} \\
\frac{1}{2N}
\end{array}
\right) + \frac{G_+(i\pi/\gamma)^T}{\gamma}
\left(
\begin{array}{c}
e^{-\pi\Lambda_{1/2}^+/\gamma} \\
e^{-\pi\Lambda_{1}^+/\gamma}
\end{array}
\right).\nonumber\\
\end{eqnarray}
After combining equations (\ref{imp2_new}) and (\ref{imp1_new}) we obtain from
equation (\ref{en_corr})
\begin{eqnarray}\label{en_corr_new}
\fl
\frac{\Delta E_N}{2N} = \frac{\pi}{\gamma}
\left(
\begin{array}{c}
\bar{c} e^{\pi\Lambda_{1/2}^+/\gamma} \\
\tilde{c} e^{\pi\Lambda_{1}^+/\gamma}
\end{array}
\right)^T G_+\left(\frac{i\pi}{\gamma}\right) \left[ - \frac{1}{2}
\left(
\begin{array}{c}
\frac{1}{2N} \\
\frac{1}{2N}
\end{array}
\right) + \left( \frac{1}{2} i G_1 + \frac{\pi}{\gamma} \right)
\left(
\begin{array}{c}
\frac{1}{12N^2\sigma_N^{(1/2)}(\Lambda_{1/2}^+)} \\
\frac{1}{12N^2\sigma_N^{(1)}(\Lambda_1^+)}
\end{array}
\right) \right.
\nonumber\\
\left.
+ \frac{1}{2} G_+^{-1}(0) B^+ \right]
+ (\Lambda^+\leftrightarrow\Lambda^-,B^+ \leftrightarrow B^-).
\end{eqnarray}
This result is valid for any positive $\bar{c}$ and $\tilde{c}$. As in
\cite{devega1}, no further progress can be made unless the factorization is
known explicitly, which is not the case so far. For $\bar{c}=\tilde{c}=c$
(conformal invariance) the problem simplifies and the final answer can be
obtained.
After some lengthy calculations one arrives at
\begin{eqnarray}\label{final1}
\fl
\frac{\Delta E_N}{2N} = \frac{\pi^2 c}{\gamma} \left[ -\frac{1}{12N^2}
+ \frac{1}{2} \left( \left( B_1^+ - B_2^+ \right)^2 + B_1^+B_2^+ -
(B_2^+)^2 \frac{\pi-3\gamma}{2(\pi-2\gamma)} \right) \right] +
(B^+ \leftrightarrow B^-)\nonumber\\
\end{eqnarray}
which can be brought into the form
\begin{eqnarray}\label{final2}
\fl
\frac{\Delta E_N}{2N} = \frac{2\pi c}{\gamma} \left\{ -\frac{\pi}{6}
\frac{2}{4N^2} + \frac{2\pi}{4N^2} \left[ \frac{1}{4}(1-2\nu)S_z^2
+ \frac{1}{4} \left( H_{1/2} - \frac{H_1}{2} \right)^2
\right. \right.
\nonumber\\
\left. \left.
+(\Delta^{(1/2)})^2 - \Delta^{(1/2)}\Delta^{(1)} + \frac{1}{2}
(\Delta^{(1)})^2 \frac{1-\nu}{1-2\nu} \right] \right\}.
\end{eqnarray}
For $\bar{c}\not=\tilde{c}$ we expect the result to be much more complicated
but of the same order $1/N^2$.
As above, the momentum correction follows from relation (\ref{en_corr}) after
replacing $\varepsilon_h^{(1/2)}=2\bar{c}\pi s(\lambda)$ by
$p_h^{(1/2)}=\arctan e^{\pi\lambda/\gamma}$ and
$\varepsilon_h^{(1)}=2\tilde{c}\pi s(\lambda)$ by
$p_h^{(1)}=\arctan e^{\pi\lambda/\gamma}$. The values of the hole momenta are
taken from \cite{devega}, where an additional factor $\frac{1}{2}$ must be
introduced to take into account our definition of momenta (\ref{mom}).
As in section 3 comparing the asymptotics for $\bar{c}=\tilde{c}$ gives the
speed of sound and together with equation (\ref{final1}) the momentum
correction
\begin{eqnarray}\label{mom_corr}
\fl
\frac{\Delta P_N}{2N} = \frac{\pi}{2} \left\{ \frac{1}{2}\left[
\left( B_1^- - B_2^- \right)^2 - \left( B_1^+ - B_2^+ \right)^2
\right. \right.
\nonumber\\
\left. \left. + B_1^-B_2^-
- B_1^+B_2^+ - ( (B_1^-)^2-(B_1^+)^2 ) \frac{\pi-3\gamma}{2(\pi-2\gamma)}
\right] \right\} + \mbox{const}.
\end{eqnarray}
Disregarding the constant (multiple of $\pi$) we have
\begin{eqnarray}\label{mom_corr2}
\Delta P_N = \frac{\pi}{2} \left\{ -\Delta^{(1/2)} \left( H_{1/2}-\frac{H_1}{2}
\right) -\Delta^{(1)}\left( \frac{H_1}{2}-\frac{H_{1/2}}{2}\right) \right\}.
\end{eqnarray}
\section{Conclusions}
In sections 3 and 4 we have determined the finite-size corrections of our model
for two different cases. Equations (\ref{DE_3}) and (\ref{DP_res}) give the
result for region (\ref{phase}), while equations (\ref{final2}) and
(\ref{mom_corr2}) are valid for $\bar{c}=\tilde{c}=c>0$. In both cases we have
conformal invariance. That seems to be the reason why many terms cancel during
the calculations and the result is remarkably simple.
From the asymptotics of $\varepsilon_h$ and $p_h$ we find
\begin{equation}\label{sos1}
v_s = - \frac{2\pi}{\pi-\gamma} \frac{\bar{c}+2\tilde{c}\cos\tilde{\gamma}}
{1+2\cos\tilde{\gamma}} > 0
\end{equation}
for the speed of sound and from relation (\ref{DE_3}) the value of $1$ for the
central charge.
Formula (\ref{sos1}) generalizes the former result obtained for equal (but
negative) couplings \cite{doerfel}. Analogously, one finds
\begin{equation}\label{sos2}
v_s = \frac{2c\pi}{\gamma}
\end{equation}
and the central charge equals $2$ \cite{devega}.
For completeness we mention the heat capacities per site at low temperature
given by
\begin{equation}\label{hc1}
C = \frac{c_v T \pi}{3 v_s}
\end{equation}
where we have denoted the central charge by $c_v$ to avoid confusion.
Therefore
\begin{equation}\label{hc2}
C = - \frac{1+2\cos\tilde{\gamma}}{\bar{c}+2\tilde{c}\cos\tilde{\gamma}}
\frac{(\pi-\gamma)T}{6}
\end{equation}
and
\begin{equation}\label{hc3}
C = \frac{\gamma T}{3 c}
\end{equation}
in agreement with former results \cite{doerfel}. Formula (\ref{hc2})
generalizes our calculations for $\bar{c}=\tilde{c}$.
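Inserting the two speeds of sound (\ref{sos1}) and (\ref{sos2}), with central
charges $1$ and $2$ respectively, into (\ref{hc1}) indeed reproduces
(\ref{hc2}) and (\ref{hc3}); a short sympy sketch:

```python
import sympy as sp

T, gamma, gt, cb, ct, c = sp.symbols('T gamma gamma_t cbar ctilde c', positive=True)

# Speeds of sound, equations (sos1) and (sos2)
v_s1 = -2*sp.pi/(sp.pi - gamma) * (cb + 2*ct*sp.cos(gt)) / (1 + 2*sp.cos(gt))
v_s2 = 2*c*sp.pi/gamma

# C = c_v * T * pi / (3 v_s), equation (hc1), with c_v = 1 and c_v = 2
C1 = 1 * T * sp.pi / (3 * v_s1)
C2 = 2 * T * sp.pi / (3 * v_s2)

target1 = -(1 + 2*sp.cos(gt))/(cb + 2*ct*sp.cos(gt)) * (sp.pi - gamma)*T/6
target2 = gamma*T/(3*c)

assert sp.simplify(C1 - target1) == 0   # reproduces (hc2)
assert sp.simplify(C2 - target2) == 0   # reproduces (hc3)
```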
The dimensions $x_n$ and the spins $s_n$ of the primary operators follow from
formulae (\ref{DE_3}) and (\ref{DP_res}).
\begin{equation}\label{po1}
x_n = \frac{S_z^2 \nu}{2} + \frac{\Delta^2}{2\nu}
\end{equation}
\begin{equation}\label{sop1}
s_n = S_z |\Delta|
\end{equation}
for negative coupling. It is interesting to compare this with the result for
the XXZ($\frac{1}{2}$) model, where $\nu$ is simply replaced by $1-\nu$.
Formulae (\ref{po1}) and (\ref{sop1}) are to be understood in the sense that,
for general excited states (arbitrary holes and complex roots), $S_z$ and
$\Delta$ are replaced by more complicated integers \cite{woy}.
For positive couplings
\begin{eqnarray}\label{po2}
\fl
x_n = \frac{1}{4}(1-2\nu)S_z^2 + \frac{1}{4} \left( H_{1/2}-\frac{H_1}{2}
\right)^2 + \left( \Delta^{(1/2)} \right)^2 - \Delta^{(1/2)}\Delta^{(1)}
+ \frac{1}{2} \left( \Delta^{(1)} \right)^2 \frac{1-\nu}{1-2\nu}.
\end{eqnarray}
When $\nu\to0$ the first two terms agree with paper \cite{martins} while the
other terms are connected with the asymmetry of the state which was not
considered there. The dimension of a general primary operator depends on four
integer numbers. The second of them measures an asymmetry between the number of
holes among real roots or strings respectively. Once more, for more complicated
states the integers in equation (\ref{po2}) are replaced by other ones
depending on the concrete structure of the state. We mention that relation
(\ref{po2}) can be ``diagonalized'' to resemble the expression of two models both
of central charge $1$.
\begin{eqnarray}\label{decomp}
\fl
x_n = \frac{1}{2} \frac{(1-2\nu)}{2} S_z^2 + \frac{1}{2}
\frac{\left( H_{1/2}-H_1/2 \right)^2}{2} + \frac{1}{2} 2\Delta_1^2
+ \frac{1}{2} \frac{2}{1-2\nu} \Delta_2^2
\end{eqnarray}
with
\begin{eqnarray}\label{exp}
\Delta_1 = \Delta^{(1/2)} - \frac{\Delta^{(1)}}{2},
\nonumber\\
\Delta_2 = \frac{\Delta^{(1)}}{2}.
\end{eqnarray}
Expression (\ref{decomp}) becomes even more symmetric if one remembers the
first equation of (\ref{h_num}) and the definition (\ref{delta}). Then twice a
certain number of holes is linked with its appropriate asymmetry.
From formula (\ref{mom_corr2}) the spins of the primary operators are
\begin{equation}\label{sop2}
s_n = \left| \Delta^{(1/2)} \left( H_{1/2} - \frac{H_1}{2} \right)
+ \Delta^{(1)} \left( \frac{H_1}{2} - \frac{H_{1/2}}{2} \right) \right|
\end{equation}
and after using (\ref{exp})
\begin{equation}\label{sop3}
s_n = \left| \Delta_1 \left( H_{1/2} - \frac{H_1}{2} \right)
+ \Delta_{2} S_z \right|
\end{equation}
with the same symmetry as relation (\ref{decomp}).
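Both identifications can be verified symbolically; a sympy sketch using the
first equation of (\ref{h_num}) and the definitions (\ref{exp}):

```python
import sympy as sp

nu, S_z, H12, D12, D1v = sp.symbols('nu S_z H_half Delta_half Delta_one')
H1 = 2*S_z                                   # first equation of (h_num)
Delta1 = D12 - D1v/2                         # definitions (exp)
Delta2 = D1v/2

# x_n from (po2) versus the "diagonalized" form (decomp)
x_po2 = (sp.Rational(1,4)*(1-2*nu)*S_z**2 + sp.Rational(1,4)*(H12 - H1/2)**2
         + D12**2 - D12*D1v + sp.Rational(1,2)*D1v**2*(1-nu)/(1-2*nu))
x_dec = (sp.Rational(1,2)*(1-2*nu)/2*S_z**2 + sp.Rational(1,2)*(H12 - H1/2)**2/2
         + sp.Rational(1,2)*2*Delta1**2 + sp.Rational(1,2)*2/(1-2*nu)*Delta2**2)
assert sp.simplify(x_po2 - x_dec) == 0

# spin s_n: the argument of (sop2) versus that of (sop3)
arg_sop2 = D12*(H12 - H1/2) + D1v*(H1/2 - H12/2)
arg_sop3 = Delta1*(H12 - H1/2) + Delta2*S_z
assert sp.simplify(arg_sop2 - arg_sop3) == 0
```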
Finally, we shall determine the magnetic susceptibilities per site at zero
temperature and vanishing field from our finite-size results. That can be done,
because the states we have considered include those with minimal energy for a
given $S_z$ (magnetization) \cite{yangyang}. Differentiating twice the energy
with respect to $S_z$ gives the inverse susceptibility.
Hence
\begin{equation}\label{sus1}
\chi = \frac{\pi-\gamma}{4\pi\gamma} \left( -\frac{1+2\cos\tilde{\gamma}}
{\bar{c}+2\tilde{c}\cos\tilde{\gamma}} \right) = \frac{1}{v_s}\frac{1}{2\gamma}
\end{equation}
and
\begin{equation}\label{sus2}
\chi = \frac{1}{2c\pi} \frac{1}{\pi-2\gamma}=\frac{1}{v_s}\frac{1}{\pi-2\gamma}
\end{equation}
for the two cases respectively in agreement with earlier results
\cite{martins,devega1,doerfel}. Expression (\ref{sus1}) for general couplings
in region (\ref{phase}) had not been derived before.
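The second equality in (\ref{sus1}), $\chi = 1/(2\gamma v_s)$ with $v_s$
taken from (\ref{sos1}), can be checked symbolically (a sympy sketch):

```python
import sympy as sp

gamma, gt, cb, ct = sp.symbols('gamma gamma_t cbar ctilde', positive=True)

# Speed of sound, equation (sos1)
v_s = -2*sp.pi/(sp.pi - gamma) * (cb + 2*ct*sp.cos(gt)) / (1 + 2*sp.cos(gt))

# First form of the susceptibility, equation (sus1)
chi = (sp.pi - gamma)/(4*sp.pi*gamma) * (-(1 + 2*sp.cos(gt))/(cb + 2*ct*sp.cos(gt)))

assert sp.simplify(chi - 1/(2*gamma*v_s)) == 0
```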
\section*{Acknowledgment}
One of us (St M) would like to thank H. J. de Vega for helpful discussions.
\section*{References}
\section{Introduction}\label{sec:intro}
Studies of $0^{+} \rightarrow 0^{+}$ superallowed $\beta$ decays are
compelling because of their simplicity.
The axial-vector decay strength
is zero for such decays, so
the measured $ft$ values are directly related to the weak vector
coupling constant through
the following equation:
\begin{equation}
ft = \frac{K}{G_{\mbox{\tiny V}}^{\prime 2} \langle M_{\mbox{\tiny F}} \rangle^2} ,
\label{eq:ft}
\end{equation}
\noindent where $K$ is a known constant, $G_{\mbox{\tiny V}}^{\prime}$ is the
effective vector coupling constant and $\langle M_{\mbox{\tiny F}} \rangle$ is the Fermi
matrix element between analogue states. Eq.~(\ref{eq:ft}) is only
accurate at the few-percent level since it omits calculated
correction terms. Radiative corrections, $\delta_{R}$, modify the
decay rate by about 1.5\% and charge-dependent corrections,
$\delta_{C}$, modify the ``pure'' Fermi matrix element by about 0.5\%.
Thus, Eq.~(\ref{eq:ft}) is transformed\cite{Ha90} into the
equation:
\begin{equation}
{\cal F} t = ft (1 + \delta_{R}) (1 - \delta_{C}) =
\frac{K}{G_{\mbox{\tiny V}}^{\prime 2} \langle M_{\mbox{\tiny F}} \rangle^2} .
\label{eq:Ft}
\end{equation}
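As a numerical illustration of Eq.~(\ref{eq:Ft}) (a hypothetical helper
function; the input values are the $^{14}$O entries of
Table~\ref{tab:tabl1}):

```python
def corrected_Ft(ft, delta_R, delta_C):
    """Apply Eq. (2): F t = f t (1 + delta_R)(1 - delta_C)."""
    return ft * (1 + delta_R) * (1 - delta_C)

# ^14O values from Table 1: ft = 3038.1 s, delta_R = 1.26%, delta_C = 0.22%
Ft_14O = corrected_Ft(3038.1, 0.0126, 0.0022)
print(round(Ft_14O, 1))   # close to the tabulated 3069.7 s
```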
Accurate experimental data on $Q_{EC}$-values, half-lives and branching
ratios combined with the two correction terms then permit precise
tests of the Conserved Vector Current hypothesis, via the constancy
of ${\cal F} t$ values irrespective of the $0^{+} \rightarrow 0^{+}$ decay
studied.
These data also yield a value for $G_{\mbox{\tiny V}}^{\prime}$, which in combination
with the weak vector coupling constant for the purely leptonic muon
decay,
provide a value for $V_{ud}$, the up-down quark mixing element
of the CKM matrix. Together with the smaller elements, $V_{us}$ and
$V_{ub}$, this matrix element provides a stringent test of the
unitarity of the CKM matrix. Any violation of unitarity would
signal the need for physics beyond the Standard Model, such as
extra $Z$ bosons or the existence of right-handed currents.
\section{The experiments}\label{sec:expts}
The required experimental data on $0^{+} \rightarrow 0^{+}$ beta decays
fall into three categories: {\it (i)} $Q_{EC}$ values, {\it (ii)}
half-lives and, {\it (iii)} branching ratios. In order to be useful,
they need to be determined to an accuracy of about 40 parts per million
(ppm) for $Q_{EC}$ and 200 ppm for the other two, a challenge that
requires ingenuity and rigorous procedures. At present, nine superallowed
beta emitters meet this requirement. Fig.~\ref{fig:fig1} shows the
degree to which the necessary experimental data are known in these
nine cases. Specific examples of precise $Q_{EC}$-value, half-life
and branching-ratio measurements,
with their associated problems and techniques, are
given below.
\begin{figure}[t]
\centerline{
\epsfxsize=5.0in
\epsfbox{fig1.ps}
}
\fcaption{Contributions to the ${\cal F} t$-value uncertainty from the
uncertainties in the $Q_{EC}$-values, half-lives and branching
ratios of the nine precisely known $0^{+} \rightarrow 0^{+}$
superallowed beta emitters. The arrows indicate values that
exceed the scale of the plot.}
\label{fig:fig1}
\end{figure}
\subsection{$Q_{EC}$ values} \label{subsec:qec}
Most precision $Q_{EC}$-value measurements employ $(p,n)$ or
$(^{3}He,t)$ reactions, the inverse beta-decay process, which provide
a simple and direct relation to the beta-decay energy. Such reaction
$Q$ values can be determined for the nine well-known superallowed
beta emitters since they all have stable daughter nuclides that will
form the target material in the reaction studies. In three cases,
$^{26m}$Al, $^{34}$Cl and $^{42}$Sc,
the nuclide with one less proton than the
$\beta$-decay parent is stable (or long lived) as well.
Measurements of both the
$(p,\gamma )$ and $(n,\gamma )$ reaction $Q$-values with targets
comprised of these stable nuclides also provide a direct relation to the
$\beta$-decay $Q_{EC}$ value.
In all these measurements the main difficulty lies in calibrating the
projectile and/or ejectile energies. The Auckland group has
frequently used the $(p,n)$ reaction and eliminated the need for an
ejectile calibration by measuring the sensitive and rapid onset of
neutron emission just at the reaction threshold\cite{Ba92}. They
calibrate their proton energy by passing the beam through a magnetic
spectrometer, with narrow entrance and exit slits, before it
impinges on the target. The spectrometer is calibrated before and after
runs with beams of surface ionized K or Cs from an ion source that
is inserted before the spectrometer at a point traversed
by the proton beam.
The source extraction voltage is adjusted until
the alkali beam follows the same path through the spectrometer as
the proton beam and then the energy of the latter
can easily be deduced from
the applied extraction voltage. The spectrometer is NMR stabilized
and never changed between or during runs and calibrations. The
small final adjustment of the proton beam energy to map out the
$(p,n)$ reaction threshold is done by applying a bias on the target.
The threshold measurements and the calibrations thus all revert to
precise readout of voltages.
At Chalk River the complications of precise, absolute $Q_{EC}$-value
measurements have been avoided by measuring instead the differences
in $Q_{EC}$ values between superallowed beta emitters\cite{Ko87}.
The target material is a mixture of two beta-decay daughter nuclides,
the $(^{3}He,t)$ reaction is used and the outgoing tritons are
analysed by a Q3D spectrometer. Triton groups originating from both
types of target nuclei are therefore present in the spectrometer
focal plane. As in the Auckland measurements the target can be
biased, in this case by up to 150 kV. When a bias of $X$ volts is
applied to the target the incoming, doubly-charged $^{3}$He
projectiles are retarded by $2X$ eV whereas the outgoing,
singly-charged
tritons are reaccelerated by $X$ eV. The net effect of the target bias
is a shift of the triton group by $X$ eV along the focal plane.
In these types of $Q_{EC}$-value difference measurements the target
bias is adjusted until the shifted position of a triton peak
from one target component exactly coincides with the unshifted
position of a triton group from the other component. When matched,
the trajectories of both triton groups through the spectrometer
are identical and a detailed knowledge of the spectrometer optics
is not required. The $Q_{EC}$-value difference determination
is then reduced to measuring the target bias. If the two
selected, matched triton groups do not correspond to the final product
nuclei being in their ground states,
then a knowledge of the excitation energies
of the reaction products is also required.
The $Q_{EC}$-value difference measurements
nicely complement the absolute $Q_{EC}$-value measurements and
together they result in a precise $Q_{EC}$-value grid with more than
one piece of information for each superallowed $\beta$ emitter,
a situation
that is especially valuable when probing data for possible
systematic biases.
\subsection{Half-lives} \label{subsec:hls}
Measurements of half-lives appear deceptively simple but when high
precision is required they are fraught with possible biases.
Precise half-life measurements of short-lived ($\sim$1 s) nuclides
pose interesting and unique difficulties\cite{Ha92}. Many samples
need to be counted, with high initial rates, in order to obtain
adequate statistics. The purity of the radioactive samples must be
ensured, so they need to be prepared with on-line isotope separators.
This is a major technical challenge because of the low
volatility and short
half-life of many of the superallowed beta emitters. The detector
must be capable of high-rate counting and have a high efficiency.
It must also have a very low-energy threshold so as to minimize its
sensitivity to possible, minor gain changes and pile up.
At high rates the necessary
correction for deadtime becomes the most worrisome aspect of the
counting electronics. At Chalk River\cite{Ha92} the samples are
prepared with an isotope separator equipped with a high temperature
He-jet ion source. The detector used is a $4\pi$ gas counter with
a 92\% efficiency for positrons. An instantly retriggerable gate
generator is used to provide a well-defined, non-extendable pulse
width, which introduces a dominant known deadtime at least five times
longer than any preceding electronics deadtime.
An accurate result is not guaranteed even if the problems with sample
preparation, detector and electronics have all been addressed because
of the possible bias introduced by the analysis procedure, a source
of potential problems that is often overlooked. The procedure
employed is based on Poisson statistics, but the counting conditions
do not strictly satisfy the Poisson criteria in that the sample size
is relatively small, often less than $10^4$ atoms, and is counted with
high efficiency until only background is visible. Furthermore,
the variance for each data point is not the Poisson variance because
of the perturbations introduced by dead time,
nor is it easily calculable even though the
dead-time losses themselves are properly accounted for. Numerous
samples are counted in a typical experiment. Normally, one might
expect to obtain the final result by averaging the individual
half-lives obtained from a decay analysis of each sample or by
adding together the data from all samples and then performing
a decay analysis, but neither method is strictly correct.
The analysis procedure employed by the Chalk River group uses
modified Poisson variances and is based on a simultaneous fit of up
to 500 decay curves with a single common half-life, but with individual
intensity parameters for each decay curve. Because an exact treatment
is not possible, we have evaluated the bias introduced by our analysis
simplifications by using hypothetical data generated to simulate closely
the experimental counting conditions.
The analysis of the hypothetical data should return the half-life
from which the data were generated.
Our analysis has been shown
to be correct at the 0.01\% level. Our tests with the event generator
have also shown that different analysis procedures, based on
reasonable assumptions, may bias the outcome by more
than 0.1\%, well outside the accuracy required for a $0^{+} \rightarrow
0^{+}$ superallowed beta emitter half-life.
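The idea of a simultaneous fit with one common half-life can be sketched as
follows (synthetic data with an assumed half-life and plain least-squares
weights, not the modified Poisson variances of the actual analysis):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Hypothetical counting setup: many samples, 0.1 s bins, flat background.
t_half_true = 0.7                      # s, assumed for the simulation
lam_true = np.log(2.0) / t_half_true
dt = 0.1
t = np.arange(0.0, 5.0, dt)            # bin left edges
n_samples = 20
n0 = rng.uniform(5e3, 2e4, n_samples)  # initial atoms per sample
bkg = 2.0                              # background counts per bin

counts = [rng.poisson(a * lam_true * np.exp(-lam_true * t) * dt + bkg)
          for a in n0]

def residuals(p):
    # p[0] is the shared decay constant, p[1:] the per-sample intensities
    lam, amps = p[0], p[1:]
    res = []
    for c, a in zip(counts, amps):
        model = a * lam * np.exp(-lam * t) * dt + bkg
        res.append((c - model) / np.sqrt(np.maximum(model, 1.0)))
    return np.concatenate(res)

p0 = np.concatenate([[1.0], np.full(n_samples, 1e4)])
fit = least_squares(residuals, p0)
t_half_fit = np.log(2.0) / fit.x[0]
print(t_half_fit)   # recovers the simulated half-life to better than a percent
```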
\subsection{Branching ratios} \label{subsec:brs}
The last experimental quantity required is the precise magnitude
of the superallowed branch for each $0^{+} \rightarrow 0^{+}$ emitter.
For eight of the nine well-known cases (excluding $^{10}$C) the
superallowed branch is the dominant one by far and other branches,
if known, are well below the 1\% level. The non-superallowed
branches seen so far have either been allowed Gamow-Teller transitions
to $1^{+}$ states in the daughter or been non-analogue transitions to
excited $0^{+}$ states in the daughter. The latter transitions,
although very weak, are of special interest because their magnitudes
are related to the size of one of the necessary calculated
charge-dependent corrections. Studies of such non-analogue transitions
are the only way so far to test the predictive power of those
calculations.
For the eight well-known $0^{+} \rightarrow 0^{+}$ cases where the
superallowed decay branch is dominant, a precise measurement of
its intensity is achieved by a sensitive, but less precise,
measurement of the weak, competing branches. The difficulty in
observing these branches stems from the intense background generated
by the prolific and energetic positrons from the superallowed
branch. At Chalk River a sensitive counting technique has been
developed where events in an HPGe detector, used to observe the
$\gamma$ rays resulting from non-superallowed beta transitions
to excited states in the daughter, are tagged by the direction of
coincident positrons seen in plastic scintillators. The majority
of unwanted events in the HPGe detector originate from positrons heading
towards the detector since they may interact with it directly or
through bremsstrahlung and annihilation-in-flight processes. Such
events are removed by a condition that HPGe events must be coincident
with positrons heading away from that detector. This condition leads to a
dramatic reduction of the background produced by positrons from the
dominant, superallowed ground-state branch, which is not
accompanied by $\gamma$ rays, whereas events from the excited-state
branches, which are accompanied by subsequent $\gamma$ rays, are
still efficiently recorded. The decays of six superallowed beta
emitters have been investigated with this technique, the results
for four of them have been published\cite{Ha94} so far. Gamow-Teller
transitions were observed in three cases and non-analogue transitions
in two cases. Very stringent upper limits for similar branches were
determined for the cases where none was observed.
\section{The theoretical corrections}\label{sec:theo}
The charged particles involved in a nuclear beta decay interact
electromagnetically with the field from the nucleus as well as with
each other. These interactions modify the decay when compared to
a case where only the weak force is involved and they thus need to
be accounted for by theoretical corrections. For positron decay
the electromagnetic interaction between the proton involved and the
nuclear field, an effect absent in a bare nucleon decay, results in
a charge-dependent correction, $\delta_{C}$, to the Fermi matrix
element. The similar interaction between the emitted positron
and the nuclear field is already accounted for in the calculation
of the statistical rate function, $f$. The interactions between the
charged particles themselves and their associated bremsstrahlung
emissions leads to a series of radiative corrections to the bare
beta-decay process. It has been found advantageous to group the
radiative corrections into two classes, those that are nuclear-structure
dependent, denoted $\delta_{R}$, and those that
are not, denoted $\Delta_{R}$.
The charge-dependent correction, $\delta_{C}$, arises from the fact
that both Coulomb and charge-dependent nuclear forces act to destroy
the isospin symmetry between the initial and final states in superallowed
beta decay. The odd proton in the initial state is less bound than the
odd neutron in the final state so their radial wavefunctions differ
and the wavefunction overlap is not perfect. Furthermore, the degree
of configuration mixing between the final state and other, excited $0^{+}$
states in the daughter is slightly different from the similar
configuration mixing in the parent, again resulting in an imperfect
overlap. As was mentioned in Sec.~\ref{subsec:brs}
the configuration-mixing
predictions have been tested against data on non-analogue $0^{+}
\rightarrow 0^{+}$ transitions. There is good agreement between
the data and the most recent calculations\cite{Ha94,TH95,To95,OB95}.
The radial wavefunction difference correction has been calculated with
Woods-Saxon wavefunctions and the shell model\cite{To95}, the
Hartree-Fock method and the shell model\cite{OB95} and, most
recently, the Hartree-Fock method and the Random Phase
Approximation\cite{SGS96}. In general, the three types of
calculations exhibit similar trends in the predicted values as a
function of the mass of the emitter, but the absolute values of $\delta_C$
from ref.\cite{To95} differ on average from those of ref.\cite{OB95}
by 0.07\%.
The nuclear-structure dependent radiative correction, $\delta_{R}$,
depends on the energy released in the beta decay and consists of a
series of terms of order $Z^{m} \alpha^{n}$ (with $m < n$) where
$Z$ is the proton number of the daughter nucleus and $\alpha$ is
the fine-structure constant. The first three terms of this converging
series have been calculated\cite{TH95}. To them is added a
nuclear-structure dependent, order $\alpha$, axial-vector
term, denoted by $(\alpha / \pi ) C_{NS}$ in ref.\cite{TH95},
to form the total
correction, $\delta_{R}$.
The nuclear-structure independent radiative correction, $\Delta_{R}$,
is dominated by its leading logarithm, $(2 \alpha /\pi ) {\rm ln}
(m_{\mbox{\tiny Z}} / m_p)$, where $m_p$ and $m_{\mbox{\tiny Z}}$ are the masses
of the proton and $Z$-boson.
It also incorporates an axial-vector term\cite{TH95},
$(\alpha /2 \pi ) [{\rm ln}(m_p/m_{\mbox{\tiny A}} ) + 2 C_{{\rm Born}}]$,
whose principal uncertainty is the value assigned to the
low-energy cut-off mass, $m_{\mbox{\tiny A}}$. We adopt\cite{To95} a range from
half to twice the central value, which is given by the $A_1$-resonance mass,
$m_{\mbox{\tiny A}} = 1260$ MeV. The resulting
nucleus-independent radiative correction, $\Delta_{R}$,
then becomes
$(2.40 \pm 0.08)\%$.
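The dominance of the leading logarithm is easily checked numerically
(approximate particle masses assumed):

```python
import math

alpha = 1/137.036          # fine-structure constant (approximate)
m_Z, m_p = 91.19, 0.938    # Z-boson and proton masses in GeV (approximate)

leading_log = (2*alpha/math.pi) * math.log(m_Z/m_p)
print(f"{100*leading_log:.2f}%")   # roughly 2.1% of the total 2.40%
```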
\begin{table} [t]
\protect
\tcaption{Experimental results ($Q_{EC}$, $t_{1/2}$ and branching
ratio, $R$) and calculated corrections ($\delta_C$ and $\delta_R$)
for $0^{+} \rightarrow 0^{+}$ transitions.}
\label{tab:tabl1}
\footnotesize
\vspace{0.4cm}
\begin{center}
\begin{tabular}{cccccccc}
\hline \\[-3mm]
& $Q_{EC}$ & $t_{1/2}$ & $R$ & $ft$ & $\delta_C$ &
$\delta_R$ & ${\cal F} t$ \\
& (keV) & (ms) & (\%) & (s) & (\%) & (\%) & (s) \\
\hline \\[-3mm]
$^{10}$C & 1907.77(9) & 19290(12) & 1.4638(22) & 3040.1(51) &
0.16(3) & 1.30(4) & 3074.4(54) \\
$^{14}$O & 2830.51(22) & 70603(18) & 99.336(10) & 3038.1(18) &
0.22(3) & 1.26(5) & 3069.7(26) \\
$^{26m}$Al & 4232.42(35) & 6344.9(19) & $\geq$ 99.97
& 3035.8(17) &
0.31(3) & 1.45(2) & 3070.0(21) \\
$^{34}$Cl & 5491.71(22) & 1525.76(88) & $\geq$ 99.988
& 3048.4(19) &
0.61(3) & 1.33(3) & 3070.1(24) \\
$^{38m}$K & 6043.76(56) & 923.95(64) & $\geq$ 99.998
& 3047.9(26) &
0.62(3) & 1.33(4) & 3069.4(31) \\
$^{42}$Sc & 6425.58(28) & 680.72(26) & 99.9941(14)
& 3045.1(14) &
0.41(3) & 1.47(5) & 3077.3(24) \\
$^{46}$V & 7050.63(69) & 422.51(11) & 99.9848(13) & 3044.6(18) &
0.41(3) & 1.40(6) & 3074.4(27) \\
$^{50}$Mn & 7632.39(28) & 283.25(14) & 99.942(3) & 3043.7(16) &
0.41(3) & 1.40(7) & 3073.8(27) \\
$^{54}$Co & 8242.56(28) & 193.270(63) & 99.9955(6)
& 3045.8(11) &
0.52(3) & 1.39(7) & 3072.2(27) \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Results}\label{sec:resu}
The measured data on $Q_{EC}$-values, half-lives and branching ratios
for the nine precisely known $0^{+} \rightarrow 0^{+}$ emitters
as well as the calculated charge-dependent and radiative corrections
are given in Table~\ref{tab:tabl1}. The deduced ${\cal F} t$ values for
the nine cases are also shown in Fig.~\ref{fig:fig2}.
\begin{figure}[t]
\centerline{
\epsfxsize=4.0in
\epsfbox{fig2.ps}
}
\fcaption{${\cal F} t$ values for the nine well-known cases and the
best least-squares one-parameter fit.}
\label{fig:fig2}
\end{figure}
It is evident that the nine separate cases are in good agreement,
as is expected from CVC. The average ${\cal F} t$ value is $3072.3 \pm 1.0$~s,
with a reduced chi-square of 1.20. The constancy of the ${\cal F} t$ values
from the nine individual cases establishes that the CVC hypothesis,
as tested through nuclear beta decay, is accurate at the $4 \times
10^{-4}$ level.
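The quoted average can be reproduced from the tabulated values (a simple
weighted mean; small differences arise from rounding of the table entries):

```python
import numpy as np

# Ft values and uncertainties (s) for the nine cases, from Table 1
Ft = np.array([3074.4, 3069.7, 3070.0, 3070.1, 3069.4,
               3077.3, 3074.4, 3073.8, 3072.2])
err = np.array([5.4, 2.6, 2.1, 2.4, 3.1, 2.4, 2.7, 2.7, 2.7])

w = 1.0 / err**2
mean = np.sum(w * Ft) / np.sum(w)
chi2_red = np.sum(w * (Ft - mean)**2) / (len(Ft) - 1)
print(mean, chi2_red)   # close to the quoted 3072.3 s and 1.20
```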
The weak vector coupling constant $G_{\mbox{\tiny V}}^{\prime} = G_{\mbox{\tiny V}} (1 + \Delta_{R}
)^{1/2} = (K/2 {\cal F} t)^{1/2}$, deduced from superallowed decay is
$G_{\mbox{\tiny V}}^{\prime}/(\hbar c)^3 = (1.1496 \pm 0.0004) \times 10^{-5}$
GeV$^{-2}$, where the error on the average ${\cal F} t$ value has been
doubled\cite{To95} to include an estimate of the systematic
uncertainties in the calculated correction, $\delta_{C}$.
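Numerically, $G_{\mbox{\tiny V}}^{\prime}$ follows from the average ${\cal F} t$ value
(the constant $K/(\hbar c)^6 \approx 8120.27 \times 10^{-10}$~GeV$^{-4}$s is
an assumed literature value, not quoted above):

```python
import math

K = 8120.27e-10            # GeV^-4 s, assumed literature value of K/(hbar c)^6
Ft = 3072.3                # s, average over the nine cases

G_V_prime = math.sqrt(K / (2 * Ft))   # in units of (hbar c)^-3, i.e. GeV^-2
print(G_V_prime)           # about 1.1496e-5 GeV^-2
```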
The $V_{ud}$ quark mixing element of the CKM matrix
is defined as $V_{ud} = G_{\mbox{\tiny V}} / G_{\mu}$, where $G_{\mu}$ is the
coupling constant deduced from the purely leptonic muon decay.
We arrive at $V_{ud} = 0.9740 \pm 0.0005$. With values of the
other two elements of the first row of the CKM matrix taken
from ref.\cite{PDG96} the unitarity test produces the following
result
\begin{equation}
\! \mid \! V_{ud} \! \mid \! ^2 +
\! \mid \! V_{us} \! \mid \! ^2 +
\! \mid \! V_{ub} \! \mid \! ^2 = 0.9972 \pm 0.0013 .
\label{eq:utnuc}
\end{equation}
\noindent The discrepancy with unity, shown in Fig.~\ref{fig:fig3},
is more than two standard deviations.
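The unitarity sum is elementary to reproduce; in this sketch the magnitudes of $V_{us}$ and $V_{ub}$ are assumed PDG-96-era values, so the result agrees with the quoted sum only up to rounding.

```python
# CKM first-row unitarity sum; |Vus| and |Vub| are assumed PDG-96-era
# magnitudes, so agreement with the quoted 0.9972 is only up to rounding.
Vud = 0.9740
Vus, Vub = 0.2205, 0.0033

s = Vud**2 + Vus**2 + Vub**2
print(f"|Vud|^2 + |Vus|^2 + |Vub|^2 = {s:.4f}")
# deficit from unity in units of the quoted +-0.0013 uncertainty
print(f"deviation from unity: {(1.0 - s) / 0.0013:.1f} sigma")
```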
\begin{figure}[t]
\centerline{
\epsfxsize=5.0in
\epsfbox{fig3.ps}
}
\fcaption{Allowed regions of the vector coupling constant and
axial-vector coupling constant from nuclear superallowed beta
decays and neutron decay. The scale at the upper part of the figure
translates the $G_{\mbox{\tiny V}}$ scale into the corresponding unitarity sum
of the first row of the CKM matrix.}
\label{fig:fig3}
\end{figure}
Precision measurements of non $0^{+} \rightarrow 0^{+}$ superallowed
beta decays can also yield ${\cal F} t$ and $V_{ud}$ values. The decays
of $^{19}$Ne and the neutron have been studied extensively but because
their superallowed decay branches contain Gamow-Teller components,
beta-asymmetry measurements are required to separate them from the
Fermi components. Consequently the precision achieved so far in these
two cases is less than that achieved in the $0^{+} \rightarrow 0^{+}$
decay measurements. The first row unitarity test with a $V_{ud}$
value based on the most current $\beta$-decay asymmetry measurement
of $^{19}$Ne\cite{Jo96} is also shown in Fig.~\ref{fig:fig3}.
It is in good agreement with the $0^{+} \rightarrow 0^{+}$
data, which further supports the CVC hypothesis but still leaves a
unitarity problem.
The $^{19}$Ne data also yields a result for $G_{\mbox{\tiny A}} \langle M_{\mbox{\tiny GT}} \rangle
$, which can be used to deduce a value for the axial-vector coupling
constant, $G_{\mbox{\tiny A}}$. However, unlike the case of the neutron,
the Gamow-Teller matrix element, $\langle M_{\mbox{\tiny GT}} \rangle$, is not
explicitly given by theory but needs to be calculated, for example,
with the shell model. Consequently a very high precision is not
attainable for $G_{\mbox{\tiny A}}$ from $^{19}$Ne decay studies and only the
$G_{\mbox{\tiny V}}$ value is shown in Fig.~\ref{fig:fig3}.
The results for $G_{\mbox{\tiny V}}$ and $G_{\mbox{\tiny A}}$ from the neutron decay
studies\cite{TH95} are also shown in Fig.~\ref{fig:fig3}. With
$V_{ud}$ based on the neutron studies the unitarity test also fails,
but now the sum of the matrix elements is too large.
The current status is thus far from ideal. All three types of data,
nuclear superallowed beta decay, $0^{+} \rightarrow 0^{+}$ and non
$0^{+} \rightarrow 0^{+}$, and the decay of the neutron, result in a
failure to meet the unitarity condition. Only two of the three
types of data agree among themselves. However, it is worth
pointing out that the different types of data have their own
particular strengths and weaknesses. The strengths of the
superallowed decay studies are the large number of cases, which dilutes
the effect of any single erroneous measurement, and their mutual consistency.
The weakness is the number, magnitude and complexity of the necessary
calculated corrections. A large change in the
${\cal F} t$ value deduced from superallowed beta emitters is therefore
unlikely to result from further experimental or theoretical work.
In contrast, the strength of the neutron decay studies is the
simplicity of the calculated corrections. The weakness is that
it is a single case with fewer measurements and, consequently, a
greater susceptibility to a single erroneous measurement.
(In fact, the two most recent $\beta$-asymmetry measurements
disagree.) The neutron case thus appears to have greater potential
but it is also the one most likely to see its ${\cal F} t$ value
move substantially from its present location.
\section{\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus
-.2ex}{2.3ex plus .2ex}{\sectionfont}}
\makeatother
\def{Figure}{{Figure}}
\def{Table}{{Table}}
\makeatletter
\long\def\@makecaption#1#2{
\vskip 10pt
\setbox\@tempboxa\hbox{{\small\bf#1:} \small#2}
\ifdim \wd\@tempboxa >\hsize {\small\bf#1:} \small#2\par
\else \hbox to\hsize{\hfil\box\@tempboxa\hfil} \fi}
\makeatother
\def15 truecm{15 truecm}
\def13 truecm{13 truecm}
\def9.5 truecm{9.5 truecm}
\def5.3 truecm{5.3 truecm}
\def7.5 truecm{7.5 truecm}
\def6.6 truecm{6.6 truecm}
\def7.5 truecm{7.5 truecm}
\def6.6 truecm{6.6 truecm}
\font\teneufm=eufm10 \font\seveneufm=eufm7 \font\fiveeufm=eufm5
\newfam\eufmfam
\textfont\eufmfam=\teneufm
\scriptfont\eufmfam=\seveneufm
\scriptscriptfont\eufmfam=\fiveeufm
\def\eufm#1{{\fam\eufmfam\relax#1}}
\newcommand{{\rm const.}}{{\rm const.}}
\def\romanic#1{{\uppercase\expandafter{\romannumeral #1}}}
\def{\eufm Z}{{\eufm Z}}
\def{\overline p}{{\overline p}}
\def{\overline K}{{\overline K}}
\def{\overline\Lambda}{{\overline\Lambda}}
\def{\overline\Sigma}{{\overline\Sigma}}
\def{et al.}{{et al.}}
\def{\it Nucl. Phys. }{{\it Nucl. Phys. }}
\def{\it Z. Phys. }{{\it Z. Phys. }}
\def{\it Phys. Rep. }{{\it Phys. Rep. }}
\def{\it Phys. Rev. }{{\it Phys. Rev. }}
\def{\it Phys. Rev. Lett. }{{\it Phys. Rev. Lett. }}
\def{\it Phys. Lett. }{{\it Phys. Lett. }}
\def{\it Rep. Prog. Phys. }{{\it Rep. Prog. Phys. }}
\def{\it Ann. Phys. (NY) }{{\it Ann. Phys. (NY) }}
\def\menton{{{\it``Quark Matter 1990''},
Menton, France, {\it Nucl. Phys.} {\bf A525} (1991) }}
\def\tucson{{Proceedings of the {\it Meeting on Hadronic Matter in Collision,
Tucson, Arizona, 1988}, ed. P.A. Carruthers, J. Rafelski,
World Scientific, Singapore, 1989 }}
\def\quarkgluonplasma{{ {\it ``Quark--Gluon--Plasma''},
Advanced Series on Directions in High Energy Physics, Vol. 6,
ed. R. C. Hwa, World Scientific, Singapore, 1990}}
\section*{Figure Captions}
Fig.1: The diagrams which contribute to the effective potential
up to the two-loop level.
(a) The quark loop contribution $V_{\rm quark}$
without the explicit interaction.
(b) The two-loop diagram $V_{\mbox{\scriptsize\rm q-g}}$
including the quark-gluon interaction.
Here, the curly line with a black dot denotes the nonperturbative gluon
propagator in the DGL theory.
\noindent
Fig.2: The total effective potential $V_{\rm eff}$ as a function of the infrared
quark mass $M(0)$.
The nontrivial minimum appears at $M(0) \sim 0.4$~GeV, which indicates
dynamical breaking of chiral symmetry.
\noindent
Fig.3: $V_{\rm quark}$, $V_{\rm conf}$, $V_{\rm Y}$ and $V_{\rm C}$ are shown as functions
of $M(0)$. The confinement part $V_{\rm conf}$
plays the dominant role in lowering the effective potential.
\noindent
Fig.4: Integrands $v_{\rm quark}$, $v_{\rm conf}$, $v_{\rm Y}$ and $v_{\rm C}$
of the effective potential are shown as functions of the Euclidean momentum $p^2$.
The confinement part $v_{\rm conf}$ is more significant than $v_{\rm Y}$ and $v_{\rm C}$
over the whole momentum region.
\end{document}
"file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz"
} |
proofpile-arXiv_065-358 | {
"file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz"
} |
|
\section{Why should we study rare charm decays? }
At HERA recent measurements of the charm production cross section
in $e p$ collisions at an
energy $\sqrt{s_{ep}} \approx 300$ GeV yielded a value of about
$1 \mu$b \cite{dstar-gp}.
For an integrated luminosity of 250 pb$^{-1}$,
one expects therefore about $25 \cdot 10^7$ produced c$\overline{\mbox{c}}$ \ pairs,
mainly through the boson-gluon fusion process.
This corresponds to a total of about
$30 \cdot 10^7$ neutral $D^{0}$,
$10 \cdot 10^7$ charged $D^{\pm}$,
some $5 \cdot 10^7$ $D_S$,
and about $5 \cdot 10^7$ charmed baryons.
A sizable fraction of this large number of $D$'s is accessible
via decays within a HERA detector, and thus
should be used to improve substantially our knowledge on
charmed particles.
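The event-count estimate above is simple unit arithmetic, sketched here using only the quoted cross section and luminosity:

```python
# N = sigma * L, with 1 microbarn = 1e6 pb
sigma_cc_pb = 1.0e6      # charm cross section, ~1 microbarn, in pb
lumi_pb_inv = 250.0      # integrated luminosity in pb^-1

n_ccbar = sigma_cc_pb * lumi_pb_inv
print(f"c-cbar pairs: {n_ccbar:.1e}")   # 2.5e+08, i.e. 25e7

# the quoted species split (30+10+5+5)e7 then follows from the
# 2 * 25e7 charm quarks and the usual fragmentation fractions
n_charm_quarks = 2 * n_ccbar
```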
There are several physics issues of great interest.
This report, however, covers only aspects related
to the decay of charmed mesons in rare decay channels, and
in this sense provides an update of the discussion
presented in an earlier workshop on HERA physics \cite{hera-91a}.
In the following we shall discuss these aspects, and
point out the theoretical expectations.
Based on experiences made at HERA with charm studies,
we shall present an estimate on the sensitivity
for the detailed case study of the search for the
rare decay $D^0 \rightarrow \mu^+ \mu^- $.
Other challenging aspects such as the production mechanism
and detailed comparisons with QCD calculations, or the use
of charmed particles in the extraction of proton and photon
parton densities, will not be covered here.
Possibly the most competitive future source of $D$-mesons is
the proposed tau-charm factory.
The continuing efforts
at Fermilab (photoproduction and hadroproduction experiments),
at CERN (LEP) and at Cornell(CESR),
which are presently providing the highest
sensitivities, are compared with the situation at HERA.
In addition, all these different approaches
provide useful and complementary information
on various properties in the charm system.
\section{Decay processes of interest}
\subsection{Leading decays }
The charm quark is the only heavy quark besides the b quark and can be used
to test the heavy quark symmetry \cite{rf-isgurw}
by measuring form factors or decay constants.
Hence, the $D$-meson containing a charmed quark is heavy as well
and disintegrates through a large number of decay channels.
The leading decays
$c \rightarrow s + q{\bar q}$ or
$c \rightarrow s + {\bar l} \nu$
occur with branching ratios of order a few \%
and allow studies of QCD mechanisms
in a transition range between high and very low energies.
Although experimentally very challenging, the search for
the purely leptonic decays
$D^{\pm} \rightarrow \mu^{\pm} \nu$ and an improved
measurement of $D_S^{\pm} \rightarrow \mu^{\pm} \nu$
should be eagerly pursued further,
since these decays
offer direct access to the meson decay constants $f_D$ and $f_{D_S}$,
quantities that can possibly be calculated accurately by lattice
gauge theory methods
\cite{rf-marti},\cite{rf-wittig}.
\subsection{ Singly Cabibbo suppressed decays (SCSD)}
Decays suppressed by a factor $\sin{\theta_C}$, the so-called
singly Cabibbo suppressed decays (SCSD),
are of the form
$c \rightarrow d u {\bar d}$ or
$c \rightarrow s {\bar s} u$.
Examples of SCSD, such as
$D \rightarrow \pi \pi$ or $ K \bar{K}$, have been observed
with branching ratios at the $10^{-3}$ level
(1.5 and 4.3 $\cdot 10^{-3}$, respectively)
\cite{rf-partbook}.
They provide information about the
CKM-matrix, and also are background
processes to be worried about in the search for rare decays.
\subsection{ Doubly Cabibbo suppressed decays and
$D^0 \longleftrightarrow {\bar D^0}$ mixing}
Doubly Cabibbo suppressed decays (DCSD) of the form
$c \rightarrow d {\bar s} u$ have
not been observed up to now\cite{rf-partbook},
with the exception
of the mode $D^0 \to K^+ \pi^- $, which has a branching
ratio of $(2.9 \pm 1.4) \cdot 10^{-4}$.
The existing upper bounds are at the level of a few $10^{-4}$,
with branching ratios expected at the level of $10^{-5}$.
These DCSD are particularly interesting from the QCD-point of view,
and quite a few predictions have been made\cite{rf-bigi}.
DCSD also act as one of the main background processes
to the $D^0 \leftrightarrow \bar{D^0} $ \ mixing and therefore must be well understood,
before the problem of mixing itself can be successfully attacked.
As in the neutral Kaon and B-meson system, mixing between the
$D^0$ and the $\bar{D^0}$ is expected to occur (with $\Delta C = 2$).
The main contribution is expected due to long distance effects, estimated
to be as large as about
$r_D \sim 5 \cdot 10^{-3}$
\cite{rf-wolf},
while the standard box diagram yields $r_D \sim 10^{-5}$
\cite{rf-chau}.
Here $r_D$ is the mixing parameter
$ r_D \simeq (1/2) \cdot ( \Delta M / \Gamma)^2 $, with contributions by the
DCSD neglected.
Recall that the DCSD pose a serious background source if only the
time-integrated spectra are studied. The two sources can, however, be
better separated,
if the decay time dependence of the events is recorded separately
(see e.g. \cite{rf-anjos}). More details on the prospect of
measuring mixing at HERA are given in \cite{yt-mixing}.
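The relation between the mixing parameter and the mass splitting is easily inverted; the sketch below (an illustration, with the DCSD contribution neglected as in the text) shows the $\Delta M / \Gamma$ values implied by the long- and short-distance estimates.

```python
import math

def r_mix(dm_over_gamma):
    """Time-integrated mixing rate r_D ~ (1/2)(dM/Gamma)^2, DCSD neglected."""
    return 0.5 * dm_over_gamma**2

def dm_over_gamma(r_d):
    """Invert r_D back to the ratio dM/Gamma."""
    return math.sqrt(2.0 * r_d)

print(dm_over_gamma(5e-3))   # ~0.1    (long-distance estimate)
print(dm_over_gamma(1e-5))   # ~4.5e-3 (standard box diagram)
```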
\subsection{ Flavour Changing Neutral Currents (FCNC)}
An important feature of the standard model is that {\it flavour
changing neutral currents (FCNC with $\Delta C =1$)}
only occur at the one loop level in the SM
{\it i.e.} through short distance contributions,
such as e.g. in penguin and box diagrams
as shown in figs.\ref{feyn-loop} and
\ref{feyn-penguin}.
These are transitions of the form
$s \rightarrow d + N$ or
$c \rightarrow u + N$, where $N$
is a non-hadronic neutral state such as $\gamma \ $ or $\ l{\bar l}$, and give
rise to the decays
$D \rightarrow \rho \gamma $, $D^0 \rightarrow \mu^+ \mu^- $, $D^+ \rightarrow \pi^+ \mu^+ \mu^- $ \ etc.
Although the relevant couplings are the same as those of leading decays,
their rates are very small as they are suppressed by
the GIM mechanism \cite{gim} and the unfavourable quark masses
within the loops.
The SM-prediction for the branching ratios are
of order $10^{-9}$ for $D^0 \to X l^+ l^-$ and
of $O(10^{-15})$ for $D^0 \to l^+ l^-$, due to additional
helicity suppression.
A summary of the expected branching ratios obtained from
calculations of the loop integrals (\cite{rf-willey}, \cite{rf-bigi},
\cite{hera-91a}, \cite{long-range})
using also the QCD- short distance
corrections available \cite{rf-cella} is given in
table \ref{tab-exp}.
However, FCNC are sensitive to new, heavy particles in the loops, and
above all, to new physics in general.
In addition to these short distance loop diagrams, there are contributions
from long distance effects, which might be even larger by several
orders of magnitude\cite{long-range}.
Examples are photon pole amplitudes
($\gamma$-pole)
and vector meson dominance (VMD) induced processes.
The $\gamma$-pole model (see fig.\ref{feyn-gpole})
in essence is a W-exchange decay with a
virtual photon radiating from one of the quark lines. The behaviour
of the amplitude depends on the spin state of the final state
particle (vector V or pseudoscalar P).
The dilepton mass distribution for
$D \to V l^+ l^-$ modes peaks at zero (small $Q^2$) since the photon
prefers to be nearly real. On the other hand, the pole amplitude
for $D \to P l^+ l^-$ decays vanishes for small dilepton masses
because $D \to P \gamma$ is forbidden by angular momentum
conservation.
The VMD model (see fig.\ref{feyn-gpole}b) proceeds through the
decay $D \to X V^0 \to X l^+ l^-$.
The intermediate vector meson $V^0$ ($\rho, \omega, \phi$)
mixes with a virtual photon which then couples to the lepton pair.
The dilepton mass spectrum therefore will exhibit poles at the
corresponding vector meson masses due to real $V^0$ mesons decaying.
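The pole structure of the VMD dilepton spectrum can be illustrated with a toy model; the Breit-Wigner form, the incoherent sum, and the widths used below are illustrative assumptions, not a fit to data.

```python
def bw(m, m0, gamma):
    """Breit-Wigner intensity (arbitrary normalisation)."""
    return 1.0 / ((m * m - m0 * m0)**2 + (m0 * gamma)**2)

# illustrative (mass, width) pairs in GeV for rho, omega, phi
poles = [(0.770, 0.150), (0.782, 0.0084), (1.019, 0.0044)]

def vmd_spectrum(m):
    """Toy dilepton mass spectrum: incoherent sum of the V0 poles."""
    return sum(bw(m, m0, g) for m0, g in poles)
```

Evaluating `vmd_spectrum` over the dilepton mass range shows the sharp enhancements at the $\omega$ and $\phi$ masses described in the text.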
Observation of FCNC processes at rates that exceed the
long distance contributions hence opens a window
into physics beyond the standard model.
Possible scenarios include leptoquarks or heavy neutral leptons
with sizable couplings to $e$ and $\mu$.
A measurement of such long distance contributions in the
charm sector is inherently
of interest, as it can be used to estimate similar effects
in the bottom sector \cite{long-d},
e.g. for the decay $b \to s \gamma$,
which was seen at the level of $2.3 \cdot 10^{-4}$.
A separation of short and long range contributions would allow
e.g. a determination of $\mid V_{td}/V_{ts} \mid$
from the ratio
$BR(B \to \rho \gamma) / BR(B \to K^* \gamma)$
and bears as such a very high potential.
\begin{figure}[ht]
\epsfig{file=feyn-loop.eps,width=9cm}
\caption{\it Example of an FCNC process in the standard model
at the loop level: $D^0 \rightarrow \mu^+ \mu^- $\ . }
\label{feyn-loop}
\end{figure}
\begin{figure}[ht]
\epsfig{file=feyn-box.eps,width=7.5cm}
\epsfig{file=feyn-penguin.eps,width=7.5 cm}
\caption{\it FCNC processes: short range contributions due to
box diagrams (a) or penguin diagrams (b).}
\label{feyn-penguin}
\end{figure}
\begin{figure}[ht]
\epsfig{file=feyn-gpole.eps,width=7.5cm}
\epsfig{file=feyn-vdm.eps,width=7.5cm}
\caption{\it FCNC processes : long range contributions due to
$\gamma$-pole amplitude (a) and vector meson dominance (b).}
\label{feyn-gpole}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Decay mode & Expected branching ratio \\
\hline
\hline
$ c \to u \gamma $ & $10^{-15} - 10^{-14}$ \\
$ D \to \rho \gamma $ & $10^{-7}$ \\
$ D \to \gamma \gamma $ & $10^{-10}$ \\
\hline
$ c \to u l {\bar l} $ & $5 \cdot 10^{-8}$ \\
$ D^+ \to \pi^+ e^+ e^- $ & $10^{-8}$ \\
$ D^0 \to \mu^+ \mu^- $ & $10^{-19}$ \\
\hline
\hline
\end{tabular}
\caption[Expectations for loop processes.]
{Expectations for branching ratios of loop processes
based on SM calculations, assuming the
BR of both $D \to \rho \rho $ and
$D \to \rho \pi$ to be below $10^{-3}$.}
\label{tab-exp}
\end{center}
\end{table}
\subsection{ Forbidden decays }
Decays which are not allowed
to all orders in the standard model, the {\it forbidden} decays,
are exciting signals of new physics.
Without claim of completeness, we shall list
here some of the more important ones:
\begin{itemize}
\item Lepton number (L) or lepton family (LF) number violation (LFNV)
in decays such as $D^0 \to \mu e$, $D^0 \to \tau e$.
It should be strongly emphasized that decays of $D$-mesons test
couplings complementary to those effective in K- or B-meson decays.
Furthermore, the charmed quark is the only possible charge 2/3
quark which allows
detailed investigations of unusual couplings.
These are often predicted to occur in models with
i) technicolour \cite{rf-masiero};
ii) compositeness \cite{rf-lyons};
iii) leptoquarks \cite{rf-buchmu} \cite{rf-campb};
(see e.g. fig.\ref{feyn-x-s}a and b); this can include
among others non-SU(5) symmetric flavour-dependent
couplings (u to $l^{\pm}$, and d to $\nu$), which
would forbid decays of the sort $K_L \to \mu \mu, \ \mu e $, while
still allowing for charm decays;
iv) massive neutrinos (at the loop level) in an extended standard model;
v) superstring inspired phenomenological models
e.g. MSSM models with a Higgs doublet;
vi) scalar exchange particles that would manifest
themselves e.g. in decays of the form $D^0 \to \nu {\bar \nu}$.
\item Further models feature {\it horizontal} interactions,
mediated by particles connecting u and c or d and s quarks
(see e.g. fig.\ref{feyn-x-s}a).
They appear with similar
signatures as the doubly Cabibbo suppressed decays.
\item Baryon number violating decays, such as
$D^0 \to p e^-$ or $D^+ \to n e^-$. They
are presumably very much suppressed,
although they are not directly related to proton decay.
\item The decay
$ D \rightarrow \pi \gamma $ is absolutely forbidden by gauge invariance
and is listed here only for completeness.
\end{itemize}
\vspace{-1.cm}
\begin{figure}[ht]
\epsfig{file=feyn-x-s.eps,width=9.2cm}
\epsfig{file=feyn-x-t.eps,width=7.6cm}
\caption{\it FCNC processes or LFNV decays, mediated by
the exchange of a scalar particle X
or a particle H mediating ``horizontal interactions'', or a leptoquark LQ.}
\label{feyn-x-s}
\end{figure}
The clean leptonic decays make it possible to search for leptoquarks.
If they do not couple also to quark-(anti)quark pairs, they cannot cause
proton decay but yield decays such as
$K \rightarrow \bar{l_1} l_2 $ or
$D \rightarrow \bar{l_1} l_2 $.
In the case of scalar leptoquarks
there is no helicity suppression and consequently the
experimental sensitivity to such decays is enhanced.
Let us emphasize here again, that
decays of $D$-mesons are complementary to those of Kaons, since they probe
different leptoquark types.
To estimate the sensitivity we write the effective four-fermion coupling as
$(g^2_{eff}/M^2_{LQ})$, and obtain
\begin{equation}
\frac{ (M_{LQ}\ /\ 1.8\ TeV)}{g_{eff}}
\geq
\sqrt[4] {\frac{10^{-5}}{BR(D^0 \rightarrow \mu^+\mu^-) }}.
\label{mlq}
\end{equation}
Here $g_{eff}$ is an effective coupling and includes possible mixing effects.
Similarly, the decays $D^+ \rightarrow e^+ \nu$, $D^+ \rightarrow \pi^+ e^+ e^- $ \ can be used to set bounds
on $M_{LQ}$. With the expected sensitivity, one can probe heavy exchange particles
with masses in the $1 \ (TeV / g_{eff})$ range.
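Eq.~(\ref{mlq}) inverts directly into a lower bound on the leptoquark mass scale; a minimal numerical sketch:

```python
def mlq_over_geff_tev(br_limit):
    """Lower bound on M_LQ/g_eff (TeV) from the four-fermion estimate:
    (M_LQ / 1.8 TeV)/g_eff >= (1e-5 / BR)^(1/4)."""
    return 1.8 * (1.0e-5 / br_limit) ** 0.25

print(mlq_over_geff_tev(1.0e-5))   # 1.8 TeV reference point
print(mlq_over_geff_tev(2.5e-6))   # ~2.5 TeV
```

The weak fourth-root dependence is why an order-of-magnitude improvement in the branching-ratio limit gains only a factor $\sim 1.8$ in reach.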
Any theory attempting to explain the hadron-lepton symmetry or the
``generation'' aspects of the standard model will give rise to new phenomena
connected to the issues mentioned here. Background problems make it quite
difficult to search for signs
of them at high energies; therefore precision experiments
at low energies (like the highly successful $\mu$-, $\pi$- or K-decay experiments)
are well suited to probing for any non-standard phenomena.
\section{Sensitivity estimate for HERA}
In this section we present
an estimate on the sensitivity to detect the
decay mode $D^0 \rightarrow \mu^+ \mu^- $.
As was pointed out earlier, this is among the
cleanest manifestations of FCNC or LFNV processes
\cite{rf-willey}.
We base the numbers on our experience
gained in the analysis of the 1994 data, published in \cite{dstar-gp}.
There the $D$-meson decay is measured in the decay mode
$ D^{*+} \rightarrow D^{0} \pi^{+}_s\ ; \
D^0 \to K^{-}\pi^{+}$, exploiting
the well established $D^{*+}(2010)$ tagging
technique\cite{rf-feldma}.
In analogy, we assume
for the decay chain $ D^{*+} \rightarrow D^{0} \pi^{+}_s ;
D^{0} \rightarrow \mu^+ \mu^- $,
a similar resolution of $\sigma \approx 1.1$ MeV in
the mass difference
$ \Delta M = M(\mu^+ \mu^- \pi^+_s) - M(\mu^+ \mu^- ) $
as in \cite{dstar-gp}.
\noindent
In order to calculate a sensitivity for the $D^0 \rightarrow \mu^+ \mu^- $
decay branching fraction we make the following
assumptions:
\noindent
i) luminosity $L = 250\ pb^{-1} $;
ii) cross section
$\sigma (e p \to c {\bar c} X) \mid_{\sqrt s_{ep} \approx 300; Q^2< 0.01}
= 940\ nb $;
iii) reconstruction efficiency $\epsilon_{reconstruction} = 0.5 $;
iv) trigger efficiency
$\epsilon_{trigger} = 0.6 $; this is based
on electron-tagged events, and hence applies to
photoproduction processes only.
v) The geometrical acceptance $A$ has been properly calculated
by means of Monte Carlo simulation for both
decay modes $D^0 \rightarrow K^- \pi^+ $\ and $D^0 \rightarrow \mu^+ \mu^- $\ for a rapidity interval of
$\mid \eta \mid < 1.5 $. For the parton density functions
the GRV parametrizations were employed, and the
charm quark mass was assumed to be $m_c = 1.5$~GeV. We obtained \\
$A = 6 \%$ for $p_T(D^*) > 2.5$~GeV/c (for $K^{-}\pi^{+}$ ) \\
$A = 18 \%$ for $p_T(D^*) > 1.5$~GeV/c (for $K^{-}\pi^{+}$ ) \\
$A = 21 \%$ for $p_T(D^*) > 1.5$~GeV/c (for $\mu^+ \mu^- $)
\noindent
A direct comparison with the measured decays $N_{K \pi}$
into $ K^{-}\pi^{+}$ \cite{dstar-gp} then yields the expected
number of events $N_{\mu \mu}$ and determines the branching ratio to
\vspace*{-0.5cm}
$$ BR(D^0 \to \mu^+ \mu^-) = BR(D^0 \to K^{-}\pi^{+}) \cdot
\frac{N_{\mu \mu}}{L_{\mu \mu}} \cdot \frac{L_{K \pi}}{N_{K \pi}}
\cdot \frac{A(p_T>2.5)}{A(p_T>1.5)} $$
Taking the numbers from \cite{dstar-gp}
$N_{K \pi} = 119$ corresponding to an integrated
luminosity of $L_{K \pi} = 2.77 \ pb^{-1}$,
one obtains
\vspace*{0.5cm}
\fbox{ $BR(D^0 \to \mu^+ \mu^-) = 1.1 \cdot 10^{-6} \cdot N_{\mu \mu}$ }
\noindent
In the case of {\it NO} events observed, an upper limit
on the branching ratio calculated by means of Poisson statistics
$(N_{\mu \mu} = 2.3)$, yields a value of
$BR(D^0 \to \mu^+ \mu^-) < 2.5 \cdot 10^{-6}$ at 90\% c.l.
In the case of an observation of
a handful of events, e.g. $O(N_{\mu \mu} \approx 10)$, one obtains
$BR(D^0 \to \mu^+ \mu^-) \approx 10^{-5}$.
This can be turned into an estimate for the mass of a potential
leptoquark mediating this decay according to eqn.\ref{mlq},
and yields a value of
$M_{LQ}/g_{eff} \approx 1.8$ TeV.
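The branching-ratio scaling above can be checked numerically. The sketch assumes BR$(D^0 \rightarrow K^-\pi^+) \approx 3.8\%$ (the value quoted in the background section) and reproduces the coefficient of $\sim 10^{-6}$ per event up to rounding, together with the Poisson factor $N_{\mu \mu} = 2.3$ for zero observed events.

```python
import math

br_kpi = 0.038                 # BR(D0 -> K- pi+), ~3.8% as in the text
n_kpi, lumi_kpi = 119, 2.77    # 1994 yield and its luminosity (pb^-1)
lumi_mumu = 250.0              # assumed future luminosity (pb^-1)
acc_ratio = 0.06 / 0.21        # A(pT > 2.5) / A(pT > 1.5)

coef = (br_kpi / lumi_mumu) * (lumi_kpi / n_kpi) * acc_ratio
print(f"BR(D0 -> mu mu) ~ {coef:.1e} * N_mumu")   # ~1e-6 per event

# 90% c.l. Poisson upper limit for zero observed events:
n_up = math.log(10.0)          # solves exp(-N) = 0.10, i.e. N ~ 2.3
print(f"BR < {coef * n_up:.1e} at 90% c.l.")
```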
\begin{table}[tb]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Mode & BR (90\% C.L.) & Interest & Reference \\
\hline
\hline
$ r= {({\bar D^0} \to \mu^+X)} \over {({\bar D^0} \to \mu^-X)} \ $ &
$1.2*10^{-2}$ &$\ $ $\Delta C = 2$, Mix $\ $ & BCDMS 85 \\
$ \ $ & $5.6*10^{-3}$ & $\Delta C = 2$, Mix & E615 86 \\
\hline
${(D^0 \to {\bar D^0} \to K^+\pi^-)} \over
{(D^0 \to K^+ \pi^- + K^- \pi^+ )}$ &
$4*10^{-2}$ &$\ $ $\Delta C = 2$, Mix $\ $ & HRS 86 \\
$\ $ & $ = 0.01^*$ & $\Delta C = 2$, Mix & MarkIII 86 \\
$\ $ & $1.4*10^{-2}$ & $\Delta C = 2$, Mix & ARGUS 87 \\
$ \ $ & $3.7*10^{-3}$ &$\ $ $\Delta C = 2$, Mix $\ $ & E691 88 \\
\hline
$D^0 \to \mu^+ \mu^-$ & $7.0*10^{-5}$ & FCNC & ARGUS 88 \\
$D^0 \to \mu^+ \mu^-$ & $3.4*10^{-5}$ & FCNC & CLEO 96 \\
$D^0 \to \mu^+ \mu^-$ & $1.1*10^{-5}$ & FCNC & E615 86 \\
\hline
$D^0 \to e^+ e^-$ & $1.3*10^{-4}$ & FCNC & MarkIII 88 \\
$D^0 \to e^+ e^-$ & $1.3*10^{-5}$ & FCNC & CLEO 96 \\
\hline
$D^0 \to \mu^{\pm} e^{\mp}$ & $1.2*10^{-4}$ & FCNC, LF & MarkIII 87 \\
$D^0 \to \mu^{\pm} e^{\mp}$ & $1.0*10^{-4}$ & FCNC, LF & ARGUS 88 \\
$D^0 \to \mu^{\pm} e^{\mp}$ & $(1.9*10^{-5})$ & FCNC, LF & CLEO 96 \\
\hline
$D^0 \to {\bar K}^0 e^+ e^-$ & $ 1.7*10^{-3} $ & \ & MarkIII 88 \\
$D^0 \to {\bar K}^0 e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 1.1/6.7/1.*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to {\bar K}^{*0} e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 1.4/11.8/1.*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to \pi^0 e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 0.5/5.4/.9*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to \eta e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 1.1/5.3/1.*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to \rho^0 e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 1./4.9/0.5*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to \omega e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 1.8/8.3/1.2*10^{-4} $ & FCNC, LF & CLEO 96 \\
$D^0 \to \phi e^+ e^-/\mu^+ \mu^-/\mu^{\pm} e^{\mp}$
& $ 5.2/4.1/0.4*10^{-4} $ & FCNC, LF & CLEO 96 \\
\hline
$D^0 \to K^+ \pi^-\pi^+\pi^-$ & $< 0.0015$ & DC & CLEO 94 \\
$D^0 \to K^+ \pi^-\pi^+\pi^-$ & $< 0.0015$ & DC & E691 88 \\
$D^0 \to K^+ \pi^-$ & $=0.00029$ & DC & CLEO 94 \\
$D^0 \to K^+ \pi^-$ & $< 0.0006$ & DC & E691 88 \\
\hline
\end{tabular}
\caption[Experimental limits on rare $D^0$-meson decays.]
{Experimental limits at 90\% c.l. on rare $D^0$-meson decays
(except where indicated by =).
Here L, LF, FCNC, DC and Mix denote
lepton number and lepton family number violation, flavour changing
neutral currents, doubly Cabibbo suppressed decays and mixing,
respectively.}
\label{tab-d}
\end{center}
\end{table}
\section{Background considerations}
\subsection{Background sources and rejection methods}
The most prominent sources of background originate from
i) genuine leptons from semileptonic B- and D-decays,
and decay muons from $K, \pi$ decaying in the detector;
ii) misidentified hadrons, {\it i.e.} $\pi, K$, from other
decays, notably leading decays and SCSD; and
iii) combinatorial background from light quark processes.
The background can be considerably suppressed by applying various
combinations of the following techniques:
\begin{itemize}
\item $D^*$-tagging technique \cite{rf-feldma}: \\
A tight window on the mass difference $\Delta M$ is the most
powerful criterium.
\item Tight kinematical constraints (\cite{rf-grab2},\cite{hera-91a}): \\
Misidentification of hadronic $D^0$ 2-body decays such as
$D^0 \rightarrow K^- \pi^+ $ ($3.8\%$ BR), $D^0 \rightarrow \pi^+ \pi^- $ ($0.1\%$ BR) and $D^0 \rightarrow K^+ K^- $ ($0.5\%$ BR)
are suppressed by more than an order of magnitude
by a combination of tight windows
on both $\Delta M$ and $M^{inv}_D$.
Final states containing Kaons can be very efficiently discriminated, because
the reflected $M^{inv}$ is sufficiently separated from the true signal
peak. However, this is not true for a pion-muon or pion-electron
misidentification.
The separation is slightly better between $D^0 \rightarrow e^+ e^- $\ and $D^0 \rightarrow \pi^+ \pi^- $.
\item Vertex separation requirements for secondary vertices: \\
Background from light quark production,
and of muons from K- and $\pi$- decays within the detector are
further rejected by exploiting the information of secondary vertices (e.g.
decay length separation, pointing back to primary vertex etc.).
\item Lepton identification (example H1) :\\
Electron identification is possible by using $dE/dx$ measurements
in the drift chambers, the shower shape analysis in the calorimeter
(and possibly the transition radiator information).
Muons are identified with the instrumented
iron equipped with limited streamer tubes, with the
forward muon system, and in combination with
the calorimeter information.
The momentum has to be above $\sim 1.5$ to
$2 \ $ GeV/c to allow the $\mu$ to reach the instrumented iron.
Thus, the decay $D^0 \to \mu^+ \mu^-$ suffers from background contributions
by the SCSD mode $D^0 \to \pi^+ \pi^-$, albeit with a known
$BR = 1.6 \cdot 10^{-3} $; here
$\mu$-identification helps extremely well.
An example of background suppression using the particle ID
has been shown in ref.\cite{hera-91a},
where a suppression factor of order $O(100)$ has been achieved.
\item Particle ordering methods exploit the fact that
the decay products of the charmed mesons tend to
be the {\it leading} particles in the event (see e.g. \cite{dstar-dis}).
In the case of observed jets, the charmed mesons are
expected to carry a large fraction of the jet energy.
\item Event variables such as e.g. the total transverse energy
$E_{transverse}$ tend to reflect the difference in event topology
between heavy and light quark production processes, and hence
lend themselves for suppression of light quark background.
\end{itemize}
\subsection{Additional experimental considerations}
\begin{itemize}
\item Further possibilities to enhance overall statistics are the
usage of inclusive decays (no tagging), where the gain
in statistics is expected to be about
$\frac{ N(all D^0)}{ N(D^0 from D^*)} = 0.61 / 0.21 \approx 3$,
however at the cost of higher background contributions.
\item In the decays $D^0 \to e e$ or $D^0 \to \mu e$ one expects
factors of 2 to 5 times better background rejection efficiency.
\item Trigger :
A point to mention separately is the trigger. To be able to
measure a BR at the level of $10^{-5}$, the event filtering
process has to start at earliest possible stage.
This should happen preferably at the
first level of the hardware trigger, because it will
not be feasible to store some $10^{+7}$ events on permanent
storage to dig out the few rare decay candidates.
This point, however, has up to now not yet been thoroughly
studied, let alone been implemented at the
hardware trigger level.
\end{itemize}
\begin{table}[tb]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Mode & BR (90\% C.L.) & Interest & Reference \\
\hline
\hline
$D^+ \to \pi^+ e^+ e^-$ & $6.6\times10^{-5}$ & FCNC & E791 96 \\
$D^+ \to \pi^+ \mu^+ \mu^-$ & $1.8\times10^{-5}$ & FCNC & E791 96 \\
$D^+ \to \pi^+ \mu^+ e^-$ & $3.3\times10^{-3}$ & LF& MarkII 90 \\
$D^+ \to \pi^+ \mu^- e^+$ & $3.3\times10^{-3}$ & LF& MarkII 90 \\
\hline
$D^+ \to \pi^- e^+ e^+$ & $4.8\times10^{-3}$ & L& MarkII 90 \\
$D^+ \to \pi^- \mu^+ \mu^+$ & $2.2\times10^{-4}$ & L& E653 95 \\
$D^+ \to \pi^- \mu^+ e^+$ & $3.7\times10^{-3}$ & L+LF& MarkII 90 \\
$D^+ \to K l l $ & similar & L+LF& MarkII 90 \\
\hline
$c \to X \mu^+ \mu^-$ & $1.8\times10^{-2}$ & FCNC & CLEO 88 \\
$c \to X e^+ e^-$ & $2.2\times10^{-3}$ & FCNC & CLEO 88 \\
$c \to X \mu^+ e^-$ & $3.7\times10^{-3}$ & FCNC & CLEO 88 \\
\hline
$D^+ \to \phi K^+ $ & $1.3\times10^{-4}$ & DC & E687 95 \\
$D^+ \to K^+ \pi^+ \pi^- $ & $=6.5\times10^{-4}$ & DC & E687 95 \\
$D^+ \to K^+ K^+ K^- $ & $1.5\times10^{-4}$ & DC & E687 95 \\
\hline
$D^+ \to \mu^+ \nu_{\mu}$ & $7.2\times10^{-4}$ & $f_D$ & MarkIII 88 \\
\hline
\hline
$D_S\to \pi^- \mu^+ \mu^+$ & $4.3\times10^{-4}$ & L& E653 95 \\
$D_S\to K^- \mu^+ \mu^+$ & $5.9\times10^{-4}$ & L& E653 95 \\
$D_S \to \mu^+ \nu_{\mu}$ & $=9\times10^{-4}$ & $f_{D_S}=430$ & BES 95 \\
\hline
\end{tabular}
\caption[Experimental limits on rare $D^+$- and $D_s$-meson decays.]
{Selection of experimental limits at 90\% c.l.
on rare $D^+$- and $D_s$-meson decays \cite{rf-partbook}
(except where indicated by =).}
\label{tab-ds}
\end{center}
\end{table}
\section{Status of sensitivity in rare charm decays}
Some of the current experimental upper limits
at 90\% c.l. on the branching ratios of
rare $D$ decays are summarised in
tables~\ref{tab-d} and~\ref{tab-ds}
according to \cite{rf-partbook}.
Taking the two-body decay $D^0 \to \mu^+ \mu^-$ as the
sample case, a comparison of the achievable sensitivity for
the upper limit on the branching fraction
$B_{D^0 \to \mu^+ \mu^-}$ at 90\% c.l. is summarized
in table \ref{tab-comp} for different experiments,
assuming that {\em no} signal
events are detected (see \cite{rf-grab1}
and \cite{rf-partbook}).
Note that the sensitivity reachable at HERA is comparable to that of
the other facilities, provided the above assumed luminosity is
actually delivered. This does not hold for a
proposed $\tau$-charm factory, which (if ever built and performing
as designed) would exceed all other facilities by at least
two orders of magnitude \cite{rf-rafetc}.
\noindent
The status of competing experiments at other facilities
is the following :
\noindent
\begin{itemize}
\item SLAC : $e^+e^-$ experiments : Mark-III, MARK-II, DELCO : stopped.
\item CERN : fixed target experiments : ACCMOR, E615, BCDMS, CCFRC : stopped. \\
LEP-experiments : previously ran at the $Z^0$-peak;
now they continue
with increased $\sqrt{s}$, but at a {\it reduced} $\sigma$ for such
processes;
\item Fermilab (FNAL) : the photoproduction experiments E691/TPS and
hadroproduction experiments E791 and E653 are
stopped, with some analyses being finished based on about
$O(10^5)$ reconstructed events. In the near
future highly competitive results are to be expected from
the $\gamma p$ experiments E687 and
its successor E831 (FOCUS), based on statistics
of about $O(10^5)$ and an estimated $10^6$ reconstructed
charm events, respectively. The hadroproduction
experiment E781 (SELEX) is also anticipated to reconstruct some $10^6$
charm events within a few years.
\item DESY : ARGUS $e^+e^-$ : stopped, final papers emerging now. \\
HERA-B : With a very high cross section of
$\sigma(p N \to c {\bar c}) \approx 30 \mu$b at
$\sqrt{s} = 39 $ GeV and an extremely high luminosity,
a total of up to $10^{12}$ $c {\bar c}$ events may be
produced. Although no detailed studies exist so far,
a sensitivity of order $10^{-5}$ to $10^{-7}$ might be expected,
depending on the background rates.
\item CESR : CLEO is continuing steadily to collect data, and is
at present the leader in
sensitivity for many processes (see table \ref{tab-d}).
\item BEPC : BES has collected data at $\sqrt{s}=4.03$ GeV (and 4.14 GeV),
and is continuing to do so; BES will become competitive as soon as
enough statistics is available, because
the background conditions are very favourable.
\item $\tau$-charm factory : The prospects for a facility
being built in China (Beijing) are uncertain.
If realized, this is going to be the
most sensitive place to search for rare charm decays.
Both kinematical constraints (e.g.\ running at the $\psi''(3770)$)
and the absence of background from non-charm induced processes
will enhance its capabilities.
\end{itemize}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
$\ $ & SPEAR & BEPC & E691 & LEP & $\tau-c F$ & CLEO & HERA \\
\hline
\hline
$\sigma_{cc}$ (nb) & 5.8 & 8.6 & 500 & 4.5 & 5.8 & 1.3 & $\sigma_{ep}=940$ \\
\hline
$L (pb^{-1}$) & 9.6 & 30 & 0.5 & 150 & $10^4$ & 3850 & 250\\
\hline
$N_D$ & $6\times10^4$ & $3\times10^5$ & $2.5\times10^5$ & $10^6$
& $6\times10^7$ & $5\times10^6$ & $2.4\times10^8$\\
\hline
$\epsilon \cdot A$ & 0.4 & 0.5 & 0.25 & 0.05 & 0.5 & 0.1 & 0.06 \\
\hline
$N_{BGND}$ & O(0) & O(1) & O(10) & O(10) & O(1) & O(1) & S/N$\approx$1 \\
\hline
$\sigma_{charm} \over \sigma_{total} $
& 1 & 0.4 & 0.01 & 0.1 & 1 & 0.1 & 0.1 \\
\hline
\hline
$B_{D^0 \to \mu^+ \mu^-}$ &
$1.2\times10^{-4}$ & $2\times10^{-5}$ & $4\times10^{-5}$ &
$5\times10^{-5}$ & $5\times10^{-8}$ & $3.4\times10^{-5}$ & $2.5\times10^{-6}$ \\
\hline
\end{tabular}
\caption[Comparison of sensitivity]
{Comparison of estimated sensitivity to the sample decay mode
$D^0 \to \mu^+ \mu^-$ for different facilities or experiments.}
\label{tab-comp}
\end{center}
\end{table}
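The order of magnitude of the sensitivities in table \ref{tab-comp} follows
from simple Poisson counting: with no signal event observed, the 90\% c.l.
upper limit corresponds to about $-\ln(0.10) \approx 2.3$ signal events,
divided by the effective number of reconstructed decays. The Python sketch
below is our own illustration of this zero-background estimate, not part of
the original analysis; facilities with sizeable backgrounds (e.g.\ the HERA
entry with S/N $\approx 1$) obtain correspondingly weaker limits than this
statistics-only figure.

```python
import math

def br_upper_limit(n_charm, eff_acc, n_up=-math.log(0.10)):
    """90% c.l. upper limit on a branching ratio for zero observed events:
    the Poisson limit on the signal count, n_up = -ln(0.10) ~ 2.30,
    divided by the effective statistics N_D * (epsilon * A)."""
    return n_up / (n_charm * eff_acc)

# Rough N_D and epsilon*A values taken from the comparison table.
for name, n_d, eff in [("BEPC", 3e5, 0.5), ("CLEO", 5e6, 0.1),
                       ("HERA", 2.4e8, 0.06)]:
    print(f"{name}: B(D0 -> mu+ mu-) < {br_upper_limit(n_d, eff):.1e}")
```

For HERA this yields about $1.6\times10^{-7}$, the statistical floor; the
table quotes $2.5\times10^{-6}$ because the background (S/N $\approx 1$)
dilutes the purely statistical reach.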
\section{Summary}
$D$-meson decays offer a rich spectrum of interesting physics; their rare
decays may provide
information on new physics, which is complementary to the
knowledge stemming from $K$-meson and $B$-decays.
With the prospect of a few times $10^8$
charmed mesons produced per year,
HERA has the potential to contribute substantially to this field.
Further competitive results can be anticipated from the fixed target
experiments at Fermilab or from a possible $\tau$-charm factory.
For the rare decay $D^0 \rightarrow \mu^+ \mu^- $ investigated here we
expect at least an order of magnitude improvement in sensitivity
over current results (see table \ref{tab-comp}) for a total integrated
luminosity of
$\int L dt $ = 250 pb$^{-1}$, the limitation here being statistical.
An extrapolation to even higher luminosity is rather difficult
without a very detailed numerical simulation, because
at some (yet unknown) level the background processes will
become the main limiting factor for the sensitivity, rendering
sheer statistics useless.
For this, a good tracking resolution, excellent particle
identification (e, $\mu,\ \pi,\ K,\ {\rm p}$) and a high resolution for
secondary vertices are required
to keep the systematics under control, and either to
unambiguously identify a signal of new physics, or to
reach the ultimate limit
in sensitivity.
\section*{ Acknowledgment}
Discussion with B. M\"uller is acknowledged.
This work was supported by the National Scientific
Research Fund (Hungary), OTKA
No.\ T016206 and F019689. One of the authors (P.L.) thanks the
Soros Foundation and the ``For the Hungarian Science'' Foundation
of the Magyar Hitelbank (MHB) for financial support.
\section{Introduction}
The occurrence of interfaces is a very common phenomenon in extended
systems out of equilibrium.
These patterns appear, in systems with many equilibrium states, as
solutions linking two different equilibria, namely two phases \cite{Cross}.
When the time
evolution of the system shows the motion of an interface resulting in an
invasion of one phase into the other, the phenomenon under consideration is
usually called a front. These structures arise in various experimental systems
such as reaction-diffusion chemical systems \cite{Showalter},
alloy solidification \cite{Levine} and crystal growth
\cite{Elkinani}.
\bigskip
From a mathematical point of view, the description of the front dynamics
using
space-time continuous models is now complete, at least for several
one-dimensional Partial Differential
Equations \cite{Collet90}. Also, in discrete space and continuous time systems,
i.e.\ in Ordinary Differential Equations, the dynamics of these interfaces
is well understood \cite{Defontaines,Keener}. However, in space-time discrete
models such as the Coupled Map Lattice (CML), the study is not as
complete\footnote{The problem of travelling waves has
been investigated in other space-time discrete models such as the chain of
diffusively coupled maps for which the coupling is
different from that in CML's \cite{Afraimovich}.};
except for the one-way coupled model \cite{Carretero}.
CML's have been proposed as the simplest models of space-time discrete dynamics
with continuous states and they serve now as a paradigm in the framework of
nonlinear extended dynamical systems \cite{Kaneko93}.
\bigskip
The phenomenology of the interface dynamics differs between discrete
space and continuous space systems. In the former case,
varying the coupling strength induces a
bifurcation\footnote{This bifurcation is of saddle type and is accompanied
by a symmetry breaking \cite{Bastien95}.}
from a regime of standing interfaces to a regime of
fronts in which the propagation can be interpreted as a mode-locking
phenomenon \cite{Defontaines,Carretero}. These effects are assigned to the
discreteness of space and are well-known in solid-state physics
\cite{Aubry}.
\bigskip
In previous studies of the piece-wise affine bistable CML, the problem of
steady interfaces was solved and the fronts' structure was thoroughly
investigated by means of generalized transfer matrices
\cite{Bastien95,Laurent}.
Nevertheless, this technique did not allow us to prove the existence of
fronts of any velocity, nor to understand clearly their dependence
on the parameters, in particular that of the fronts' velocity.
\bigskip
In the same CML, using techniques employed for the
piece-wise linear mappings
\cite{Bird,Coutinho96a}, we now prove the one-to-one correspondence between
the set of orbits that are defined for all times, i.e.\ the global orbits, and
a set of
spatio-temporal codes (Sections 2 and 3). Our model is locally a
contraction, hence these orbits are shown to be linearly stable when they never
reach the local map's discontinuity (Section 4). Further, using the orbit-code
correspondence, the existence of fronts is stated and their velocity is
computed (Section 5).
In Section 6, the linear stability of fronts with rational velocity
is given for a large
class of initial conditions. In the following, we study the dynamics
of the propagating interfaces. In particular, their velocity is computed
for all the parameters (Section 7). Using these results,
the nonlinear stability, i.e.\ the stability of fronts with respect to
any kink initial condition, is proved in Section 8 using a method
similar to the Comparison Theorem \cite{Collet90}. This result holds
for any rational velocity provided the coupling is small enough.
We justify such a
restriction by the existence of non-uniform fronts for
large coupling which co-exist with the fronts. Finally, some
concluding remarks are made, in particular we emphasize the extension of these
results to certain $C^{\infty}$ local maps.
\section{The CML and the associated coding}
Let $M\in {\Bbb R}$ be fixed
and ${\cal M}=[-M,M]^{{\Bbb Z}}$ be the phase space of the CML
under consideration. The CML is the one-parameter family of maps
$$\begin{array}{rl}
F_{\epsilon}:&{\cal M} \longrightarrow {\cal M} \\
& x^t \longmapsto x^{t+1}
, \end{array}$$
where $x^t=\left\{x_s^t\right\}_{s\in{\Bbb Z}}$ is the state of the system at
time $t$.
This model is required to be representative of the reaction-diffusion dynamics.
Therefore the new state at time
$t+1$ is given by
\cite{Kaneko93}:
\begin{equation} \label{CML}
x_s^{t+1}\equiv (F_{\epsilon} x^t)_s=(1-\epsilon)f(x_s^t)+{\epsilon \over 2}
\left( f(x_{s-1}^t)+f(x_{s+1}^t) \right)\quad \forall s \in {\Bbb Z} .
\end{equation}
Here the coupling strength $\epsilon \in (0,1]$ and we
choose the local
map $f$ to be bistable and piece-wise affine \cite{Laurent}:
$$f(x)=\left\{
\begin{array}{ccc}
ax+(1-a)X_1&if&x<c\\
ax+(1-a)X_2&if&x\geq c,
\end{array} \right.$$
where $a\in (0,1)$ and $-M\leq X_1<c\leq X_2 \leq M$. These conditions ensure
the existence
of the two stable fixed points $X_1$ and $X_2$, the only attractors for $f$.
\bigskip
This local map reproduces qualitatively the
autocatalytic reaction in chemical systems \cite{Showalter}, or the
local force applied to the system's phase in the process of solidification
\cite{Levine}.
\bigskip
For the sake of simplicity we assume $X_1=0$ and $X_2=1$.
This is always possible by a linear transformation of the variable.
\bigskip
For a state $x^t$, the sequence
$\theta^t=\left\{\theta_s^t\right\}_{s\in{\Bbb Z}}\ $ defined by
$$\theta_s^t=\left\{
\begin{array}{ccc}
0&if&x_s^t<c\\
1&if&x_s^t\geq c,
\end{array} \right.$$
is called the {\bf spatio-temporal code} or the code unless it is
ambiguous.
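For readers who wish to experiment numerically, the update rule (\ref{CML})
with the piecewise affine local map and the associated coding can be
sketched in a few lines of Python. This is our own illustration, not part
of the original analysis: a finite lattice with fixed boundary sites
replaces ${\Bbb Z}$, and the parameter values are arbitrary.

```python
import numpy as np

def cml_step(x, a=0.5, eps=0.3, c=0.5):
    """One iterate of the bistable CML with X1 = 0, X2 = 1.

    Returns the new state and the code theta of x. The two boundary
    sites are held fixed, mimicking the phases at s -> -/+ infinity."""
    theta = (x >= c).astype(float)        # theta_s = 1 iff x_s >= c
    fx = a * x + (1.0 - a) * theta        # local map f(x) = a x + (1-a) theta
    new = x.copy()
    new[1:-1] = (1 - eps) * fx[1:-1] + 0.5 * eps * (fx[:-2] + fx[2:])
    return new, theta

# A kink initial condition: phase X1 on the left, phase X2 on the right.
x = np.where(np.arange(40) < 20, 0.0, 1.0)
for _ in range(10):
    x, theta = cml_step(x)
print(theta.astype(int))                  # the code remains a Heaviside kink
```

The printed code stays a step sequence of 0's followed by 1's, i.e.\ the
state remains a kink under iteration for these parameters.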
\section{The orbit-code correspondence}
The study of the orbits, in particular those that exist for all the times
$t\in{\Bbb Z}$, can be achieved
using their code. In this section, we first compute explicitly the positive
orbits\footnote{i.e.\ the orbits for $t\geq 0$} for any initial
condition. Then we prove the one-to-one correspondence between the
global orbits and their code.
\bigskip
Notice that the local map can be expressed in terms of the code
$$f(x_s^t)=ax_s^t+(1-a)\theta_s^t.$$
By introducing this expression into the CML dynamics, one obtains a
linear non-homogeneous finite-difference equation for $x_s^t$
in which the code only appears in the non-homogeneous term.
Using the Green functions' method, this equation may be solved and the
solution, as a function of the
code $\left\{\theta_s^k\right\}_{s\in{\Bbb Z} ,t_0\leq k\leq t-1}$ and of
the initial condition $\{x_s^{t_0}\}_{s\in{\Bbb Z} }$, is given by
\begin{equation}\label{CONFIG}
x_s^t=\sum_{k=1}^{t-t_0}\sum_{n=-k}^{k}l_{n,k}\theta_{s-n}^{t-k}+
{a\over 1-a}\sum_{n=-(t-t_0)}^{t-t_0}l_{n,t-t_0}x_{s-n}^{t_0}
\quad \forall t>t_0\ {\rm and}\ s\in{\Bbb Z} ,
\end{equation}
where the coefficients $l_{n,k}$ satisfy the recursive relations
$$l_{n,k}=(1-a)\delta_{1,k}\left((1-\epsilon)\delta_{n,0}+{\epsilon\over 2}
(\delta_{n,1}+\delta_{n,-1})\right) \quad \forall n\in{\Bbb Z}\ {\rm and}\ k\leq
1,\footnote{$\delta_{n,k}$ is the Kronecker symbol}$$
and
$$l_{n,k+1}-a(1-\epsilon)l_{n,k}-{a\epsilon\over 2}\left(l_{n+1,k}+l_{n-1,k}
\right)=0 \quad \forall n\in{\Bbb Z}\ {\rm and}\ k\geq1.$$
From the latter, it follows that for $\epsilon\in (0,1)$
$$l_{n,k}=0\quad {\rm iff}\quad |n|>k\ {\rm or}\ n=k=0,$$
and, one can derive the bounds
\begin{equation}\label{BOUND}
0\leq l_{n,k}\leq (1-a)a^{k-1},
\end{equation}
and the normalization condition
$$\sum_{n,k\in{\Bbb Z} }l_{n,k}=1.$$
Further properties of these coefficients are given in Appendix \ref{LNK}.
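The recursion for the coefficients $l_{n,k}$, together with the bounds
(\ref{BOUND}) and the normalization condition, is easy to check
numerically. The sketch below is our own illustration: the sum is truncated
at $k_{\max}$, which introduces an error of order $a^{k_{\max}}$.

```python
import numpy as np

def green_coeffs(a, eps, kmax):
    """Coefficients l_{n,k} for 1 <= k <= kmax, |n| <= kmax.

    L[k, n + kmax] holds l_{n,k}; row 0 stays zero (l_{n,k} = 0, k < 1)."""
    L = np.zeros((kmax + 1, 2 * kmax + 1))
    L[1, kmax] = (1 - a) * (1 - eps)                    # n = 0 term of l_{n,1}
    L[1, kmax - 1] = L[1, kmax + 1] = (1 - a) * eps / 2  # n = -1, +1 terms
    for k in range(1, kmax):
        # l_{n,k+1} = a(1-eps) l_{n,k} + (a eps / 2)(l_{n+1,k} + l_{n-1,k})
        L[k + 1, 1:-1] = (a * (1 - eps) * L[k, 1:-1]
                          + (a * eps / 2) * (L[k, :-2] + L[k, 2:]))
    return L

a, eps, kmax = 0.5, 0.3, 30
L = green_coeffs(a, eps, kmax)
bound = (1 - a) * a ** (np.arange(kmax + 1) - 1.0)      # (1-a) a^{k-1}
print("normalisation:", L.sum())                        # ~1 up to a^{kmax}
print("bounds hold:", bool((L <= bound[:, None] + 1e-12).all()))  # -> True
```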
\bigskip
The study is now restricted to the {\bf global orbits} $\{x^t\}_{t\in{\Bbb Z}}$.
That is to say, we consider the sequences
which belong to
$$\Lambda=\left\{ \{x^t\}_{t\in{\Bbb Z} }\in {\cal M}^{{\Bbb Z}}\ :\ F_{\epsilon}(x^t)=
x^{t+1}\ \forall t\in{\Bbb Z}\right\}.$$
For such orbits, taking the limit
$t_0\rightarrow -\infty$ and using the bounds (\ref{BOUND}) in (\ref{CONFIG})
leads to the relation:
\begin{equation} \label{ORBIT}
x_s^t=\sum_{k=1}^{+\infty}\sum_{n=-k}^{k}l_{n,k}\theta_{s-n}^{t-k}
\quad \forall s,t\in{\Bbb Z} ,
\end{equation}
which gives the correspondence between these orbits and their code
as we state now.
\bigskip
By the relation (\ref{ORBIT}), all the orbits in $\Lambda$ stay
in $[0,1]^{{\Bbb Z}}$. Hence their code can be uniquely computed by
\begin{equation} \label{CODE}
\theta_s^t=\lfloor x_s^t-c\rfloor +1 \quad \forall s,t \in {\Bbb Z},
\end{equation}
where $\lfloor .\rfloor$ stands for the floor function\footnote{
i.e.\ $\lfloor x\rfloor\in{\Bbb Z}$ and $x-1<\lfloor x\rfloor\leq x$}.
\bigskip
From (\ref{ORBIT}) and (\ref{CODE}) it follows that for
any orbit in $\Lambda$, its code must obey the following relation:
$$c-1\leq\sum_{k=1}^{+\infty}\sum_{n=-k}^{k}l_{n,k}\left(
\theta_{s-n}^{t-k}-\theta_s^t\right)<c
\quad \forall s,t\in{\Bbb Z}, $$
which is called the {\bf admissibility condition}. Then, conversely to the
preceding statement, for a sequence $\theta\in\{0,1\}^{{\Bbb Z}^2}$ that
satisfies this condition, there is a unique orbit in $\Lambda$, given by
(\ref{ORBIT}).
\bigskip
The spatio-temporal coding is an effective tool for the description of all
the global orbits of the piece-wise affine bistable CML
\cite{Coutinho96b}.
These orbits are important since they collect the physical phenomenology
of our model.
\section{The linear stability of the global orbits}
We prove in this section the stability of the orbits in $\Lambda$
with respect to small initial perturbations. Let the norm
$$\| x\|=\sup_{s\in{\Bbb Z} }|x_s |,$$
for $x\in {\cal M}$ and
$$\left({\rm L}x\right)_s=a\left((1-\epsilon)x_s+{\epsilon\over 2}
(x_{s+1}+x_{s-1})\right) \quad \forall s\in{\Bbb Z} ,$$
be the CML linear component. Notice that ${\rm L}$ is invertible on ${\cal M}$
if $\epsilon<{1\over 2}$ and
$$\|{\rm L}\|=\sup_{x\neq 0} {\|{\rm L}x\|\over \| x\|}=a.$$
The linear stability is claimed in
\begin{Pro}\label{STAB1}
Let $\left\{ x^t\right\}_{t\in{\Bbb N} }$ be an orbit such that
$$\exists \delta >0 \ :\ |x_s^t-c|>\delta a^t
\quad \forall t\geq 0,\ s\in{\Bbb Z} .$$
Then for any initial condition $y^0$ in a
neighborhood of $x^0$, i.e.\
$$\|y^0-x^0\| <\delta,$$
we have
$$\|y^t-x^t\| <\delta a^t\quad \forall t\geq 0.$$
\end{Pro}
Equivalently, $x^t$ and $y^t$ have the same code for all times.
\noindent
{\sl Proof:} The relation (\ref{CML}) with the present local map
can be written
in terms of the operator ${\rm L}$
\begin{equation}\label{LINDY}
x^{t+1}={\rm L}x^t+{1-a\over a}{\rm L}\theta^t
\end{equation}
where $\theta^t=\{\theta_s^t\}_{s\in{\Bbb Z} }$ is the code of $x^t$. Using
this relation, one shows by induction that the codes of the two orbits
remain equal for all times; again using (\ref{LINDY}), the latter implies
the statement. $\Box$
\bigskip
Notice that this assertion is effective for the orbits in $\Lambda$ that never
reach $c$, since in this situation $\delta$ can be computed
(see Proposition \ref{STAB2} below for the case of fronts).
Further, because our system
is deterministic, when this proposition holds for
an orbit $\{x^t\}$ and a given initial condition $y$, it cannot hold
for a different orbit $\{\tilde x^t\}$ and the same initial condition $y$,
unless both these orbits converge to each other. Hence, using this statement,
one may be able to determine the (local) basin of attraction for any
orbit in $\Lambda$ that never reaches $c$.
\section{The existence of fronts}
We now apply the orbit-code correspondence to a particular class of
travelling wave orbits, namely
the fronts.
\begin{Def}\label{FRONT}
A {\bf front} with velocity $v$ is an orbit in $\Lambda$ given by
$$x_s^t=\phi(s-vt-\sigma)\quad \forall s,t\in{\Bbb Z}, $$
where $\sigma\in{\Bbb R}$ is fixed and, the {\bf front shape}
$\phi:{\Bbb R}\longrightarrow{\Bbb R}$, is a right continuous function which obeys the
conditions
\begin{equation}\label{ADMI}
\left\{\begin{array}{ccc}
\phi(x)<c&{\rm if}&x<0\\
\phi(x)\geq c&{\rm if}&x\geq 0.
\end{array}\right.
\end{equation}
\end{Def}
\bigskip
The front shape has the following spatio-temporal behavior:
$$\lim_{x\rightarrow -\infty}\phi(x)=0\quad {\rm and}\quad
\lim_{x\rightarrow +\infty}\phi(x)=1.$$
In this way, the fronts are actually the travelling interfaces
as described in the introduction.
Moreover, for any front shape, the front changes by varying $\sigma$; but if
$v={p\over q}$ with $p$ and $q$ co-prime integers, there are only $q$ different
fronts that cannot be deduced from one another by space translations. On the
other hand, when $v$ is irrational, the
family of such orbits becomes uncountable. (Both these claims are deduced from
the proof of Theorem \ref{EXIST} below.)
\bigskip
The existence of fronts is stated in
\begin{Thm}\label{EXIST}
Given $(a,\epsilon)$ there is a countable nowhere dense set
${\rm G}_{a,\epsilon}\subset (0,1]$ such that, for any $c\in (0,1]\setminus
{\rm G}_{a,\epsilon}$, there exists a unique front shape. The
corresponding front velocity is a continuous function of the parameters with
range $[-1,1]$.
\end{Thm}
In other words, for any velocity $v$,
there is a parameter set for which the only fronts that
exist are those of velocity $v$.
\bigskip
Furthermore, for $c\in {\rm G}_{a,\epsilon}$, no front exists. But, the front
velocity can be extended to a continuous function for all the values of
the parameters.
\bigskip
Referring to a similar study of a circle's map
\cite{Coutinho96a}, we point out that, even when no front exists,
numerical simulations show convergence towards a ``ghost'' front.
Actually, by enlarging the linear stability result, this comment is proved
(see Proposition \ref{GHOST} below). A
ghost front is a front-like sequence in ${\cal M^{{\Bbb Z} }}$, that is not an
orbit of the CML and, for which the
(spatial) shape obeys, instead of (\ref{ADMI}), the conditions
$$
\left\{\begin{array}{ccc}
\phi(x)\leq c&{\rm if}&x<0\\
\phi(x)> c&{\rm if}&x\geq 0.
\end{array}\right.
$$
Ghost fronts arise in this model because the local map is discontinuous.
Moreover, given the parameters, one can avoid this
ghost orbit by changing the value of the local map $f$ at $c$.
\bigskip
\noindent
{\sl Proof of Theorem \ref{EXIST}:}
\noindent
It follows from Definition \ref{FRONT} that if a front with velocity $v$
exists, the code associated with this front is given by
$$\theta_s^t=H(s-vt-\sigma)\quad \forall s,t\in{\Bbb Z} ,$$
where $H$ is the Heaviside function\footnote{
$H(x)=\left\{\begin{array}{ccc}
0&{\rm if}&x<0\\
1&{\rm if}&x\geq 0
\end{array}\right.$}.
Hence by (\ref{ORBIT}) the corresponding front shape has the following
expression:
\begin{equation}\label{SHAPE}
\phi(x)=\sum_{k=1}^{+\infty}\sum_{n=-k}^kl_{n,k}H(x-n+vk)\quad \forall x\in{\Bbb R}.
\end{equation}
Such a function is increasing (strictly if $v$ is irrational), right
continuous and its discontinuity
points are of the form $x=n-vk\ (k\geq 1,n\in{\Bbb Z} $ and $|n|\leq k)$. Now one
has to prove that, given $(a,\epsilon,c)$, there exists a unique front
velocity such that (\ref{SHAPE}) satisfies the conditions (\ref{ADMI}), i.e.\
there exists a unique $v$ such that the code $H(s-vt-\sigma)$ is admissible.
For this let
$$\eta(a,\epsilon,v)\equiv\phi(0)=\sum_{k=1}^{+\infty}
\sum_{n=-k}^{\lfloor vk\rfloor }l_{n,k},$$
where the continuous dependence on $a$ and $\epsilon$ is included in the
$l_{n,k}$ which are uniformly summable. It is immediate that
$\eta(a,\epsilon,v)$ is a strictly increasing ($\epsilon\neq 0$),
right continuous function of $v$.
Moreover, it is continuous on the irrationals and\footnote{
$g({p\over q}^-)={\displaystyle\lim_{h\rightarrow {p\over q},h<{p\over q}}}
g(h)$}
$$\eta(a,\epsilon,{p\over q})-\eta(a,\epsilon,{p\over q}^-)>0.$$
The conditions (\ref{ADMI}) impose
that $v$ be given by
\begin{equation} \label{VELOCITY}
\bar{v}(a,\epsilon,c)=\min\left\{v\in{\Bbb R} : \eta(a,\epsilon,v)\geq c\right\}.
\end{equation}
Actually, if $\bar{v}(a,\epsilon,c)$ is irrational then
$$\phi(x)<\phi(0)=c\ {\rm if}\ x<0.$$
If $\bar{v}(a,\epsilon,c)$ is rational we have
$$\phi(x)=\phi(0^-)=\eta(a,\epsilon,{p\over q}^-)\ {\rm if} \ -{1\over q}
\leq x<0,$$
hence either $\phi(0^-)<c$ and the front exists, or $\phi(0^-)=c$ and the
first condition in (\ref{ADMI}) is not satisfied. In the latter case,
given $(a,\epsilon)$ and $v={p\over q}$, there is a unique $c$ that realizes
$\phi(0^-)=c$. The countability of the values of $c$ for which there is no
front then follows.
\bigskip
In this way, we can conclude that for $a,\epsilon$ and ${p\over q}$ fixed,
there exists an interval of the parameter $c$ given by
$$\eta(a,\epsilon,{p\over q}^-)<c\leq \eta(a,\epsilon,{p\over q}),$$
for which the front shape uniquely exists and the front velocity is
$\bar{v}(a,\epsilon,c)={p\over q}$. Moreover, we have
$$
{\rm G}_{a,\epsilon}=\left\{\eta(a,\epsilon,{p\over q}^-)\ :\
{p\over q}\in (-1,1]\cap {\Bbb Q}\right\}.
$$
\bigskip
The continuity of $\bar{v}(a,\epsilon,c)$ is ensured by the following
arguments.
Given $\delta>0$, (\ref{VELOCITY}) implies that
$$\eta(a,\epsilon,\bar{v}(a,\epsilon,c)-\delta)<c\ {\rm and}\
\eta(a,\epsilon,\bar{v}(a,\epsilon,c)+\delta)>c.$$
Then for $(\tilde a,\tilde\epsilon,\tilde c)$ in a small neighborhood of
$(a,\epsilon,c)$, one has
$$\eta(\tilde a,\tilde\epsilon,\bar{v}(a,\epsilon,c)-\delta)<\tilde c\
{\rm and}\
\eta(\tilde a,\tilde\epsilon,\bar{v}(a,\epsilon,c)+\delta)>\tilde c,$$
and again from (\ref{VELOCITY})
$$\bar{v}(a,\epsilon,c)-\delta<\bar{v}(\tilde a,\tilde\epsilon,\tilde c)
\ {\rm and}\
\bar{v}(a,\epsilon,c)+\delta\geq\bar{v}(\tilde a,\tilde\epsilon,\tilde c),$$
hence $\bar{v}(a,\epsilon,c)$ is continuous. The range of
$\bar{v}(a,\epsilon,c)$ follows from the continuity and the values for
$\epsilon\neq 0$:
$$\eta (a,\epsilon,(-1)^-)=0,\
\eta(a,\epsilon,-1)>0\ {\rm and}\ \eta(a,\epsilon,1)=1.$$
$\Box$
\begin{Rem}
Notice that, from (\ref{VELOCITY}) and the properties of the function $\eta$,
$\bar{v}(a,\epsilon,c)$ is an increasing function of $c$ with range
$[-1,1]$ when $c$ varies from 0 to 1 ($\epsilon\neq 0$).
Further, it has the structure of a Devil's staircase.
\end{Rem}
This proof gives also a practical method for computing numerically the
front velocity by inverse plotting $\eta$ versus $v$ (see Figure 1). This
picture reveals that $\bar{v}(a,1,c)<1$ for some values of $c$. Notice also
that for $c\geq {1\over 2}$, the velocity is non-negative. Moreover, the local
map
has to be sufficiently non-symmetric, i.e.\ $c$ must be sufficiently
different from ${1\over 2}$, to have travelling fronts. Both these
remarks follow from the inequality
$$\eta(a,\epsilon,0)>{1\over 2}.$$
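The inverse-plotting method just described is straightforward to implement.
The sketch below is our own illustration: the sums defining
$\eta(a,\epsilon,v)$ are truncated at $k_{\max}$, and the minimum in
(\ref{VELOCITY}) is taken over a finite grid of velocities, so plateaus
narrower than the grid step are missed.

```python
import numpy as np

def eta(a, eps, v, kmax=40):
    """Truncation of eta(a,eps,v) = sum_{k>=1} sum_{n=-k}^{floor(vk)} l_{n,k}."""
    m = kmax
    L = np.zeros(2 * m + 3)               # l_{n,k} at fixed k, index n + m + 1
    L[m] = L[m + 2] = (1 - a) * eps / 2   # l_{-1,1} and l_{+1,1}
    L[m + 1] = (1 - a) * (1 - eps)        # l_{0,1}
    total = 0.0
    for k in range(1, m + 1):
        total += L[: m + 2 + int(np.floor(v * k))].sum()   # n <= floor(v k)
        # advance k -> k+1 with the recursion for l_{n,k}
        L = (a * (1 - eps) * L
             + (a * eps / 2) * (np.roll(L, 1) + np.roll(L, -1)))
    return total

def front_velocity(a, eps, c, vgrid=np.linspace(-1, 1, 2001)):
    """Grid approximation of bar v(a,eps,c) = min { v : eta(a,eps,v) >= c }."""
    for v in vgrid:
        if eta(a, eps, v) >= c:
            return float(v)
    return 1.0

a, eps = 0.5, 0.3
print("eta(a,eps,0) =", round(eta(a, eps, 0.0), 4))   # indeed > 1/2
for c in (0.3, 0.5, 0.7, 0.9):
    print(f"c = {c}: bar v = {front_velocity(a, eps, c):+.3f}")
```

For these parameters the jump of $\eta$ at $v=0$ is large, so the Devil's
staircase shows a wide $v=0$ plateau: only for $c$ close enough to 0 or 1
does the front actually travel.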
\section{The linear stability of fronts with rational velocity}
In this section, we improve the linear stability result of the
orbits in $\Lambda$ for the fronts and, also for the ghost fronts.
\bigskip
We now consider the configurations $x\in {\cal M}$ for which the code is
given by the Heaviside function, namely the {\bf kinks}. Among the kinks, we
shall use the front's configurations\footnote{From now on, the
parameters' dependence is removed unless an ambiguity results and,
we denote by $x$ or by $x^0$ the initial condition of the orbit
$\{x^t\}_{t\in{\Bbb N}}$.}:
$$\left( R_{\sigma}\right)_s=
\sum_{k=1}^{+\infty} \sum_{n=-k}^k l_{n,k}
H(s-n+\bar{v}k-\sigma)\quad \forall s \in {\Bbb Z} ,$$
where $\bar{v}$ is given by (\ref{VELOCITY}). Further, let the interval
$${\rm I}_{p\over q}^0=\left(\eta({p\over q}^-),\eta({p\over q})\right),$$
and $\lceil .\ \rceil$ be the ceiling function\footnote{
i.e.\ $\lceil x\rceil\in{\Bbb Z}$ and $x\leq \lceil x\rceil<x+1$}.
The linear stability of fronts is claimed in the following statement.
\begin{Pro}\label{STAB2}
For $c\in {\rm I}_{p\over q}^0$, let $\delta=\min\left\{ c-\eta({p\over q}^-),
\eta({p\over q})-c\right\}$ and $x$ a kink initial condition such that
\begin{equation}\label{CONDIN}
\exists \sigma\in{\Bbb R} : |\left( R_{\sigma}\right)_{s+\lceil\sigma\rceil}
-x_{s+\lceil\sigma\rceil}|\leq\delta a^{-k}\quad\forall k\geq 0,\quad
\forall -2k-1\leq s\leq 2k,
\end{equation}
then $R_{\sigma}^t$ and $x^t$ have the same code for all times and
$$\lim_{t\rightarrow +\infty}\|R_{\sigma}^t-x^t\|=0.$$
\end{Pro}
\begin{Rem}\label{T0}
In practice, according to the present phase space, the condition (\ref{CONDIN})
has only to hold for
$0\leq k\leq t_0$ where $t_0$ is such that $(M+1)a^{t_0}<\delta$.
\end{Rem}
\noindent
{\sl Proof:} Since $x$ is a kink that satisfies (\ref{CONDIN}), we have
according to the nearest neighbors coupling of the CML:
$$\forall t\geq 0\
\left\{\begin{array}{ccl}
x_{s+\lceil \sigma\rceil}^t<c&{\rm if}&s\leq -t-1\\
x_{s+\lceil \sigma\rceil}^t\geq c&{\rm if}&s\geq t.
\end{array}\right.$$
Using these inequalities, the condition (\ref{CONDIN}) and the definition of
$\delta$, one shows by induction, that
$$\forall t\geq 0\ |(R_{\sigma}^t)_{s+\lceil\sigma\rceil}
-x_{s+\lceil\sigma\rceil}^t|\leq\delta a^{-k}\quad\forall k\geq 0,\quad
\forall -2k-1-t\leq s\leq 2k+t.$$
The latter induces that both the orbits have the same code for all times.
$\Box $
\bigskip
When it exists, let us define the ghost front of velocity ${p\over q}$ by
$\{R_{{p\over q}t+\sigma}\}_{t\in{\Bbb Z} }$. For such orbits, the linear
stability's statement is slightly different than for the fronts:
\begin{Pro}\label{GHOST}
For $c=\eta({p\over q}^-)$, let $\delta=\phi(-{1\over q})-\phi(-{2\over q})$,
and $x$ a kink initial condition such that
$$
\exists \sigma\in{\Bbb R} : 0<x_{s+\lceil\sigma\rceil}-
\left( R_{\sigma}\right)_{s+\lceil\sigma\rceil}
\leq\delta a^{-k}\quad\forall k\geq 0,\quad\forall -2k-1\leq s\leq 2k,$$
then
$$\lim_{t\rightarrow +\infty}\|R_{{p\over q}t+\sigma}-x^t\|=0.$$
\end{Pro}
The proof is similar to the preceding one noticing that for the ghost
front
$$R_{{p\over q}(t+1)+\sigma}={1-a\over a}{\rm L}\theta^t+
{\rm L}R_{{p\over q}t+\sigma},$$
where $\theta_s^t=H(s-\lceil {p\over q}t+\sigma\rceil)$ and that $\forall
t\geq 0$
$$x_{s+\lceil\sigma\rceil}^t>
(R_{{p\over q}t+\sigma})_{s+\lceil\sigma\rceil}\quad \forall s.$$
\section{The interfaces and their velocity}
In this section, we study the
dynamics and the properties of the code of the orbits for which the state is
a kink for all (positive) times.
\begin{Def}\label{INTERF}
An {\bf interface} is a positive orbit $\{x^t\}_{t \in {\Bbb N}}$
such that
$$\forall t\in {\Bbb N}\ \exists J(x^t)\in{\Bbb Z}\ :\
\left\{
\begin{array}{ccl}
x_s^t<c&{\rm if}&s\leq J(x^t)-1 \\
x_s^t\geq c&{\rm if}&s\geq J(x^t),
\end{array} \right.$$
where the sequence $\left\{J(x^t)\right\}_{t\in{\Bbb N}}$ is called the {\bf temporal
code}. The velocity of an interface is the limit
$$v=\lim_{t\rightarrow +\infty}{J(x^t)\over t},$$
provided it exists.
\end{Def}
Notice that the front's temporal code is given by
\begin{equation}\label{TEMPCO}
\lceil \bar{v}t+\sigma\rceil\quad \forall t\in {\Bbb Z},
\end{equation}
namely it is uniform \cite{Laurent}.
\bigskip
To compare the kinks, the following partial order is considered
\begin{Def}
Let $x,y\in {\cal M}$. We say that $x\prec y$ iff $x_s\leq y_s\quad \forall
s\in{\Bbb Z}$.
\end{Def}
As a direct consequence of our CML, using this definition, we have
\begin{Pro}\label{ITEOR}
(i) If $x\prec y$ then $F_{\epsilon}x\prec F_{\epsilon}y$.
\noindent
(ii) If $x\prec y$ are two kinks, then $J(x)\geq J(y)$.
\end{Pro}
Moreover, we can ensure a kink to be the initial condition of
an interface in the following ways
\begin{Pro}\label{CROIS}
Let $x$ be a kink initial condition.
If $x_s\leq x_{s+1}\ \forall s\in{\Bbb Z}$, or if
$\epsilon\leq {2\over 3}$, then $\{x^t\}_{t\in{\Bbb N} }$ is an interface.
\end{Pro}
We will also use the asymptotic behaviour in space of the interfaces'
states:
\begin{Pro}
For an interface:
$$
\forall t\geq 0\ \left\{\begin{array}{ccccl}
|x_s^t|&\leq &a^tM&{\rm if}&s\leq J(x^0)-t-1\\
|1-x_s^t|&\leq &a^tM&{\rm if}&s\geq J(x^0)+t.
\end{array}\right.
$$
\end{Pro}
This result simply follows from the linearity of the CML and the inequality
\begin{equation}\label{INCRE}
|J(x^t)-J(x^0)|\leq t,
\end{equation}
which is a consequence of the nearest neighbors' coupling.
\bigskip
Combining the previous results, one can give some bounds for the temporal
code of an interface
\begin{Pro}\label{SIGMA}
If $c\in {\rm I}_{p\over q}^0$ and $\{x^t\}_{t\geq 0}$ is an interface, then
$$\exists \sigma_1,\sigma_2\in{\Bbb R}\ :\ \forall t\geq 0\
\lceil{p\over q}t+\sigma_1\rceil\leq J(x^t)\leq
\lceil{p\over q}t+\sigma_2\rceil .$$
\end{Pro}
\noindent
{\sl Proof:} We here prove the left inequality, the right one follows
similarly. If ${\displaystyle{p\over q}}=-1$,
then the statement holds by the inequality (\ref{INCRE}). Now, let $t_0$ be
given by Remark \ref{T0}. For ${\displaystyle{p\over q}}\neq -1$ we have
$$\left( R_0\right)_{-2t_0-1}>0.$$
Let $t_1$ be such that $a^{t_1}M\leq \left( R_0\right)_{-2t_0-1}$ and let
$\tilde\sigma_1=J(x^0)-t_1-2t_0-1$. Define
$$y_s=\left\{\begin{array}{ccl}
x_s^{t_1}&{\rm if}&s<\tilde\sigma_1-2t_0-1\\
\left( R_{\tilde\sigma_1}\right)_s&{\rm if}&\tilde\sigma_1-2t_0-1\leq s\leq
\tilde\sigma_1+2t_0\\
\max\left\{x_{J(x^{t_1})}^{t_1},x_s^{t_1}\right\}&{\rm if}&
s>\tilde\sigma_1+2t_0.
\end{array}\right.$$
Let us check that $x^{t_1}\prec y$. For $\tilde\sigma_1-2t_0-1\leq s\leq
\tilde\sigma_1+2t_0$, $y_s\geq \left( R_0\right)_{-2t_0-1}\geq x_s^{t_1}$
using the monotonicity of $R_{\tilde\sigma_1}$ and the previous proposition.
\bigskip
Hence, by Proposition \ref{ITEOR}, we obtain $J(x^{t+t_1})\geq J(y^t)
\quad \forall
t\geq 0$. But $y$ is a kink that satisfies the condition (\ref{CONDIN}) in
the framework of Remark \ref{T0},
therefore the left inequality is stated.
$\Box $
\bigskip
Finally, the statement on the velocity of any interface, valid for all
the values of the parameters, is given by
\begin{Thm}\label{INTVE}
If $\{x^t\}_{t\in{\Bbb N}}$ is an interface then, its velocity exists and
$$\lim_{t\rightarrow +\infty} {J(x^t)\over t}=\bar{v}(a,\epsilon,c).$$
\end{Thm}
\noindent
{\sl Proof:} If $c\in {\rm I}_{p\over q}^0$, the statement follows directly
from the previous proposition. Otherwise, for $\bar{v}$ given by
(\ref{VELOCITY}), choose ${\displaystyle{p_1\over q_1}}\leq \bar{v}$ and
${\displaystyle {p_2\over q_2}}\geq \bar{v}$, together with
$c_1\in {\rm I}_{p_1\over q_1}^0$ and $c_2\in {\rm I}_{p_2\over q_2}^0$.
Moreover, by the monotonicity of the local map, we have
$$\forall x\in {\cal M}\quad
F_{\epsilon,c_2}x\prec F_{\epsilon,c}x\prec F_{\epsilon,c_1}x,$$
where the dependence on $c$ has been added to the CML. It clearly follows that
$$
{p_1\over q_1}\leq\liminf_{t\rightarrow +\infty}{J(F_{\epsilon,c}^tx)\over t}
\leq\limsup_{t\rightarrow +\infty}{J(F_{\epsilon,c}^tx)\over t}
\leq {p_2\over q_2}.
$$
The convergence is then ensured by choosing ${\displaystyle{p_1\over q_1}}$ and
${\displaystyle{p_2\over q_2}}$ arbitrarily close to $\bar{v}$. $\Box $
\bigskip
To conclude this section, we mention that all these results, and in
particular Theorem \ref{INTVE}, can be extended to the orbits whose initial
condition satisfies
$$\lim_{s\rightarrow\pm\infty}H(x_s)-H(s)=0.$$
For such orbits, one can prove Theorem \ref{INTVE} for the following codes:
$$J_{inf}(x^t)=\min\{s: x_s^t\geq c\}\quad {\rm and}\quad
J_{sup}(x^t)=\max\{s: x_{s-1}^t< c\}.$$
Actually, we have
$$\lim_{t\rightarrow +\infty}{J_{inf}(x^t)\over t}=
\lim_{t\rightarrow +\infty}{J_{sup}(x^t)\over t}=\bar{v}.$$
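Theorem \ref{INTVE} can also be illustrated numerically (this is an illustration only, not part of the argument). The sketch below assumes the piecewise-affine bistable local map $f(x)=ax+(1-a)H(x-c)$, with $H$ the Heaviside function, and the nearest-neighbour coupling $x_s^{t+1}=(1-\epsilon)f(x_s^t)+{\epsilon\over2}\left(f(x_{s-1}^t)+f(x_{s+1}^t)\right)$; this form is consistent with the bound appearing in the proof of Proposition \ref{PROP} below, but should be taken here only as an assumption. It iterates a kink and estimates the velocity as $(J_{inf}(x^t)-J_{inf}(x^0))/t$.

```python
def front_velocity(a=0.5, eps=0.3, c=0.5, width=300, steps=200):
    """Iterate x_s^{t+1} = (1-eps) f(x_s) + (eps/2)(f(x_{s-1}) + f(x_{s+1}))
    from a kink 0...0 1...1 and return the estimated front velocity
    (J_inf(x^t) - J_inf(x^0)) / t, with J_inf(x) = min{s : x_s >= c}."""
    H = lambda v: 1.0 if v >= c else 0.0       # Heaviside step at the threshold c
    f = lambda v: a * v + (1.0 - a) * H(v)     # bistable piecewise-affine local map
    x = [0.0] * width + [1.0] * width          # kink: J_inf(x^0) = width
    for _ in range(steps):
        fx = [f(v) for v in x]
        # Interior sites evolve; boundary sites stay at the asymptotic fixed
        # points 0 and 1 (the front never reaches them: |J(x^t)-J(x^0)| <= t).
        x = ([0.0] +
             [(1 - eps) * fx[s] + 0.5 * eps * (fx[s - 1] + fx[s + 1])
              for s in range(1, 2 * width - 1)] +
             [1.0])
    J = min(s for s, v in enumerate(x) if v >= c)
    return (J - width) / steps
```

In agreement with the inequality (\ref{INCRE}), the estimate always lies in $[-1,1]$, and, as recalled in the concluding remarks, it is non-decreasing in $c$.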
\section{The nonlinear stability of fronts with rational velocity}
The linear stability result for the fronts can be extended,
again using the CML contracting property which appears in (\ref{LINDY}),
by proving that, for every kink initial condition, the corresponding
interface's code is identical to a front's code after a transient.
\subsection{The co-existence of fronts and non-uniform fronts}
We shall see below that the nonlinear stability of fronts can only be
claimed in some regions of the parameter space. We now justify these
restrictions.
\bigskip
Let a {\bf non-uniform front} be an interface in $\Lambda$ for which the
temporal code cannot be written in the form (\ref{TEMPCO}). Using the
admissibility condition, one can show the existence of such orbits given a
non-uniform code. In this way, we prove in Appendix \ref{NONUNI}, the
existence
of the non-uniform fronts with velocity ${1\over 2}$ and 0 respectively,
when $\epsilon$ is close to 1.
Since these orbits are interfaces, they co-exist with the fronts
of the same velocity by Theorem \ref{INTVE}. Moreover, it is possible
to state the linear stability of these orbits similarly to Proposition \ref{STAB2}. Hence, when the non-uniform fronts exist, the fronts do not
attract all the kink initial conditions.
\subsection{The nonlinear stability for weak coupling}
Here, we state the nonlinear stability result for fronts with
non-negative rational velocity, in the
neighborhood of $\epsilon=0$. The following assertion can be extended
similarly to the fronts with negative velocity.
\bigskip
Let
$$\Delta_{p\over q}\equiv\eta({p\over q})-\eta({p\over q}^-)=
\sum_{k=1}^{+\infty}l_{kp,kq}$$
and, for $\theta\in (0,1)$, define the interval
$${\rm I}_{p\over q}^{\theta}=\left(\eta({p\over q}^-),
\eta({p\over q})-\theta\Delta_{p\over q}\right).$$
We have:
\begin{Thm}\label{GLOBST}
Given ${p\over q}\in {\Bbb Q}\cap [0,1]$, $a\in (0,1)$ and $\theta\in (0,1)$, there
exists $\epsilon_0>0$ such that for any kink initial condition $x$:
$$\forall \epsilon\in (0,\epsilon_0)\quad \forall c\in
{\rm I}_{p\over q}^{\theta}\quad \exists\sigma\in{\Bbb R}\ :\
\lim_{t\rightarrow +\infty}\|R_{\sigma}^t-x^t\|=0.$$
\end{Thm}
\begin{Rem}\label{VIT0}
For the velocity ${p\over q}=0$, the statement still holds for $\theta=0$.
\end{Rem}
Moreover, for the velocity ${p\over q}=1$, using similar techniques, we have
proved that the theorem holds with $\theta=0$ and $\epsilon_0=1$ for all the
interfaces.
\subsection{Proof of the nonlinear stability}
In the proof of Theorem \ref{GLOBST} we assume $\epsilon<{2\over 3}$; then,
by Proposition
\ref{CROIS}, all the orbits $\{x^t\}_{t\in{\Bbb N}}$ under consideration are
interfaces.
\bigskip
First, we show that the interfaces propagate forward provided
some conditions on the parameters are satisfied.
\begin{Pro}\label{PROP}
There exists $\epsilon_1>0$ such that for any
interface $\{x^t\}_{t\in{\Bbb N}}$ and any $c\geq {1\over 2}$, we have
$$\forall \epsilon\in (0,\epsilon_1)\quad \exists t_0\ :\ \forall
t\geq t_0\quad J(x^{t+1})\geq J(x^t).$$
\end{Pro}
\noindent
{\sl Proof:} By the choice of the phase space, all the orbits of the CML
are bounded in the following way:
$$\forall \delta>0\quad \exists t_0\ :\ \forall t\geq t_0\quad
-\delta <x_s^t<1+\delta\quad \forall s\in{\Bbb Z}.$$
In particular, for an interface:
$$x_{J(x^t)-1}^{t+1}<a(1-{\epsilon\over 2})c+{\epsilon\over 2}+
{a\epsilon\delta\over 2}\quad \forall t\geq t_0.$$
Hence, if $\epsilon$ is sufficiently small, the statement holds for all
$c\geq {1\over 2}$. $\Box $
\bigskip
Next, we introduce a convenient configuration.
Then, after computing the code of the corresponding orbit, we show
a dynamical relation between the code of any interface and the code of this
particular orbit.
\bigskip
Let
$$\forall j\in{\Bbb Z}\quad\left( S_j\right)_s=
{\displaystyle \sum_{k=1}^{+\infty} \sum_{n=-k}^k}
l_{n,k} H(s-n-j)\quad \forall s \in {\Bbb Z}.$$
Notice that if $c\in (\eta(0^-),\eta(0)]$, $S_j$ is the front of
velocity 0. When $c\not\in (\eta(0^-),\eta(0)]$, this configuration
still provides a convenient comparison orbit. Actually, the temporal code for the orbit
$\{S_0^t\}_{t\in{\Bbb N}}$ is shown to be given by:
\begin{Pro}\label{THETA}
Given ${p\over q}>0$, $a\in (0,1)$ and $\theta\in [0,1)$, there exists
$\epsilon_0 >0$ such that
$$\forall \epsilon\in (0,\epsilon_0)\quad \forall c\in
{\rm I}_{p\over q}^{\theta}\quad J(S_0^t)=\lceil (t+1){p\over q}\rceil
\quad\forall t\geq 0.$$
Moreover
$$\inf_{s,t}|(S_0^t)_s-c|>0.$$
\end{Pro}
The proof is given in Appendix \ref{P-THETA}.
\bigskip
We now state the main property of an interface's code.
\begin{Pro}\label{PR85}
For an interface we have,
under the same conditions as in the previous proposition,
\begin{equation}\label{BORNE}
\exists n_0\quad \forall n\geq n_0\quad
J(x^{t+n})\leq J(S_0^{t-1})+J(x^n)\quad\forall t\geq 1.
\end{equation}
\end{Pro}
\noindent
{\sl Proof:} Using the relation (\ref{LINDY}) for $t=n-1$, we obtain the
relation between the interfaces' states and the configuration $S_j$:
$$x^n=S_{J(x^{n-1})}+{\rm L}\left(x^{n-1}-S_{J(x^{n-1})}\right).$$
By induction, the latter leads to
$$x^n=S_{J(x^{n-1})}
+{\displaystyle \sum_{k=m+1}^{n-1}} {\rm L}^{n-k}\left(S_{J(x^{k-1})}-
S_{J(x^k)}\right)+{\rm L}^{n-m}\left(x^m-S_{J(x^m)}\right)\quad
\forall n>m.$$
Then the monotonicity of $S_j$, the positivity of ${\rm L}$ and
Proposition \ref{PROP} induce the following inequality for $m$ large enough and
$n>m$
$$\tilde S_n\prec x^n,$$
where
$$\tilde S_n=S_{J(x^{n-1})}+{\rm L}^{n-m}\left(x^m-S_{J(x^m)}\right).$$
Consequently
$$J(x^{t+n})\leq J(\tilde S_n^t)\quad \forall t\geq 0.$$
\bigskip
Now, given $\delta={\displaystyle \inf_{s,t}|(S_0^t)_s-c|}$
according to the previous Proposition, let $n_0>m$ be such that
$$a^{n_0-m}\|x^m-S_{J(x^m)}\|<\delta.$$
It follows that
$$\|\tilde S_n-S_{J(x^{n-1})}\|<\delta \quad \forall n\geq n_0+1,$$
and hence, by Proposition \ref{STAB1}
$$\begin{array}{rl}
J(\tilde S_n^t)=&J(S_{J(x^{n-1})}^t)\\
=&J(S_0^t)+J(x^{n-1})\quad \forall t\geq 0,n\geq n_0+1.
\end{array}$$
$\Box$
\bigskip
Finally, we state the result which ensures that a sequence is uniform.
\begin{Lem}\label{ENS}
Let $\{ j_n\}_{n\in{\Bbb N} }$ be an integer sequence which satisfies the
conditions $\forall n\geq n_0$:
\noindent
(i) $j_{n+k}\leq\lceil {p\over q}k\rceil+j_n \quad \quad
\forall k\geq 0,$
\noindent
(ii) $j_n\geq \lceil {p\over q}n+\sigma\rceil\quad $ for a fixed
$\sigma\in{\Bbb R}$,
\noindent
then
$$\exists \gamma\in{\Bbb R}\ {\rm and}\ n_1\geq n_0 : \forall n\geq n_1\quad
j_n=\lceil {p\over q}n + \gamma\rceil .$$
\end{Lem}
The proof is given in Appendix \ref{P-ENS}.
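As a consistency check (illustrative only, using exact rational arithmetic), one can verify that the uniform code $j_n=\lceil {p\over q}n+\gamma\rceil$ satisfies both hypotheses of Lemma \ref{ENS}, with $\sigma=\gamma$ and $n_0=0$; condition {\it (i)} follows from the subadditivity $\lceil x+y\rceil\leq\lceil x\rceil+\lceil y\rceil$ of the ceiling function.

```python
import math
from fractions import Fraction

def uniform_code(p, q, gamma, n):
    """j_n = ceil((p/q) n + gamma), the temporal code of a front
    (gamma should be a Fraction so that the ceiling is exact)."""
    return math.ceil(Fraction(p, q) * n + gamma)

def satisfies_lemma_hypotheses(p, q, gamma, N=60):
    """Check conditions (i) and (ii) of the lemma, with sigma = gamma,
    for all 0 <= n and n + k < N."""
    j = [uniform_code(p, q, gamma, n) for n in range(N)]
    cond_i = all(j[n + k] <= math.ceil(Fraction(p, q) * k) + j[n]
                 for n in range(N) for k in range(N - n))
    cond_ii = all(j[n] >= math.ceil(Fraction(p, q) * n + gamma)
                  for n in range(N))
    return cond_i and cond_ii
```

The converse implication, namely that {\it (i)} and {\it (ii)} force the sequence to be eventually of this uniform form, is precisely the content of the lemma.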
\bigskip
Collecting the previous results, we can conclude that,
under the conditions of Propositions \ref{PROP} and
\ref{THETA}, the temporal code of any interface satisfies the condition
{\it (i)} of Lemma \ref{ENS} (notice that Proposition \ref{PROP} only serves in
the proof of Proposition \ref{PR85}). Moreover, this code also
satisfies
the condition {\it (ii)} by Proposition \ref{SIGMA}. Hence, after a transient,
all the interfaces have a front's code. By (\ref{LINDY}), they consequently
converge to a front.
\bigskip
To conclude the proof of Theorem \ref{GLOBST} in the framework of Remark
\ref{VIT0}, we let $c\in{\rm I}_0^0$. Hence $J(S_0^t)=0\quad \forall t$.
If $c\geq {1\over 2}$, the statement holds using Proposition \ref{PROP} and the
relation (\ref{BORNE}). For $c<{1\over 2}$, one can prove the following
inequalities:
$$\exists t_0\ : \ \forall t\geq t_0\quad J(x^{t+1})\leq J(x^t),$$
similarly to Proposition \ref{PROP} and,
$$\exists n_1\ : \ \forall n\geq n_1\quad J(x^{n+1})\geq J(S_0)+J(x^n),$$
similarly to the relation (\ref{BORNE}).
\section{Concluding Remarks}
In this article, we have considered a simple space-time discrete model for
the dynamics of various nonlinear extended systems out of equilibrium. The
(piece-wise) linearity of this bistable CML allowed the construction of a
bijection
between the set of global orbits and the set of admissible
codes. When they do not reach the discontinuity, these orbits are linearly
stable. For
$\epsilon < {1\over 2}$, the CML is injective on ${\cal M}$, then $\Lambda$ can
be identified with the limit set
$$\bigcap_{t=0}^{+\infty}F_{\epsilon}^t({\cal M}),$$
which attracts all the orbits. These comments justify the study of the global
orbits and in particular, the study of fronts which occur widely in extended
systems.
\bigskip
The existence of fronts, with a parametric dependence for their velocity, has
been proved using the spatio-temporal coding. The velocity was shown to be
increasing with $c$. We have in addition checked numerically that $\bar{v}$ is
also an increasing function of
$\epsilon$ on $(0,{1\over 2})$ for any $a$ and on $(0,1)$ for any $a\in
(0,{1\over 2})$ or for any $a$ if $c$ is such that $\bar{v}(a,1,c)=1$.
Moreover, one can find some values of $a$ and $c$ for which the
front velocity does not (always) increase with $\epsilon$ (see Figure 2). The
spatio-temporal coding also serves to show the existence of other patterns
such as non-uniformly propagating interfaces (Appendix \ref{NONUNI}). When
these exist, they always co-exist with the fronts of the same velocity.
\bigskip
Furthermore, we have considered the more general dynamics of interfaces.
Using the temporal code, we have shown that all these orbits have the front's
velocity uniquely determined by the CML parameters.
\bigskip
The stability of fronts was also proved, firstly with respect to
initial conditions close to a front state in their ``center'', and secondly
with respect to any kink for the fronts with non-negative rational velocity,
assuming some restrictions on the parameters. Actually, the latter allows us to
avoid the existence of non-uniform fronts which would attract certain
interfaces.
\bigskip
Finally, notice that all the results stated for the orbits that never
reach the discontinuity can be extended to some CMLs with a $C^{\infty}$
bistable local map.
Indeed, one can modify the local map into a $C^{\infty}$ one, in the
intervals which the orbits in $\Lambda$ never visit, without changing these
orbits. In other words, all our results stated in open sets can
be extended to differentiable maps, in particular, the existence and the
linear stability of fronts and non-uniform fronts with rational velocity.
\bigskip
These last results show the robustness of fronts in such models,
emphasising their realistic nature.
\bigskip
\noindent
{\bf Acknowledgments}
\noindent
We wish to thank R.\ Lima, E.\ Ugalde, S.\ Vaienti and R.\ Vilela-Mendes for
fruitful discussions and relevant comments. We also thank L.\ Raymond
for his contribution to the proof of the nonlinear stability and P.\ Collet
for drawing our attention to this piece-wise affine model.
\vfill\eject
\section{Introduction}
Physical systems at finite temperature develop multiple
mass scales. Decoupling theorems \cite{App75} ensure that effective
representations
can be found for the lighter degrees of freedom at scales asymptotically
separated from the heavier scales. The effective couplings can be related
to the parameters of the original system by matching values of the
same quantities calculated in the effective model and the original model
simultaneously \cite{Wei80,Geo93,Bra95}.
In weakly coupled gauge theories (the class to which the electroweak theory
belongs) matching can be performed perturbatively.
The simplest approach is to match the renormalised Green's functions
computed in the two theories separately. This tacitly assumes the
renormalisability of the effective model. Though local operators of lower
scaling dimension indeed dominate at large distances, it is important
to assess quantitatively the rate of suppression of higher dimensional
operators.
In finite temperature theories, integration over non-static fields
yields, for static $n$-point functions with typical momenta ${\cal O}(p)$,
expressions analytic in $p^2/M_{heavy}^2$; therefore the occurring
non-localities can always be expanded in a kind of gradient expansion.
We shall demonstrate, that in Higgs systems to
${\cal O}(g^4,\lambda g^2,\lambda^2$) accuracy, the non-local effects
in the effective Higgs-potential arising from non-static fields
can be represented {\it exactly} by correcting the
coefficient of the term quadratic in the Higgs-field.
Similar conclusions are arrived at when the contribution of non-static modes
to higher dimensional (non-derivative) operators is investigated.
The situation changes when a heavy {\it static} field is
integrated out. The contribution to the polarisation functions from this
degree of freedom is analytic neither in $p^2/M_{heavy}^2$ nor
in $\Phi^2/M_{heavy}^2$. For the momentum dependence no gradient expansion
can be proven to exist.
Though the contribution to the effective Higgs-potential can be represented
approximately by an appropriately corrected local gauge-Higgs
theory, it does not automatically correspond to a systematic weak coupling
expansion. Assuming that the important range of variation of the Higgs
field is $\sim gT$, one proceeds with the expansion, and the validity of this
assumption is checked {\it a posteriori}. In case of strong first order
transitions this assumption is certainly incorrect, but it seems to be
justified for the Standard Model with Higgs masses around 80~GeV.
Non-analytic dependence on field variables is reflected in the
appearance of non-polynomial local terms in the reduced effective theory.
Consistent treatment of such pieces is an interesting result of our paper
(see the discussion of the U(1) case in \cite{Jak96b}).
The method of integration over specific (heavy static and non-static)
classes of fields in the path integral (Partial Path Integration, PPI),
instead of matching,
provides us with a direct method of inducing all non-local and non-polynomial
corrections automatically. Though the calculations are much more painful
than those involved in matching, the value of PPI in our eyes is just the
possibility of assessing the range of applicability of the technically
simpler method.
The explicit expression of the non-local and/or non-polynomial
parts of the action and the impact of these
terms on the effective finite temperature Higgs-potential will be
investigated in the present paper for the SU(2) gauge-Higgs model.
This model is of central physical importance in investigating the nature of
the cosmological electroweak phase transition \cite{Sin95}. Earlier, PPI has
been applied to the O(N) model by one of us \cite{Jak96}.
A similar, but even more ambitious program is being pursued by Mack and
collaborators \cite{Ker95}. They attempt to perform PPI with the goal of
deriving in a unique procedure the perfect lattice action of the coarse
grained effective model for the light degrees of freedom.
Our calculation stays within the continuum perturbative framework.
PPI will be performed with two-loop accuracy for two reasons:
i) The realisation of PPI at the Gaussian level does not
produce non-local terms, though non-polynomial local terms already do appear.
The consistent cut-off independence of the perturbation theory
with non-polynomial terms can be tested first in
calculations which involve also 2-loop diagrams.
ii) It has been demonstrated that the quantitative non-perturbative
characterisation of the effective SU(2) gauge-Higgs
model is sensitive to two-loop corrections of the cut-off (lattice
spacing) dependence of its effective bare couplings \cite{Kaj95,Kaj96}.
We would like to investigate possible variations of the relation of the
effective couplings to the parameters of the original theory when the applied
regularisation and renormalisation schemes change.
The integrations over the non-static and the heavy static fields will not
be separated, but are performed in parallel with the help of the thermal
counterterm technique already applied in the PPI of the U(1) Higgs model
\cite{Jak96b}.
The paper is organised in the following way. In section 2 the basic
definitions are given and the operational steps involved in
the 2-loop PPI are listed. In Section 3 we are going to discuss the
contributions from fully non-static fluctuations. In Section 4 the
contribution of the diagrams involving also heavy static propagators is
discussed. In both sections particular attention is paid to the analysis of
the effective non-local interactions. In Section 5 the effective Higgs
potential is calculated from the 2-loop level effective
non-local and non-polynomial 3-d (NNA) action with 2-loop accuracy.
Also the local polynomial (LPA) and local non-polynomial (LNA) approximations
are worked out. The quantitative perturbative characterisation of the phase
transition and its comparison with other approaches will be presented
in Section 6. Our
conclusions are summarised in Section 7. In order to make the paper better
readable, the details of the computations discussed in Sections 3 to 5 are
relegated to various Appendices.
\section{Steps of perturbative PPI}
The model under consideration is the SU(2) gauge-Higgs model with one complex
Higgs doublet:
\begin{eqnarray}
&
{\cal L}={1\over 4} F_{mn}^aF_{mn}^a
+ {1\over 2} (\nabla_m\Phi)^{\dagger}(\nabla_m\Phi)
+ {1\over 2} m^2 \Phi^\dagger\Phi + {\lambda\over24}(\Phi^\dagger\Phi)^2
+ {\cal L}_{CT},\nonumber\\
&
F_{mn}^a =\partial_m A_n^a -\partial_n A_m^a + g \epsilon_{abc}
A_m^bA_n^c, \nonumber\\
&
\nabla_m =\partial_m - igA_m^a\tau_a.
\end{eqnarray}
The integrations over the non-static Matsubara modes and the static electric
component of the vector potential will be performed with two-loop accuracy.
In the case of the static electric potential resummed perturbation theory
is applied, simply realised
by adding the mass term $(Tm_D^2/2)\int d^3x\left(A_0^a({\bf x})\right)^2$
to the free action and compensating for it in the interaction piece.
The resummation discriminates between the static and non-static $A_0$
components, therefore in the Lagrangean density the replacement
\begin{equation}
A_0({\bf x},\tau )\rightarrow A_0({\bf x})+a_0({\bf x},\tau )
\end{equation}
should be done. (With lower case letters we always refer to non-static
fields.)
In the first step we are going to calculate the local (potential) part of the
reduced Lagrangean. For this a constant background is introduced into the
Higgs-multiplet:
\begin{eqnarray}
&
\Phi ({\bf x},\tau )=\Phi_0+\phi ({\bf x},\tau ),\nonumber\\
&
\phi ({\bf x},\tau )=\left(\matrix{\xi_4+i\xi_3\cr
i\xi_1-\xi_2\cr}\right),
\end{eqnarray}
where $\Phi_0$ was chosen to be the constant counterpart of $\xi_4$.
The dependence on the other static components is reconstructed by requiring
the O(4) symmetry of the potential energy.
In the second step the kinetic piece of the reduced Lagrangean is
investigated. In order to fix the coefficient of the conventional kinetic
piece to 1/2, one has to rescale the fields. The wave function
renormalisation constants should be calculated with
${\cal O}(g)$ accuracy. The derivative part of the scalar
action can be extended upon the requirement of spatial gauge invariance
to include the magnetic vector-scalar interaction into itself.
Higher derivative contributions to the effective action can be summarized
under the common name {\it non-local} interactions. They appear first in the
two-loop calculation of the polarisation functions of the different fields.
The exponent of the functional integration has the following explicit
expression:
\begin{eqnarray}
&
{\cal L}^{(cl)}+{\cal L}^{(2)}= & {1\over 2} m^2\Phi_0^2+{\lambda\over24}\Phi_0^4 \nonumber\\
&&+{1\over 2} a_m^a(-K)\left[\left(K^2+{g^2\over4}\Phi_0^2\right)\delta_{mn}-
\left(1-{1\over\alpha}\right)K_mK_n\right]a_n^a(K)\nonumber\\
&&+{1\over 2} A_0^a(-k)\left[k^2+{g^2\over4}\Phi_0^2+m_D^2\right]A_0^a(k)
\nonumber\\
&&+{1\over 2}\xi_H(-K)\left[K^2+m^2+{\lambda\over2}\Phi_0^2\right]\xi_H(K)
\nonumber\\
&&
+{1\over 2}\xi_a(-K)\left[K^2+m^2+{\lambda\over6}\Phi_0^2\right]\xi_a(K)
+c_a^\dagger(K)K^2c_a(K).
\end{eqnarray}
where $c$ denotes the ghost fields, and the notation $\xi_4\equiv\xi_H$ has
been introduced. Landau gauge fixing and the
Faddeev-Popov ghost terms are included.
The above expression implies the following propagators for the different fields
to be integrated out:
\begin{eqnarray}
& \langle a_m^a(P)a_n^b(Q) \rangle = {\displaystyle\delta_{ab}\over
\displaystyle P^2+m_a^2}
\left(\delta_{mn}-{\displaystyle P_mP_n\over\displaystyle P^2}\right)
\hat\delta(P+Q), & m_a^2={g^2\over 4}\Phi_0^2,\nonumber\\
& \langle A_0^a(P)A_0^b(Q) \rangle = {\displaystyle\delta_{ab}\over
\displaystyle P^2+m_{A0}^2}\hat\delta(P+Q),
& m_{A0}^2=m_D^2+{g^2\over 4}\Phi_0^2,\nonumber\\
& \langle \xi_H(P)\xi_H(Q) \rangle ={\displaystyle1\over
\displaystyle P^2+m_H^2}\hat\delta(P+Q) ,
& m_H^2=m^2+{\lambda\over2}\Phi_0^2,\nonumber\\
& \langle \xi_a(P)\xi_b(Q) \rangle ={\displaystyle\delta_{ab}\over
\displaystyle P^2+m_G^2}
\hat\delta(P+Q), & m_G^2=m^2+{\lambda\over6}\Phi_0^2,\nonumber\\
& \langle c_a^\dagger(P)c_b(Q) \rangle = -{\displaystyle\delta_{ab}\over
\displaystyle P^2}\hat\delta(P-Q).&
\end{eqnarray}
($\hat\delta$ denotes the finite temperature generalisation of Dirac's
delta-function.)
The interaction part of the Lagrangean density consists of two
parts. In the first set all non-quadratic vertices are collected:
\begin{eqnarray}
&{\cal L_I}= & igS_{abc}^{mnl}(P,Q,K)a_m^a(P)a_n^b(Q)a_l^c(K)\nonumber\\
&&+
3igS_{abc}^{mn0}(P,Q,k)
a_m^a(P)a_n^b(Q)A_0^c(k)\nonumber\\
&&+{g^2\over4}\left[(a_m^aa_m^a)^2-(a_m^aa_n^a)^2+2(A_0^aA_0^aa_i^ba_i^b
-A_0^aA_0^ba_i^aa_i^b)\right]\nonumber\\
&& +g^2(A_0^aa_0^aa_i^ba_i^b-A_0^aa_0^ba_i^aa_i^b)
\nonumber\\
&& -i{g\over2}a_m^a(P)\left((K-Q)_m\xi_a(Q)\xi_H(K)
+\epsilon_{abc}K_m\xi_b(Q)\xi_c(K)\right)\nonumber\\
&& -i{g\over2}A_0^a(p)\left((K-Q)_0\xi_a(Q)\xi_H(K)
+\epsilon_{abc}K_0\xi_b(Q)\xi_c(K)\right)\nonumber\\
&& +{g^2\over8}(a_m^aa_m^a+2A_0^aa_0^a+A_0^aA_0^a)(\xi_H^2+\xi_a^2)
+{g^2\over4}\Phi_0\xi_H(a_m^aa_m^a+2A_0^aa_0^a)\nonumber\\
&& +{\lambda\over24}(\xi_H^2+\xi_a^2)^2
+{\lambda\over6}\Phi_0\xi_H(\xi_H^2+\xi_a^2)
+ig\epsilon_{abc}P_mc_a^\dagger(P)a_m^b(Q)c_c(K)\nonumber\\
&&
+ig\epsilon_{abc}P_0c_a^\dagger(P)A_0^b(q)c_c(K),
\label{eq:pert}
\end{eqnarray}
where the symmetrised trilinear coupling is
\begin{equation}
S_{abc}^{mnl}(P,Q,K)={1\over6}\epsilon_{abc}\left[(P-Q)_l\delta_{mn}+
(Q-K)_m\delta_{nl}+(K-P)_n\delta_{lm}\right].
\end{equation}
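As a quick numerical sanity check (illustrative only, not part of the derivation), one can verify that $S_{abc}^{mnl}(P,Q,K)$ is invariant under the simultaneous exchange $(a,m,P)\leftrightarrow(b,n,Q)$ of two legs, as total symmetrisation requires; the sketch below evaluates both sides at random momenta, labelling the adjoint indices $0,1,2$ with $\epsilon_{012}=+1$. The checks for the other leg pairs work the same way.

```python
import itertools
import random

EPS = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}

def eps(a, b, c):
    """Totally antisymmetric symbol with eps(0,1,2) = +1."""
    return EPS.get((a, b, c), 0)

def S(a, b, c, m, n, l, P, Q, K):
    """Symmetrised trilinear coupling S_{abc}^{mnl}(P, Q, K)."""
    d = lambda i, j: 1.0 if i == j else 0.0   # Kronecker delta
    return eps(a, b, c) / 6.0 * ((P[l] - Q[l]) * d(m, n)
                                 + (Q[m] - K[m]) * d(n, l)
                                 + (K[n] - P[n]) * d(l, m))

def is_symmetric(seed=0):
    """Check invariance under the leg exchange (a,m,P) <-> (b,n,Q)
    at random Euclidean 4-momenta, up to floating-point rounding."""
    rng = random.Random(seed)
    P, Q, K = ([rng.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(3))
    return all(abs(S(a, b, c, m, n, l, P, Q, K)
                   - S(b, a, c, n, m, l, Q, P, K)) < 1e-12
               for a, b, c in itertools.permutations(range(3))
               for m in range(4) for n in range(4) for l in range(4))
```

Analytically, the exchange flips the sign of both $\epsilon_{abc}$ and the bracket, so the product is invariant; the numerical check confirms this term by term.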
The second piece is quadratic and corresponds to the $T=0$ and the thermal
counterterms:
\begin{eqnarray}
&{\cal L}_{CT}= & {1\over 2}(Z_\Phi\delta m^2+(Z_\Phi-1)m^2)\Phi_0^2
+(Z_\Phi^2Z_\lambda-1){\lambda\over24}\Phi_0^4\nonumber\\
&&
+{1\over 2}(Z_A-1)a_m^a\left(K^2\delta_{mn}-\left(1-{1\over\alpha}\right)K_mK_n
\right)a_n^a\nonumber\\
&& +{g^2\over8}(Z_AZ_\Phi Z_g^2-1)\Phi_0^2a_m^2
+{1\over 2} (Z_A-1)A_0^a k^2 A_0^a \nonumber\\
&&
+{g^2\over8}(Z_AZ_\Phi Z_g^2-1)\Phi_0^2A_0^2
-{1\over 2}m_D^2 A_0^2 \nonumber\\
&& +{1\over 2}\xi_H\left((Z_\Phi-1)(K^2+m^2)+Z_\Phi\delta m^2
+(Z_\Phi^2Z_\lambda-1){\lambda\over2}\Phi_0^2\right)\xi_H\nonumber\\
&& +{1\over 2}\xi_a\left((Z_\Phi-1)(K^2+m^2)+Z_\Phi\delta m^2
+(Z_\Phi^2Z_\lambda-1){\lambda\over6}\Phi_0^2\right)\xi_a\nonumber\\
&&
+(Z_c-1)c_a^\dagger K^2c_a.
\label{Lct}
\end{eqnarray}
The multiplicative and additive renormalisation constants are defined from
the relations between the renormalised and the bare couplings, as listed next:
\begin{eqnarray}
& g_B=Z_g g, & \lambda_B=Z_\lambda\lambda ,\nonumber\\
& A_B=Z_A^{1/2} A, & \Phi_B=Z_\Phi^{1/2} \Phi ,\nonumber\\
& c_B=Z_c c, & m_B^2=m^2+\delta m^2.
\end{eqnarray}
The quadratic approximation to the counterterms is sufficient
for our calculation, since they contribute only at one-loop.
The new couplings, emerging after the 4-d (T=0) renormalisation conditions
are imposed, are finite from the point of view of the 4-d
ultraviolet behavior, but should be considered to be the bare couplings
of the effective field theory, since they might depend explicitly on
the 3-d cut-off.
\section{Non-static fluctuations}
\subsection{The potential (local) term of the effective action}
The 1-loop contribution to the potential energy of a homogenous
$\Phi_0$-background is expressed through the standard sum-integral
(with the $n=0$ mode excluded from the sum)
\begin{equation}
I_4(m)=\int_K^\prime\ln (K^2+m^2).
\end{equation}
The meaning of the primed integration symbol and the characteristic features
of this integral are detailed in Appendix A.
Also the counterterms evaluated with the background $\Phi_0$ belong to the
1-loop potential. The complete 1-loop expression of the regularised bare
potential is given by
\begin{eqnarray}
V_{{CT\!\!\!\!\!\!\!\!\!\rule[0.5 ex]{1em}{0.5pt}}\hskip.5em}^{(1)} &=& {1\over 2}\Phi_0^2
\biggl[\left({9g^2\over 4} +\lambda\right){\Lambda^2\over 8\pi^2}
+\left({3 g^2\over 16} + {\lambda\over 12} \right) T^2\nonumber\\
&&
-\left( {3 g^2\over 2} + \lambda\right){\Lambda T\over 2\pi^2}
+\left(1 + 2 I_{20}\lambda - {\lambda\over 8\pi^2}\ln{\Lambda\over T}\right)
m^2\biggr]\nonumber\\
&&
+{1\over 24}\Phi_0^4\biggl[\lambda +
\left({27g^4\over 4}+4\lambda^2\right)
\left(I_{20}-{1\over 16\pi^2}\ln{\Lambda\over T}\right)\biggr]
\label{eq:1leffact}
\end{eqnarray}
(for the meaning of the notation $I_{20}$, and others below, see Appendix A).
Its temperature independent cut-off dependences are cancelled by
appropriately choosing the coefficients of the renormalisation constants
in the ``classical'' counterterm expression:
\begin{equation}
V_{CT}^{(1)}={1\over 2}(\delta m_1^2+Z_{\Phi1}m^2)\Phi_0^2
+(2Z_{\Phi1}+Z_{\lambda1}){\lambda\over24}\Phi_0^4.
\end{equation}
The list of two-loop diagrams and the algebraic expressions for each of them
is the same as in case of the effective potential calculation. They appear in
several papers \cite{Arn93,Fod94} in terms of two standard sum-integrals,
in addition to the $I_4$ function defined above.
These functions in the present case are analytic in the propagator masses
$m_i^2$ because the zero-frequency modes are excluded from the summation:
\begin{eqnarray}
H_4(m_1,m_2,m_3) &=
\int_{P1}^\prime\int_{P2}^\prime\int_{P3}^\prime\delta (P_1+P_2+P_3)\nonumber\\
&{\displaystyle 1\over
\displaystyle (P_1^2+m_1^2)(P_2^2+m_2^2)(P_3^2+m_3^2)},\nonumber\\
L_4(m_1,m_2) &=\int^\prime_{P1}\int^\prime_{P2}{\displaystyle (P_1P_2)^2\over
\displaystyle P_1^2(P_1^2+m_1^2)P_2^2(P_2^2+m_2^2)}.
\end{eqnarray}
The latter expression cancels algebraically from the sum of the 2-loop
contributions. The main properties of the functions $H_4$ and $L_4$
appear in Appendix A. The sum of the two-loop contributions
without the counterterms,
displaying typical three-dimensional linear and logarithmic cut-off
dependences, leads to the following regularised expression (for the values of
the constants $K_{..}$ and $I_{..}$, see Appendix A):
\begin{eqnarray}
V^{(2)}_{{CT\!\!\!\!\!\!\!\!\!\rule[0.5 ex]{1em}{0.5pt}}\hskip.5em} &=& \Phi_0^4\,\Biggl\{\,
g^6 \left({21 I_{20}^2\over 16} + {15 I_3\over 64}
+ {87 K_{10}\over 128}\right)
+ g^4 \lambda\left({33 I_{20}^2\over 32} - {K_{10}\over64}\right) \nonumber\\
&& + g^2 \lambda^2 \left({3 I_3\over 32} + {K_{10}\over8}\right)
+ \lambda^3 \left({5 I_{20}^2\over 18} +{I_3\over24}
- {7 K_{10}\over 108}\right)\nonumber\\
&& + \ln{\Lambda\over T} \Biggl[g^2 \lambda^2 {K_{1log}\over 8}
+ g^6 \left({87 K_{1log}\over 128} - {21 I_{20}\over128 \pi^2}\right)
\nonumber\\
&& + g^4 \lambda \left(-{K_{1log}\over64}-{33I_{20}\over256\pi^2}\right)
+ \lambda^3 \left(-{7 K_{1log}\over 108}-{5I_{20}\over144\pi^2}\right)
\Biggr]\nonumber\\
&& + \left(\lot\right)^2\Biggl[-{177 g^6\over16384 \pi^4} + {9 g^4 \lambda\over2048 \pi^4}
- {3 g^2 \lambda^2\over1024 \pi^4}
+ {\lambda^3\over384 \pi^4}\Biggr]\Biggr\}\nonumber\\
&+&\!\!\Phi_0^2\,\Biggl\{\,
\Lambda^2\Biggl[\lambda^2 \left(-{K_{02}\over6} + {I_{20}\over8 \pi^2}\right)
+ g^2 \lambda \left({3 K_{02}\over 4} + {9 I_{20}\over32 \pi^2}\right)
\nonumber\\
&& + g^4 \left({81 K_{02}\over 32}
+ {15 I_{20}\over16 \pi^2}\right)\Biggr]
+ \Lambda^2\ln{\Lambda\over T}\Biggl[-{15 g^4\over256 \pi^4} -
{9 g^2 \lambda\over512 \pi^4}
- {\lambda^2\over128 \pi^4}\Biggr]\nonumber\\
&& + \Lambda T \Biggl[g^4 \left({81 K_{01}\over 32}-{15I_{20}\over4\pi^2}\right)
+ g^2 \lambda \left({3 K_{01}\over 4} - {9 I_{20}\over8 \pi^2}\right)
\nonumber\\
&& + \lambda^2 \left(-{K_{01}\over6} - {I_{20}\over2 \pi^2}\right)\Biggr]
+ \Lambda T\ln{\Lambda\over T} \Biggl[{15 g^4\over64 \pi^4} + {9 g^2 \lambda\over128 \pi^4}
+ {\lambda^2\over32 \pi^4}\Biggr]\nonumber\\
&& + T^2\Biggl[g^4\left({5 I_{20}\over 8} + {81 K_{00}\over 32}\right)
+ \lambda g^2\left({3 I_{20}\over 16} +{3 K_{00}\over 4}\right)\nonumber\\
&& + \lambda^2 \left({I_{20}\over12} - {K_{00}\over6}\right)\Biggr]
+ T^2\ln{\Lambda\over T}\Biggl[{365 g^4\over1024 \pi^2} + {27 g^2 \lambda\over256 \pi^2}
- {\lambda^2\over32 \pi^2}\Biggr]\Biggr\}
\end{eqnarray}
Beyond the constants, terms proportional to $m^2\Phi_0^2$ are also omitted
from this expression. For the required accuracy $m^2$ can be related at tree
level
to the $T=0$ Higgs-mass. Since this mass is proportional to $\lambda$, the
terms proportional to $m^2$ are actually ${\cal O}(g^4\lambda ,
g^2\lambda^2 ,\lambda^3)$.
One also has to compute the 1-loop non-static counterterm contributions,
whose sum is simply:
\begin{eqnarray}
V_{CT} &= & {9\over2} m_a^2(Z_{\Phi1}+2Z_{g1})I_4(m_a)\nonumber\\
&&
+ {1\over2}\left({\lambda\over2}\Phi_0^2(Z_{\lambda1}
+Z_{\Phi1})+\delta m_1^2\right)I_4(m_a)\nonumber\\
&&
+{3\over2}\left({\lambda\over6}\Phi_0^2(Z_{\lambda1}
+Z_{\Phi1})+\delta m_1^2\right)I_4(m_a).
\end{eqnarray}
The renormalised potential term of the effective theory can be determined
once the wave function renormalisation constants are known, by choosing
expressions for $\delta m^2$ and $\delta Z_{\lambda}$ to fulfill some
temperature independent renormalisation conditions for the potential energy.
Thus, we first need the wave function rescalings to 1-loop accuracy.
\subsection{Kinetic terms of the effective theory}
The effective kinetic terms are extracted from the gradient expansion of
appropriately chosen two-point functions. More precisely,
they are determined by the coefficients of the linear term in the expansion
of the 2-point functions into a power series in $p^2$.
Two kinds of diagrams appear at one-loop level. The tadpole-type is momentum
independent. The bubble diagrams are listed in Appendix B accompanied by the
corresponding analytic expressions, expanded to $p^2$ terms.
One adds to the corresponding analytic expressions the "classical value" of
the counterterms. The renormalisation
constants $Z$ are fixed by requiring unit residue for the propagators:
\begin{eqnarray}
Z_A &=& 1+{25g^2\over48\pi^2}D_0 - {281g^2\over720\pi^2},\nonumber\\
Z_\Phi &=& 1+{9g^2\over32\pi^2}D_0-{g^2\over4\pi^2}
\label{z1}
\end{eqnarray}
($D_0=\ln (\Lambda /T)-\ln 2\pi +\gamma_E$).
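For orientation, $D_0$ is easy to evaluate numerically. The sketch below is illustrative only: the ratio $\Lambda/T=10$ is an arbitrary choice, and the Euler-Mascheroni constant is hard-coded since it is not provided by Python's standard library.

```python
import math

GAMMA_E = 0.5772156649015329  # Euler-Mascheroni constant (hard-coded)

def D0(cutoff_over_T):
    """D_0 = ln(Lambda/T) - ln(2*pi) + gamma_E, as defined in the text."""
    return math.log(cutoff_over_T) - math.log(2.0 * math.pi) + GAMMA_E

# Illustrative cutoff-to-temperature ratio (an assumption, not from the text).
print(D0(10.0))
```

Note that $D_0$ vanishes exactly at $\Lambda/T = 2\pi e^{-\gamma_E}$, which provides a simple self-check.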
The gauge coupling renormalisation is found by requiring that the gauge
mass term proportional to $\Phi_0^2$, generated radiatively, vanish
(in other words the coupling in front of $(A_i^a\Phi_0 )^2$ stays at $g^2$):
\begin{equation}
Z_g =1-{43g^2\over96\pi^2}D_0+{701g^2\over1440\pi^2}+{\lambda\over48\pi^2}.
\label{zcoupling}
\end{equation}
In terms of the renormalised fields the kinetic term of the effective
Lagrangian can be written down by using its
invariance under spatial gauge transformations in the usual form:
\begin{equation}
L_{kin}={1\over 4}F_{ij}^aF_{ij}^a+{1\over 2}(\nabla_i\Phi )^\dagger
\nabla_i\Phi,\quad\quad i,j=1,2,3.
\end{equation}
\subsection{Renormalisation}
After applying (\ref{z1}) to the $\Phi_0$-field, we can now write the
regularised potential energy in the form where
the remaining cut-off dependence of 4-dimensional character is due
exclusively to the mass and scalar self-coupling renormalisation:
\begin{eqnarray}
V^{(2)} &=&
\Phi_0^4\,\Biggl\{\, \lambda^3\left(-{19 I_{20}^2\over18} + {I_3\over24}
- {7 K_{10}\over 108}\right)
+ g^4 \lambda \left(-{39 I_{20}^2\over32} - {K_{10}\over64}
+ {3 I_{20}\over128\pi^2}\right)\nonumber\\
&&+ g^2 \lambda^2 \left({3 I_{20}^2\over2} + {3 I_3\over32}
+ {K_{10}\over8} - {I_{20}\over96\pi^2}\right)\nonumber\\
&&+ g^6 \left({219 I_{20}^2\over32} + {15 I_3\over64} + {87 K_{10}\over 128}
+ {157 I_{20}\over2560 \pi^2}\right)\nonumber\\
&&+ \ln{\Lambda\over T} \Biggl[ g^6\left({87 K_{1log}\over128} - {157\over 40960 \pi^4}
- {219 I_{20} \over256 \pi^2}\right) \nonumber\\
&&+ g^2 \lambda^2\left({K_{1log}\over 8}+{1\over1536\pi^4}
- {3 I_{20}\over16\pi^2}\right)+ \lambda^3 \left(-{7 K_{1log}\over108}
+ {19 I_{20} \over144\pi^2}\right)\nonumber\\
&&+ g^4 \lambda \left(-{K_{1log}\over 64} - {3\over2048 \pi^4}
+ {39 I_{20}\over256 \pi^2}\right)\Biggr]\nonumber\\
&&+ \left(\lot\right)^2 \Biggl[{177 g^6\over16384 \pi^4}
- {9 g^4 \lambda\over2048 \pi^4} + {3 g^2 \lambda^2\over1024 \pi^4}
- {\lambda^3\over384 \pi^4}\Biggr]\Biggr\} \nonumber\\
&+&\!\!\Phi_0^2\,\Biggl\{\,
\Lambda T \Biggl[
g^4 \left({81 K_{01}\over 32} - {157\over2560 \pi^4}
- {243 I_{20}\over32\pi^2}\right)
+ \lambda^2 \left(-{K_{01}\over 6} + {I_{20}\over2 \pi^2}\right)
\nonumber\\
&&+ g^2 \lambda \left({3 K_{01}\over 4} - {1\over64\pi^4}
- {9 I_{20}\over4\pi^2}\right)\Biggr]
+ \Lambda T\ln{\Lambda\over T} \Biggl[{243 g^4\over512\pi^4} +{9 g^2 \lambda\over64\pi^4}
- {\lambda^2\over32\pi^4} \Biggr]\nonumber\\
&&+ \Lambda^2 \Biggl[
\lambda^2 \left(-{K_{02}\over 6} - {I_{20}\over4 \pi^2}\right)
+ g^2 \lambda \left({3 K_{02}\over 4} + {1\over256 \pi^4}
+ {9 I_{20}\over32 \pi^2} \right)\nonumber\\
&&+ g^4 \left({81 K_{02}\over 32} +{157\over 10240 \pi^4}
+ {243 I_{20}\over128 \pi^2}\right)\Biggr]\nonumber\\
&&+ \Lambda^2\ln{\Lambda\over T}\Biggl[ - {243 g^4\over 2048 \pi^4}
- {9 g^2 \lambda\over512 \pi^4} + {\lambda^2\over 64 \pi^4}\Biggr]\nonumber\\
&&+ T^2 \Biggl[
\lambda^2 \left(-{I_{20}\over 12} - {K_{00}\over 6}\right)
+ g^2 \lambda \left({3 I_{20}\over 8} + {3 K_{00}\over 4}
+ {1\over 384 \pi^2}\right)\nonumber\\
&&+ g^4 \left({81 I_{20}\over 64} + {81 K_{00}\over 32}
+ {157\over15360 \pi^2}\right)\Biggr]\nonumber\\
&&+ T^2\ln{\Lambda\over T}\Biggl[
{81 g^4\over256 \pi^2} + {3 g^2 \lambda\over 32 \pi^2}
- {\lambda^2\over48 \pi^2}\Biggr]\Biggr\}.
\end{eqnarray}
The final step is to fix the
parameters of the renormalised potential energy by enforcing certain
renormalisation conditions. We are going to use the
simplest conditions, fixing the second and fourth derivatives of the
temperature independent part of the potential energy at the origin:
\begin{eqnarray}
&
{d^2V(T-independent)\over d\Phi_0^2}=-m^2,\nonumber\\
&
{d^4V(T-independent)\over d\Phi_0^4}=\lambda.
\label{rencond}
\end{eqnarray}
The price one pays for this is more complicated relations to the
$T=0$ physical observables. The connections of the Higgs and vector
masses, as well as of the vacuum expectation value of the Higgs field, to
the couplings renormalised through the above conditions are given in Appendix
C.
The renormalised potential term of the reduced model is finally given by
\begin{eqnarray}
V & = &{1\over 2} m(T)^2 \Phi^\dagger\Phi +
{\lambda \over24}(\Phi^\dagger\Phi)^2,\nonumber\\
m(T)^2 & = & m^2 + T^2\biggl[{3\over16}g^2+{1\over12}\lambda
+g^4\left({81\over32}I_{20}+{81\over16}K_{00}
+{157\over7680\pi^2}\right)\nonumber\\
&&
+g^2\lambda\left({3\over4}I_{20}
+{3\over2}K_{00}+{1\over192\pi^2}\right)
-\lambda^2\left({1\over6}I_{20}+{1\over3}K_{00}\right)\biggr]\nonumber\\
&& +\Lambda T \Biggl[
g^4 \left({81 K_{01}\over 32} - {157\over2560 \pi^4}
- {243 I_{20}\over32\pi^2}\right)
+ \lambda^2 \left(-{K_{01}\over 6} + {I_{20}\over2 \pi^2}\right)
\nonumber\\
&&+ g^2 \lambda \left({3 K_{01}\over 4} - {1\over64\pi^4}
- {9 I_{20}\over4\pi^2}\right)\Biggr]
+ \Lambda T\ln{\Lambda\over T}
\Biggl[{243 g^4\over512\pi^4} +{9 g^2 \lambda\over64\pi^4}
- {\lambda^2\over32\pi^4} \Biggr]\nonumber\\
&&+ T^2\ln{\Lambda\over T}\Biggl[
{81 g^4\over128\pi^2} + {3 g^2 \lambda\over 16 \pi^2}
- {\lambda^2\over24 \pi^2}\Biggr].
\label{bare3dmass}
\end{eqnarray}
The last term of (\ref{bare3dmass}) can be split
into the sum of a finite and an infinite term by introducing a 3d scale $\mu_3$
into it. It is very remarkable that the 3d scale dependence at this stage
has a coefficient of opposite sign relative to what one has in the
3-d SU(2) Higgs model. For the O(N) scalar models this has been observed in
\cite{Jak96}. It can hardly be accidental, but we do not pursue this phenomenon
in the present paper. The scale $\mu_3$ should not affect the results in the
exact solution of the 3d model, but might be tuned at finite order,
if necessary.
The finite, $\mu_3$-independent piece of the two-loop thermal mass can be
parametrized as
$m(T)^2=k_1 g^4+k_2g^2\lambda+k_3 \lambda^2$ and the numerical values of the
coefficients are
\begin{eqnarray}
&& k_1=- 0.0390912288\nonumber\\
&& k_2=- 0.0116685842\nonumber\\
&& k_3= 0.0027102886.
\label{numcoeff}
\end{eqnarray}
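As a quick numerical cross-check, the parametrization above can be assembled in a few lines. This is a minimal sketch: the coefficients are those of (\ref{numcoeff}), but the coupling values $g=2/3$ (used later in the paper) and $\lambda=0.1$ are assumptions for illustration.

```python
# Finite, mu_3-independent piece of the two-loop thermal mass,
# m(T)^2 = k1*g^4 + k2*g^2*lambda + k3*lambda^2 (in units of T^2).
K1 = -0.0390912288
K2 = -0.0116685842
K3 = 0.0027102886

def two_loop_mass_sq(g, lam):
    """Two-loop thermal-mass piece as parametrized in the text."""
    return K1 * g**4 + K2 * g**2 * lam + K3 * lam**2

# Illustrative couplings: g = 2/3, lambda = 0.1 (the latter is assumed).
print(two_loop_mass_sq(2.0 / 3.0, 0.1))
```

For weak scalar self-coupling the negative $k_1 g^4$ term dominates, so this piece lowers the thermal mass.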
\subsection{Non-static nonlocality}
The higher terms of the expansion in the external momentum squared (${\bf
p}^2$) of the bubble diagrams of Appendix B give rise to non-local
interactions of the static fields. For each field a kernel can be introduced,
which we shall denote by $\hat I^H(p^2),\hat I^G(p^2)$ and $\hat I^A(p^2)$,
respectively. They are defined by subtracting from the full expressions of
the corresponding bubbles the first two terms of their gradient expansions,
which were taken into account already by
the wave function renormalisation and the mass renormalisation locally
(see 3.2):
\begin{equation}
\hat I^Z(p^2)=I^Z(p^2)-I^Z(0)-p^2I^{Z\prime}(0)
\end{equation}
($Z=H,G,A$). These kernels are used in the calculation of the Higgs-potential
from the effective 3-d model at the static 1-loop level (see section 5).
Their contribution is of the form of Eq.(\ref{E1}) in Appendix E.
In place of the symbol $m$ in Eq.(\ref{E1})
one should use the mass of the corresponding
static field. When working with ${\cal O}(g^4,\lambda g^2,\lambda^2)$ accuracy,
we should use the approximation (\ref{E7}) to the ${\bf p}$-integral.
Since $\hat I^Z(0)=0$, the first term of $(\ref{E7})$ does not contribute. This
is welcome, since it ensures that no potential term linear in $\Phi_0$
will be produced. We also notice that $\hat I^Z(p^2)=\hat I_1^Z(p^2)$ (for the
notations, see (\ref{E2})), therefore
the integrands of the last terms are given with the help of a single function.
Nonetheless, as explained in Appendix E, in the second term on the right hand
side of (\ref{E7}) one can use the expansion of $I^Z(p^2)$ with respect to the
mass-squares of the non-static fields truncated at linear order,
while in the third term only the mass-independent piece is to be retained.
The expression to be used in the first integral
will be denoted below by $\hat I^{Z1}$, and that in the second by $\hat I^{Z2}$.
One can point out a few useful features of the integrals appearing in (\ref{E7}),
allowing the omission of
unimportant terms from the full expressions of the kernels. We shall discuss
these simplifying remarks first, and present only the relevant pieces of the
expressions of the kernels.
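The subtraction defining the kernels, $\hat I^Z(p^2)=I^Z(p^2)-I^Z(0)-p^2I^{Z\prime}(0)$, can be illustrated numerically. The sketch below uses a toy kernel (not one of the actual $\hat I^Z$) and a central finite difference for $I'(0)$; the remainder then starts at ${\cal O}(p^4)$, exactly as exploited in the text.

```python
import math

def subtracted_kernel(I, p2, h=1e-5):
    """Return I(p2) - I(0) - p2*I'(0), with I'(0) from a central difference."""
    Iprime0 = (I(h) - I(-h)) / (2.0 * h)
    return I(p2) - I(0.0) - p2 * Iprime0

# Toy stand-in for I^Z(p^2): log(1+p^2) = p^2 - p^4/2 + ...
toy = lambda p2: math.log(1.0 + p2)

print(subtracted_kernel(toy, 0.01))  # ~ -(p^2)^2/2, i.e. O(p^4)
```

The two subtracted terms are precisely those already absorbed locally by mass and wave function renormalisation.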
There are two kinds of diagrams contributing to each $\hat I^Z$:
i) the vertices are
independent of the background $\Phi_0$, and are proportional to $g$,
ii) the vertices are proportional to $\Phi_0$ and to $g^2, \lambda$. In this case
the two-point functions actually correspond to certain 4-point vertices,
involving two constant external $\Phi$-lines. The corresponding
non-locality has been discussed for the $N$-vector model in \cite{Jak96}.
The way we handle them in the present paper seems to us easier to standardise.
In case i) the first term of the mass expansion of $\hat I^{Z1}$
contributes to the effective Higgs-potential only a constant, which will
be omitted. On the other hand, kernels of this type, when multiplied by
$m^2$ in front of the third term of (\ref{E7}), already reach the accuracy
limit of our calculation;
therefore their mass independent part is sufficient for the computation.
In case ii) the coupling factors in front of each diagram are already ${\cal O}
(g^4,~etc.)$, therefore they do not contribute to $\hat I^{Z2}$ in a calculation
with the present accuracy, while only their mass independent terms are to be
used in $\hat I^{Z1}$.
Furthermore, some terms of $\hat I^Z(p^2)$ give rise in the
calculation of the effective Higgs potential to IR-finite $\bf p$-integrals
fully factorised from the rest (this is true in particular for the subtractions
from $I^Z(p^2)$). These contributions turn out to be proportional
to the UV cut-off and their only role is to
cancel against the 3-dimensional "counterterms" generated in the reduction
step (see Eq.(\ref{bare3dmass})). We need explicitly only the finite
contributions from the integrals. There is no need to present
those parts of the kernels which surely do not have any finite contribution.
With these remarks, we restrict ourselves to presenting only
the relevant part of the kernels for the three static fields (H,G,A), and also
for the static ghost.
As a rule, the occurring integrals were reduced to unit numerators
with the help of the usual "reduction" procedure \cite{Arn93}.
The Higgs-nonlocality:
\begin{eqnarray}
\hat I^{H1}(p^2) & = &\left(-({2\over 3}\lambda^2+{15\over 16}g^4)\Phi_0^2+
{3g^2m_G^2\over 2}-{3g^2m_a^2\over 4}\right)\int{1\over K^2(K+p)^2}\nonumber\\
&&
+3g^2p^2\left(m_G^2+{1\over 2}m_a^2+{g^2\over 8}\Phi_0^2\right)
\int_K{1\over K^4(K+p)^2}.\nonumber\\
\hat I^{H2}(p^2) & = & -3g^2p^2\int_K\left({1\over 2K^2(K+p)^2}-{p^2\over 4K^4
(K+p)^2}\right).
\label{higgsnonl}
\end{eqnarray}
The Goldstone-nonlocality:
\begin{eqnarray}
\hat I^{G1}(p^2) & = &\left(g^2m_G^2+{1\over 2}g^2m_H^2-{3g^2\over 4}m_a^2-
{\lambda^2\Phi_0^2\over 9}\right)\int{1\over K^2(K+p)^2}\nonumber\\
&&
+2g^2p^2\left({3m_a^2\over 4}+{m_H^2\over 2}+m_G^2\right)
\int{1\over K^4(K+p)^2},\nonumber\\
\hat I^{G2}(p^2) & = & {3\over 2}g^2p^2\int\left({1\over K^2(K+p)^2}
+{p^2\over 2K^4(K+p)^2}\right).
\label{goldstonenonl}
\end{eqnarray}
The magnetic vector nonlocality:
\begin{eqnarray}
\hat I^{A1}(p^2) &=& 8g^2m_a^2\int{1\over K^4 Q^2}\biggl[4P^2-2{(KP)^2\over K^2}
-2{(QP)^2\over Q^2}+{1\over2}\biggl(k^2+q^2\nonumber\\
&&-{(KP)^2\over P^2}-{(QP)^2\over P^2}\biggr)\biggl(2+{(KQ)^2\over K^2Q^2}\biggr)\biggr]\nonumber\\
&&+{g^2\over2}(3m_G^2+m_H^2)\int{1\over K^4 Q^2}
\biggl(k^2+q^2
-{(KP)^2\over P^2}-{(QP)^2\over P^2}\biggr)\nonumber\\
&&+ {g^4\over4}\Phi_0^2\int{1\over K^4 Q^2}\left(-2K^2
+k^2-{(KP)^2\over P^2}\right),\nonumber\\
\hat I^{A2}(p^2) &=& -4g^2\int{1\over K^2 Q^2}\biggl[4P^2-4{(KP)^2\over K^2}
+\left(k^2-{(KP)^2\over P^2}\right)\left({(KQ)^2\over K^2Q^2}-1\right)\nonumber\\
&&+3\left(k^2-{(KP)^2\over P^2}\right){K^2-Q^2\over K^2}\biggr].
\label{vectnonloc}
\end{eqnarray}
The ghost-nonlocality (no ghost contribution arises in the second integral of
(\ref{E7})):
\begin{equation}
\hat I^{C1}(p^2)={1\over 2}g^2m_a^2\int {1\over K^2(K+p)^2}-g^2m_a^2p^2
\int{1\over K^4(K+p)^2}.
\label{ghostnonloc}
\end{equation}
Two useful remarks can be made concerning the magnetic vector nonlocality.
First, in the second equation of (\ref{vectnonloc}) sometimes only combinations
of terms in the integrands prove to be proportional to $p^2$.
Therefore the corresponding weighted integrals should not be performed term
by term.
Second, there are parts of the kernel (\ref{vectnonloc})
which could not be reduced to unit
numerators, but are proportional to powers of ${\bf k}^2$. We shall see that
upon calculating their contribution to the Higgs effective potential
from the 3-d gauge-Higgs effective system, they combine with non-local
contributions
to the potential from the static $A_0$ integration (see the end of subsection
4.1 below) into formally Lorentz-invariant contributions.
\section{Static electric fluctuations}
\subsection{Contribution to the potential term}
The $A_0$-integration, resummed with the help of a thermal mass $m_D$, yields
at 1-loop:
\begin{equation}
V^{(1-loop)}={3\over2}I_3(m_{A0}),
\end{equation}
defined through the integral
\begin{equation}
I_3(m)=T\int_{\bf k}\ln ({\bf k}^2+m^2)
\end{equation}
(its value appears in Appendix A).
At two loops, vacuum diagrams involving one $A_0$ propagator are listed with
their analytic definitions in Appendix D. In addition, three counterterm
contributions of (\ref{Lct}) should be taken into account at the 1-loop level.
They can be expressed (up to constant terms) with the help of
$I^\prime_3(m_{A0})$,
where the prime denotes differentiation with respect to $m_{A0}^2$:
\begin{equation}
V_{CT}^{(A0)}={3T\over 2}I^\prime_3(m_{A0})
\left[-m_D^2-Z_{A1}m_{A0}^2
+{g^2\Phi_0^2\over 4}(Z_{A1}+Z_{\Phi1}+2Z_{g1})\right].
\end{equation}
When only terms up to ${\cal O}(g^4)$ are retained and further constants are
thrown away, one finds the contribution
\begin{equation}
V_{CT}^{(A0)}={3Tm_D^2m_{A0}\over 8\pi}+
{3g^2T\Phi_0^2\Lambda \over 16\pi^2}(Z_{\Phi1}+2Z_{g1}).
\end{equation}
The evaluation of 2-loop diagrams with topology of 'Figure 8' does not pose
any problem, since their expressions are automatically factorised into the
product of static and nonstatic (sum-)integrals. The evaluation scheme of
'setting sun' diagrams needs, however, some explanation.
Their general form is as appears in (\ref{E1}). Exploiting the analysis of
Appendix E we see that the final result of the $\bf p$-integration
depends non-analytically on $m_{A0}^2$. The non-analytic piece comes from the
first term of the Taylor expansion of the kernel $I$ with respect
to ${\bf p}^2$: $I(0)\equiv I_3^\prime (m_{A0})$.
In other words, this contribution is the product of a
static and of a nonstatic integral, similarly to the 'Figure 8' topology.
In summary, we find that contributions linear in $m_{A0}$ come from the
thermal counterterm and from the "factorised" 2-loop expressions. They sum up
to
\begin{equation}
{3m_{A0}\over 8\pi}\left(-{5g^2\over 2\pi^2}\Lambda T+{5g^2\over 6}T^2-m_D^2
\right).
\end{equation}
An optimal choice for the resummation mass $m_D$ is one for which it
receives no finite contribution from higher orders of perturbation theory.
This requirement leads to the equality
\begin{equation}
m_D^2={5\over 6}g^2T^2,
\end{equation}
which means that the two-loop $A_0$-contribution at the level of the "local
approximation" yields exclusively a linearly diverging, non-polynomial
contribution to the potential energy term of the effective 3-d gauge-Higgs
model:
\begin{equation}
V^{(2-loop)}_{loc}={15g^2\over 16\pi^3}m_{A0}\Lambda T.
\label{nonpoldiv}
\end{equation}
This term plays an essential role in the consistent 2-loop perturbative solution
of the effective model (see Section 5).
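The cancellation fixing $m_D$ can be verified with a few lines of arithmetic. This is a sketch only: the bracket follows the expression multiplying $3m_{A0}/8\pi$ in the text, while the numerical values of $g$, $T$ and $\Lambda$ are illustrative assumptions.

```python
import math

def linear_in_mA0_bracket(g, T, Lam, mD_sq):
    """Bracket multiplying 3*m_A0/(8*pi) in the terms linear in m_A0."""
    return (-5.0 * g**2 * Lam * T / (2.0 * math.pi**2)
            + 5.0 * g**2 * T**2 / 6.0 - mD_sq)

g, T, Lam = 2.0 / 3.0, 1.0, 50.0       # illustrative values
mD_sq = 5.0 * g**2 * T**2 / 6.0        # optimal resummation mass

# With this choice the finite T^2 part cancels exactly; only the
# linearly divergent piece proportional to Lambda*T survives, i.e. the
# 2-loop A_0 contribution reduces to a non-polynomial divergence.
print(linear_in_mA0_bracket(g, T, Lam, mD_sq))
```

Setting $\Lambda=0$ in the sketch shows the residual vanishes identically, which is the content of the optimality condition.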
Further terms of the expansion of $I(p^2)$ are interpreted as
higher derivative kinetic terms of $A_0$.
Their evaluation is based on (\ref{E7}). The only difference relative to the
content of subsection 3.4 is that the $A_0$ integration is performed already
at this stage; therefore the contribution from its non-local kinetic
action now modifies the potential energy of the Higgs field. When the terms
yielding ${\cal O}(g^4,g^2\lambda ,\lambda^2)$ accuracy are retained,
it turns out that they actually contribute only to the mass term:
\begin{eqnarray}
&&\!\!\Phi^\dagger\Phi\,\Biggl\{-g^2\left({3g^2\over 2}
+{3\over8}\lambda\right)
\int{1\over p^2}\int{1\over Q^4}-{3g^4\over 4}\int{1\over p^2}
\int{Q_0^2\over Q^6}\nonumber\\
&&+3g^4\int{1\over p^2Q^2K^4}\left(2p^2-{(kp)^2\over K^2}-
{(qp)^2\over Q^2}+2Q_0^2+Q_0^2{(KQ)^2\over K^2Q^2}\right)\nonumber\\
&&+{3\over2}g^2\left(\lambda +{g^2\over4}\right)
\int{Q_0^2\over p^2Q^2K^4}-{3\over8}g^4\int{1\over p^2Q^2K^2}\nonumber\\
&&+{3g^4\over 2}\int{1\over p^4Q^2K^2}\left(2p^2-2{(kp)^2\over K^2}+
Q_0^2{(KQ)^2\over K^2Q^2}-Q_0^2\right)\nonumber\\
&&+{9g^4\over 2}\int{Q_0^2(K^2-Q^2)\over p^4Q^2K^4}\Biggr\}.
\label{a0mass}
\end{eqnarray}
In view of the remark at the end of subsection 3.4
on non-static nonlocalities, it is not convenient to evaluate
this contribution explicitly at the present stage.
Its Lorentz non-invariant pieces are going to combine in the expression of
the final effective
Higgs potential with contributions from the static magnetic vector fluctuations
into simpler Lorentz invariant integrals (see section 5).
\subsection{$A_0$ contribution to the gauge-Higgs kinetic terms}
The $A_0$-bubble contributes to the self-energy of the magnetic vector
and of the Higgs (SU(2) singlet) scalar fluctuations. The corresponding
analytic expressions are given in Appendix B.
There are two ways to treat
these contributions. The first is to apply the gradient expansion again,
and retain the finite mass correction and the wave function rescaling
factor from the infinite series. This is the approach we followed in the case of
non-static fluctuations. Then the magnetic vector receives only
field rescaling correction:
\begin{equation}
\delta Z_{(A0)}^A={g^2T\over 24\pi m_{A0}}.
\label{ascale}
\end{equation}
The Higgs particle receives both finite mass correction and field rescaling
factor from the $A_0$-bubble:
\begin{eqnarray}
\delta m_{(A0)}^H & = & -{3g^4\Phi_0^2T\over 64\pi m_{A0}},\nonumber\\
\delta Z_{(A0)}^H & = & {g^4\Phi_0^2\over 512\pi m_{A0}^3}.
\label{rescal}
\end{eqnarray}
One should
note that the rescaling factors are ${\cal O}(g)$, which is just the order
one has to keep to have the ${\cal O}(g^4)$ accurate effective action
\cite{Bod94}. The different behavior of the Higgs and of the Goldstone fields
reflects the breakdown of the gauge symmetry in the presence of $\Phi_0$.
The Higgs-field mass and field rescaling corrections appear only in
a non-zero $\Phi_0$ background. The $\Phi_0$-dependence of
these quantities will be treated as a non-fluctuating {\it parametric}
effect in the course of solving the effective gauge-Higgs model.
In this case the
kernel of the one-loop integral is not analytic in $p^2$; therefore no
basis can be provided for the gradient expansion.
It is therefore important to proceed also the other way,
that is, to keep the original bubble as a whole in the form of a nonlocal part
of the
effective action. Since its coefficient is $g^4$, in the perturbative
solution of the effective model it is sufficient to compute its contribution
to 1-loop accuracy. (This actually amounts to a two-step evaluation of
the purely static AAH and AAV setting sun diagrams.)
The difference between the two approaches from the point of view of the
quantitative characterisation
of the phase transition will be discussed within the perturbative framework
in the next Section.
\section{Two-loop effective Higgs potential from the 3-d effective theory}
In this and the next sections we are going to compare various versions of the
3-d theory by using them to calculate perturbatively the characteristics
of the electroweak phase transition. This we achieve by finding the respective
point of degeneracy of the effective Higgs potential in each approximation.
The analysis will be performed for $g=2/3$ and $M_W=80.6$~GeV, and for
various choices of the $T=0$ Higgs mass. (The scheme of the determination of
the renormalised parameters is sketched in Appendix C.)
\subsection{Approximations to the effective theory}
We start by summarising the effective Lagrangian obtained from the calculations
of sections 3 and 4. The parameters were calculated with accuracy ${\cal O}
(g^4,\lambda g^2, \lambda^2)$:
\begin{equation}
{\cal L}_{3D}={1\over 4} F_{ij}^aF_{ij}^a
+ {1\over 2} (\nabla_i\Phi)^{\dagger}(\nabla_i\Phi)
+ {1\over 2} m(T)^2 \Phi^\dagger\Phi + {\lambda_3\over24}(\Phi^\dagger\Phi)^2
- {1\over4\pi}m_{A0}^3+ {\cal L}_{nonloc}+{\cal L}_{CT},
\label{eq:3dact}
\end{equation}
where
\begin{eqnarray}
& F_{ij}^a & =\partial_i A_j^a -\partial_j A_i^a + g_3
\epsilon_{abc}A_i^bA_j^c \nonumber\\
& \nabla_i & =\partial_i - ig_3A_i^a\tau_a,\nonumber\\
& m_{A0}^2 & =g_3^2\left({5\over6}T+{1\over4}\Phi^\dagger\Phi\right)
\nonumber\\
& m(T)^2 & =m^2 + T^2\biggl[{3\over16}g^2+{\lambda\over12}
+g^4\left({81\over32}I_{20}+{81\over16}K_{00}+{157\over7680\pi^2}\right)
\nonumber\\
&&
+g^2\lambda\left({3\over4}I_{20}+{3\over2}K_{00}+{1\over192\pi^2}\right)
\nonumber\\
&&-\lambda^2\left({1\over6}I_{20}+{1\over3}K_{00}\right)
+\Delta_{A0}^{nonloc}\nonumber\\
&&+{1\over 8\pi^2}\bigl({81g^4\over 16}+{3g^2\lambda\over2}-{\lambda^2\over3}
\bigr)\ln{\mu_3\over T}\biggr].
\end{eqnarray}
The dependence of the thermal mass on the 3-d scale is incorrect. The only
way to ensure the correct scale dependence of the thermal mass is to include
in the local approximations also the effect of the non-static and some
of the static nonlocalities.
For this we write down their contribution to the effective Higgs potential,
combined
from (\ref{higgsnonl}), (\ref{goldstonenonl}), (\ref{ghostnonloc}) on one hand,
and from (\ref{vectnonloc}) and (\ref{a0mass}) on the other. The contributions
from the different nonlocalities are
expressed through integrals appearing in Appendix A:
{\it Higgs+Goldstone}:
\begin{equation}
{1\over 2}\Phi_0^2\left[H_{43}(0,0,0)\left(-\lambda^2
-{27\over 16}g^4+3\lambda g^2\right)-{\displaystyle{d\over dm_1^2}C(0,0)}
\left({15\over 8}g^4+{9\over 4}\lambda g^2\right)\right]
\label{higonl}
\end{equation}
{\it Magnetic and electric vector:}
\begin{eqnarray}
&{1\over 2}\Phi_0^2 \Biggl[H_{43}(0,0,0)\left({129\over 8}g^4+
{3\over 2}\lambda g^2\right)+{\displaystyle{d\over dm_1^2}C(0,0)}
\left( {81\over 16}g^4-{3\over 4}\lambda g^2\right)\nonumber\\
&
-{3\over 2}g^4\displaystyle{{d\over dm_1^2}}L_4(0,0)\Biggr].
\label{magenl}
\end{eqnarray}
{\it Ghost}:
\begin{equation}
{1\over2}\Phi_0^2\left[{3\over4}g^4H_{43}(0,0,0)+{3\over2}g^4{d\over dm_1^2}C(0,0)
\right].
\end{equation}
Using the features of the three basic integrals ($H_{43}, C, L_4$)
listed in Appendix A, the final numerical expression for the sum
of the local (cf. (\ref{numcoeff}))
plus non-local ${\cal O}(g^4)$ thermal mass terms is the following:
\begin{eqnarray}
&
{1\over 2}\Phi_0^2\biggl(0.0918423g_3^4+0.0314655\lambda_3 g_3^2
-0.0047642\lambda_3^2\nonumber\\
&
+{1\over16\pi^2}\ln{\mu_3\over T}\left({1\over3}\lambda^2-{3\over2}
\lambda g^2-{81\over16}g^4\right)\biggr).
\end{eqnarray}
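The combined coefficient just displayed can be evaluated numerically. The sketch below treats all couplings as dimensionless (units of $T=1$, so $g_3^2=g^2$, $\lambda_3=\lambda$); choosing $\mu_3=T$ makes the logarithm drop out, and the values $g=2/3$, $\lambda=0.1$ are illustrative assumptions.

```python
import math

def thermal_mass_coeff(g3sq, lam3, mu3_over_T=1.0):
    """Local + non-local O(g^4) thermal-mass coefficient (times Phi_0^2/2)."""
    local = (0.0918423 * g3sq**2 + 0.0314655 * lam3 * g3sq
             - 0.0047642 * lam3**2)
    log_part = (math.log(mu3_over_T) / (16.0 * math.pi**2)
                * (lam3**2 / 3.0 - 1.5 * lam3 * g3sq
                   - 81.0 / 16.0 * g3sq**2))
    return local + log_part

# Illustrative couplings (in units of T = 1): g^2 = 4/9, lambda = 0.1.
print(thermal_mass_coeff(4.0 / 9.0, 0.1))
```

Varying `mu3_over_T` exhibits explicitly the 3-d scale sensitivity discussed next.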
The 2-loop contribution to $m^2(T)$ can be significantly influenced
by the choice of the 3d renormalisation scale. Through the 4-d
renormalisation of the mass parameter $m^2$ it is also sensitive to the 4-d
normalisation scale (see Appendix C).
All fields in (\ref{eq:3dact}) are renormalised 4-d fields multiplied by
$\sqrt{T}$. The 3-d gauge coupling in the kinetic terms is simply
$g_3=g\sqrt{T}$, due to the fact that all finite T-independent corrections are
included into $Z_g$ (cf. eq.(\ref{zcoupling})). Also the quartic coupling
takes the simple form $\lambda_3=\lambda T$, in view of the particular
renormalisation condition (\ref{rencond}).
The cut-off dependent part of the mass term is hidden in ${\cal L}_{CT}$.
It ensures the cancellation of the cut-off dependence
of the 3-d calculations. Except for the non-polynomial term, we are not going
to discuss this cancellation. The formal correctness of our effective Higgs
potential will be verified by checking its agreement with results of
other 3-d calculations, expressed through the parameters
$m^2(T),g_3^2,\lambda_3, m_H^2,m_G^2,m_a^2$.
Of course these parameters themselves are connected to the 4-d couplings
differently.
The non-polynomial term of the above Lagrangian is simply the result of the
1-loop $A_0$-integration.
Terms of the remaining non-local Lagrangian
${\cal L}_{nonloc}$ are all of the generic form
\begin{equation}
{\cal L}_{nonloc}= {1\over2}\phi(-p){\cal N}(p)\phi(p)
\end{equation}
($\phi (p)$ refers to general 3-d fields). The kernels ${\cal N}(p)$ are the
result of purely static $A_0$-loops, listed in Appendix B.
\begin{eqnarray}
&
{\cal N}_H(p)=-{3g_3^4\over8}\Phi_0^2\int_{\bf k}\displaystyle{{1\over
(k^2+m_{A0}^2)((k+p)^2+m_{A0}^2)}}\nonumber\\
&
{\cal N}_{ij}^a(p)=-4g^2\int_{\bf k}\displaystyle{\biggl
[{k_s\Pi_{si}k_r\Pi_{rj}\over (k^2+m_{A0}^2)
((k+p)^2+m_{A0}^2)}-{1\over3}{k^2\delta_{ij}\over (k^2+m_{A0}^2)^2}}\biggr].
\end{eqnarray}
In order to see clearly the effect of the non-polynomial and nonlocal terms we
shall consider the effective potential in three different approximations.
\vskip .3truecm
{\it i) Local, polynomial approximation}
In this approximation non-local effects due
to the $A_0$-bubbles are neglected. The parametric "susceptibilities"
of the $A_i$ and $\Phi_H$ fields arising from the static $A_0$-bubble
are included into the approximation (the background $\Phi_0$ appearing in
their expressions is a non-fluctuating parameter!).
The non-polynomial term is expanded around $m_{D}^3$ up to the
fourth power of $\Phi_0$. In the presence of a non-zero Higgs background the
Lagrangian actually breaks(!) the gauge symmetry (some coefficients explicitly
depend on the value of the background field, cf. (\ref{rescal})).
The approximate form of the effective Lagrangian is:
\begin{eqnarray}
{\cal L}_{LP} &=& {1\over 4\mu}F_{ij}^aF_{ij}^a+{1\over 2}(\nabla_i\Phi )
^\dagger(\nabla_i\Phi )+{1\over 2}Z_H(\partial_i\xi_H)^2\nonumber\\
&&
+V_3(\Phi^\dagger\Phi )+ig_3\epsilon_{abc}p_ic^\dagger_a(p)A_i^b(q)c_c(k),
\end{eqnarray}
with the local potential
\begin{equation}
V_3(\Phi^\dagger\Phi )={1\over 2}(m(T)^2+\delta m^2)\Phi^\dagger\Phi +{1\over 2}
\delta m_H^2\xi_H^2+{\lambda_3 +\delta\lambda\over 24}(\Phi^\dagger\Phi )^2,
\end{equation}
where
\begin{eqnarray}
&
\mu = (1+{\displaystyle g_3^2\over \displaystyle 24\pi m_{A0}})^{-1},
&\delta\lambda =-{9g_3^4\over 64\pi m_D}\nonumber\\
&
\delta m^2 = -{\displaystyle 3g_3^2m_D\over \displaystyle 16\pi},
&\delta m_H^2 = -{3g_3^4\Phi_0^2\over
64\pi m_{A0}},\nonumber\\
&
Z_H ={\displaystyle 3g_3^4\Phi_0^2\over \displaystyle 512\pi m_{A0}^3},&
\end{eqnarray}
(the expressions of $m(T)^2,~g_3^2,~\lambda_3$ have been given above).
We note, that also a term $Z_H(A_i^a)^2\xi_H^2/2$ is present, but it would
contribute to higher orders in the couplings, and such terms will not be
displayed here.
In order to apply perturbation theory with canonical kinetic terms, one has
to rescale both the $A_i$ and $\xi_H$ fields:
\begin{equation}
\bar g_3=g_3\mu^{1/2},~~~\bar A_i=A_i\mu^{-1/2},~~~\bar\xi =\xi_H(1+Z_H/2).
\end{equation}
Below we continue to use the {\it un}barred notation for these quantities!
After rescalings and the $\Phi_0$-shift in the $R_\alpha$-gauge the action
density is written as
\begin{eqnarray}
{\cal L}_{LP} &=& V_3((1-Z_H)\Phi_0^2)\nonumber\\
&&+{1\over 2}A_i^a(-k)\left[(k^2+m_a^2)\delta_{ij}-(1-{1\over \alpha})k_ik_j
\right]A_j^a(k)+c_a^\dagger k^2c_a(k)\nonumber\\
&&+{1\over 2}\xi_H(-k)[k^2+m_h^2]\xi_H(k)+{1\over 2}\xi_a(-k)[k^2+m_g^2]\xi_a(k)
\nonumber\\
&&+ig_3S_{abc}^{ijk}(p,q,k)A_i^a(p)A_j^b(q)A_k^c(k)+{1\over 8}A_i^2[g_3^2\xi_a^2
+g_{h}^2\xi_H^2]\nonumber\\
&&+{g_3^2\over 4}[(A_i^aA_i^a)^2-(A_i^aA_j^a)^2]+{g_{h}^2\Phi_0 \over 4}
A_i^2\xi_H\nonumber\\
&&+ig_3\epsilon_{abc}p_ic_a^\dagger (p)A_i^b(q)c_c(k)\nonumber\\
&& -i{1\over 2}A_i^a(p)[g_{h}(k-q)_i\xi_a(q)\xi_H(k)+
g_3\epsilon_{abc}k_i\xi_b(q)\xi_c(k)]\nonumber\\
&&
+{1\over 24}(\lambda_{HH}\xi_H^4+2\lambda_{HG}\xi_H^2\xi_a^2+\lambda_{GG}
(\xi_a^2)^2)\nonumber\\
&&+\Phi_0{1 \over 6}\xi_H(q_{HHH}\xi_H^2+q_{HGG}\xi_a^2) +{\cal L}_{CT}^{polyn}+
{15g^2\over 16\pi^3}m_{A0}\Lambda.
\label{3dnonloclagr}
\end{eqnarray}
Here the expressions for the different couplings read as
\begin{eqnarray}
g_{h}^2=g_3^2(1-Z_H), &
\lambda_{GG}=\lambda_3+\delta\lambda\nonumber\\
\lambda_{HH}=(\lambda_3+\delta\lambda )(1-2Z_H), & \lambda_{HG}=
(\lambda_3+\delta\lambda )(1-Z_H),\nonumber\\
q_{HHH}=(\lambda_3+\delta\lambda )(1-2Z_H), & q_{HGG}=(\lambda_3+\delta\lambda)
(1-Z_H).
\end{eqnarray}
The effective masses of the different fields are
\begin{eqnarray}
m_a^2 &=& {g_3^2\over 4}\Phi_0^2(1-Z_H),\nonumber\\
m_h^2 &=& (m(T)^2+\delta m^2+\delta m_H^2)(1-Z_H)+
{\lambda_3+\delta\lambda\over 2}\Phi_0^2(1-2Z_H),\nonumber\\
m_g^2 &=& m(T)^2+\delta m^2+{\lambda_3+\delta\lambda\over 6}\Phi_0^2(1-Z_H).
\end{eqnarray}
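The mass relations of approximation i) translate directly into code. This is a sketch: the function mirrors the formulas just given, and all numerical inputs in the example are hypothetical values chosen only to exercise the relations.

```python
def effective_masses(mT_sq, dm_sq, dmH_sq, lam3, dlam, ZH, g3sq, phi0_sq):
    """Effective masses m_a^2, m_h^2, m_g^2 of the local, polynomial scheme."""
    ma_sq = 0.25 * g3sq * phi0_sq * (1.0 - ZH)
    mh_sq = ((mT_sq + dm_sq + dmH_sq) * (1.0 - ZH)
             + 0.5 * (lam3 + dlam) * phi0_sq * (1.0 - 2.0 * ZH))
    mg_sq = (mT_sq + dm_sq
             + (lam3 + dlam) / 6.0 * phi0_sq * (1.0 - ZH))
    return ma_sq, mh_sq, mg_sq

# Hypothetical inputs (Z_H = 0, delta m_H^2 = 0 reproduces case iii) couplings).
print(effective_masses(1.0, -0.1, 0.0, 0.3, 0.0, 0.0, 0.4, 2.0))
```

In the $Z_H\to 0$, $\delta m_H^2\to 0$ limit the Higgs and Goldstone masses differ only through the usual $\lambda_3\Phi_0^2/2$ versus $\lambda_3\Phi_0^2/6$ terms, as expected.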
\vskip .3truecm
{\it ii) Local, non-polynomial approximation}
The rescaling and the shift of the fields proceed as before. The main
difference is that here the order of expanding and shifting the $\Phi$-field
in the non-polynomial term is changed relative to case i).
The Lagrangian can again be written in the form
(\ref{3dnonloclagr}). Some expressions and
constants appearing in it are modified relative to case i):
\begin{equation}
V_3(\Phi^\dagger\Phi )={1\over 2}(m(T)^2+\delta m^2)\Phi^\dagger\Phi +{1\over 2}
\delta m_H^2\xi_H^2+{\lambda_3 +\delta\lambda\over 24}(\Phi^\dagger\Phi )^2
-{1\over 4\pi}m_{A0}^3,
\end{equation}
\begin{eqnarray}
m_{A0}^2 &=& m_D^2+{1\over 4}g_{h}^2\Phi_0^2, \nonumber\\
\delta m^2 &=& -{3g_3^2 m_{A0}\over 16\pi}, \nonumber\\
m_h^2 &=& (m(T)^2+\delta m^2+\delta m_H^2)(1-Z_H)+{\lambda_3\over 2}\Phi_0^2
(1-2Z_H),\nonumber\\
m_g^2 &=& m(T)^2+\delta m^2+{\lambda_3\over 6}\Phi_0^2(1-Z_H),
\end{eqnarray}
\begin{eqnarray}
&q_{HGG}=\lambda_3(1-Z_H)-{\displaystyle{9g_{h}^2g_3^2\over 64\pi m_{A0}}},\nonumber\\
&q_{HHH}=q_{HGG}(1-Z_H)+{\displaystyle{3g_{h}^6\Phi_0^2\over 256\pi m_{A0}^3}},\nonumber\\
&\lambda_{GG}=\lambda_3-{\displaystyle{9g_{h}^4\over 64\pi m_{A0}}},\nonumber\\
&\lambda_{HG}=\lambda_{GG}(1-Z_H)+
{\displaystyle{9g_{h}^4g_3^2\Phi_0^2\over 64\pi m_{A0}^3}},
\nonumber\\
&\lambda_{HH}=\lambda_{HG}(1-Z_H)+
\displaystyle{{9g_{h}^6\Phi_0^2\over 256\pi m_{A0}^3}
-{9g_{h}^8\Phi_0^4\over 1024\pi m_{A0}^5}}.
\end{eqnarray}
{\it iii) Nonlocal, nonpolynomial representation}
Its Lagrangian density coincides with (\ref{eq:3dact}), and the major issue
of our investigation is the quantitative comparison of its ${\cal O}(g^4)$
solution to the approximate solutions corresponding to cases i) and ii).
The couplings $q_{HHH},q_{HGG}$ and $\lambda_{HG},\lambda_{GG},\lambda_{HH}$
agree with their expression derived for case ii)
when one puts $Z_H=0, \delta m_H^2=0$.
\subsection{Variations on the effective Higgs potential}
The 1-loop correction to the "classical" value of $V_3(\Phi_0^2)$ has the same
form in all approximations:
\begin{equation}
V_{eff}^{(1)}=-{1\over 12\pi}(6m_a^3+m_h^3+3m_g^3)+{15g_3^2\over
16\pi^3}m_{A0}\Lambda+V_{CT}^{polin}(\Phi_0 ).
\end{equation}
In this expression cut-off dependent terms with a polynomial dependence on
$\Phi_0$ are not displayed; their cancellation is taken for granted. Though
the form of this expression is the same in all cases, one should keep in mind
that the meaning of the different masses varies from one approximation to the
other.
From the point of view of the consistent
cancellation of non-polynomial divergences it is important to notice that
the 1-loop linear divergences of the Higgs and Goldstone fields, induced by
the contribution of the shifted non-polynomial term to their respective masses,
produce a non-polynomial cut-off dependent term
$(-3g_3^2/16\pi^3)\Lambda m_{A0}$ (in cases ii) and iii) only),
to be compared with
the induced non-polynomial divergence eq.(\ref{nonpoldiv}). In the first two
approximation schemes no further divergences of this form are produced;
therefore their cut-off dependence is clearly different from what is generated
in the course of the integration. Below, for cases i) and ii) we assume the
presence of the appropriately modified counterterms.
The 2-loop diagrams of a local, 3-d gauge-Higgs system are the same as in
4-d; only the functions appearing in the expressions are now defined as
three-dimensional
momentum integrals. In purely algebraic steps one can work out the
general representation given in terms of the functions $I_3(m^2),
H_3(m_1,m_2,m_3)$ (see Appendix A), to arrive finally at the following
2-loop contribution:
\begin{eqnarray}
V_{eff}^{(2)} &=&
L_0\biggl( {63 g^2 m_a^2\over8} - {3 g_h^2 m_a^2\over2}
+ {3 g^2 m_G^2\over2}+ {3 g_h^2 m_G^2\over4}
+ {3 g^2 m_H^2\over4}\nonumber\\
&& - {q_{GGH}^2+q_{HHH}^2\over12}\Phi_0^2\biggr)
+{1\over128 \pi^2}(5\lambda_{GG} m_G^2+2\lambda_{GH} m_G m_H
+\lambda_{HH} m_H^2)\nonumber\\
&&+{g^2\over128 \pi^2}\biggl(
(9-63\ln3)m_a^2- {3g_h^2m_a^2\over g^2} + 12 m_a m_G
- {3g_h^2m_am_G\over g^2}\nonumber\\
&& + {6 g_h^2 m_a m_H\over g^2}
+3 m_G^2 + {3 g_h^2 m_G m_H\over g^2} + {3g_h^2 m_H^2\over2g^2}\biggr)\nonumber\\
&&-{3 g_h^2\over 128 \pi^2} {m_H-m_G\over m_a} ( m_H^2-m_G^2 )\nonumber\\
&&+ {q_{GGH}^2\Phi_0^2\over192 \pi^2} \ln{2 m_G + m_H\over T}
+ {q_{HHH}^2\Phi_0^2\over192 \pi^2} \ln{3 m_H\over T}\nonumber\\
&&+ {3\over64 \pi^2}\Biggl[g_h^2 m_H^2 \ln{m_a+m_H\over2m_a+m_H}
+ {g_h^2 m_H^4\over 4 m_a^2} \ln{m_H(2m_a+m_H)\over(m_a+m_H)^2}\nonumber\\
&& + {g_h^2(m_H^2-m_G^2)^2\over 2 m_a^2} \ln{m_a+m_G+m_H\over m_G+m_H}\nonumber\\
&& - g_h^2\left(m_H^2+m_G^2-{m_a^2\over2}\right) \ln{m_a+m_G+m_H\over T}\nonumber\\
&&+ g^2\left( {m_a^2\over2}-2m_G^2\right) \ln{m_a+2m_G\over T}
-{g_h^2 m_a^2\over2} \ln{m_a+m_H\over T}\nonumber\\
&& +2g_h^2 m_a^2 \ln{2m_a+m_H\over T}
- 11 m_a^2 \ln{m_a\over T}\Biggr].
\label{pot2loop}
\end{eqnarray}
The final step of composing the full effective potential corresponds to
picking up the contributions of the purely static {\bf VAA}, and {\bf HAA}
diagrams:
\begin{eqnarray}
V_{eff}^{(2nonloc)} &=& 6g_3^2L_0(m_{A0}^2-{3\over 8}m_a^2)+
{3g_3^2\over 32\pi^2}(m_{A0}^2+
2m_{A0}m_a)-{12g_3^2\over 16\pi^3}\Lambda m_{A0}\nonumber\\
&&-{3g_3^2\over 32\pi^2}\bigl[(4m_{A0}^2-m_a^2)\ln{2m_{A0}+m_a\over \mu_3}\nonumber\\
&& -{1\over 2}m_a^2\ln{2m_{A0}+m_H\over \mu_3}\bigr].
\label{pota0nonloc}
\end{eqnarray}
The cancellation of the linear divergence, nonpolynomial in $\Phi_0^2$, can be
seen explicitly.
The 2-loop corrections to the effective potential, given by
Eqs.(\ref{pot2loop})
and (\ref{pota0nonloc}) can be compared to the results of the direct 4-d
calculation of \cite{Fod94,Buc95}. Those calculations were done with a
different regularisation and renormalisation scheme; therefore terms
reflecting this difference appear in the expression
of the potential. Still, using
the leading expressions for our couplings $g_h,q_{GGH},q_{HHH},
\lambda_{GG},\lambda_{GH}$ and $\lambda_{HH}$, all logarithmic terms expressed
through the propagator masses agree. Also our polynomial terms, except
those proportional to the regularisation dependent coefficient $L_0$, have
their exact counterpart in the 4-d expression. The $T$-independent contribution
appearing in the 4d result has no corresponding term in our case, an obvious
effect of the difference of the renormalisation schemes. The polynomial
terms explicitly depending on $\bar \mu$ and proportional to the constants
$c_1,c_2$ characteristic for the dimensional regularisation, can be
compared to our terms proportional to $L_0$. Choosing in both potentials
$\mu =T$, one finds in the 4-d expression for the coefficients of the terms
$g^2m_W^2$, $g^2(m_H^2+3m_G^2)$ and $\lambda^2\Phi_0^2$, the values
$2.441\times 10^{-2},4.012\times 10^{-3}, -2.239\times 10^{-4}$, respectively.
The corresponding coefficients from our expression are: $2.765\times 10^{-2},
5.027\times 10^{-3}, -5.374\times 10^{-4}$. Therefore the only origin of
considerable deviations in physical quantities could be the
effect of the reduction on various couplings. Our numerical experience was
that the ${\cal O}(g^4)T^2$ correction in $m^2(T)$ accounts for essentially all
differences.
\section{Phase transition characteristics}
In this section we describe and discuss the phase transition in the
three perturbative approximations introduced in the previous section.
Our analysis will cover the Higgs mass range 30 to 120 GeV. Our main interest
lies in finding the percentage variation in the following physical quantities:
the critical temperature ($T_c$), the Higgs discontinuity ($\Phi_c$),
the surface tension ($\sigma$) and the latent heat ($L_c$). The amount of
variation from one approximation to the other will give an idea of the
theoretical error coming from the reduction step.
The first step of the calculation consists in choosing values for
$g$ and $\lambda$. For example, one can fix $g=2/3$ and tune $\lambda$
guided by the tree-level Higgs-to-gauge boson mass ratio.
All dimensional quantities are scaled by appropriate
powers of $|m|$ (which in practice amounts to setting $|m|=1$ in the
expressions of the Higgs effective potential).
Next, one finds from the degeneracy condition of the effective potential
the ratio $T_c/|m|$. Here we have to discuss a phenomenon already noticed in
\cite{Fod94}. It has been observed that for Higgs mass values $m_H\geq 100$GeV,
when the temperature is lowered, one reaches the barrier temperature before
the degeneracy of the minima would occur. The phenomenon was traced back to
the action of the term $\sim g^2m_{A0}(m_H+3m_G)$ in the effective potential.
In our non-polynomial approximation the $A_0$-integration contributes
a negative term to $m^2(T)$ (i.e. $\delta m^2$), which acts even stronger,
and the same phenomenon is generated, when using $\mu_3 =T$, already for
$m_H\leq 70$GeV. However, by choosing a somewhat more exotic value for the
normalisation scale ($\mu_3 =T/17$), we could follow the transition
up to $m_H=120$GeV. We have used this value in the whole temperature range.
It has been checked for $m_H=35$GeV that the variation of $\mu_3$ leads in
the physical quantities to negligible changes.
Finally, the relations of Appendix C allow us to express with ${\cal O}(g^4)$
accuracy the ratios $M_W/|m|$ and $M_H/|m|$ with help of $g,\lambda$ and
$T_c/|m|$ (the dependence on the latter appears through our choice of the 4-d
normalisation point). The pole mass $M_H$ resulting from this calculation
appears in column 2 of Table 1, where $M_W=80.6$GeV sets the scale in physical
units. Our 4-d renormalisation scheme leads to somewhat smaller mass shifts
than the scheme used in \cite{Far94}.
In order to present physically meaningful plots one has to eliminate
from these relations $|m|$ and scale everything by an appropriate physical
quantity.
In Fig.~1 (column 3 of Table 1) we present $T_c/M_H$ as a function of
$M_H/M_W$ in the non-polynomial
non-local (NNA) approximation (upper case notation for the masses always refers
to the $T=0$ pole mass).
Internally,
our different approximations do not affect the critical temperature; they all
agree to better than 1\%, as can be seen in Table 2.
The curve in Fig.1 is, however, systematically below the
data appearing in Table 2 of \cite{Far94}. A 15\% deviation is observed
for $m_H=35$GeV, which gradually decreases to 8\% for $m_H=90$GeV.
If one wishes to compare to the 2-loop 4d calculations of
\cite{Fod94,Buc95} one has to use their coupling point: $g=0.63722, m_W=80.22$.
Again we find that our $T_c/m_H$ values are 10\% lower than those appearing
in Fig.4 of \cite{Buc95}. Our ${\cal O}(g^4)T^2$ correction to $m^2(T)$ is
about 10\% of the 1-loop value (and is about 9 times larger than what was
found in \cite{Far94}), therefore one qualitatively understands that the
barrier temperature is brought down in about the same proportion. At least in
the region $M_W\simeq M_H$ the transition is already so weak that the
transition temperature should agree very well with the temperatures limiting
the metastability range.
In Fig. 2 (column 4 of Table 1) the order parameter discontinuity is
displayed in proportion
to $T_c$. Here the agreement is extremely good in the whole 35-90 GeV range
both in comparison to \cite{Far94,Kaj96} and \cite{Buc95}. Also the variance
of our different approximations (see Table 2) is minimal.
Most interesting is the case of the surface tension,
which is shown for all three approximations in Fig.4 in $(GeV)^4$ units
(the dimensionless ratio appears in column 5 of Table 1).
We did not observe any strengthening tendency for larger Higgs masses, in
contrast to \cite{Fod94}. This systematic difference seems to be
correlated with the extended range where we find phase transition.
It leads to
$\sigma_c$ values in the range $M_H=50-80$GeV which perfectly agree with
the perturbative values quoted by \cite{Kaj96}.
The dispersion between the values of our different approximations is much
larger in the high mass range (Table 2) than for other quantities, reflecting the increased
sensitivity to non-local and non-polynomial contributions.
The situation is just a little less satisfactory in
the case of the latent heat (Fig.~3 and column 6 of Table 1).
Our approximations lie 10--15\% above the $L_c/T_c^4$ curve of Fig.~6 of
\cite{Buc95} and also above the perturbative values in Table 6 of \cite{Kaj96}.
\section{Conclusions}
In this paper we have attempted to discuss in great detail the reduction
strategy, allowing non-renormalisable and non-local effective interactions.
We have made explicit the effect of various approximations on the
phase transition characteristics of the SU(2) Higgs model. Our investigation
remained within the framework of the perturbation theory. By comparing
with other perturbative studies \cite{Fod94,Buc95,Far94} we have pointed out
that the $T_c/M_H$ ratio is quite sensitive to the choice of the
3-d renormalisation scale. The order parameter discontinuity and
the surface tension were found to be only moderately sensitive,
while the latent heat in the $\overline{MS}$ scheme seems to drop faster with
increasing Higgs mass. The minimum of the surface tension when $M_H$ is varied
has disappeared. One might wonder to what extent the strengthening
effect observed in the 4-d perturbative treatment with increasing
Higgs mass is physical at all.
The local polynomial and the local non-polynomial approximations start to show
important (larger than 5\%) deviations from the nonlocal, nonpolynomial
version of the effective model only for $M_H\geq 80$GeV (also below
30GeV). Since in these
regions the perturbation theory becomes anyhow unreliable, we can say that
the application of the reduction to the description of the electroweak phase
transition in the relevant Higgs mass range ($M_H\sim M_W$)
could be as accurate as 5\%. This can be true for dimensionless ratios, but
not for $T_c/M_H$.
The present assessment of the accuracy is not universal, though the structure
of the analysis fits other field theoretic models quite well.
For instance, our methods could be applied to extended versions
of the electroweak theory \cite{Los96,Cli96,Lai96} which are accessible to
quantitative non-perturbative investigations only in 3-d reduced form.
One knows that non-perturbative studies of the SU(2) Higgs transition
led to a lower $T_c/M_H$ ratio than the perturbative prediction \cite{Fod95},
and at least for $M_H=80$GeV the surface tension is much lower than it
was thought to be on the basis of the strengthening effect \cite{Csi96}.
The results from the PPI-reduction seem to push the perturbative
phase transition characteristics in this direction.
The expressions of the mass parameter of the effective theory
(\ref{bare3dmass}) and the T=0 pole masses (Appendix C) provide the necessary
background for the analysis of 3-d non-perturbative investigations following
the strategy advocated by \cite{Kaj93,Kaj96}. This will be the subject of
a forthcoming publication \cite{KarTo}.
\section{Introduction}
Since the early work of Wigner \cite{Wig53}
random matrix theory (RMT) has been applied with success in many
domains of physics~\cite{Meh91}.
Initially developed to serve nuclear physics, RMT has proved itself
to provide an adequate description of any situation involving chaos.
It has been found that the spectra of many quantum systems are very close
to one of four archetypal situations described by four statistical
ensembles. For the few integrable models this is the ensemble of
diagonal random matrices, while for non-integrable systems this can be
the Gaussian Orthogonal Ensemble (GOE),
the Gaussian Unitary Ensemble (GUE), or
the Gaussian Symplectic Ensemble (GSE),
depending on the symmetries of the model under consideration.
In recent years, several quantum spin Hamiltonians have been investigated
from this point of view.
It has been found \cite{PoZiBeMiMo93,HsAdA93}
that 1D systems for which the Bethe
ansatz applies have a level spacing distribution
close to a Poissonian (exponential) distribution, $P(s) = \exp(-s)$,
whereas if the Bethe ansatz does not apply, the level spacing distribution
is described by the Wigner surmise for the Gaussian orthogonal ensemble (GOE):
\begin{equation}
\label{e:wigner}
P(s) = \frac{\pi}{2} s \exp( -\pi s^2 / 4) \;.
\end{equation}
Similar results have been found for 2D quantum spin systems
\cite{MoPoBeSi93,vEGa94,BrAdA96}.
Other statistical properties
have also been analyzed, showing that the description of
the spectrum of the quantum spin system
by a statistical ensemble is valid not only
for the level spacings but also
for quantities involving more than two eigenvalues.
In a recent letter \cite{hm4} we proposed the
extension of random matrix theory
analysis to models of classical statistical mechanics
(vertex and spin models), studying
the transfer matrix of the eight-vertex model as an example.
The underlying idea is that, if there actually exists
a close relation between integrability and the Poissonian
character of the distribution, it could be
better understood in a framework which makes
Yang--Baxter integrability and its key structures
(commutation of transfer matrices depending on
spectral parameters) crystal clear: one wants to
switch from the quantum Hamiltonian framework to the
transfer matrix framework.
We now present the complete results of our study of transfer matrices
and a detailed description of the numerical method.
This work is split into two papers:
the first one describes the
numerical methods and the results on the eight-vertex model,
the second one treats the case of discrete spin models with the
example of the Ising model in two and three dimensions and
the standard Potts model with three states.
We will analyze a possible connection between
statistical properties of the entire spectrum of the model's
transfer matrix and the Yang--Baxter integrability.
A priori, such a connection is not guaranteed to exist, since only the few
eigenvalues with largest modulus have a physical significance,
while we are looking for properties of the entire spectrum.
However, our numerical results
show a connection which we will discuss.
We will also give an extension of the so-called ``disorder
variety'' to the asymmetric eight-vertex model
where the partition
function can be summed up without Yang--Baxter integrability.
We then present an infinite discrete symmetry group
of the model and an infinite set of
algebraic varieties stable under this group.
Finally, we test all these varieties
from the point of view of RMT analysis.
This paper is organized as follows:
in Sec.~\ref{s:numeric} we recall the
machinery of RMT, and we give some details
about the numerical methods we use.
Sec.~\ref{s:8v} is devoted to the eight-vertex model.
We list the cases where the partition function can be
summed up, and give some new analytical results concerning
the disorder variety and the automorphy group
of the asymmetric eight-vertex model.
The numerical results of the analysis of the spectrum
of transfer matrices
are presented in Sec.~\ref{s:results8v}.
The last section concludes with a discussion.
\section{Numerical Methods of RMT}
\label{s:numeric}
\subsection{Unfolding of the Spectrum}
In RMT analysis
one considers the spectrum of the (quantum) Hamiltonian, or of the
transfer matrix, as a
collection of numbers, and one looks for some possibly universal
statistical properties of this collection of numbers.
Obviously, the raw spectrum will not have any universal properties.
For example, Fig.~\ref{f:density} shows schematically
three densities of eigenvalues:
for a 2d Hubbard model, for an eight-vertex model and for the
Gaussian Orthogonal Ensemble. They have clearly nothing in common.
To find universal properties, one has to perform a kind of renormalization
of the spectrum, this is the so-called unfolding operation.
This amounts to making the {\em local} density of eigenvalues equal to
unity everywhere in the spectrum.
In other words, one has to subtract the regular part from the integrated
density of states and consider only the fluctuations.
This can be achieved by different means; however, there is no rigorous
prescription, and the best criterion is the insensitivity of the final
result to the method employed or to its parameters
(for ``reasonable'' variations).
Throughout this paper, we call $E_i$ the raw eigenvalues and
$\epsilon_i$ the corresponding unfolded eigenvalues.
Thus the requirement is that the local density of the $\epsilon_i$'s is one.
We need to compute an averaged integrated density of states $\bar\rho(E)$
from the actual integrated density of states:
\begin{equation}
\rho(E)={1\over N}\int_{-\infty}^E \sum_i{\delta(e-E_i)}\,de \;,
\end{equation}
and then we take $\epsilon_i = N \bar\rho(E_i)$.
To compute $\bar\rho(E)$ from $\rho(E)$, we have performed a running average:
we choose some odd integer $2r+1$ of the order of 9--25 and then replace
each eigenvalue $E_i$ by a local average:
\begin{equation}
E_i^\prime = {1\over 2r+1} \sum_{j=i-r}^{i+r} E_j \;,
\end{equation}
and $\bar\rho(E)$ is approximated by the linear interpolation between
the points of coordinates $(E_i^\prime,i)$.
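The running-average unfolding described above can be sketched as follows. This is an illustrative Python implementation, not the code used for this work; the half-width \texttt{r} corresponds to the odd window size $2r+1$ of order 9--25 quoted above.

```python
import numpy as np

def unfold_running_average(E, r=7):
    """Unfold a spectrum by the running-average method.

    Each eigenvalue E_i is replaced by the mean E'_i of its 2r+1
    neighbours; the smoothed integrated density of states is then the
    linear interpolation of the points (E'_i, i), and the unfolded
    levels are eps_i = N * rho_bar(E_i), so that their local mean
    spacing is one.  (Illustrative sketch.)
    """
    E = np.sort(np.asarray(E, dtype=float))
    N = len(E)
    idx = np.arange(r, N - r)                      # indices with a full window
    Eprime = np.array([E[i - r:i + r + 1].mean() for i in idx])
    # invert the smoothed staircase at the raw eigenvalues
    eps = np.interp(E[idx], Eprime, idx.astype(float))
    return eps

# toy check: an exponential-spacing (Poissonian) spectrum unfolds to
# levels whose mean spacing is close to one
rng = np.random.default_rng(0)
levels = np.cumsum(rng.exponential(3.0, size=2000))
eps = unfold_running_average(levels, r=7)
s = np.diff(eps)
print(s.mean())
```

The edge eigenvalues (the first and last $r$) are simply dropped here, in the spirit of discarding the extremal eigenvalues mentioned below.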
We compared the results with other methods:
one can replace
each delta peak in $\rho(E)$ by a Gaussian with a properly chosen mean square
deviation. Another method is to discard the low frequency components in
a Fourier transform of $\rho(E)$.
A detailed explanation and tests of these methods of unfolding are given
in Ref.~\cite{BrAdA97}.
Note also that for very peculiar spectra it is necessary to break the
spectrum into
parts and to unfold each part separately. The extremal eigenvalues
are also discarded since they induce finite-size effects.
It turns out that of the
three methods, the running average unfolding is best suited in
the context of transfer matrices, and it is also the fastest.
\subsection{Symmetries}
For quantum Hamiltonians, it is well known that
it is necessary to sort the eigenvalues with
respect to their quantum numbers, and to compare only
eigenvalues of states belonging to the same quantum numbers.
This is due to the fact that eigenstates with different symmetries are
essentially uncorrelated.
The same holds for transfer matrices.
In general, a transfer matrix $T$ of a classical
statistical mechanics lattice model (vertex model)
depends on several parameters (Boltzmann weights $w_i$). Due to
the lattice symmetries, or to other symmetries (permutation of colors
and so on), there exist operators $S$ which act on the same space as the
transfer matrix, are {\em independent of the parameters}, and
commute with $T$: $[T(\{w_i\}),S] = 0$.
It is then possible to find invariant subspaces of $T$ which are also
independent of the parameters.
Projection on these invariant subspaces amounts to block-diagonalizing $T$
and to split the unique spectrum of $T$ into the many spectra of each block.
The construction of the projectors is done with the help of the character
table of irreducible representations of the symmetry group.
Details can be found in \cite{BrAdA97,hmth}.
As we will discuss in the next sections, we always restricted ourselves
to symmetric transfer matrices.
Consequently the blocks are also symmetric and there are only {\em real}
eigenvalues. The diagonalization is performed
using standard methods of linear algebra (contained in the LAPACK library).
The construction of the transfer matrix and
the determination of its symmetries
depend on the model and are detailed in Sec.~\ref{s:transfer}
for the eight-vertex model.
\subsection{Quantities Characterizing the Spectrum}
\label{s:quantities}
Once the spectrum has been obtained and unfolded, various statistical
properties of the spectrum are investigated. The simplest one is the
distribution $P(s)$ of the spacings $s=\epsilon_{i+1}-\epsilon_i$
between two consecutive unfolded eigenvalues.
This distribution will be compared to an exponential
and to the Wigner law (\ref{e:wigner}).
Usually, a simple visual inspection is sufficient to recognize
the presence of level repulsion, the main property for non-integrable
models.
However, to quantify the ``degree'' of level repulsion, it is convenient
to use a parameterized distribution which interpolates between the
Poisson law and the Wigner law.
From the many possible distributions we have chosen
the Brody distribution \cite[ch.\ 16.8]{Meh91}:
\begin{mathletters}
\begin{equation}
P_\beta(s) = c_1\, s^\beta\, \exp\left(-c_2 s^{\beta+1}\right)
\end{equation}
with
\begin{equation}
c_2=\left[\Gamma\left({\beta+2\over\beta+1}\right)\right]^{1+\beta}
\quad\mbox{and}\quad c_1=(1+\beta)c_2 \;.
\end{equation}
\end{mathletters}
For $\beta=0$, this is a simple exponential for the Poisson ensemble,
and for $\beta=1$, one recovers the Wigner surmise for the GOE.
This distribution turns out to be convenient since its indefinite
integral can be expressed with elementary functions.
It has been widely used in the literature, except when special
distributions were expected as at the metal insulator transition
\cite{VaHoScPi95}.
Minimizing the quantity:
\begin{equation}
\phi(\beta) = \int_0^\infty(P_\beta(s)-P(s))^2 \,ds
\end{equation}
yields a value of $\beta$ characterizing the degree of level repulsion
of the distribution $P(s)$. We have always found $\phi(\beta)$ small.
When $-0.1<\beta<0.1$, the distribution is close to a Poisson law,
while for $0.5<\beta<1.2$ the distribution is close to the Wigner surmise.
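The minimisation of $\phi(\beta)$ can be sketched numerically. The fragment below is illustrative only (the histogram binning and the grid of $\beta$ values are arbitrary choices, not those of this work); it fits the Brody parameter to synthetic Wigner- and Poisson-distributed spacings.

```python
import numpy as np
from math import gamma

def brody_pdf(s, beta):
    # P_beta(s) = c1 s^beta exp(-c2 s^(beta+1)), with c1, c2 as in the text
    c2 = gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
    c1 = (beta + 1.0) * c2
    return c1 * s**beta * np.exp(-c2 * s**(beta + 1.0))

def fit_brody(spacings, betas=np.linspace(0.0, 1.5, 151)):
    """Grid-search minimisation of phi(beta) = int (P_beta - P)^2 ds,
    with P(s) estimated by a normalised histogram (illustrative)."""
    hist, edges = np.histogram(spacings, bins=50, range=(0.0, 5.0), density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    phi = [np.sum((brody_pdf(mid, b) - hist) ** 2) * width for b in betas]
    return betas[int(np.argmin(phi))]

# sanity check on synthetic spacings: Wigner-distributed spacings should
# give beta near 1, Poissonian spacings beta near 0
rng = np.random.default_rng(1)
wigner = np.sqrt(rng.exponential(4.0 / np.pi, size=20000))  # P(s)=(pi/2)s e^{-pi s^2/4}
poisson = rng.exponential(1.0, size=20000)
print(fit_brody(wigner), fit_brody(poisson))
```

Note that $\beta=0$ on the grid reproduces exactly the Poisson law $e^{-s}$, and $\beta=1$ the Wigner surmise (\ref{e:wigner}).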
If a distribution is found to be close to the Wigner surmise (or the
Poisson law), this does not mean that the GOE (or the Diagonal Matrices
Ensemble) correctly describes the spectrum.
Therefore it is of interest to compute functions involving higher
order correlations as for example the spectral rigidity
\cite{Meh91}:
\begin{equation}
\Delta_3(E) =
\left\langle \frac{1}{E} \min_{a,b}
\int_{\alpha-E/2}^{\alpha+E/2}
{\left( N(\epsilon)-a \epsilon -b\right)^2 d\epsilon} \right\rangle_\alpha \;,
\end{equation}
where $\langle\dots\rangle_\alpha$ denotes an average over
the whole spectrum.
This quantity measures the deviation from equal spacing.
For a totally rigid spectrum, as that of the harmonic oscillator, one has
$\Delta_3^{\rm osc}(E) = 1/12$, for an integrable (Poissonian) system one has
$\Delta_3^{\rm Poi}(E) = E/15$, while for the Gaussian Orthogonal
Ensemble one has
$\Delta_3^{\rm GOE}(E) = \frac{1}{\pi^2} (\log(E) - 0.0687) + {\cal O}(E^{-1})$.
It has been found that the spectral rigidity of quantum spin systems
follows $\Delta_3^{\rm Poi}(E)$ in the integrable case and
$\Delta_3^{\rm{GOE}}(E)$ in the non-integrable case.
However, in both cases, even though $P(s)$ is in good agreement with RMT,
deviations from RMT occur for $\Delta_3(E)$ at some system dependent
point $E^*$.
This stems from the fact that the rigidity
$\Delta_3(E)$ probes correlations beyond nearest neighbours
in contrast to $P(s)$.
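A numerical estimate of $\Delta_3(E)$ can be sketched as follows (illustrative Python: the continuous least-squares integral is approximated on a discrete grid, and the average over $\alpha$ is taken over randomly placed windows rather than a sliding one).

```python
import numpy as np

def delta3(eps, E, n_windows=300, seed=0):
    """Spectral rigidity Delta_3(E): mean squared deviation of the
    staircase N(epsilon) from its best straight-line fit over windows
    of length E, averaged over window positions (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    eps = np.sort(np.asarray(eps))
    vals = []
    for _ in range(n_windows):
        lo = rng.uniform(eps[0], eps[-1] - E)
        grid = np.linspace(lo, lo + E, 400)
        N = np.searchsorted(eps, grid).astype(float)  # staircase N(epsilon)
        a, b = np.polyfit(grid, N, 1)                 # minimising line a*eps + b
        vals.append(np.mean((N - (a * grid + b)) ** 2))
    return float(np.mean(vals))

# for a Poissonian (unfolded) spectrum one expects Delta_3(E) = E/15
levels = np.cumsum(np.random.default_rng(2).exponential(1.0, size=50000))
print(delta3(levels, E=10.0))
```

For the Poissonian test spectrum above the estimate should come out near $10/15\approx 0.67$, up to statistical and discretisation errors.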
\section{The Asymmetric Eight-Vertex Model on a Square Lattice}
\label{s:8v}
\subsection{Generalities}
We will focus in this section on the asymmetric
eight-vertex model on a square lattice.
We use the standard notations of Ref.~\cite{Bax82}.
The eight-vertex condition specifies that only vertices are allowed
which have an even number of arrows pointing to the center of the
vertex.
Fig.~\ref{f:vertices}
shows the eight vertices with their corresponding Boltzmann weight.
The partition function per site depends on these eight homogeneous variables
(or equivalently seven independent values):
\begin{equation}
Z(a,a',b,b',c,c',d,d')\;.
\end{equation}
It is customary to arrange the eight (homogeneous) Boltzmann weights
in a $4 \times 4$ $R$-matrix:
\begin{eqnarray}
\label{e:Rmat}
{\cal R} \, = \,
\left(
\begin {array}{cccc}
a&0&0&d\\
0&b& c&0\\
0&c^\prime&b^\prime&0\\
d^\prime&0&0& a^\prime
\end {array}
\right)
\end{eqnarray}
The entry ${\cal R}_{i j}$ is the Boltzmann weight
of the vertex defined by the four digits of the binary representation
of the two indices $i$ and $j$. The row index corresponds
to the east and south edges and the column index corresponds
to the west and north edges:
\[
{\cal R}_{i j}={\cal R}_{\mu\alpha}^{\nu\beta}
=w(\mu,\alpha|\beta,\nu)
\]
\[
\begin{picture}(40,30)(-20,-15)
\put(-10,0){\line(1,0){20}}
\put(0,-10){\line(0,1){20}}
\put(-20,-3){$\mu$}
\put(-3,13){$\beta$}
\put(13,-3){$\nu$}
\put(-3,-20){$\alpha$}
\end{picture}
\]
When the Boltzmann weights are
unchanged by negating all four edge values, the model
is said to be {\em symmetric}; otherwise it is {\em asymmetric}. This should
not be confused with the symmetry of the transfer matrix.
Let us now discuss a general symmetry property of the model.
A combinatorial argument \cite{Bax82} shows that for any lattice
without dangling ends,
the two parameters $c$ and $c^\prime$ can be taken equal, and
that, for most regular lattices (including the periodic
square lattice considered
in this work), $d$ and $d^\prime$ can
also be taken equal (gauge transformation \cite{GaHi75}).
Specifically, one has:
\begin{equation}\label{e:gauge}
Z(a,a',b,b',c,c',d,d') =
Z(a,a',b,b',\sqrt{cc'},\sqrt{cc'},\sqrt{dd'},\sqrt{dd'}) \;.
\end{equation}
We will therefore always take $c=c'$ and $d=d'$ in the
numerical calculations.
In the following, when $c'$ and $d'$ are not mentioned it is
implicitly meant that $c'=c$ and $d'=d$.
Let us finally recall that the
asymmetric eight-vertex model is equivalent to an Ising
spin model on a square lattice including next nearest neighbor interactions
on the diagonals and four-spin interactions around a plaquette
(IRF model) \cite{Bax82,Kas75}.
However, this equivalence is not exact on a finite lattice since
the $L\times M$ plaquettes do not form a basis
(to have a cycle basis, one must take any $L\times M -1$ plaquettes plus
one horizontal and one vertical cycle).
\subsection{The Row-To-Row Transfer Matrix}
\label{s:transfer}
Our aim is to study the full spectrum of the transfer matrix.
More specifically,
we investigate the properties of the row-to-row transfer
matrix, which corresponds to building up a periodic
$L\times M$ rectangular lattice by adding rows of length $L$.
The transfer matrix $T_L$ is a
$2^L\times 2^L$ matrix and the partition function becomes:
\begin{equation}
Z(a,a',b,b',c,d) = {\rm Tr} \, [T_L(a,a',b,b',c,d)]^M\;.
\label{e:Ztrace}
\end{equation}
However, there are many other possibilities to build up the lattice,
each corresponding to another form of transfer matrix: it just has to
lead to the same partition function.
Other widely used examples are
diagonal(-to-diagonal) and corner transfer matrices \cite{Bax82}.
The index of the row-to-row transfer matrix enumerates the $2^L$
possible configurations of one row of $L$ vertical bonds. We choose a
binary coding:
\begin{equation}
\alpha=\sum_{i=0}^{L-1} \alpha_i2^i \equiv | \alpha_0,\dots,\alpha_{L-1} \rangle
\end{equation}
with $\alpha_i\in\{0,1\}$, 0 corresponding to arrows pointing up or to the right
and 1 for the other directions.
One entry $T_{\alpha,\beta}$ thus describes the contribution to
the partition function of two neighboring rows having the configurations
$\alpha$ and $\beta$:
\begin{equation}
T_{\alpha,\beta} = \sum_{\{\mu\}}
\prod_{i=0}^{L-1} w(\mu_i,\alpha_i | \beta_i,\mu_{i+1}) \;. \label{e:Tab}
\end{equation}
With our binary notation, the eight-vertex condition means that
$w(\mu_i,\alpha_i | \beta_i,\mu_{i+1})=0$ if the sum
$\mu_i+\alpha_i+\beta_i+\mu_{i+1}$ is odd.
Therefore, the sum (\ref{e:Tab}) reduces to
exactly two terms: once $\mu_0$ is chosen (two possibilities),
$\mu_1$ is uniquely defined since $\alpha_0$ and $\beta_0$ are fixed
and so on.
For periodic boundary conditions,
the entry $T_{\alpha,\beta}$ is zero
if the sum of all $\beta_i$ and $\alpha_i$ is odd.
This property naturally splits the transfer matrix into two blocks:
entries between row configurations with an even number of up arrows
and entries between configurations with an odd number of up arrows.
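The two-term structure of the sum (\ref{e:Tab}) and the even/odd block structure can be made explicit in a brute-force sketch. The Python fragment below is illustrative only; the assignment of the eight weights to edge configurations is one possible convention (not necessarily that of Fig.~\ref{f:vertices}), chosen so that exchanging the two rows exchanges $c$- and $d$-type vertices, as in the text.

```python
import numpy as np
from itertools import product

def transfer_matrix(L, a, ap, b, bp, c, d):
    """Row-to-row transfer matrix of the eight-vertex model with
    c'=c, d'=d on a periodic row of L vertical bonds.
    Brute force, exponential in L; illustrative sketch only."""
    # weights w(mu, alpha | beta, nu), one per even-parity edge tuple
    w = {}
    w[0, 0, 0, 0] = a;  w[1, 1, 1, 1] = ap
    w[1, 0, 0, 1] = b;  w[0, 1, 1, 0] = bp
    w[1, 0, 1, 0] = c;  w[0, 1, 0, 1] = c    # c' = c
    w[0, 0, 1, 1] = d;  w[1, 1, 0, 0] = d    # d' = d
    T = np.zeros((2**L, 2**L))
    for al in product((0, 1), repeat=L):
        for be in product((0, 1), repeat=L):
            s = 0.0
            for mu0 in (0, 1):       # once mu_0 is chosen, the chain is fixed
                mu, wgt = mu0, 1.0
                for i in range(L):
                    nu = (mu + al[i] + be[i]) % 2   # eight-vertex condition
                    wgt *= w[(mu, al[i], be[i], nu)]
                    mu = nu
                if mu == mu0:        # periodicity of the horizontal bonds
                    s += wgt
            ia = sum(al[i] << i for i in range(L))
            ib = sum(be[i] << i for i in range(L))
            T[ia, ib] = s
    return T

# c = d makes T symmetric, as stated in the text
T = transfer_matrix(4, 1.0, 0.9, 0.8, 0.7, 0.5, 0.5)
print(np.allclose(T, T.T))
```

One can check that the entries between row configurations of different up-arrow parity all vanish, giving the two blocks described above, and that taking $c\neq d$ destroys the symmetry of $T$.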
\subsubsection{Symmetries of the Transfer Matrix}
\label{s:c=d}
Let us now discuss various symmetry properties of the transfer matrix.
(i) When one exchanges the rows $\alpha$ and $\beta$, the vertices of type
$a$, $a'$, $b$, and $b'$ will remain unchanged while the vertices
of type $c$ and $d$ will exchange into one another.
Thus for $c=d$ the transfer matrix $T_L(a,a',b,b',c,d)$ is symmetric.
In general the symmetry of the row-to-row transfer matrix
is satisfied for $c=d'$ and $d=c'$.
In terms of the equivalent IRF Ising model, condition $c=d$ means
that the two diagonal interactions $J$
and $J'$ (cf.\ Ref.~\cite{Bax82})
are the same: the Ising model is isotropic and therefore its
row-to-row transfer matrix is symmetric, too.
This coincidence is remarkable since the equivalence between
the asymmetric
eight-vertex model and the Ising model is not exact on a finite
lattice as already mentioned.
(ii) We now consider the effect of permutations of lattice sites preserving
the neighboring relations.
Denote by $S$ a translation operator defined by:
\begin{equation}
S|\alpha_0,\alpha_1,\dots,\alpha_{L-1} \rangle
= |\alpha_1,\dots,\alpha_{L-1},\alpha_0\rangle \;.
\end{equation}
Then we have:
\begin{equation}
\langle\alpha S^{-1}|T_L(a,a',b,b',c,d)|S\beta\rangle =
\langle\alpha|T_L(a,a',b,b',c,d)|\beta\rangle \;,
\end{equation}
and therefore:
\begin{equation}
[T_L(a,a',b,b',c,d),S] = 0 \;.
\end{equation}
For the reflection operator $R$ defined by:
\begin{equation}
R|\alpha_0,\alpha_1,\dots,\alpha_{L-1} \rangle
= |\alpha_{L-1},\dots,\alpha_{1},\alpha_0\rangle \;,
\end{equation}
we have:
\begin{equation}
\langle\alpha R^{-1}|T_L(a,a',b,b',c,d)|R\beta\rangle =
\langle\alpha|T_L(a,a',b,b',d,c)|\beta\rangle \;.
\end{equation}
Thus $R$ commutes with $T$ only for the symmetric case $c=d$:
\begin{equation}
[T_L(a,a',b,b',c,c),R] = 0 \;.
\end{equation}
Combination of the translations $S$ and the reflection $R$ leads to
the dihedral group ${\cal D}_L$.
These are all the general lattice symmetries in the square
lattice case.
The one-dimensional nature of the group ${\cal D}_L$
reflects the dimensionality of the rows added to the
lattice by multiplication by $T$. This is general:
the symmetries of the transfer matrices of $d$-dimensional lattice
models are the symmetries of ($d-1$)-dimensional space.
The translational invariance in the last space direction has already
been exploited with the use of the transfer matrix itself leading to
Eq.~(\ref{e:Ztrace}).
(iii) Lastly, we look at symmetries due to operations on the dynamic
variables themselves.
There is a priori no continuous symmetry in this model
in contrast with the
Heisenberg quantum chain which has a continuous $SU(2)$ spin symmetry.
But one can define an operator $C$ returning all arrows:
\begin{equation}
C|\alpha_0,\alpha_1,\dots,\alpha_{L-1} \rangle
= |1-\alpha_0, 1-\alpha_1,\dots,1-\alpha_{L-1}\rangle \;.
\end{equation}
This leads to an exchange of primed and unprimed Boltzmann weights:
\begin{equation}
\langle\alpha C^{-1}|T_L(a,a',b,b',c,d)|C\beta\rangle =
\langle\alpha|T_L(a',a,b',b,c,d)|\beta\rangle \;.
\end{equation}
Thus for the symmetric eight-vertex model (Baxter model)
the symmetry operator $C$ commutes with the transfer matrix:
\begin{equation}
[T_L(a,a,b,b,c,d),C] = 0 \;.
\end{equation}
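These three symmetry relations are easy to check numerically for small $L$. The following sketch (an illustration of ours, not the code used for the results below) builds $T_L$ by tracing a product of local vertex weights over the horizontal space; the $4\times4$ weight matrix has $a,b,b',a'$ on the diagonal, $c$ on the inner anti-diagonal and $d$ in the corners, i.e.\ $c'=c$ and $d'=d$:

```python
import numpy as np
from itertools import product

def transfer_matrix(a, ap, b, bp, c, d, L):
    """Row-to-row transfer matrix: trace over the horizontal
    (auxiliary) space of a product of L local vertex weights."""
    # basis (00, 01, 10, 11), index = 2*horizontal + vertical;
    # the asymmetric model with c' = c and d' = d is assumed
    M = np.array([[a, 0, 0, d],
                  [0, b, c, 0],
                  [0, c, bp, 0],
                  [d, 0, 0, ap]], dtype=float)
    W = M.reshape(2, 2, 2, 2)              # W[h_in, v_in, h_out, v_out]
    states = list(product((0, 1), repeat=L))
    T = np.empty((2**L, 2**L))
    for i, al in enumerate(states):
        for j, be in enumerate(states):
            m = np.eye(2)
            for x, y in zip(al, be):
                m = m @ W[:, x, :, y]      # 2x2 matrix on the horizontal space
            T[i, j] = np.trace(m)          # periodic boundary conditions
    return T

def op(f, L):
    """Matrix of the map |alpha> -> |f(alpha)> on the 2^L-dim space."""
    states = list(product((0, 1), repeat=L))
    idx = {s: i for i, s in enumerate(states)}
    P = np.zeros((2**L, 2**L))
    for i, s in enumerate(states):
        P[idx[f(s)], i] = 1.0
    return P

L = 3
S = op(lambda s: s[1:] + s[:1], L)             # translation
R = op(lambda s: s[::-1], L)                   # reflection (R^2 = 1)
C = op(lambda s: tuple(1 - x for x in s), L)   # arrow reversal (C^2 = 1)

T = transfer_matrix(1.0, 0.7, 0.5, 1.3, 0.9, 0.4, L)
assert np.allclose(S @ T, T @ S)               # (ii): [T, S] = 0
assert np.allclose(R @ T @ R,                  # (ii): R exchanges c and d
                   transfer_matrix(1.0, 0.7, 0.5, 1.3, 0.4, 0.9, L))
assert np.allclose(C @ T @ C,                  # (iii): C exchanges primed weights
                   transfer_matrix(0.7, 1.0, 1.3, 0.5, 0.9, 0.4, L))
Tsym = transfer_matrix(1.0, 0.7, 0.5, 1.3, 0.9, 0.9, L)
assert np.allclose(Tsym, Tsym.T)               # (i): c = d gives a symmetric T
```

For $c\neq d$ the matrix $T$ above is indeed not symmetric, while setting $c=d$ makes it symmetric, in agreement with (i); the checks for $S$, $R$, and $C$ reproduce (ii) and (iii).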
\subsubsection{Projectors}
Once the symmetries have been identified,
it is simple to construct the projectors of one row of each irreducible
representation of the group ${\cal D}_L$
(details can be found in \cite{BrAdA97,hmth}).
When $L$ is even, there are four representations of dimension 1 and
$L/2-1$ representations of dimension 2 (i.e.\ in all there are $L/2+3$
projectors). When $L$ is odd, there are two one-dimensional representations
and $(L-1)/2$ representations of dimension 2, in all $(L-1)/2 + 2$ projectors.
For the symmetric model with $a=a'$ and $b=b'$,
there is an extra ${\cal Z}_2$ symmetry
which doubles the number of projectors.
Using the projectors block diagonalizes the transfer matrix leaving a
collection of small matrices to diagonalize instead of the large one.
For example, for $L=16$, the total row-to-row
transfer matrix has the linear size
$2^L=65536$, the projected blocks have linear sizes between 906 and 2065
(see also Tabs.~\ref{t:aj14} and \ref{t:aj16}).
As already mentioned, the block projection not only saves computing time
for the diagonalization but is necessary to sort the eigenvalues
with respect to the symmetry of the corresponding eigenstates.
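A minimal sketch of the construction (ours; it builds only the isotypic projectors $P_\rho=(d_\rho/2L)\sum_g\chi_\rho(g)D(g)$, while the two-dimensional representations can be split further into rows, which is what yields the block sizes quoted in the tables; $L=8$ is an arbitrary illustrative size):

```python
import numpy as np
from itertools import product

L = 8                                     # illustration; 2^L = 256 states
states = list(product((0, 1), repeat=L))
index = {s: i for i, s in enumerate(states)}

def perm(f):
    """Permutation matrix of the map |alpha> -> |f(alpha)>."""
    P = np.zeros((2**L, 2**L))
    for i, s in enumerate(states):
        P[index[f(s)], i] = 1.0
    return P

# the 2L elements of D_L: rotations r^j and reflections s.r^j
rot = [perm(lambda s, j=j: s[j:] + s[:j]) for j in range(L)]
ref = [perm(lambda s, j=j: (s[j:] + s[:j])[::-1]) for j in range(L)]

# character table of D_L for L even:
# four 1-dim irreps and L/2 - 1 two-dim irreps
irreps = [(1, [e2**j for j in range(L)], [e1 * e2**j for j in range(L)])
          for e1 in (1, -1) for e2 in (1, -1)]
irreps += [(2, [2*np.cos(2*np.pi*m*j/L) for j in range(L)], [0.0]*L)
           for m in range(1, L//2)]

projectors = []
for d, chi_rot, chi_ref in irreps:
    P = sum(chi_rot[j]*rot[j] + chi_ref[j]*ref[j] for j in range(L))
    projectors.append(P * d / (2*L))

sizes = [int(round(np.trace(P))) for P in projectors]
assert len(projectors) == L//2 + 3        # the count quoted in the text
assert sum(sizes) == 2**L                 # the projectors resolve the identity
assert all(np.allclose(P @ P, P) for P in projectors)
```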
In summary, when $c=d$, the row-to-row
transfer matrix is symmetric leading to
a real spectrum. Its symmetries have been identified.
This is a fortunate situation since the restriction $c=d$ neither
prevents nor enforces Yang--Baxter integrability, as will be
explained in the following section.
\subsection{Integrability of the Eight-Vertex Model}
We now summarize the cases where the partition function of the
eight-vertex model can be analyzed and possibly computed.
These are the symmetric eight-vertex model,
the asymmetric six-vertex model, the free-fermion variety and some
``disorder solutions''.
\subsubsection{The Symmetric Eight-Vertex Model}
\label{ss:s8v}
Firstly, in the absence of an `electrical field', i.e.\ when
$a=a'$, $b=b'$, $c=c'$, and $d=d'$, the transfer matrix can be
diagonalized using the Bethe ansatz
or the Yang--Baxter equations \cite{Bax82}.
This case is called the symmetric eight-vertex model, also called
Baxter model \cite{Bax82}.
One finds that two row-to-row transfer matrices $T_L(a,b,c,d)$ and
$T_L(\bar a,\bar b,\bar c,\bar d)$ commute if:
\begin{mathletters}
\begin{eqnarray}
\Delta(a,b,c,d) &=& \Delta(\bar a, \bar b, \bar c, \bar d) \\
\Gamma(a,b,c,d) &=& \Gamma(\bar a, \bar b, \bar c, \bar d)
\end{eqnarray}
\end{mathletters}
with:
\begin{mathletters}
\label{e:gd}
\begin{eqnarray}
\Gamma(a,b,c,d) & = & {ab-cd \over ab + cd} \;,\\
\Delta(a,b,c,d) & = & {a^2+b^2-c^2-d^2 \over 2(ab+cd)} \;.
\end{eqnarray}
\end{mathletters}
Note that these necessary conditions
are valid for {\em any} lattice size $L$.
One also gets the {\em same} conditions for the column-to-column
transfer matrices of this model.
Thus the commutation relations lead to
a foliation of the parameter space
in elliptic curves given by the intersection of
two quadrics Eq.~(\ref{e:gd}), that is
to an elliptic parameterization
(in the so-called principal regime \cite{Bax82}):
\begin{mathletters}
\label{e:parabax}
\begin{eqnarray}
a &=& \rho {\:{\rm sn}\,}(\eta-\nu) \\
b &=& \rho {\:{\rm sn}\,}(\eta+\nu) \\
c &=& \rho {\:{\rm sn}\,}(2\eta) \\
d &=& -\rho\, k {\:{\rm sn}\,}(2\eta){\:{\rm sn}\,}(\eta-\nu)
{\:{\rm sn}\,}(\eta+\nu)
\end{eqnarray}
\end{mathletters}
where ${\:{\rm sn}\,}$ denotes the Jacobian elliptic function
and $k$ its modulus.
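The constancy of $\Gamma$ and $\Delta$ along the curves (\ref{e:parabax}) can be checked directly. The sketch below (ours) obtains ${\rm sn}$ by inverting the incomplete elliptic integral of the first kind by bisection; note the modulus convention $k$ rather than $m=k^2$:

```python
from math import sin, sqrt, pi

def F(phi, k, n=2000):
    """Incomplete elliptic integral of the first kind (Simpson's rule)."""
    h = phi / n
    s = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w / sqrt(1.0 - (k * sin(i * h)) ** 2)
    return s * h / 3.0

def sn(u, k):
    """Jacobian sn(u, k) for 0 <= u < K(k), inverting F by bisection."""
    lo, hi = 0.0, pi / 2
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if F(mid, k) < u:
            lo = mid
        else:
            hi = mid
    return sin(0.5 * (lo + hi))

def weights(eta, nu, k, rho=1.0):
    """The parameterization of the text: a, b, c, d on the curve (eta, k)."""
    sm, sp, s2 = sn(eta - nu, k), sn(eta + nu, k), sn(2 * eta, k)
    return rho * sm, rho * sp, rho * s2, -rho * k * s2 * sm * sp

def gamma_delta(a, b, c, d):
    """The two invariants Gamma and Delta."""
    return ((a*b - c*d) / (a*b + c*d),
            (a*a + b*b - c*c - d*d) / (2 * (a*b + c*d)))

eta, k = 0.45, 0.6
g1, d1 = gamma_delta(*weights(eta, 0.10, k))
g2, d2 = gamma_delta(*weights(eta, 0.30, k))
assert abs(g1 - g2) < 1e-6 and abs(d1 - d2) < 1e-6   # nu-independent
```

Both invariants agree to high accuracy, so varying $\nu$ at fixed $(\eta,k)$ indeed moves along a curve of commuting transfer matrices.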
It is also well known that the transfer matrix $T(a,b,c,d)$
commutes with the Hamiltonian of the anisotropic Heisenberg chain
\cite{Sut70}:
\begin{equation}
{\cal H} = -\sum_i \left( J_x \sigma^x_i\sigma^x_{i+1}
+ J_y \sigma^y_i\sigma^y_{i+1}
+ J_z \sigma^z_i\sigma^z_{i+1} \right)
\end{equation}
if:
\begin{equation}
\label{e:heisass}
1:\Gamma(a,b,c,d):\Delta(a,b,c,d) = J_x:J_y:J_z \;.
\end{equation}
This means that, given the three coupling constants $J_x$, $J_y$, and $J_z$
of a Heisenberg Hamiltonian, there exist infinitely many quadruplets
$(a,b,c,d)$ of parameters such that:
\begin{equation}
[T(a,b,c,d),{\cal H}(J_x,J_y,J_z)] = 0 \;.
\end{equation}
Indeed the three constants $J_x$, $J_y$, and $J_z$ determine uniquely
$\eta$ and $k$ in the elliptic parameterization (\ref{e:parabax}) and
the spectral parameter
$\nu$ can take any value, thus defining a continuous one-parameter family.
Not only $T$ and ${\cal H}$ commute for arbitrary values
of the parameter $\nu$, but ${\cal H}$ is also related to the
logarithmic derivative of $T$ at $\nu=\eta$.
In this work, we examine only regions
with the extra condition $c=d$ to ensure that $T$ is symmetric,
and thus that the spectrum is real.
Using the symmetries of the eight-vertex model, one finds that
the model $(a,b,c,d)$, with $c=d$ mapped into its principal
regime, gives a model $(\bar a, \bar b, \bar c,\bar d)$ with
$\bar a = \bar b$. In terms of the elliptic parameterization this means
${\:{\rm sn}\,}(\eta-\nu)={\:{\rm sn}\,}(\eta+\nu)$ or $\nu=0$.
In summary, in the continuous one-parameter family of
commuting transfer matrices $T(\nu)$
corresponding to a given value of $\Delta$ and $\Gamma$,
there are two special values of the spectral parameter $\nu$:
$\nu=\eta$ is related to the Heisenberg Hamiltonian
${\cal H}(1,\Gamma,\Delta)$, and for $\nu=0$
the transfer matrix $T(\nu)$ is symmetric
(up to a gauge transformation).
\subsubsection{Six-Vertex Model}
The six-vertex model
is a special case of the eight-vertex model: one disallows the two
last vertex configurations of Fig.~\ref{f:vertices}, this means
$d=d'=0$. Both the symmetric
and the asymmetric six-vertex model
have been analyzed using the Bethe ansatz
or the Yang--Baxter equations
\cite{Bax82,LiWu72,Nol92}.
We did not examine this situation any further since
imposing condition $c=d$,
required for a real spectrum (see paragraph \ref{s:c=d}(i)),
leads to a trivial case.
\subsubsection{Free-Fermion Condition}
Another case where the asymmetric eight-vertex model can be solved is
the case where the Boltzmann weights verify the
so-called {\em free-fermion} condition:
\begin{equation}
\label{e:ff}
aa'+ bb' = cc'+ dd' \;.
\end{equation}
For condition (\ref{e:ff}) the model reduces to
a quantum problem of free fermions
and the partition function can thus be computed
\cite{FaWu70,Fel73c}.
The free-fermion asymmetric eight-vertex model
is Yang--Baxter integrable, however the parameterization of the
Yang--Baxter equations is more involved
compared to the situation described in section \ref{ss:s8v}:
the row-to-row and column-to-column commutations
correspond to two different foliations of the parameter
space in algebraic surfaces.
It is also known that the asymmetric
eight-vertex free-fermion model can be mapped
onto a checkerboard Ising model. In
Appendix \ref{a:ff} we give the correspondence between the vertex
model and the spin model. The partition function
per site of the model can be expressed in terms of elliptic
functions $E$ which are not (due to the complexity
of the parameterization of the Yang--Baxter equations)
straightforwardly related to the two sets of surfaces
parameterizing the Yang--Baxter equations or even
to the canonical elliptic parameterization
of the generic (non free-fermion) asymmetric eight-vertex
model (see Eqs.~(\ref{e:paraas}) below,
see also \cite{BeMaVi92}). The elliptic modulus
of these elliptic functions $E$ is given in Appendix \ref{a:ff}
as a function of the checkerboard Ising variables as well as
of the homogeneous Boltzmann weights
($a$, $a'$, $b$, $b'$, $c$, $c'$, $d$, and $d'$) for the
free-fermion asymmetric eight-vertex model.
Finally, we remark that the restriction $c=d$ is compatible
with the condition (\ref{e:ff}) and, in contrast with the
asymmetric six-vertex model, the asymmetric free-fermion model
provides a case where the row-to-row transfer matrix of the model
is symmetric.
\subsubsection{Disorder Solutions}
If the parameters $a$, $a^\prime$, $b$, $b^\prime$, $c$, and
$d$ are chosen such that the $R$-matrix (\ref{e:Rmat})
has an eigenvector which is a pure tensorial product:
\begin{equation}
\label{e:condeso}
{\cal R}
\left( \begin{array}{c} 1 \\ p \end{array} \right)
\otimes
\left( \begin{array}{c} 1 \\ q \end{array} \right)
=
\lambda
\left( \begin{array}{c} 1 \\ p \end{array} \right)
\otimes
\left( \begin{array}{c} 1 \\ q \end{array} \right)
\end{equation}
then the vector:
\begin{equation} \label{e:vectorpq}
\left( \begin{array}{c} 1 \\ p \end{array} \right)
\otimes
\left( \begin{array}{c} 1 \\ q \end{array} \right)
\otimes
\cdots
\otimes
\left( \begin{array}{c} 1 \\ p \end{array} \right)
\otimes
\left( \begin{array}{c} 1 \\ q \end{array} \right)
\end{equation}
($2 L$ factors)
is an eigenvector of the diagonal(-to-diagonal)
transfer matrix $\tilde T_L$, usually simply called the diagonal
transfer matrix.
The corresponding eigenvalue is $\Lambda=\lambda^{2L}$,
with
\begin{equation}
\lambda = {aa'-bb'+cc'-dd' \over (a+a')-(b+b')} \; .
\label{e:lambda}
\end{equation}
However, the eigenvalue $\Lambda$ may, or may not, be the eigenvalue
of largest modulus.
This corresponds to the existence of so-called
disorder solutions \cite{JaMa85} for which some dimensional
reduction of the model occurs \cite{GeHaLeMa87}.
Condition (\ref{e:condeso}) is simple
to express; it reads:
\begin{eqnarray}
\label{e:condeso2}
\lefteqn{A^2 + B^2 + C^2 + D^2 + 2 A B - 2 A D - 2 B C - 2 C D= } \nonumber \\
& & (A+B-C-D) (a + b) (a^\prime + b^\prime)
-(A-D)(b^2 + b^{\prime 2}) - (B-C) (a^2 +a^{\prime 2})
\end{eqnarray}
where $A=aa^\prime$, $B=bb^\prime$, $C=cc^\prime$, and $D=dd^\prime$.
Note that in the symmetric case
$a=a^\prime$, $b=b^\prime$, $c=c^\prime$, and
$d=d^\prime$, Eq. (\ref{e:condeso2}) factorizes as:
\begin{equation}
(a - b + d - c) (a - b + d + c) (a - b - d - c) (a - b - d + c) = 0
\nonumber \end{equation}
which is the product of terms giving two disorder varieties
and two critical varieties of the Baxter model.
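For the symmetric subcase this mechanism can be made fully explicit. On the disorder variety $a-b+d-c=0$ the product vector with $p=q=1$ (our illustrative choice, with the $R$-matrix written in the same basis convention as above) is an eigenvector, with eigenvalue matching the symmetric specialization of Eq.~(\ref{e:lambda}):

```python
import numpy as np

# symmetric weights chosen on the disorder variety a - b + d - c = 0
a, b, c, d = 2.0, 1.0, 1.5, 0.5
assert abs(a - b + d - c) < 1e-12

R = np.array([[a, 0, 0, d],
              [0, b, c, 0],
              [0, c, b, 0],
              [d, 0, 0, a]])

# product trial vector (1, p) x (1, q); here p = q = 1 does the job
v = np.kron([1.0, 1.0], [1.0, 1.0])

# Eq. (lambda) in the symmetric case: (a^2 - b^2 + c^2 - d^2) / (2(a - b))
lam = (a*a - b*b + c*c - d*d) / (2 * (a - b))
assert np.allclose(R @ v, lam * v)     # pure tensor product eigenvector
assert abs(lam - (a + d)) < 1e-12      # on this variety lambda = a + d
```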
It is known that the symmetric model has four disorder varieties
(one of them, $a+b+c+d=0$, is not in the physical domain of
the parameter space)
and four critical varieties \cite{Bax82}.
The missing varieties can be obtained by replacing ${\cal R}$ by
${\cal R}^2$ in Eq.~(\ref{e:condeso}).
In our numerical calculations we have always found
for the asymmetric eight-vertex model that
$\Lambda$ is either the eigenvalue of largest modulus
or the eigenvalue of lowest modulus.
Finally, note that condition (\ref{e:condeso2}) does
{\em not} correspond to a solution of the Yang--Baxter equations.
This can be understood since
disorder conditions like (\ref{e:condeso2}) are not
invariant under the action of the
infinite discrete symmetry group $\Gamma$ presented in the next
subsection, whereas the solutions of the Yang--Baxter equations
are actually invariant
under the action of this group
\cite{BMV:vert,Ma86}.
On the other hand, similarly to the Yang--Baxter equations,
the ``disorder solutions''
can be seen to correspond to families of commuting
diagonal transfer matrices $\tilde T_L$ on a subspace $V$
of the $2^{2L}$ dimensional space on which $\tilde T_L$ acts:
\begin{equation}
[\tilde T_L(a,a',b,b',c,d),
\tilde T_L(\bar a,\bar a',\bar b,\bar b',\bar c,\bar d)] \bigm|_V =0\;,
\end{equation}
where subscript $V$ means that the commutation is only valid on
the subspace $V$.
Actually, this subspace is the one-dimensional subspace corresponding
to vector (\ref{e:vectorpq}).
The notion of transfer matrices commuting only on a subspace $V$ can
clearly have valuable consequences on the calculation of the
eigenvectors and eigenvalues, and hopefully of the partition function
per site.
One sees that the Yang--Baxter integrability and the disorder
solution ``calculability'' are two limiting cases where $V$
respectively corresponds to the entire space where $\tilde T_L$ acts
and to a single vector, namely Eq.~(\ref{e:vectorpq}).
\subsection{Some Exact Results on the Asymmetric Eight-Vertex Model}
\label{ss:exares}
When the Boltzmann weights of the model do not
verify any of the conditions of the preceding section, the
partition function of the model has not yet been calculated.
However, some analytical results can be stated.
Algebraic varieties of the parameter space can be presented,
which have very special symmetry properties.
The intersection of these algebraic varieties with critical
manifolds of the model are candidates for multicritical points
\cite{hm1}.
We have tested the properties of the spectrum
of the transfer matrices on these loci of the phase space.
There actually exists an infinite discrete group of symmetries
of the parameter space of the asymmetric eight-vertex model
(and, more generally, of the sixteen-vertex model \cite{BeMaVi92}).
The critical manifolds of the model have to be compatible
with this group of symmetries and this is also true
for any exact property of the model: for instance
if the model is Yang--Baxter integrable, the Yang--Baxter equations are compatible
with this infinite symmetry group
\cite{BMV:vert}.
However, it is crucial to recall that this symmetry group is not
restricted to the Yang--Baxter integrability. It is a symmetry group
of the model {\em beyond} the integrable framework and
provides for instance a canonical elliptic foliation of
the parameter space of the model
(see the concept of quasi-integrability \cite{BeMaVi92}).
The group is generated by simple transformations of the
homogeneous parameters of the model: the matrix inversion $I$ and
some representative geometrical symmetries, as
for example the geometrical symmetry of the
square lattice which amounts to
a simple exchange of $c$ and $d$:
\[
t_1
\left (
\begin {array}{cccc}
a &0 &0 &d \\
0 &b &c &0 \\
0 &c &b'&0 \\
d &0 &0 &a'
\end {array}
\right )
=
\left (
\begin {array}{cccc}
a&0&0&c\\
0&b&d&0\\
0&d&b^\prime&0\\
c&0&0& a^\prime
\end {array}
\right )
\]
Combining $I$ and $t_1$ yields an infinite discrete
group $\Gamma$ of symmetries of the parameter space
\cite{Ma86}.
This group is isomorphic to the infinite dihedral group
(up to a semi-direct product with ${\cal Z}_2$).
An infinite order generator of the non-trivial part of this group
is for instance $t_1 \cdot I $.
In the parameter space of the model this
generator yields an infinite set of points located on elliptic curves.
The analysis of the orbits of the group $\Gamma$ for the
asymmetric eight-vertex model yields (a finite set of) elliptic curves
given by:
\begin{equation}
\label{e:paraas}
\frac{(a a' + b b' - c c' - d d')^2}{a a' b b'} ={\rm const}
,\quad \frac{a a' b b'}{c c' d d'} = {\rm const}
\end{equation}
and
\[
\frac{a}{a'} = {\rm const} ,\quad
\frac{b}{b'} = {\rm const} ,\quad
\frac{c}{c'} = {\rm const} ,\quad
\frac{d}{d'} = {\rm const}.
\]
In the limit of the symmetric eight-vertex model one recovers the
well-known elliptic curves (\ref{e:gd}) of the Baxter model
given by the intersection of two quadrics.
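The invariance of the expressions (\ref{e:paraas}) under the infinite-order generator $t_1\cdot I$ can be checked numerically. In the sketch below (ours), the matrix inversion acts blockwise on the two $2\times2$ blocks of the $R$-matrix ($c'=c$, $d'=d$ assumed), and the homogeneous weights are renormalized after each step:

```python
def inv(w):
    """Projective matrix inversion I, acting blockwise on the R-matrix."""
    a, ap, b, bp, c, d = w
    D1, D2 = a*ap - d*d, b*bp - c*c        # determinants of the 2x2 blocks
    return (ap/D1, a/D1, bp/D2, b/D2, -c/D2, -d/D1)

def t1(w):
    """Geometrical symmetry exchanging c and d."""
    a, ap, b, bp, c, d = w
    return (a, ap, b, bp, d, c)

def invariants(w):
    a, ap, b, bp, c, d = w
    A, B, C, D = a*ap, b*bp, c*c, d*d      # c' = c, d' = d
    return (A + B - C - D)**2 / (A*B), A*B / (C*D)

w = (1.1, 0.8, 0.6, 1.4, 0.5, 0.3)
i1, i2 = invariants(w)
for _ in range(6):                         # iterate t1 . I along the orbit
    w = t1(inv(w))
    m = max(abs(x) for x in w)             # projective normalization
    w = tuple(x / m for x in w)
    j1, j2 = invariants(w)
    assert abs(j1 - i1) < 1e-6 * abs(i1)   # both expressions stay constant
    assert abs(j2 - i2) < 1e-6 * abs(i2)
```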
Recalling parameterization (\ref{e:parabax}) one sees that $t_1\cdot I$,
the infinite order generator of $\Gamma$, is actually
represented as a shift by $\eta$ of the spectral parameter:
$\nu \rightarrow \nu+\eta$.
The group $\Gamma$ is generically infinite; however, if some
conditions on the parameters hold, it degenerates into a finite
group.
These conditions define algebraic varieties for which the model
has a high degree of symmetry.
The location of multicritical points seems to correspond to
enhanced symmetries namely to the algebraic varieties where the
symmetry group $\Gamma$ degenerates into a finite group
\cite{hm1}.
Such conditions of commensuration of the shift $\eta$ with one
of the two periods of the elliptic functions occurred many times
in the literature of theoretical physics
(Tutte--Behara numbers, rational values of the central charge and
of critical exponents \cite{MaRa83}).
Furthermore, one can have, from the conformal field theory literature,
a prejudice of free-fermion parastatistics on these algebraic
varieties of enhanced symmetry \cite{DaDeKlaMcCoMe93}.
It is thus natural to concentrate on them.
We therefore have determined
an {\em infinite} number of these algebraic varieties, which are remarkably
{\em codimension-one} varieties of the parameter space.
Their explicit expressions
quickly become very large in terms of the homogeneous parameters
of the asymmetric eight-vertex model, however,
their expressions are remarkably simple in terms of
some algebraic invariants generalizing those
of the Baxter model, namely:
\begin{mathletters}
\begin{eqnarray}
J_x & = & \sqrt{ a a' b b'} + \sqrt{c c' d d'} \\
J_y & = & \sqrt{ a a' b b' }- \sqrt{c c' d d'} \\
J_z & = &{{ a a' + b b' - c c' - d d'}\over {2}}.
\end{eqnarray}
\end{mathletters}
Note that, in the symmetric subcase, one recovers Eq.~(\ref{e:heisass}).
In terms of these well-suited homogeneous variables,
it is possible to extend
the ``shift doubling'' ($ \eta \rightarrow 2 \eta$) and
``shift tripling'' ($ \eta \rightarrow 3 \eta$) transformations
of the Baxter model to the asymmetric eight-vertex model.
One gets for the shift doubling transformation:
\begin{mathletters}
\label{doubling}
\begin{eqnarray}
J_x' & = & J_z^2 J_y^2 - J_x^2 J_y^2- J_z^2 J_x^2 \\
J_y' & = & J_z^2 J_x^2 - J_x^2 J_y^2- J_z^2 J_y^2 \\
J_z' & = & J_x^2 J_y^2 - J_z^2 J_x^2- J_z^2 J_y^2
\end{eqnarray}
\end{mathletters}
and for the shift tripling transformation:
\begin{mathletters}
\label{three}
\begin{eqnarray}
J_x'' & = &
\left (-2 J_z^2J_y^2J_x^4 -3 J_y^4J_z^4+2 J_y^2J_z^4J_x^2+J_y^4
J_x^4+2 J_y^4J_z^2J_x^2+J_z^4J_x^4
\right ) \cdot J_x
\\
J_y'' & = &
\left (2 J_z^2J_y^2J_x^4
-3 J_z^4J_x^4+J_y^4J_x^4 -2 J_y^4J_z^2J_x^2+J_y^4J_z^4
+2 J_y^2J_z^4J_x^2
\right ) \cdot J_y \\
J_z'' & = &
\left (J_y^4J_z^4+2 J_y^4J_z^2J_x^2 -3 J_y^4J_x^4
-2 J_y^2J_z^4J_x^2+2 J_z^2J_y^2J_x^4
+J_z^4J_x^4
\right ) \cdot J_z
\end{eqnarray}
\end{mathletters}
The simplest codimension-one finite order varieties are:
$J_x=0$, $J_y=0$, or $J_z=0$.
One remarks that $J_z=0$ is nothing but the free-fermion condition
(\ref{e:ff}) which is thus a condition for $\Gamma$ to be finite.
Another simple example is:
\begin{equation}
J_y J_z - J_x J_y - J_x J_z = 0,
\end{equation}
and the relations obtained by all permutations of $x$, $y$, and $z$.
Using the two polynomial transformations (\ref{doubling})
and (\ref{three}) one can easily get an {\em infinite number}
of codimension-one algebraic varieties of finite order.
The demonstration that the codimension-one algebraic varieties
built in such a way are actually finite order conditions
of $\Gamma$ will be given elsewhere.
Some low order varieties are given in Appendix \ref{a:invariant}.
In the next section, the lower order varieties are tested
from the view point of statistical properties of the
transfer matrix spectrum.
\section{Results of the RMT Analysis}
\label{s:results8v}
\subsection{General Remarks}
The phase space of the asymmetric eight-vertex model with
the constraint $c=d$ (ensuring symmetric transfer matrices
and thus real spectra)
is a four-dimensional space
(five homogeneous parameters $a$, $a'$, $b$, $b'$, and $c$).
Many particular algebraic varieties of this four-dimensional space
have been presented in the previous section and will now be
analyzed from the random matrix theory point of view.
We will present the full distribution of eigenvalue spacings
and the spectral rigidity at some representative points.
Then we will analyze the behavior of the eigenvalue spacing
distribution along different paths in the four-dimensional
parameter space.
These paths will be defined keeping some Boltzmann weights
constant and parameterizing the others by a single parameter $t$.
We have generated transfer matrices
for various linear sizes, up to $L=16$ vertices,
leading to transfer matrices of size up to $65536\times65536$.
Tables~\ref{t:aj14} and \ref{t:aj16} give the dimensions of the different
invariant subspaces for $L=14$ and $L=16$.
Note that the size of the blocks to diagonalize increases
exponentially with the linear size $L$.
The behavior in the various subspaces is not significantly different.
Nevertheless the statistics is better for larger blocks since
the influence of the boundary of the spectrum
and finite size effects are smaller. To get better statistics
we also have averaged the results of several blocks
for the same linear size $L$.
\subsection{Near the Symmetric Eight-Vertex Model}
Fig.~\ref{f:pds} presents the
probability distribution of the eigenvalue spacings for three
different sets of Boltzmann weights which are listed in
Tab.~\ref{t:pdscases}.
Fig.~\ref{f:pds}a) corresponds to a point of a symmetric eight-vertex model
while the other cases (b) and (c) are results for the asymmetric
eight-vertex model.
The data points result from about 4400 eigenvalue spacings coming from the
ten even subspaces for $L=14$ which are listed in Tab.~\ref{t:aj14}.
For the symmetric model (a),
using the symmetry under reversal of all arrows,
these blocks can once more be split into two sub-blocks of equal size.
The broken lines show the exponential and the Wigner distribution
as the exact results for the diagonal random matrix ensemble
(i.e.\ independent eigenvalues)
and the $2\times2$ GOE matrices.
In Fig.~\ref{f:pds}a) the data points fit very well an exponential,
whereas in Figs.~\ref{f:pds}b) and \ref{f:pds}c)
they are close to the Wigner surmise.
In the latter cases we have also added the best fitting Brody distribution
with the parameter $\beta$ listed in Tab.~\ref{t:pdscases}.
The agreement with the Wigner distribution is better for the
case (c), where the asymmetry expressed by the ratio $a/a'$ is larger.
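For reference, the Brody distribution interpolating between the two broken lines of Fig.~\ref{f:pds} can be written, for unit mean spacing, as $P_\beta(s)=c_\beta(1+\beta)\,s^\beta\exp(-c_\beta s^{1+\beta})$ with $c_\beta=\Gamma\!\left((2+\beta)/(1+\beta)\right)^{1+\beta}$; a small sketch (ours) checking the normalization and the two limits:

```python
from math import gamma, exp, pi

def brody_pdf(s, beta):
    """Brody distribution: beta = 0 is Poisson, beta = 1 the Wigner surmise."""
    c = gamma((2.0 + beta) / (1.0 + beta)) ** (1.0 + beta)
    return c * (1.0 + beta) * s**beta * exp(-c * s**(1.0 + beta))

def moment(beta, k, smax=30.0, n=6000):
    """Simpson-rule moment <s^k> of the Brody distribution."""
    h = smax / n
    tot = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        tot += w * (i * h)**k * brody_pdf(i * h, beta)
    return tot * h / 3.0

for beta in (0.0, 0.5, 1.0):
    assert abs(moment(beta, 0) - 1.0) < 1e-3   # normalized
    assert abs(moment(beta, 1) - 1.0) < 1e-3   # unit mean spacing
# beta = 1 reproduces the Wigner surmise (pi/2) s exp(-pi s^2/4)
assert abs(brody_pdf(1.0, 1.0) - (pi / 2) * exp(-pi / 4)) < 1e-9
```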
We have also calculated the spectral rigidity to test how
accurately the spectra of the transfer matrices are described
by the spectra of the mathematical random matrix ensembles.
We present in Fig.~\ref{f:d3}
the spectral rigidity $\Delta_3(E)$
for the same points in parameter space corresponding to integrability
and to non-integrability as in Fig.~\ref{f:pds}. The two
limiting cases corresponding to the Poissonian distributed
eigenvalues (solid line) and to GOE distributed eigenvalues (dashed line)
are also shown.
For the integrable point the agreement between the numerical
data and the expected rigidity is very good.
For the non-integrable case
the departure of the rigidity from the expected behavior
appears at $E \approx 2$ in case (b) and at $E\approx6$ in case
(c) (in units of eigenvalue spacings),
indicating that the RMT analysis is only valid at short scales.
Such behavior has already been seen in quantum spin systems
\cite{MoPoBeSi93,BrAdA96}.
We stress that the numerical results concerning the rigidity
depend much more on the unfolding than the results concerning
the spacing distribution.
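The rigidity itself is simple to estimate from an unfolded spectrum. The sketch below (ours; the window count and sampling grid are arbitrary choices) recovers the two standard limiting behaviors, a saturation near $1/12$ for the perfectly rigid picket-fence spectrum and the linear growth $\Delta_3(E)=E/15$ for independent (Poissonian) eigenvalues:

```python
import numpy as np

def delta3(levels, E, n_windows=100, grid=600):
    """Spectral rigidity: mean squared deviation of the staircase
    from its best-fit straight line, averaged over windows of length E."""
    levels = np.sort(levels)
    vals = []
    for a in np.linspace(levels[0], levels[-1] - E, n_windows):
        x = np.linspace(a, a + E, grid)
        N = np.searchsorted(levels, x)       # staircase function
        A, B = np.polyfit(x, N, 1)           # least-squares line
        vals.append(np.mean((N - A * x - B) ** 2))
    return float(np.mean(vals))

E = 15.0
picket = np.arange(2000, dtype=float)        # perfectly rigid spectrum
rng = np.random.default_rng(1)
poisson = np.cumsum(rng.exponential(size=20000))

assert abs(delta3(picket, E) - 1 / 12) < 0.02     # saturates near 1/12
assert 0.5 < delta3(poisson, E) / (E / 15) < 1.6  # grows like E/15
```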
Summarizing the results for the eigenvalue spacing distribution
and the rigidity, we have found very good agreement with the
Poissonian ensemble for the symmetric eight-vertex model (a),
good agreement with the GOE for the asymmetric model (c) and
some intermediate behavior for the asymmetric eight-vertex model (b).
The difference between
the behavior for the cases (b) and (c) can be explained by
the larger asymmetry in case (c): case (b) is closer to
the integrable symmetric eight-vertex model.
To study the proximity to the integrable model, we
have determined the `degree' of level repulsion $\beta$ by fitting the
Brody distribution to the statistics along a path
($a=t$, $a'=4/t$, $b=b'=4/5$, $c=\sqrt{5/8}$)
joining the cases (a) and (c) for different lattice sizes.
The result is shown in Fig.~\ref{f:fss}, the details
about the number of blocks and eigenvalue spacings used in the
distributions are listed in Tab.~\ref{t:fss}.
A finite size effect is seen: we always find $\beta\approx0$ for
the symmetric model at $a/a'=1$, and increasing the system size
leads to better agreement with the Wigner distribution ($\beta=1$) for
the non-integrable asymmetric model $a\not=a'$.
So in the limit $L\rightarrow \infty$
we claim to find a delta-peak at the symmetric point $a=a'=2$.
We also have found that the size effects are really
controlled by the length $L$ and not by the size of the block.
However, our finite size analysis is only qualitative.
There is an uncertainty on $\beta$ of about $\pm0.1$.
There are two possible sources for this uncertainty.
The first one is a simple statistical effect and
could be reduced by increasing the number of spacings.
The second one is a more inherent problem due to the
empirical parameters in the unfolding procedure. This
source of errors cannot be suppressed by increasing the size $L$.
For a quantitative analysis of the finite size effects
it would be necessary to have a high accuracy on $\beta$ and
to vary $L$ over a large scale, which is not possible because
of the exponential growth of the block sizes with~$L$.
To test a possible extension of the critical variety $a=b+c+d$
outside the symmetric region $a=a'$, $b=b'$,
we have performed similar calculations along the path
($a=t$, $a'=4/t$, $b=b'=4/5$, $c=3/5$) crossing the symmetric
integrable variety at a critical point $t=2$. The results are the same:
we did not find any kind of Poissonian behavior when $t \neq 2$.
We have tested one single path and not the whole neighborhood
around the Baxter critical variety.
This would be necessary if one really wants to test a
possible relation between Poissonian behavior and criticality
instead of integrability
(both properties being often related).
The possible relation between Poissonian behavior
and criticality will be discussed for spin models in the second paper.
We conclude, from all these calculations, that the analysis
of the properties of the unfolded spectrum of
transfer matrices provides an efficient way to detect
integrable models, as already known for the
energy spectrum of quantum systems
\cite{PoZiBeMiMo93,HsAdA93,MoPoBeSi93,vEGa94}.
\subsection{Case of Poissonian Behavior
for the Asymmetric Eight-Vertex Model}
We now investigate the phase space far from the Baxter model.
We define paths in the asymmetric region which cross the varieties
introduced above but which do not cross the Baxter model.
These paths and their intersection with the different varieties
are summarized in Tab.~\ref{t:paths}.
Fig.~\ref{f:beta1} corresponds to the path (a)
($a=4/5$, $a'=5/4$, $b=b'=t$, $c=1.3$).
This defines a path which crosses the free-fermion
variety at the solution of Eq.~(\ref{e:ff}): $t=t_{\rm ff}=\sqrt{2.38}$
and the disorder variety at
the two solutions of Eq.~(\ref{e:condeso}): $t=t_{\rm di}^{\rm max}\approx1.044$ and
at $t=t_{\rm di}^{\rm min}\approx1.0056$
(the subscript ``di'' stands for disorder).
See Tab.~\ref{t:paths} for the intersections with the other varieties.
We have numerically found that, at the point $t=t_{\rm di}^{\rm max}$, the
eigenvalue (\ref{e:lambda}) is the one of largest modulus,
whereas at $t=t_{\rm di}^{\rm min}$ it is
the eigenvalue of smallest modulus
(hence the superscripts min and max).
The results shown are obtained using
the representation $R=0$ for $L=16$ (see Tab.~\ref{t:aj16}).
After unfolding and discarding boundary states we are left with
a distribution of about 1100 eigenvalue spacings.
One clearly sees that, most of the time,
$\beta$ is of the order of one, signaling that the spacing
distribution is very close to the Wigner distribution,
except for $t=t_{\rm ff}\approx1.54$ and for $t$ close to
the disorder solutions, where $\beta$ is close to zero.
This is not surprising for $t=t_{\rm ff}$
since the model is Yang--Baxter integrable at this point.
The value $\beta(t_{\rm ff})$ is slightly negative: this
is related to a `level attraction' already noted in \cite{hm4}.
The downward peak is very sharp
and a good approximation of a $\delta$ peak that
we expect for an infinite size.
At $t=t_{\rm di}^{\rm min}$ and $t = t_{\rm di}^{\rm max}$
the model is {\em not} Yang--Baxter integrable.
We cannot resolve these two points numerically.
Therefore, we now study paths where these two
disorder solutions are clearly distinct.
For Fig.~\ref{f:beta2} they are both below the free-fermion point,
while for Fig.~\ref{f:beta3} the free-fermion point is between
the two disorder solution points.
In each of the Figs.~\ref{f:beta3} and \ref{f:beta2}
are shown the results for two paths
which differ only by an exchange of the two weights $a\leftrightarrow a'$.
In Fig.~\ref{f:beta3}
one clearly sees a peak where $\beta$ becomes slightly negative
at the free-fermion point at $t=0.8$
and another one at one disorder solution point
$t=t_{\rm di}^{\rm max}\approx1.46$
for both curves but no peak at the second disorder solution
point $t=t_{\rm di}^{\rm min}\approx0.55$.
It is remarkable that only the point
$t=t_{\rm di}^{\rm max}$ yields the eigenvalue of largest modulus
for the diagonal(-to-diagonal) transfer matrix. Consequently,
one has the partition function per site of the model at this point.
At point $t=t_{\rm di}^{\rm min}$,
where the partition function is not known,
we find level repulsion. However, only for path (c) the degree of
level repulsion $\beta$ is close to unity
while for path (b) it saturates at a much smaller value.
Another difference between the cases (b) and (c)
is a minimum in the curve
of $\beta(t)$ for path (c) at $t\approx 1.8$ which
is not seen for path (b). We do not
have a theoretical explanation for these phenomena yet:
these points are not located on any of the varieties presented
in this paper. We stress that an explanation cannot straightforwardly be
found in the framework of the symmetry group $\Gamma$ presented here
since $a$ and $a'$ appear only through the product $aa'$.
It also cannot be found in the Yang--Baxter framework,
since $a$ and $a'$ are on the same footing in the Yang--Baxter equations.
In Fig.~\ref{f:beta2} the curves of $\beta$ for the two paths (d) and (e)
again coincide very well at the free-fermion point
at $t=t_{\rm ff}\approx1.61$.
But the behavior is very different for $t<1$ where the solutions
of the disorder variety are located.
For the path (d) neither of the two disorder points
is seen on the curve $\beta(t)$, which stays almost
stationary around 0.6. This means that some
eigenvalue repulsion occurs, but the entire distribution is
not very close to the Wigner surmise.
On the contrary, for path (e) the spacing distribution is very close to
a Poissonian distribution ($\beta(t) \approx 0$)
when $t$ is between the two disorder points.
This suggests that the status of eigenvalue spectrum on
the disorder variety of the asymmetric eight-vertex model is not simple:
a more systematic study would help to clarify the situation.
We now summarize the results from the Figs.~\ref{f:beta1}--\ref{f:beta2}:
generally, the statistical properties of the transfer matrix spectra
of the asymmetric eight-vertex model are close to those of the GOE
except for some algebraic varieties.
We always find a very sharp downward peak at the free-fermion point,
with $\beta\rightarrow 0$ and often even $\beta\approx-0.2$.
All other points with $\beta \rightarrow 0$ are found
to be solutions of the disorder variety (\ref{e:condeso2})
of the asymmetric eight-vertex model.
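As a concrete illustration of how such a repulsion parameter can be extracted from a spectrum, the following sketch fits a Brody distribution, which interpolates between the Poissonian limit ($\beta=0$) and the Wigner surmise ($\beta=1$), to the normalized nearest-neighbour spacings. This is only an assumed stand-in for the precise definition of $\beta$ used in the figures, the helper names `brody` and `repulsion_parameter` are hypothetical, and the crude global unfolding is a simplification:

```python
import numpy as np
from math import gamma

def brody(s, beta):
    # Brody distribution: Poisson (exp(-s)) at beta = 0, Wigner surmise at beta = 1
    b = gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
    return (beta + 1.0) * b * s**beta * np.exp(-b * s**(beta + 1.0))

def repulsion_parameter(levels, nbins=40):
    """Grid-search Brody fit to the histogram of normalized spacings."""
    s = np.diff(np.sort(levels))
    s = s / s.mean()  # crude global unfolding: unit mean spacing
    hist, edges = np.histogram(s, bins=nbins, range=(0.0, 3.0), density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    betas = np.linspace(0.0, 1.0, 201)
    errors = [np.sum((brody(mid, b) - hist) ** 2) for b in betas]
    return betas[int(np.argmin(errors))]

rng = np.random.default_rng(0)

# independent levels (Poissonian spectrum): expect beta near 0
beta_poisson = repulsion_parameter(rng.uniform(0.0, 1.0, 4000))

# eigenvalues of a random real symmetric (GOE) matrix: expect beta near 1;
# keep only the central part of the semicircle, where the density is flat
m = rng.normal(size=(1000, 1000))
ev = np.linalg.eigvalsh((m + m.T) / np.sqrt(2.0))
beta_goe = repulsion_parameter(ev[np.abs(ev) < 15.0])
```

On these two test spectra the fit recovers the expected limits: near-zero $\beta$ for independent levels, and $\beta$ close to unity for the GOE matrix.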
\subsection{Special Algebraic Varieties}
To conclude this section we discuss the special
algebraic varieties of the symmetry group $\Gamma$. As explained
in subsection \ref{ss:exares} it is possible to construct
an infinite number of algebraic varieties where the generator
is of finite order $n$: $(t_1\cdot I)^n = {\rm Id}$
and thus $\Gamma$ is of finite order.
As an example, the solutions for $n=6$ and $n=16$ are given in Appendix
\ref{a:invariant}. We have actually calculated a third variety,
the expression of which is too long to be given ($n=8$).
We give in Tab.~\ref{t:paths} the values of the parameter
$t$ for which each path crosses each variety
$t_{\rm fo}^6$, $t_{\rm fo}^8$, $t_{\rm fo}^{16}$
(the subscript ``fo'' stands for finite order and the superscript
is the order $n$). It is easy to verify from the
different curves that no tendency to Poissonian behavior
occurs at these points.
We therefore give a negative answer to the question of a
special status of {\em generic} points of
the algebraic finite order varieties
with respect to the statistical properties of
the transfer matrix spectra.
However, one can still imagine that subvarieties of these
finite order varieties could have Poissonian behavior and
be candidates for free parafermions or multicritical points.
\section{Conclusion and Discussion}
We have found that the entire spectrum of the symmetric
row-to-row transfer matrices of the eight-vertex model
of lattice statistical mechanics
is sensitive to the Yang--Baxter integrability of the model.
The GOE provides a satisfactory
description of the spectrum of non Yang--Baxter
integrable models: the eigenvalue spacing distribution
and the spectral rigidity up to an energy scale
of several spacings are in agreement with the
Wigner surmise and the rigidity of the
GOE matrix spectra. This accounts for
``eigenvalue repulsion''.
In contrast, for Yang--Baxter integrable
models, the unfolded spectrum has many features of a set of
independent numbers: the spacing distribution is Poissonian and the
rigidity is linear over a large energy scale.
This accounts for ``eigenvalue independence''.
However, we have also given a non Yang--Baxter integrable
disorder solution of the asymmetric eight-vertex model.
For some parts of it the spectrum is clearly Poissonian, too.
This suggests that the Wignerian nature of the spectrum is not completely
controlled by the Yang--Baxter integrability alone, but possibly
by a more general notion of ``calculability'',
possibly based on the existence of a family of
transfer matrices commuting on the same subspace.
We have also found some ``eigenvalue attraction'' for some
Yang--Baxter integrable models, namely for most points of the
free-fermion variety.
These results may seem surprising,
since one does not a priori expect properties
involving all the $2^L$ eigenvalues when only the
few eigenvalues of largest modulus have a direct physical significance.
However, the eigenvalues of small modulus control the finite
size effects, and it is well known that, for example,
the critical behavior (critical exponents) can be deduced
from the finite size effects.
The nature of the eigenvalue spacing distribution being an effective
criterion, we have also used it to
test various special manifolds, including
the vicinity of the critical variety of the Baxter model,
but without finding further Poissonian behavior.
We will present in a forthcoming publication a similar
study of spin models (rather than vertex models).
In particular it is interesting to analyze
the spectrum on a critical, but not Yang--Baxter integrable,
algebraic variety of codimension one
as it can be found in the $q=3$ Potts model on a triangular
lattice with three-spin interactions \cite{WuZi81}.
However, this leads to models whose transfer
matrices cannot be made symmetric.
This will require a particular study of complex spectra which is
much more complicated.
In particular the eigenvalue repulsion becomes two-dimensional,
and to investigate the eigenvalue spacing distribution, one has
to analyze the distances between eigenvalues in two dimensions.
\acknowledgments
We would like to thank Henrik Bruus
for many discussions concerning random matrix theory.
\section{Introduction}
The distribution of metallicity in nearby G-dwarfs, a major
constraint on models of the evolution of the Galaxy, has long
presented us with the ``G dwarf problem'' (Pagel and Patchett
1975). The G-dwarf problem arises because there is an apparent paucity
of metal poor G-dwarfs relative to what we would expect from the
simplest (closed box) models of Galactic chemical evolution. There are
rather a large number of ways that the evolutionary models can be
modified in order to bring them into consistency with the data, such
as pre-enrichment of the gas, a time dependent Initial Mass Function
or gas infall.
G-dwarfs are sufficiently massive that some of them have evolved
away from the main sequence, and these evolutionary corrections must
be taken into account when determining their space densities and
metallicities. While these problems are by no means intractable, it
has long been recognised that K dwarfs would make for a cleaner sample
of the local metal abundance distribution, because for these stars the
evolutionary corrections are negligible. K dwarfs are of course
intrinsically fainter, and it has not been until recently that
accurate spectroscopic K dwarf abundance analyses have become
available, with which to calibrate a photometric abundance estimator.
Furthermore, with the release of {\it Hipparcos} data expected soon,
accurate parallaxes and distances of a complete and large sample of K
dwarfs will become available, from which the distribution of K dwarf
abundances can be measured. Also, the accurate parallax results given
by {\it Hipparcos} will mean that we can select dwarfs by absolute
magnitude, which is a better way of isolating stars of a given mass
range than is selection by colour (as has been used in samples of G
dwarfs).
In this paper, we develop a photometric abundance indicator
for G and K dwarfs. In section 2 we take a sample of nearby disk
G and K dwarfs for which accurate spectroscopic abundances in a
number of heavy elements, together with effective temperatures, have
been measured by Morell (1994). We have supplemented these data with
several low metallicity G and K dwarfs for which accurate metallicity
and effective temperature data are available in the literature. In
section 3 we use broadband $VRI$ and Geneva photometry (taken
from the literature) to develop an abundance index which
correlates well with the spectroscopic metallicities, and can be
transformed to abundance with an accuracy of circa 0.2 dex. In section
4 we measure abundances for approximately 200 G and K dwarfs drawn
from the Gliese catalog and describe their kinematics. In section 5 we
demonstrate that the K dwarfs show the same paucity of metal deficient
stars as seen in the G dwarfs, indicating that there is a ``K dwarf''
as well as a G dwarf problem. In section 6 we draw our conclusions.
\section{Spectroscopic G and K dwarf Sample}
Our starting point for calibrating a photometric abundance index for
the G and K dwarfs is a sample of accurate and homogeneously determined
spectroscopic abundances. Accurate abundance determinations for K
dwarfs have been difficult to carry out until recently, because of the
effects of line crowding, the extended damping wings of strong lines,
the strong effects of molecular species on the stellar atmospheres,
and the intrinsic faintness of the stars.
Our sample of dwarfs comes primarily from Morell, K\"allander and
Butcher (1992) and Morell (1994). These authors give accurate
metallicities, gravities and effective temperatures for 26 G0 to K3
dwarfs. Morell (1994) observed a sample of dK stars with high
dispersion (resolving power 90,000) and high signal to noise with the
ESO Coud\'e Auxiliary Telescope (CAT) at La Silla. The sample
included all dK stars in the Bright Star Catalogue which were
observable from La Silla, after removing known spectroscopic
binaries. Wavelength regions were primarily chosen to determine CNO
abundances as well as various metals (Na, Mg, Al, Si, Ca, Sc, Ti, V,
Cr, Fe, Co, Ni, Cu, Y, Zr, La, Nd) at 5070.0 to 5108.0 \AA, 6141.0 to
6181.5 \AA, 6290.0 to 6338.0 \AA, 6675.0 to 6724.5 \AA and 7960.0 to
8020.0 \AA. Signal to noise exceeded 100 for most stars and spectral
regions.
The spectra were analysed using spectral synthesis methods, based on
model atmospheres calculated with the ODF version of the MARCS program
(Gustafsson et al. 1975). Initial estimates of the stellar effective
temperatures were made from the $V-R$ and $R-I$ colours, using the
temperature scale of VandenBerg and Bell (1985). (Cousins $UBVRI$
photometry was obtained for 17 stars with the ESO 50 cm telescope in
April and November 1988 and February 1989). The temperatures were
then improved by examining 12 Fe lines with a range of excitation
energies, and adjusting the temperatures until no trends were seen
between excitation energy and the derived abundance of the species.
For half the stars this led to adjustments of less than 50 K, and for
the remaining half to adjustments between 50 and 250 K. Gravities
were determined from a single Ca line at $\lambda\,6162$ \AA. Three of
the G stars in the sample were found to be slightly evolved, with
lower log(g) values. Abundances were determined using spectral
synthesis techniques for many species; here we describe only the Fe
abundances. Fe abundances were measured for 12 neutral, weak and
unblended Fe lines, and very good agreement was obtained amongst the
lines. The errors in the derived mean Fe abundances are estimated as
smaller than 0.05 dex. An error of approximately 100 K in adopted
effective temperature leads to a change in derived Fe abundance of
only 0.01 dex, so any systematic errors in the temperature scale do
not have a large effect on the abundance scale. Table 1 shows our
sample of G and K dwarfs. Column 1 shows the HD number, column 2 a
secondary identification, column 3 the spectral type Sp, column 4 the
effective temperature \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi, column 5 the surface gravity log(g), and
column 6 the spectroscopically determined abundance [Fe/H]$_{\rm
Spec}$, with a note on its source in column 7. Columns 8 and 9 show
$b_1$ and Cousins $R-I$, with a note on the source of $R-I$ in column
10. The estimated abundance [Fe/H]$_{\mathrm{Gen}}$ based on $b_1$ and
$R-I$ (as described in the next section) is shown in the last column.
\begin{table*}
\small
\caption{The G and K dwarf sample.}
\begin{center}
\begin{tabular}{lllrrrcrrcr}
\hline
HD & Other ID& Sp & \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi &log(g)& [Fe/H]$_{\rm Spec}$& Note & $b_1~~$ & $R-I$ & Note & [Fe/H]$_{\rm Gen}$ \\
2151 &HR98 &G2IV & 5650 & 4.0~~ & $ -0.30~~$& 1& $ 1.072$ & 0.339 & 9& $-0.33~~~~$ \\
4628 &HR222 &K2V & 5150 & 4.6~~ & $ -0.40~~$& 2& $ 1.252$ & 0.443 & 2& $-0.18~~~~$ \\
10361 &HR487 &K5V & 5100 & 4.6~~ & $ -0.28~~$& 2& $ 1.235$ & 0.451 & 2& $-0.42~~~~$ \\
10700 &HR509 &G8V & 5300 & 4.4~~ & $ -0.50~~$& 1& $ 1.129$ & 0.385 & 9& $-0.45~~~~$ \\
13445 &HR637 &K1V & 5350 & 4.6~~ & $ -0.24~~$& 2& $ 1.186$ & 0.420 & 2& $-0.43~~~~$ \\
23249 &HR1136 &K0IV & 4800 & 3.9~~ & $ -0.10~~$& 1& $ 1.263$ & 0.435 & 9& $ 0.02~~~~$ \\
26965 &HR1325 &K1V & 5350 & 4.6~~ & $ -0.30~~$& 2& $ 1.198$ & 0.419 & 2& $-0.31~~~~$ \\
38392 &HR1982 &K2V & 4900 & 4.6~~ & $ -0.05~~$& 2& $ 1.301$ & 0.462 & 2& $-0.02~~~~$ \\
63077 &HR3018 &G0V & 5600 & 4.0~~ & $ -1.00~~$& 1& $ 1.016$ & 0.360 & 9& $-1.06~~~~$ \\
72673 &HR3384 &K0V & 5200 & 4.6~~ & $ -0.35~~$& 2& $ 1.157$ & 0.405 & 2& $-0.47~~~~$ \\
100623 &HR4458 &K0V & 5400 & 4.6~~ & $ -0.26~~$& 2& $ 1.183$ & 0.412 & 2& $-0.35~~~~$ \\
102365 &HR4523 &G5V & 5600 & 4.1~~ & $ -0.30~~$& 1& $ 1.080$ & 0.360 & 9& $-0.53~~~~$ \\
131977 &HR5568 &K4V & 4750 & 4.7~~ & $ 0.05~~$& 2& $ 1.420$ & 0.522 & 2& $ 0.20~~~~$ \\
136352 &HR5699 &G4V & 5700 & 4.0~~ & $ -0.40~~$& 1& $ 1.073$ & 0.350 & 9& $-0.46~~~~$ \\
136442 &HR5706 &K0V & 4800 & 3.9~~ & $ 0.35~~$& 2& $ 1.372$ & 0.473 & 2& $ 0.43~~~~$ \\
146233 &HR6060 &G2V & 5750 & 4.2~~ & $ 0.00~~$& 1& $ 1.092$ & 0.335 & 9& $-0.11~~~~$ \\
149661 &HR6171 &K2V & 5300 & 4.6~~ & $ 0.01~~$& 2& $ 1.214$ & 0.397 & 2& $ 0.10~~~~$ \\
153226 &HR6301 &K0V & 5150 & 3.8~~ & $ 0.05~~$& 2& $ 1.260$ & 0.450 & 2& $-0.20~~~~$ \\
160691 &HR6585 &G3IV/V& 5650 & 4.2~~ & $ -0.10~~$& 1& $ 1.124$ & 0.335 & 9& $ 0.15~~~~$ \\
165341 &HR6752A&K0V & 5300 & 4.5~~ & $ -0.10~~$& 1& $ 1.228$ & 0.455 & 9& $-0.53~~~~$ \\
190248 &HR7665 &G7IV & 5550 & 4.4~~ & $ 0.20~~$& 1& $ 1.171$ & 0.345 & 9& $ 0.41~~~~$ \\
192310 &HR7722 &K0V & 5100 & 4.6~~ & $ -0.05~~$& 2& $ 1.255$ & 0.419 & 2& $ 0.16~~~~$ \\
208801 &HR8382 &K2V & 5000 & 4.0~~ & $ 0.00~~$& 2& $ 1.300$ & 0.460 & 2& $ 0.00~~~~$ \\
209100 &HR8387 &K4/5V & 4700 & 4.6~~ & $ -0.13~~$& 2& $ 1.382$ & 0.510 & 2& $ 0.04~~~~$ \\
211998 &HR8515 &F2V: & 5250 & 3.5~~ & $ -1.40~~$& 1& $ 1.033$ & 0.410 & 9& $-1.56~~~~$ \\
216803 &HR8721 &K4V & 4550 & 4.7~~ & $ -0.20~~$& 2& $ 1.413$ & 0.530 & 2& $ 0.04~~~~$ \\\hline
64090 &BD+31 1684 &sdG2 & 5419 & 4.1~~ & $ -1.7~~~$& 4& $ 1.013$ & 0.41\,~~& 7& $-1.72~~~~$ \\
103095 &BD+38 2285 &G8Vp & 4990 & 4.5~~ & $ -1.4~~~$& 5& $ 1.106$ & 0.437 & 8& $-1.30~~~~$ \\
132475 &BD$-$21 4009 &G0V & 5550 & 3.8~~ & $ -1.6~~~$& 7& $ 0.982$ & 0.38\,~~& 7& $-1.60~~~~$ \\
134439 &BD$-$15 4042 &K0/1V & 4850 & 4.5~~ & $ -1.57~~$& 6& $ 1.118$ & 0.447 & 9& $-1.33~~~~$ \\
134440 &BD$-$15 4041 &K0V: & 4754 & 4.1~~ & $ -1.52~~$& 6& $ 1.185$ & 0.478 & 9& $-1.17~~~~$ \\
184499 &BD+32 3474 &G0V & 5610 & 4.0~~ & $ -0.8~~~$& 7& $ 1.032$ & 0.36\,~~& 7& $-0.93~~~~$ \\
201889 &BD+23 4264 &G1V & 5580 & 4.5~~ & $ -1.1~~~$& 7& $ 1.031$ & 0.37\,~~& 7& $-1.06~~~~$ \\
216777 &BD$-$08 5980 &G6V & 5540 & 4.0~~ & $ -0.6~~~$& 7& $ 1.079$ & 0.38\,~~& 7& $-0.80~~~~$ \\
--- &BD+29 ~366 &--- & 5560 & 3.8~~ & $ -1.1~~~$& 7& $ 1.010$ & 0.39\,~~& 7& $-1.49~~~~$ \\
\hline
\end{tabular}
\end{center}
\begin{flushleft}
~~~~~~~~~~~~~~~1 : Morell, K\"allander and Butcher (1992) \\
~~~~~~~~~~~~~~~2 : Morell (1994) \\
~~~~~~~~~~~~~~~3 : Spite, Spite, Maillard (1984) \\
~~~~~~~~~~~~~~~4 : Gilroy et al. (1988), Peterson (1980), Rebolo, Molaro and Beckman (1988) \\
~~~~~~~~~~~~~~~5 : Sneden and Crocker (1988)\\
~~~~~~~~~~~~~~~6 : Peterson (1980), Carney and Peterson (1980) and Peterson (1981) \\
~~~~~~~~~~~~~~~7 : Rebolo, Molaro and Beckman (1988) \\
~~~~~~~~~~~~~~~8 : Taylor (1995) \\
~~~~~~~~~~~~~~~9 : Bessell (1990) \\
\end{flushleft}
\end{table*}
In the next section we develop a photometric abundance index for G
and K dwarfs, which correlates well with the spectroscopic abundances
determined above. Our aim was to find such an index over as wide a
range of metallicity as possible, so we have supplemented the Morell
data (which are almost all relatively metal rich disk stars) with a
small number of metal weak stars for which spectroscopic metallicities
and effective temperatures have been determined from high dispersion
spectral analyses. These stars were found by searching for metal weak
G and K dwarf stars in the ``Catalog of [Fe/H] Determinations''
(Cayrel de Strobel et al. 1992), with high dispersion abundance
analyses, and for which Cousins $R-I$ and Geneva photometry could be
located in the literature. The stars are shown in the last 9 rows of
table 1, and come mostly from Rebolo, Molaro and Beckman
(1988). Sources of all the spectroscopic and photometric data are
shown below the table.
\section{Abundance and effective temperature calibration}
In order to qualitatively understand the effects of [Fe/H] and \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi
on K dwarfs, a set of synthetic spectra of K dwarf stars over a grid
of [Fe/H] and \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi was kindly prepared for us by Uffe Gr\aa e J\o
rgensen. As expected, the main effects of metallicity could be seen in
the blue regions (3000 to 4500 \AA) where line blanketing is readily
apparent.
For all our stars Geneva $(u,b_1,b_2,v_1,v_2,g)$ intermediate band
photometry colours were available in the Geneva Photometric Catalog
(Rufener 1989). Since Geneva photometry is available for a very large
number of nearby G and K dwarfs, our initial attempt was to develop a
photometric calibration based on Geneva colours only. However, it
turned out that we could not estimate effective temperature reliably
enough using Geneva photometry, which led to corresponding
uncertainties in the abundance indices we developed. In the end we
used broadband Cousins $RI$ photometry to estimate effective
temperatures, and the Geneva $b_1$ colour to define an index which
measures line blanketing in the blue and correlates well with the
spectroscopic abundances.
For various plausible colour indices $c_i$ say, being linear
combinations of the six Geneva $(u,b_1,b_2,v_1,v_2,g)$ colours, we
found we could fit linear relations of the form
\begin{equation}
c_i = f_i\,{\mathrm{[Fe/H]}_{\mathrm{Spec}}} + t_i\ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi + a_i.
\end{equation}
with low scatter (i.e. less than a few $\times\,0.01$ mag), where
$f_i, t_i$ and $a_i$ are constants. For any two such indices, $c_1$
and $c_2$ say, two relations can be inverted to derive a calibration
for [Fe/H] and $T_{\mathrm{eff}}$. (Note that we also checked for
dependence of each index on log$(g)$, but no significant dependence
was present for any of the indices tried. Hence we only consider \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi
and [Fe/H] here). We searched for two indices which were respectively
more sensitive to abundance and to temperature, so the inversion would
be as stable as possible. However, for all the filter combinations we
tried, the linear relations fitted were close to being parallel
planes, which is to say that in the spectral region covered by Geneva
photometry, it is difficult to break the degeneracy between abundance
and effective temperature effects for this type of star.
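The plane fit of Eqn (1) is an ordinary least-squares problem. As a minimal numpy sketch, the snippet below fits $b_1$ as a plane in ([Fe/H], \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi) using ten stars taken from Table 1; since only a subset is used, the recovered coefficients differ slightly from the full-sample fit quoted later in Eqn (3):

```python
import numpy as np

# (Teff, spectroscopic [Fe/H], Geneva b1) for ten dwarfs from Table 1
stars = np.array([
    [5650, -0.30, 1.072], [5150, -0.40, 1.252], [5100, -0.28, 1.235],
    [5300, -0.50, 1.129], [5350, -0.24, 1.186], [5350, -0.30, 1.198],
    [4900, -0.05, 1.301], [5600, -1.00, 1.016], [5200, -0.35, 1.157],
    [5400, -0.26, 1.183],
])
teff, feh, b1 = stars.T

# least-squares fit of the plane  b1 = f*[Fe/H] + t*Teff + a  (cf. Eqn 1)
A = np.column_stack([feh, teff, np.ones_like(feh)])
coeffs, *_ = np.linalg.lstsq(A, b1, rcond=None)
f, t, a = coeffs
rms = np.sqrt(np.mean((A @ coeffs - b1) ** 2))
```

The fit gives a positive metallicity coefficient and a negative temperature coefficient, with residuals of a few hundredths of a magnitude, consistent with the low scatter quoted in the text.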
Moving to photometry in the near IR was the obvious way around this
problem, since line blanketing is much weaker in this region. We
gathered $VRI$ photometry from the literature for the stars, and
experimented with the colour indices $V-R$ and $R-I$ ($R$ and $I$ are
throughout the paper on the Cousins system). The $R-I$ data are shown
in the last column of table 1, and are primarily from Morell (1994)
and Bessell (1990). $R-I$ turned out to have no measurable dependence
on the metal abundance of the stars, and could be used as a very
robust temperature estimator, whereas $V-R$ still showed some
dependence on metallicity. We tried combinations of $R-I$ and Geneva
colours and found that $R-I$ and $b_1$ gave an index which correlated
best with the spectroscopic abundances. (All the Geneva colours were
found to measure line blanketing in the blue and correlated with
metallicity to some extent, with the lowest scatter being for
$b_1$). The relations we fit are:
\begin{equation}
R-I = 1.385 - \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi/5413.5
\end{equation}
\begin{equation}
b_1 = 0.121\,{\mathrm{[Fe/H]}_{\mathrm{Spec}}} - \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi/3480.7 + 2.737.
\end{equation}
The scatter around the fits $\Delta b_1$ and $\Delta(R-I)$ are shown
as functions of [Fe/H]$_{\mathrm{Spec}}$, \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi, log(g), $b_1$ and
$R-I$ in Figure 1. There are no apparent systematic residuals in the
fitting as functions of any of these quantities. In particular, in the
case of $R-I$, there is no dependence on [Fe/H]$_{\mathrm{Spec}}$ or
log(g), although neither was explicitly fitted, and in the case of
$b_1$, there is no dependence on log(g), although this was not
explicitly fitted.
\begin{figure}
\input epsf
\centering
\leavevmode
\epsfxsize=0.9
\columnwidth
\epsfbox{fig1.ps}
\caption{The scatter in the fits to $b_1$ and $R-I$ ($\Delta b_1$ and $\Delta
(R-I)$ respectively) is shown as functions of \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi, [Fe/H], log(g),
$b_1$ and $R-I$. There are no apparent residual trends in the fits in
any of these quantities.}
\end{figure}
Inverting these relations, we derive :
\begin{equation}
\ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi = 7494. - 5412.\,(R-I)
\end{equation}
\begin{equation}
{\mathrm{[Fe/H]}_{\mathrm{Gen}}} = 8.248\,b_1-12.822\,(R-I)-4.822
\end{equation}
Eqns (4) and (5) are valid in the range $0.33 \le R-I \le 0.55$,
which corresponds roughly to G0 to K3 dwarfs.
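Eqns (4) and (5) are straightforward to apply; a minimal sketch (the function names are hypothetical) is:

```python
def teff_from_ri(r_i):
    # Eqn 4: effective temperature from Cousins R-I
    # (valid for 0.33 <= R-I <= 0.55, roughly G0 to K3 dwarfs)
    return 7494.0 - 5412.0 * r_i

def feh_gen(b1, r_i):
    # Eqn 5: photometric abundance from Geneva b1 and Cousins R-I
    return 8.248 * b1 - 12.822 * r_i - 4.822
```

For example, HD 10700 ($b_1 = 1.129$, $R-I = 0.385$) gives [Fe/H]$_{\rm Gen} \approx -0.45$, the value listed in Table 1.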
Effective temperature calibrations for the $R-I$ filter have been
made by Bessell, Castelli and Plez (1996) from synthetic spectra and
filter band passes, and by Taylor (1992) who used model atmosphere
analyses of Balmer line wings. We show in Figure 2 Bessell et al.'s
curve (dotted line) for \ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi versus $R-I$ and Taylor's curve (dashed
line), versus our data for the K dwarfs (from table 1). Our simple
linear fit to the data (Eqn 4) is shown as a solid line. Metal weak
stars ([Fe/H] $<-1.0$ are shown as open squares, showing there is no
systematic difference in temperature scale as a function of
abundance. The match between the data and the three calibrations is
quite satisfactory in the region $0.33 \le R-I \le 0.55$. For cooler
stars ($R-I \ga 0.55$ i.e. later than about K3) there is a good
indication from the Bessell models that our linear fit cannot simply
be extrapolated outwards. For stars later than about K3 obtaining
accurate abundances from high dispersion spectra becomes increasingly
difficult because of the increasing effects of molecular opacity, and
it was for this reason that the Morell sample stopped at K3. Stellar
atmosphere models and line lists are rapidly improving for cooler
stars however, and it should soon be possible to obtain the
spectroscopic abundances necessary to extend the calibration to cooler
stars still.
\begin{figure}
\input epsf
\centering
\leavevmode
\epsfxsize=0.9
\columnwidth
\epsffile{fig2.ps}
\caption{\ifmmode{~T_{\mathrm{eff}}}\else$T_{\mathrm{eff}}$~\fi versus $R-I$. The solid line shows our least squares
fit (Eqn. 4), the dotted line the Bessell, Castelli and Plez (1996)
relation based on synthetic spectra, and the dashed line the Taylor
(1992) relation based on analysis of Balmer line wings. Open symbols
are stars with [Fe/H]$<-1.0$.}
\end{figure}
In Figure 3 we show the abundances [Fe/H]$_{\mathrm{Gen}}$ derived using
Eqn. 5 from the $b_1,R-I$ photometry versus the spectroscopically
determined abundances [Fe/H]$_{\mathrm{Spec}}$ for the stars. The
scatter is 0.18 dex.
\begin{figure}
\input epsf
\centering
\leavevmode
\epsfxsize=0.9
\columnwidth
\epsfbox{fig3.ps}
\caption{Our final abundance calibration, showing the photometric
abundances [Fe/H]$_{\mathrm{Gen}}$ (calculated using Eqn 5) versus the
spectroscopic abundances [Fe/H]$_{\mathrm{Spec}}$. The line is the 1:1
relation. The scatter for the transformation is 0.18 dex.}
\end{figure}
\begin{figure}
\input epsf
\centering
\leavevmode
\epsfxsize=0.9
\columnwidth
\epsfbox{fig4.ps}
\caption{The lower panel shows Hyads in the $V$ versus $R-I$ plane.
The upper panel shows the abundance estimate for each star as a
function of $R-I$ colour. The mean abundance for the stars is [Fe/H]
$=0.14\pm0.03$, with a scatter around the mean of 0.17 dex,
representing the error in an individual measurement.}
\end{figure}
\begin{table}
\small
\caption{Hyades G and K dwarfs.}
\begin{center}
\begin{tabular}{llrrrr}
\hline
Name & Sp & $V~~$ & $R-I$ & $b_1$~~ & [Fe/H]$_{\rm Gen}$ \\
BD +20 598 & (G5) & 9.37 & 0.45 & 1.283 &$ -0.01~~~ $ \\
BD +26 722 & (G5) & 9.18 & 0.35 & 1.166 &$ 0.31~~~ $ \\
HD 26756 & G5V & 8.46 & 0.35 & 1.117 &$ -0.10~~~ $ \\
HD 26767 & (G0) & 8.04 & 0.31 & 1.087 &$ 0.17~~~ $ \\
HD 27771 & K1V & 9.09 & 0.39 & 1.220 &$ 0.24~~~ $ \\
HD 28099 & G8V & 8.09 & 0.32 & 1.095 &$ 0.11~~~ $ \\
HD 28258 & K0V & 9.03 & 0.43 & 1.219 &$ -0.28~~~ $ \\
HD 28805 & G8V & 8.66 & 0.35 & 1.157 &$ 0.23~~~ $ \\
HD 28878 & K2V & 9.39 & 0.41 & 1.246 &$ 0.20~~~ $ \\
HD 28977 & K2V & 9.67 & 0.44 & 1.274 &$ 0.04~~~ $ \\
HD 29159 & K1V & 9.36 & 0.41 & 1.243 &$ 0.17~~~ $ \\
HD 30246 & (G5) & 8.30 & 0.33 & 1.101 &$ 0.03~~~ $ \\
HD 30505 & K0V & 8.97 & 0.38 & 1.225 &$ 0.41~~~ $ \\
HD 32347 & (K0) & 9.00 & 0.36 & 1.174 &$ 0.25~~~ $ \\
HD 284253 & K0V & 9.11 & 0.38 & 1.188 &$ 0.10~~~ $ \\
HD 284787 & (G5) & 9.04 & 0.40 & 1.223 &$ 0.14~~~ $ \\
HD 285252 & (K2) & 8.99 & 0.41 & 1.261 &$ 0.32~~~ $ \\
HD 285690 & K3V & 9.62 & 0.44 & 1.320 &$ 0.42~~~ $ \\
HD 285742 & K4V &10.27 & 0.49 & 1.362 &$ 0.13~~~ $ \\
HD 285773 & K0V & 8.94 & 0.41 & 1.210 &$ -0.10~~~ $ \\
HD 285830 & --- & 9.47 & 0.44 & 1.277 &$ 0.07~~~ $ \\
HD 286789 & --- &10.50 & 0.52 & 1.434 &$ 0.34~~~ $ \\
HD 286929 & (K7) &10.03 & 0.51 & 1.391 &$ 0.11~~~ $ \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{A check using the Hyades}
A check of our calibration was made by gathering from the literature
Geneva and $VRI$ photometry for G and K dwarfs in the Hyades
cluster. Cousins $R-I$ colours going well down the main sequence of
the Hyades are available from Reid (1993), and a table of the stars,
their broadband colours and Geneva $b_1$ colour is shown as Table 2.
Figure 4(a) shows the colour magnitude diagram in $V$ versus $R-I$ for
the Hyades G and K dwarfs. For each star the abundance was estimated
using Eqn 5, and is shown in column 6 in table 2. The abundances are
plotted against the $R-I$ colour in Figure 4(b). The mean abundance of
the stars is [Fe/H] $= 0.14 \pm 0.03$ with a dispersion of 0.17 dex,
representing the error in an individual measurement. Taylor (1994)
summarises the literature on the Hyades abundance and gives a best
estimate of [Fe/H]$=0.11\pm0.01$, so our mean abundance of
$0.14\pm0.03$ dex is quite satisfactory. We also note that there is no
indication of a trend of derived Hyades abundances as a function of
$R-I$ colour, so that for these metal rich stars the
temperature-colour relation appears satisfactory.
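The Hyades check can be reproduced directly from the $b_1$ and $R-I$ columns of Table 2; the sketch below applies Eqn 5 to those data and recovers the quoted mean, scatter, and error of the mean:

```python
import numpy as np

# (b1, R-I) for the Hyades G and K dwarfs of Table 2
hyades = np.array([
    [1.283, 0.45], [1.166, 0.35], [1.117, 0.35], [1.087, 0.31],
    [1.220, 0.39], [1.095, 0.32], [1.219, 0.43], [1.157, 0.35],
    [1.246, 0.41], [1.274, 0.44], [1.243, 0.41], [1.101, 0.33],
    [1.225, 0.38], [1.174, 0.36], [1.188, 0.38], [1.223, 0.40],
    [1.261, 0.41], [1.320, 0.44], [1.362, 0.49], [1.210, 0.41],
    [1.277, 0.44], [1.434, 0.52], [1.391, 0.51],
])
b1, r_i = hyades.T
feh = 8.248 * b1 - 12.822 * r_i - 4.822   # Eqn 5

mean = feh.mean()
scatter = feh.std(ddof=1)                 # error of a single measurement
sem = scatter / np.sqrt(len(feh))         # error of the mean
```

This yields a mean of about 0.14 dex with a dispersion of about 0.17 dex, matching the values quoted above.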
\section{Abundances and kinematics of Gliese G and K dwarfs}
The Gliese catalog contains around 800 dwarfs classified between G0
and K3 and estimated to be within 25 pc of the Sun. As a pilot study
for what will be possible after the {\it Hipparcos} data are
available, we have determined abundances for a subset of these stars
having absolute magnitude and space velocity data in the Gliese
catalog and photometric data available in the literature. The sample
presented here is therefore somewhat inhomogeneous, but the kinematic
and abundance properties of the sample nevertheless allow us to
compare with previous work on G dwarfs based on other abundance
estimation methods, and to show that the G dwarf problem probably extends to the
K dwarfs.
We obtained the 1991 version of the Gliese catalog (``Preliminary
Version of the Third Catalogue of Nearby Stars'' Gliese and Jahrei\ss)
from the Centre de Donn\'ees astronomiques de Strasbourg and matched
up objects with Bessell's (1990) catalog of $UBVRI$ photometry for
Gliese stars. We selected stars in $0.33 \le R-I \le 0.55$, the colour
range of the abundance calibration. For all these stars $b_1$ data
were obtained from the Geneva catalog. Some care was needed when
matching the $UBVRI$ and Geneva photometry for stars which were
members of multiple systems to be sure the right components were
matched; all doubtful cases were excluded from further
consideration. Our final list of 184 stars contained $UBVRI$ and
Geneva photometry, parallax and absolute magnitude data, as well as
$U$, $V$ and $W$ velocities for each star. (Here $U$, $V$ and $W$ are
the usual space motions of the star respectively toward the galactic
center, the direction of galactic rotation and the north galactic
pole.) Abundances for the stars were derived from the $R-I$ and $b_1$
data using Eqn 5.
\begin{figure}
\input epsf
\centering
\leavevmode
\epsfxsize=0.9
\columnwidth
\epsfbox{fig5.ps}
\caption{Colour magnitude diagram for the Gliese G and K stars.
The positions of the main sequence and giant branch are shown. The
dotted line indicates our division into giants (above) and dwarfs
(below) using $M_V=4.35$}
\end{figure}
Figure 5 shows a colour magnitude diagram for the stars: absolute
visual magnitude $M_V$ versus $R-I$ colour, covering approximately G0
to K3. The solid lines show the positions of the main sequence and the
giant branch in this plane. Giants and dwarfs have been separated
using the dashed line (i.e. $M_V=4.35$ and $0.33 \le R-I \le 0.55$).
There are a number of stars up to 2 magnitudes below the main
sequence seen in the diagram. While such stars would classically be
termed subdwarfs, the recent results from the 30 month solution for
{\it Hipparcos} (H30 -- Perryman et al. 1995) show that the status of
these objects is now uncertain. In fact, most of the objects below the
main sequence are not seen at all in H30 (Perryman et al., their
Figs.~8(a) and (b)). Perryman et al. give ``a combination of
improved parallaxes, and improved colour indices, the influence of the
latter being particularly important'' as the possible cause.
\footnote{Our measured abundances and the velocities of these ``subdwarfs''
indicate that they really are normal disk stars, as H30 appears to
show.} In addition, the H30 results indicate that about two thirds of
the stars in the Gliese catalog are not actually within 25 pc, so that
we can expect some changes within Figure 5 after the {\it Hipparcos}
data become available. The data for most of the G and K dwarfs
analysed here are nevertheless of good quality, and we examine their
kinematic and chemical properties next.
\begin{figure}
\input epsf
\centering
\leavevmode
\epsfxsize=0.9
\columnwidth
\epsfbox{fig6.ps}
\caption{Stellar kinematics of Gliese G and K dwarfs as a function of
abundance. Lower panels show the individual velocities in $U$, $V$ and
$W$ as a function of [Fe/H]. Middle panels show the run of mean
velocity, and the upper panels the run of velocity dispersion with
[Fe/H].}
\end{figure}
In Figure 6 we show the $U$, $V$ and $W$ velocities of the stars as
a function of [Fe/H], as well as the run of mean velocity $<U>$, $<V>$
and $<W>$ and the velocity dispersions $\sigma_U$, $\sigma_V$,
$\sigma_W$. \footnote{The velocities here have been corrected for a
solar motion of $U_\odot=10$ \ifmmode{$~km\thinspace s$^{-1}}\else km\thinspace s$^{-1}$\fi, $V_\odot=15$ \ifmmode{$~km\thinspace s$^{-1}}\else km\thinspace s$^{-1}$\fi and $W_\odot=8$
\ifmmode{$~km\thinspace s$^{-1}}\else km\thinspace s$^{-1}$\fi (Kerr and Lynden-Bell 1986)} The figure shows the well recognised
properties of the old disk, thick disk and halo as has been seen previously
in F and G dwarfs and in K giants (see e.g. Freeman 1987, 1996).
\subsection{Old disk stars}
The old disk is traced in Figure 6 by stars with
[Fe/H]$\ga-0.5$. The trend in the velocity dispersions is a slow
increase with decreasing abundance, with a possible jump in velocity
dispersion in the thick disk regime $ -1 \la $[Fe/H]$ \la -0.6$. This
behaviour is very similar to that seen in local F and G dwarfs
(e.g. Edvardsson et al 1993) and for K giants (e.g. Janes 1979, Flynn
and Fuchs 1994). For 142 stars with [Fe/H]$\,>-0.5$, we find $(\sigma_U,
\sigma_V, \sigma_W) = (37\pm3, 24\pm2, 17\pm1)$ \ifmmode{$~km\thinspace s$^{-1}}\else km\thinspace s$^{-1}$\fi.
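The dispersion calculation itself is simple to sketch. Since the individual stellar velocities are not tabulated here, the check below uses synthetic velocities drawn at the quoted old-disk dispersions, purely to illustrate the computation; the function name is hypothetical, and the $\sigma/\sqrt{2N}$ error formula is the standard large-$N$ approximation, assumed here since the text does not state how its quoted errors were derived:

```python
import numpy as np

def velocity_dispersions(u, v, w):
    """Velocity dispersions with approximate standard errors sigma/sqrt(2N)."""
    out = []
    for comp in (u, v, w):
        n = len(comp)
        sig = float(np.std(comp, ddof=1))
        out.append((sig, sig / np.sqrt(2.0 * n)))
    return out

# synthetic "old disk" sample: 142 stars drawn at the quoted dispersions
# (37, 24, 17) km/s -- illustrative only
rng = np.random.default_rng(1)
u, v, w = (rng.normal(0.0, sd, 142) for sd in (37.0, 24.0, 17.0))
stats = velocity_dispersions(u, v, w)
```

With 142 stars the recovered dispersions scatter around the input values by roughly the quoted errors of a few km/s.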
\subsection{Thick disk stars}
Stars in the range $-1 < $[Fe/H]$ \la -0.6$ show a higher velocity
dispersion in all three space motions, and can be identified with the
thick disk. The ratio of thick disk ($-1 < $[Fe/H]$ < -0.6$) to disk
stars ([Fe/H]$ > -0.5$) in the sample is $0.09\pm0.02$, which is the
thick disk local normalisation in this volume limited sample. This is
in accord with literature estimates of the thick disk local
normalisation, which vary between approximately 2 per cent and 10 per
cent (Reid and Majewski 1993). The elements of the thick disk velocity
ellipsoid\footnote{Note that the star at [Fe/H]$=-0.67, U=-177$ \ifmmode{$~km\thinspace s$^{-1}}\else km\thinspace s$^{-1}$\fi
has been excluded from these calculations, as it may well be a halo
interloper} are $(\sigma_U, \sigma_V, \sigma_W) = (45\pm12, 44\pm11,
35\pm9)$ \ifmmode{$~km\thinspace s$^{-1}}\else km\thinspace s$^{-1}$\fi for 16 stars in the range $-1 <$ [Fe/H]$ < -0.6$, and the
asymmetric drift is approximately 30 \ifmmode{$~km\thinspace s$^{-1}}\else km\thinspace s$^{-1}$\fi, all in good accord with
previous work (see e.g. Freeman 1987, 1996).
\subsection{Metal weak stars}
There are 7 stars with [Fe/H] $<-1$ (two of which are bound to each
other: HD 134439 and HD 134440, or Gliese 579.2A and B respectively). In
a total sample of 184 stars, 7 halo stars seem rather an
embarrassment of riches (Bahcall (1986) estimates the local disk to
halo normalisation as 500:1, while Morrison (1993) estimates
850:1). Halo stars are probably over-represented in this sample
because they are more likely to be targeted for photometric
observations. It will be interesting to return to the halo and thick
disk normalisations after the {\it Hipparcos} data are available and we can
define and observe a complete volume limited sample of K dwarfs.
\section{The metallicity distribution}
As discussed in the introduction, the metallicity distribution of
local G dwarfs has long presented a challenge for explaining the
buildup of metallicity in the Galactic disk. We present in this
section the abundance distribution for G dwarfs and for K dwarfs using
our abundance estimator.
In order to compare to recent work on the G dwarf problem
(e.g. Pagel 1989, Sommer-Larsen 1991, Wyse and Gilmore 1995,
Rocha-Pinto and Maciel 1996), we convert [Fe/H] to [O/H] using the
relations
\begin{flushleft}
\begin{eqnarray}
{\mathrm{[O/H]}} = 0.5{\mathrm{[Fe/H]}} ~~~~~~~~~~ {\mathrm{[Fe/H]}} \ge -1.2\cr
{\mathrm{[O/H]}} = 0.6+{\mathrm{[Fe/H]}} ~~~~~~ {\mathrm{[Fe/H]}} < -1.2.
\end{eqnarray}
\end{flushleft}
When comparing to models, Oxygen abundances are more appropriate
because Oxygen is produced in short lived stars and can be treated
using the convenient approximation of instantaneous recycling -- see
e.g. Pagel (1989).
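As a check of the bookkeeping, the piecewise relations above are simple to
apply in code; a minimal sketch (the function name is ours):

```python
def feh_to_oh(feh):
    """[Fe/H] -> [O/H] via the piecewise relations of the text:
    [O/H] = 0.5 [Fe/H]    for [Fe/H] >= -1.2,
    [O/H] = 0.6 + [Fe/H]  for [Fe/H] <  -1.2.
    The two branches agree at the break point [Fe/H] = -1.2."""
    if feh >= -1.2:
        return 0.5 * feh
    return 0.6 + feh

# Continuity at the break point: both branches give [O/H] = -0.6
assert abs(feh_to_oh(-1.2) - (-0.6)) < 1e-12
```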
\begin{table}
\small
\caption{Abundance Distributions for G and K dwarfs.}
\begin{center}
\begin{tabular}{rrrrr}
\hline
[O/H]& $N_G$& $f_G$ & $N_K$ & $f_K$ \\
$-1.25$ & 0 & 0.000 & 0 & 0.000 \\
$-1.15$ & 0 & 0.000 & 1 & 0.011 \\
$-1.05$ & 0 & 0.000 & 0 & 0.000 \\
$-0.95$ & 0 & 0.000 & 0 & 0.000 \\
$-0.85$ & 0 & 0.000 & 0 & 0.000 \\
$-0.75$ & 0 & 0.000 & 2 & 0.023 \\
$-0.65$ & 0 & 0.000 & 2 & 0.023 \\
$-0.55$ & 0 & 0.000 & 2 & 0.023 \\
$-0.45$ & 1 & 0.010 & 2 & 0.023 \\
$-0.35$ & 7 & 0.072 & 5 & 0.057 \\
$-0.25$ & 28 & 0.289 & 13 & 0.149 \\
$-0.15$ & 19 & 0.196 & 19 & 0.218 \\
$-0.05$ & 19 & 0.196 & 18 & 0.207 \\
$ 0.05$ & 15 & 0.155 & 10 & 0.115 \\
$ 0.15$ & 6 & 0.062 & 7 & 0.080 \\
$ 0.25$ & 1 & 0.010 & 3 & 0.034 \\
$ 0.35$ & 1 & 0.010 & 3 & 0.034 \\
$ 0.45$ & 0 & 0.000 & 0 & 0.000 \\
\hline
\end{tabular}
\end{center}
The number of G dwarfs, $N_G$ and the number of K dwarfs, $N_K$ binned
by [O/H] in bins 0.1 dex wide, where column 1 indicates the bin
center. There are 97 G dwarfs and 87 K dwarfs. The relative numbers,
normalised to the sample sizes, are shown in the columns headed $f_G$
and $f_K$. See also Figs 7(b) and (c).
\end{table}
In Figure 7(a) we show the distribution of [O/H] derived using Eqn
6, as a function of $R-I$ colour. The approximate positions of G0, K0
and K3 spectral types are indicated below the stars. Dividing the
stars into G and K dwarfs, we show histograms of the stellar abundance
in the lower panels, normalised to the sample sizes. In Figure 7(b),
the distribution of [O/H] for 97 G dwarfs shows the well known paucity
of metal-poor stars relative to metal-rich stars, i.e.\ the G dwarf
problem. The dotted line shows the G dwarf abundance distribution of
Pagel and Patchett (1975) and the dashed line the distribution for
local G dwarfs of Rocha-Pinto and Maciel (1996). Our histogram
follows the previous determinations well, indicating our abundance
scale is in good agreement with previous work.
In Fig 7(c) we show the [O/H] distribution for 87 K dwarfs (solid
line). Abundance histograms of this type have been determined by Rana
and Basu (1990) for F, G and K dwarfs, from the 1984 Catalog of
Spectroscopic Abundances (Cayrel de Strobel et al. 1985). This
procedure suffers from the shortcoming that the abundance data are
inhomogeneous, but we show the abundance distribution of 60 K dwarfs
from Rana and Basu by the dotted line in Fig 7(c). Our distribution
and that of Rana and Basu are in broad agreement, and are similar to
the abundance distributions for the G dwarfs.
\begin{figure}
\input epsf
\centering
\leavevmode
\epsfxsize=0.9
\columnwidth
\epsfbox{fig7.ps}
\caption{Panel (a) Oxygen abundances for the stars as a function of
$R-I$ colour. Panel (b) shows the histogram of [O/H] in the G dwarfs
in this sample (solid line), for the Pagel and Patchett (1975) sample
(dotted line) and the Rocha-Pinto and Maciel sample (dashed line).
Panel (c) shows our [O/H] distribution for the K dwarfs (solid line)
and that of Rana and Basu (dotted line).}
\end{figure}
Both the G and K dwarf abundance distributions presented here suffer,
however, from selection bias. Since high proper motion or high
velocity stars are frequently targeted for parallax or photometric
observations, metal weak stars are likely to be over-represented, as
was demonstrated for the halo stars ([Fe/H]$<-1$) in section 4.3, and
even stars with thick disk abundances could be
over-represented. Hence, we regard the abundance distributions
reported here cautiously, but nevertheless remark that the K dwarf
abundance distribution is quite similar to that of the G dwarfs, and
offers {\it prima facie} evidence for a ``K dwarf problem''. The {\it
Hipparcos} data offer the exciting prospect in the near future of
defining a large, complete and volume limited sample of G and/or K
dwarfs, largely circumventing the above difficulties. Our abundance
distributions for the G and K dwarfs are shown in Table 3.
In summary, the kinematics and abundances of the G and K dwarfs
examined in this section follow the trends already well established in
the solar neighbourhood for F and G dwarfs and for K giants. The
ability to measure abundance in K dwarfs offers several interesting
possibilities, especially after the {\it Hipparcos} data become
available. As future work, we plan to analyse the metallicity
distribution of an absolute magnitude selected and volume limited
sample of K dwarfs, which will give us a very clean view of
metallicity evolution in the solar cylinder.
\section{Conclusions}
We have calibrated an abundance index for G and K dwarfs, which uses
broadband Cousins $R-I$ photometry to estimate stellar effective
temperature, and the Geneva $b_1$ colour to measure line blanketing in
the blue. Our calibration is based on a recent sample of accurate
abundance determinations for disk G and K dwarfs, (Morell, K\"allander
and Butcher 1992 and Morell 1994) determined from high dispersion
spectra. The index gives [Fe/H] estimates with a scatter of 0.2 dex
for G0 to K3 dwarfs. The [Fe/H] estimator has been checked using the
stars in the Hyades cluster, and we derive a mean abundance for the
stars of [Fe/H]$=0.14$ dex, consistent with previous determinations.
We take a sample of G and K dwarfs from the Gliese catalog, find $R-I$
and $b_1$ data from the literature, and derive the local abundance
distribution for approximately 200 G and K dwarfs. The kinematics of
the G and K dwarfs are examined as a function of abundance, the K dwarfs
for the first time, and we see the well known kinematic properties of
the local neighbourhood, as established in the literature from studies
of F and G dwarfs and from K giants. The abundance distributions in
the G and K dwarfs are quite similar, indicating that the ``G dwarf
problem'' extends to the K dwarfs.
\section*{Acknowledgments}
We thank Bernard Pagel for many helpful discussions and Helio
Rocha-Pinto for interesting comments. This research has made extensive
use of the Simbad database, operated at CDS, Strasbourg, France.
\section{Introduction}
One of the long standing problems in multiparticle dynamics is
an integrated description of multiplicity distributions and
correlations. In particular, from the
point of view of perturbative QCD, these observables are not
satisfactorily described: this is particularly
evident in restricted regions of
phase space, where the most interesting experimental results
are.\cite{VietriRU}
The discrete approximation to QCD~\cite{discreteQCD}
allows one to perform
exclusive and inclusive calculations at parton level;
in this talk,
a few results are described concerning Multiplicity Distributions (MD's)
and fluctuations.
\section{A short description of discrete QCD}
Here I will present only the fundamental ideas behind this
model; for a detailed discussion, follow the pointers
in the bibliography.\cite{discreteQCD}
Consider an event in which a quark-antiquark pair is produced:
the cross section for the emission of gluons by this pair
is, in the Double Log Approximation (DLA), given by the
``colour dipole radiation'' formula:\cite{QCDcoherence}
\begin{equation}
dP \approx \frac{C \alpha_s}{\pi} \frac{d k_{\perp}^2}{k_{\perp}^2} dy
\label{eq:dipole}
\end{equation}
where $C$ is a constant depending on the colour charges of the partons
at the end points of the dipole, and $y$ and $k_{\perp}$ are the rapidity
and transverse momentum of the emitted gluon, respectively. Notice
that the azimuthal direction is here integrated over, but it is
of course possible to take it fully into account.
The phase space limits for the gluon emission are given
approximately by
\begin{equation}
|y| < \log(W/k_{\perp}) \label{eq:triangle}
\end{equation}
where $W$ is the dipole mass (which in the case of the original
$q\bar q$ dipole equals the c.m.\ energy);
the rapidity range available for the first gluon emission is thus
$\Delta y = \log(W^2/k_{\perp}^2)$. After the first emission, the three partons
form two independent dipoles: they will in turn emit gluons according to
Eq.~\ref{eq:dipole} (in the c.m.\ frame of each dipole)
and the cascade develops.\cite{Dipole}
This is illustrated in Figure \ref{fig:triangle} (left part)
where it is also shown that, because the gluons carry colour charges,
the phase space actually increases with each emission: a new double
faced triangular region is added for each new gluon emitted;
notice that according to Eq.~\ref{eq:triangle} the baseline
of each face added is half the height of the same face.
The Modified Leading Log Approximation (MLLA) changes this picture
in that it provides
a region of phase space, close to the end points of the dipole,
where gluon emission is suppressed.\cite{pQCD}
Thus the effective rapidity range becomes smaller:
for example, for a $gg$ dipole one has for large $W$:
\begin{equation}
\Delta y = \log\frac{W^2}{k_{\perp}^2} - {\delta y_g} \label{eq:mlla}
\end{equation}
It can be argued,\cite{discreteQCD,Gustafson} from the form of the beta term
in the Callan-Symanzik equation, that ${\delta y_g}$ equals $11/6$.
Dipole splitting with ordering in $k_{\perp}$ can also
be shown to be equivalent to the angular ordering prescription
in standard parton cascades.\cite{discreteQCD,pQCD}
\begin{figure}[t]
\begin{center}
\mbox{\epsfig{file=triangolo.eps,height=6.2cm}}
\end{center}
\caption[Phase space]{{\textbf{left)}} Phase space in the Dipole Cascade Model.
The dimensions shown are $y$ and $\kappa = \log k_{\perp}^2$, respectively
rapidity and log of transverse momentum, taken in each dipole's c.m.\ frame.
{\textbf{right)}} Phase space
in the discrete QCD model. Emission of ``effective gluons''
can only take place at the points marked with a cross.
}
\label{fig:triangle}
\end{figure}
In the discrete QCD model,\cite{discreteQCD}
the idea expressed in Eq.~\ref{eq:mlla} is carried to its
extreme consequences: it is assumed that if two gluons are
closer than ${\delta y_g}$ they will effectively be re-absorbed into a single
gluon. One needs then only consider ``effective gluons'': the
``rapidity'' dimension of phase
space is thus partitioned in discrete steps of size ${\delta y_g}$.
However, this is true also for every triangle ``added'' to the
original phase space, which means that discrete steps must be taken
also in the vertical dimension, i.e., in $\kappa \equiv \log k_{\perp}^2$:
phase space is
thus partitioned in rectangular cells of size ${\delta y_g} \times 2{\delta y_g}$.
In a complementary way, one can say that emission of effective gluons
takes place only at a discrete lattice of points, ${\delta y_g}$ units apart
horizontally and $2 {\delta y_g}$ units apart vertically
(see Figure~\ref{fig:triangle}).
From the very structure of the cascade described above,
it follows that each ``column'' of cells is independent of the others:
such a column will be called a {\em tree} because of the
possibility of emitting further triangles (``branches'').
The probability of emitting a gluon with transverse momentum
$\kappa$ in such a cell can be obtained from Eq.~\ref{eq:dipole}
using the leading order expression for the running coupling
constant $\alpha_s$ and the appropriate colour factor for the $g\to gg$
vertex:
\begin{equation}
dP = \frac{1}{{\delta y_g}} \frac{d\kappa}{\kappa} {\delta y_g} = \frac{d\kappa}{\kappa}
\end{equation}
Normalizing with a Sudakov factor, the distribution in $\kappa$
for an effective gluon at a given rapidity is uniform:
\begin{equation}
d {\mathrm{Prob}} = \exp\left\{ -\int_\kappa^{\kappa_{\mathrm{max}}}
dP \right\} dP = \frac{d\kappa}{\kappa_{\mathrm{max}}}
\label{eq:dprob}
\end{equation}
where $\kappa_{\mathrm{max}}$ corresponds to the maximum $k_{\perp}$ available
at that rapidity.
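The uniformity follows in one line: with $dP = d\kappa/\kappa$, the Sudakov
exponent integrates to a logarithm that exactly cancels the $1/\kappa$ of the
emission density,

```latex
\exp\left\{-\int_\kappa^{\kappa_{\mathrm{max}}}\frac{d\kappa'}{\kappa'}\right\}
\frac{d\kappa}{\kappa}
= \exp\left\{-\log\frac{\kappa_{\mathrm{max}}}{\kappa}\right\}
\frac{d\kappa}{\kappa}
= \frac{\kappa}{\kappa_{\mathrm{max}}}\,\frac{d\kappa}{\kappa}
= \frac{d\kappa}{\kappa_{\mathrm{max}}}.
```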
Each triangle emerging from the original one can then be described
by the position and length of its baseline (both as
integer multiples of ${\delta y_g}$):
an event is a collection
of independent trees of different height, the number of trees
being given by $L = 2\log (W/{\delta y_g})$. The maximum height ${H_j^L}$
of each tree (numbering, e.g., the trees from left to right)
depends only on the tree's position $j$ and on the event ``baseline
length'' $L$.
A tree of height $H$ corresponds to an emitted gluon of
transverse momentum $\kappa = H \cdot 2 {\delta y_g}$. A tree of height $0$
corresponds to the emission of no gluons; notice that this introduces
a natural cut-off for cascade evolution. Notice also that
only a finite number of gluons can be emitted. These
points are important in characterizing discrete QCD.
In order to calculate the multiplicity distribution,
one can now define an {\em $N$-tree} as a tree with height between
$0$ and $N-1$ (in units of ${\delta y_g}$) where $N$ is an integer.
It is convenient to define also a {\em true $N$-tree} as a tree
of height $N-1$. Because of Eq.~\ref{eq:dprob}, an $N$-tree is
obtained by summing true $H$-trees ($H=1,\ldots,N$) with equal
probability $1/N$. This is easily expressed in terms of
the generating functions for the MD's in an $N$-tree, ${\mathcal P}_N(z)$,
and in a true $N$-tree, ${\mathcal Q}_N(z)$:
\begin{equation}
{\mathcal P}_N(z) = \frac{1}{N} \sum_{H=1}^{N} {\mathcal Q}_H(z) \label{eq:one}
\end{equation}
It will also be recognized that a true $(N+1)$-tree can be obtained from a
true $N$-tree by attaching two $N$-trees to it (one on each side)
\begin{equation}
{\mathcal Q}_{N+1}(z) = {\mathcal Q}_N(z) \left[ {\mathcal P}_N(z) \right]^2 \label{eq:two}
\end{equation}
Using the obvious initial conditions:
\begin{equation}
{\mathcal P}_1(z) = {\mathcal Q}_1(z) = 1 ; \qquad {\mathcal Q}_2(z) = z ; \qquad
{\mathcal P}_2(z) = \frac{1+z}{2} \label{eq:initcond}
\end{equation}
Eq.s~\ref{eq:one} and \ref{eq:two} can be solved,\cite{discreteQCD}
thus giving the MD in a tree.
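A sketch of how the recurrences above can be iterated numerically, representing
each generating function by its coefficient list in $z$ (the coefficient of
$z^n$ being the probability of parton multiplicity $n$); this is an
illustration of the iteration, not the closed-form solution of the reference:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (index = power of z)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    out = [0.0] * max(len(a), len(b))
    for i, ai in enumerate(a):
        out[i] += ai
    for j, bj in enumerate(b):
        out[j] += bj
    return out

def tree_distributions(n_max):
    """Coefficient lists of Q_N (true N-trees) and P_N (N-trees), from
    P_N = (1/N) sum_{H=1..N} Q_H  and  Q_{N+1} = Q_N * P_N^2,
    with the initial conditions Q_1 = P_1 = 1 and Q_2 = z."""
    Q = {1: [1.0], 2: [0.0, 1.0]}
    P = {1: [1.0]}
    for N in range(2, n_max + 1):
        s = [0.0]
        for H in range(1, N + 1):
            s = poly_add(s, Q[H])
        P[N] = [c / N for c in s]
        Q[N + 1] = poly_mul(Q[N], poly_mul(P[N], P[N]))
    return P, Q

P, Q = tree_distributions(4)
# P_2 = (1 + z)/2 as in the initial conditions, and each P_N(1) = 1:
# the multiplicity probabilities sum to unity.
```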
\section{Calculations in rapidity intervals}
In order to approach the calculation in rapidity intervals,
one should understand that the size of ${\delta y_g}$ does not
imply that there is no structure at distances smaller than $11/6$.
In fact, ${\delta y_g}$ corresponds to a separation
in rapidity along the original $q\bar q$ axis (which one can identify
to a good approximation with the thrust axis of the event)
only for trees which `grow' from the original baseline, while inside
each branch, ${\delta y_g}$ is a distance along the local `jet' direction.
Thus one can say that gluons belonging to the same tree are all
localized in rapidity inside one unit of ${\delta y_g}$.
That this is
actually a good approximation has been checked with a Monte Carlo
implementation of the discrete QCD model. In this framework one can then
calculate observables integrated over intervals
of rapidity whose width is a multiple of ${\delta y_g}$, i.e., which contain
a whole number of trees. So one recognizes that the MD
in any of these rapidity intervals is given by ${\mathcal P}_{{H_j^L}}(z)$.
Adjacent trees are independent, and so are adjacent rapidity intervals
in this approximation; thus the MD generating function in an interval
$\Delta y$ is given by
\begin{equation}
{\mathcal P}_{\Delta y}(z)\; = \prod_{j \in \Delta y} {\mathcal P}_{{H_j^L}}(z)
\end{equation}
where the product is over all trees inside the interval $\Delta y$.
Figure \ref{fig:md}, on the left, shows the resulting MD in
different central rapidity intervals: the solid
line shows a negative binomial distribution (NBD), phenomenologically
widely used for describing the data.\cite{VietriRU}
The same figure shows on the right the MD for what I will call
in the following ``2-jet like'' events, obtained by cutting
away the tip of the phase space triangle,
i.e., requiring a cut-off
on the {\em maximum} $k_{\perp}$ of an emitted gluon.
Notice that in the standard events, for small intervals,
the NBD behaves w.r.t. the points in a way similar
to the way it behaves w.r.t.\ experimental data on hadronic MD,
going first below the data, then above and then below again.\cite{DEL:2}
In the 2-jet like events, on the other hand,
the NBD is closer to the points than in the standard events:
one is reminded of the experimental result that a NBD is found
to describe the hadronic MD in 2-jet events (defined via a jet-finder
algorithm) better than in the full sample.\cite{DEL:4}
\begin{figure}[t]
\begin{center}
\mbox{\epsfig{file=draw_yint_t.ps,%
bbllx=23pt,bblly=78pt,bburx=550pt,bbury=388pt,width=11.5cm}}
\end{center}
\caption[MD in intervals]{{\textbf{a)}}
Multiplicity distributions in central intervals
of rapidity of size (from bottom to top) 2, 4, 6, 8 and 10
(in units of ${\delta y_g}$) for standard events with baseline length $L=12$.
Each distribution is divided by 10 with respect
to one above it in order to make the figure legible.
{\textbf{b)}} Same as in a), but for ``2-jet like'' events.}
\label{fig:md}
\end{figure}
In the literature, the NBD is often explained in terms of
clan structure:\cite{AGLVH:2,FaroAG}
independent emission of clans, according to a Poisson distribution,
followed by particle emission from each clan in a cascade way (logarithmic
distribution).
In the framework of discrete
QCD, it is tempting to notice that each tree which comes from the
baseline of the original triangle is independent from the others;
a tree which produces no partons is not ``observed'', so the number
of trees in a rapidity interval is variable.
One can therefore call trees of height larger than $0$
{\em non-empty trees}; by definition these contain at least one parton.
\begin{figure}[!t]
\begin{center}
\mbox{\epsfig{file=plot_nmedio.ps,bbllx=61pt,%
bblly=266pt,bbury=672pt,bburx=528pt,width=11.5cm}}
\end{center}
\caption[clans]{{\textbf{a)}} Average number of non-empty trees in
rapidity intervals vs size of the interval (in units of ${\delta y_g}$),
for three sizes of the event: $L=8$ (crosses), $L=9$ (diamonds)
and $L=10$ (squares).
{\textbf{b)}} Average multiplicity in an average non-empty tree
vs size of the rapidity interval for the said event sizes.
{\textbf{c)}} As in a), but for 2-jet like events.
{\textbf{d)}} As in b), but for 2-jet like events.}
\label{fig:clans}
\end{figure}
Consider now an event with $L$ trees:
\begin{equation}
x_j \equiv {\mathcal P}_{{H_j^L}}(0) = \frac{1}{{H_j^L}}
\end{equation}
is the probability that the $j$-th tree is empty. The MD in
the number of non-empty trees $n$ is given by the product of
the probabilities that $L-n$ trees are empty and $n$ are not:
\begin{equation}
P_n^{(L)} = \sum_{\mathrm{perm}} (1-x_1)\ldots(1-x_n) x_{n+1}\ldots x_L
\qquad n=0,\ldots,L
\end{equation}
where the sum is over all permutations of $x$'s.
It can be shown that $P_n^{(L)}$ satisfies the recurrence relation
\begin{equation}
P_n^{(L)} = \sum_{j=1}^n \frac{(-1)^{j+1}}{n} a_j P_{n-j}^{(L)}
\end{equation}
where
\begin{equation}
a_r = \sum_{i=1}^L \left( \frac{1}{x_i} - 1\right)^r \qquad {\mathrm{and}}
\qquad P_0^{(L)} = \prod_{i=1}^L x_i
\end{equation}
From these relations $P_n^{(L)}$ can be calculated;
it is easy to show that one obtains a binomial distribution
when $x_j$ is independent of $j$, and a Poisson distribution
when $a_r = \delta_{1,r}$: the former is incompatible with
Eq.~\ref{eq:one}, but the latter is a good approximation
when $x_j$ is not very small.
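The recurrence can be cross-checked against the defining product of per-tree
generating functions, $\prod_j [x_j + (1-x_j)z]$; a minimal sketch with
arbitrary illustrative tree heights:

```python
def nonempty_tree_distribution(x):
    """Distribution P_n of the number of non-empty trees, given per-tree
    empty probabilities x_j, by multiplying the per-tree generating
    functions x_j + (1 - x_j) z."""
    probs = [1.0]
    for xj in x:
        new = [0.0] * (len(probs) + 1)
        for n, p in enumerate(probs):
            new[n] += p * xj              # tree j empty
            new[n + 1] += p * (1.0 - xj)  # tree j non-empty
        probs = new
    return probs

def nonempty_tree_distribution_recurrence(x):
    """Same distribution via the recurrence in the text:
    P_n = (1/n) sum_{j=1..n} (-1)^(j+1) a_j P_{n-j},
    with a_r = sum_i (1/x_i - 1)^r and P_0 = prod_i x_i."""
    L = len(x)
    a = [sum((1.0 / xi - 1.0) ** r for xi in x) for r in range(L + 1)]
    P = [1.0]
    for xi in x:
        P[0] *= xi
    for n in range(1, L + 1):
        P.append(sum((-1) ** (j + 1) * a[j] * P[n - j]
                     for j in range(1, n + 1)) / n)
    return P

# Heights H_j = 1..5 give x_j = 1/H_j; both routes agree.
x = [1.0 / H for H in range(1, 6)]
direct = nonempty_tree_distribution(x)
recur = nonempty_tree_distribution_recurrence(x)
```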
\begin{figure}[t]
\begin{center}
\mbox{\epsfig{file=plothq_fps.ps,bbllx=36pt,bblly=480pt,%
bburx=576pt,bbury=709pt,width=11.0cm}}
\end{center}
\caption[Hq moments]{{\textbf{a)}} Ratio of factorial cumulant moments to
factorial moments, $H_q$, vs the order of the moments, $q$,
for standard events of baseline length $L=10$.
{\textbf{b)}} Same as in a) but for 2-jet like events.}
\label{fig:hq}
\end{figure}
Figure \ref{fig:clans}
shows the result for the average number of non-empty trees $\Nbar(\Delta y)$ and
for the average multiplicity within an average non-empty tree $\bar n_c(\Delta y)$
(defined as the ratio of the average multiplicity to $\Nbar(\Delta y)$)
for standard events. The noticeable features for the average number
of non-empty trees are an independence
from the size $L$ of the event in small intervals, and a linearity
with the size of the interval at fixed $L$. Both these characteristics
are seen in experimental data with the statistical definition of
clans.\cite{DEL:2,Elba}
Finally, in Figure \ref{fig:hq} the results of
a calculation in full phase space of the ratio $H_q$
of factorial cumulant moments to factorial moments
are shown: these were shown to be sensitive to the structures
in the MD,\cite{NijmegenAG,FaroRU} and in particular to the radiation
of hard gluons in the early stages of the partonic evolution.
The figure shows that oscillations in sign of the ratio $H_q$
appear in the discrete QCD model, and when
examined in the 2-jet like sample these oscillations diminish
strongly in amplitude, in accordance with the behaviour of
data.\cite{NijmegenAG}
\section*{Acknowledgments}
I would like to thank Bo Andersson and Jim Samuelson for
very useful discussions. A warm thank goes also to the organizers
of this excellent meeting.
\section*{References}
\input{dqcd.ref}
\end{document}
\section{Introduction}
\label{sec: intro}
Gauge models with anomaly are interesting from different points of view.
First, there is a problem of consistent quantization for these models.
Due to anomaly some constraints change their nature after quantization:
instead of being first-class constraints, they turn into second-class
ones. A consistent canonical quantization scheme clearly should take
into account such a change \cite{jack85}-\cite{sarad91}.
Next is a problem of the relativistic invariance. It is known that in the
physical sector where the local gauge invariance holds the relativistic
invariance is broken for some anomalous models, namely the chiral
Schwinger model (CSM) and chiral $QCD_2$ \cite{niemi86}-\cite{sarad96}.
For both models the Poincare
algebra commutation relations breaking term can be constructed
explicitly \cite{sarad96}.
In the present paper we address ourselves to another aspect of anomalous
models: the Berry phase and its connection to anomaly. A common topological
nature of the Berry phase, or more generally quantum holonomy, and gauge
anomalies was noted in \cite{alva85},\cite{niemi85}. The former was shown
to be crucial in the hamiltonian interpretation of anomalies.
We consider a general version of the CSM with a ${\rm U}(1)$ gauge field
coupled with different charges to both chiral components of a fermionic
field. The non-anomalous Schwinger model (SM) where these charges are
equal is a special case of the generalized CSM. This will allow us
to see any distinction between the models with and without anomaly.
We suppose that space is a circle of length ${\rm L}$,
$-\frac{\rm L}{2} \leq x < \frac{\rm L}{2}$, so space-time manifold
is a cylinder ${\rm S}^1 \otimes {\rm R}^1$. We work in the temporal
gauge $A_0=0$ and use the system of units where $c=1$. Only matter
fields are quantized, while $A_1$ is handled as a classical
background field. Our aim is to calculate the Berry phase and the
corresponding ${\rm U}(1)$ connection and curvature for the fermionic
Fock vacuum as well as for many particle states constructed over the
vacuum and to show explicitly a connection between the nonvanishing
vacuum Berry phase and anomaly.
Our paper is organized as follows. In Sect.~\ref{sec: quant}, we
apply first and second quantization to the matter fields and obtain
the second quantized fermionic Hamiltonian. We define the Fock
vacuum and construct many particle Fock states over the vacuum. We
use a particle-hole interpretation for these states.
In Sect.~\ref{sec: berry} , we first derive a general formula for
the Berry phase and then calculate it for the vacuum and many
particle states. We show that for all Fock states the Berry phase
vanishes in the case of models without anomaly. We discuss a connection
between the nonvanishing vacuum Berry phase, anomaly and effective
action of the model.
Our conclusions are in Sect.~\ref{sec: con}.
\newpage
\section{Quantization of matter fields}
\label{sec: quant}
The Lagrangian density of the generalized CSM is
\begin{equation}
{\cal L} = - {\frac{1}{4}} {\rm F}_{\mu \nu} {\rm F}^{\mu \nu} +
\bar{\psi} i {\hbar} {\gamma}^{\mu} {\partial}_{\mu} \psi +
e_{+} {\hbar} \bar{\psi}_{+} {\gamma}^{\mu} {\psi_{+}} A_{\mu} +
e_{-} {\hbar} \bar{\psi}_{-} {\gamma}^{\mu} {\psi_{-}} A_{\mu} ,
\label{eq: odin}
\end{equation}
where ${\rm F}_{\mu \nu}= \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}$ ,
$(\mu, \nu) = \overline{0,1}$ , $\gamma^{0}={\sigma}_1$,
${\gamma}^{1}=-i{\sigma}_2$, ${\gamma}^0 {\gamma}^1={\gamma}^5=
{\sigma}_3$, ${\sigma}_i (i=\overline{1,3})$ are Pauli matrices.
The field $\psi$ is $2$--component Dirac spinor, $\bar{\psi} =
\psi^{\dagger} \gamma^0$ and $\psi_{\pm}=\frac{1}{2} (1 \pm \gamma^5)
\psi$.
In the temporal gauge $A_0=0$, the Hamiltonian density is
\begin{equation}
{\cal H} = \frac{1}{2}{\rm E}^2 + {\cal H}_{+} + {\cal H}_{-},
\label{eq: dva}
\end{equation}
with ${\rm E}$ momentum canonically conjugate to $A_1$, and
\[
{\cal H}_{\pm} \equiv \hbar \psi_{\pm}^{\dagger} d_{\pm} \psi_{\pm} =
\mp \hbar \psi_{\pm}^{\dagger}(i{\partial}_{1}+e_{\pm}A_1)\psi_{\pm}.
\]
On the circle boundary conditions for the fields must be specified.
We impose the periodic ones
\begin{eqnarray}
{A_1} (- \frac{\rm L}{2}) & = & {A_1} (\frac{\rm L}{2}) \nonumber \\
{\psi_{\pm}} (- \frac{\rm L}{2}) & = & {\psi_{\pm}} (\frac{\rm L}{2}).
\label{eq: tri}
\end{eqnarray}
The Lagrangian and Hamiltonian densities
are invariant under local time-independent gauge
transformations
\begin{eqnarray*}
A_1 & \rightarrow & A_1 + {\partial}_{1} \lambda,\\
\psi_{\pm} & \rightarrow & \exp\{ie_{\pm} \lambda\} \psi_{\pm},
\end{eqnarray*}
$\lambda$ being a gauge function.
For arbitrary $e_{+},e_{-}$, the gauge transformations do not respect
the boundary conditions ~\ref{eq: tri}.
The gauge transformations compatible with the boundary conditions
must be either of the form
\[
\lambda (\frac{\rm L}{2})=\lambda (- \frac{\rm L}{2}) +
{\frac{2\pi}{e_{+}}}n,
\hspace{5 mm}
{\rm n} \in \cal Z.
\]
with $e_{+} \neq 0$ and
\begin{equation}
\frac{e_{-}}{e_{+}} = {\rm N},
\hspace{5 mm}
{\rm N} \in \cal Z,
\label{eq: cet}
\end{equation}
or of the form
\[
\lambda(\frac{\rm L}{2}) = \lambda(-\frac{\rm L}{2}) +
\frac{2\pi}{e_{-}} n ,
\hspace{5 mm}
{\rm n} \in \cal Z,
\]
with $e_{-} \neq 0$ and
\begin{equation}
\frac{e_{+}}{e_{-}} = \bar{\rm N},
\hspace{5 mm}
\bar{\rm N} \in \cal Z.
\label{eq: pet}
\end{equation}
Eqs. ~\ref{eq: cet} and ~\ref{eq: pet} imply a quantization condition
for the charges. Without loss of generality, we choose ~\ref{eq: cet}.
For ${\rm N}=1$, $e_{-}=e_{+}$ and we have the standard Schwinger model.
For ${\rm N}=0$, we get the model in which only the positive chirality
component of the Dirac field is coupled to the gauge field.
We see that the gauge transformations under consideration are divided
into topological classes characterized by the integer $n$. If
$\lambda(\frac{\rm L}{2}) = \lambda(-\frac{\rm L}{2})$, then the
gauge transformation is topologically trivial and belongs to the
$n=0$ class. If $n \neq 0$ it is nontrivial and has winding number $n$.
The eigenfunctions and the eigenvalues of the first quantized
fermionic Hamiltonians are
\[
d_{\pm} \langle x|n;{\pm} \rangle = \pm \varepsilon_{n,{\pm }}
\langle x|n;{\pm } \rangle ,
\]
where
\[
\langle x|n;{\pm } \rangle = \frac{1}{\sqrt {\rm L}}
\exp\{ie_{\pm} \int_{-{\rm L}/2}^{x} dz{A_1}(z) +
i\varepsilon_{n,{\pm}} \cdot x\},
\]
\[
\varepsilon_{n,{\pm }} = \frac{2\pi}{\rm L}
(n - \frac{e_{\pm}b{\rm L}}{2\pi}).
\]
We see that the spectrum of the eigenvalues depends on the zero
mode of the gauge field:
\[
b \equiv \frac{1}{\rm L} \int_{-{\rm L}/2}^{{\rm L}/2} dx
A_1(x,t).
\]
For $\frac{e_{+}b{\rm L}}{2\pi}={\rm integer}$, the spectrum contains
the zero energy level. As $b$ increases from $0$ to
$\frac{2\pi}{e_{+}{\rm L}}$, the energies of
$\varepsilon_{n,+}$ decrease by $\frac{2\pi}{\rm L}$, while the energies
of $(-\varepsilon_{n,-})$ increase by $\frac{2\pi}{\rm L} {\rm N}$.
Some of energy levels change sign. However, the spectrum at the
configurations $b=0$ and $b=\frac{2\pi}{e_{+}{\rm L}}$
is the same, namely, the integers, as it must be since these gauge-field
configurations are gauge-equivalent. In what follows, we
will use separately the integer and fractional parts of
$\frac{e_{\pm}b{\rm L}}{2\pi}$, denoting them as
$[\frac{e_{\pm}b{\rm L}}{2\pi}]$ and $\{\frac{e_{\pm}b{\rm L}}{2\pi}\}$
respectively.
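The spectral flow described above is easy to exhibit numerically; a minimal
sketch, with arbitrary illustrative values for $e_{+}$ and ${\rm L}$ and with
$e_{-}/e_{+} = {\rm N} = 3$:

```python
import math

def spectrum(b, e, L, n_range):
    """Eigenvalues eps_n = (2 pi / L)(n - e b L / (2 pi)) for n in n_range."""
    return [2.0 * math.pi / L * (n - e * b * L / (2.0 * math.pi))
            for n in n_range]

L_len = 1.0
e_plus = 2.0 * math.pi      # illustrative choice
e_minus = 3 * e_plus        # charge quantization with N = 3
ns = range(-5, 6)

b_flux = 2.0 * math.pi / (e_plus * L_len)   # gauge-equivalent to b = 0
eps_p0 = spectrum(0.0, e_plus, L_len, ns)
eps_p1 = spectrum(b_flux, e_plus, L_len, ns)
eps_m0 = spectrum(0.0, e_minus, L_len, ns)
eps_m1 = spectrum(b_flux, e_minus, L_len, ns)
# Every eps_{n,+} drops by one level spacing 2 pi / L (spectral flow by one
# unit), while eps_{n,-} drops by N spacings, i.e. the negative-chirality
# energies -eps_{n,-} rise by N units; each spectrum maps onto itself.
```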
Now we introduce the second quantized right-handed and
left-handed Dirac fields. For the moment, we will assume that $d_{\pm}$
do not have zero eigenvalues. At time $t=0$, in terms of the
eigenfunctions of the first quantized fermionic Hamiltonians the second
quantized ($\zeta$--function regulated) fields have the expansion
\cite{niese86} :
\[
\psi_{+}^s (x) = \sum_{n \in \cal Z} a_n \langle x|n;{+} \rangle
|\lambda \varepsilon_{n,+}|^{-s/2},
\]
\begin{equation}
\psi_{-}^s (x) = \sum_{n \in \cal Z} b_n \langle x|n;{-} \rangle
|\lambda \varepsilon_{n,-}|^{-s/2}.
\label{eq: vosem}
\end{equation}
Here $\lambda$ is an arbitrary constant with dimension of length
which is necessary to make $\lambda \varepsilon_{n,\pm}$ dimensionless,
while $a_n, a_n^{\dagger}$ and $b_n, b_n^{\dagger}$ are correspondingly
right-handed and left-handed fermionic creation and annihilation
operators which fulfil the commutation relations
\[
[a_n , a_m^{\dagger}]_{+} = [b_n , b_n^{\dagger}]_{+} =\delta_{m,n} .
\]
For $\psi_{\pm }^{s} (x)$, the equal time anticommutators are
\begin{equation}
[\psi_{\pm}^{s}(x) , \psi_{\pm}^{\dagger s}(y)]_{+}=\zeta_{\pm} (s,x,y),
\label{eq: devet}
\end{equation}
with all other anticommutators vanishing, where
\[
\zeta_{\pm} (s,x,y) \equiv \sum_{n \in \cal Z} \langle x|n;{\pm} \rangle
\langle n;{\pm}|y \rangle |\lambda \varepsilon_{n,\pm}|^{-s},
\]
$s$ being large and positive. In the limit, when the regulator
is removed, i.e. $s=0$, $\zeta_{\pm}(s=0,x,y) = \delta(x-y)$ and
Eq.~\ref{eq: devet} takes the standard form.
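Indeed, the $s=0$ limit follows directly from the completeness of the
first quantized eigenfunctions:
\[
\zeta_{\pm}(0,x,y) = \sum_{n \in \cal Z} \langle x|n;\pm \rangle
\langle n;\pm|y \rangle = \langle x|y \rangle = \delta(x-y).
\]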
The vacuum state of the second quantized fermionic Hamiltonian
\[
|{\rm vac};A \rangle = |{\rm vac};A;+ \rangle \otimes
|{\rm vac};A;- \rangle
\]
is defined such that all negative energy
levels are filled and the others are empty:
\begin{eqnarray}
a_n|{\rm vac};A;+\rangle =0 & {\rm for} & n>[\frac{e_{+}b{\rm L}}{2\pi}],
\nonumber \\
a_n^{\dagger} |{\rm vac};A;+ \rangle =0 & {\rm for} & n \leq
[\frac{e_{+}b{\rm L}}{2\pi}],
\label{eq: deset}
\end{eqnarray}
and
\begin{eqnarray}
b_n|{\rm vac};A;-\rangle =0 & {\rm for} & n \leq
[\frac{e_{-}b{\rm L}}{2\pi}], \nonumber \\
b_n^{\dagger} |{\rm vac};A;- \rangle =0 & {\rm for} & n >
[\frac{e_{-}b{\rm L}}{2\pi}].
\label{eq: odinodin}
\end{eqnarray}
In other words, in the positive chirality vacuum all the levels
with energy lower than ${\varepsilon}_{[\frac{e_{+}b{\rm L}}
{2\pi}]+1,+}$ and in the negative chirality one all the levels
with energy lower than $(-{\varepsilon}_{[\frac{e_{-}b{\rm L}}
{2\pi}],-})$ are filled:
\begin{eqnarray*}
|{\rm vac}; A;+ \rangle & = & \prod_{n=-\infty}^{[\frac{e_{+}b
{\rm L}}{2\pi}]} a_n^{\dagger} |0;+ \rangle, \\
|{\rm vac}; A;- \rangle & = & \prod_{n=[\frac{e_{-}b{\rm L}}
{2\pi}]+1}^{+\infty} b_n^{\dagger} |0;- \rangle, \\
\end{eqnarray*}
where $|0 \rangle = |0;+ \rangle \otimes |0;- \rangle$ is the state
of ``nothing'' with all the energy levels empty.
The Fermi surfaces, which are defined to lie halfway between the highest
filled and lowest empty levels, are
\[
{\varepsilon}_{\pm}^{\rm F} = \pm \frac{2\pi}{\rm L}
(\frac{1}{2} - \{\frac{e_{\pm}b{\rm L}}{2\pi}\}).
\]
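For example (a numerical illustration, assuming the linear spectrum
$\varepsilon_{n,+}=\frac{2\pi}{\rm L}(n-\frac{e_{+}b{\rm L}}{2\pi})$
implied by the spectral shifts above), with
$\{\frac{e_{+}b{\rm L}}{2\pi}\}=0.3$ the positive chirality Fermi
surface lies at
\[
{\varepsilon}_{+}^{\rm F} = \frac{2\pi}{\rm L}
( \frac{1}{2} - 0.3 ) = 0.2 \, \frac{2\pi}{\rm L},
\]
halfway between the highest filled level with energy
$-0.3\,\frac{2\pi}{\rm L}$ and the lowest empty one with energy
$0.7\,\frac{2\pi}{\rm L}$.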
For $e_{+}=e_{-}$, ${\varepsilon}_{+}^{\rm F}=-{\varepsilon}_{-}^{\rm F}$.
Next we define the fermionic parts of the second-quantized Hamiltonian as
\[
\hat{\rm H}_{\pm}^s = \int_{-{\rm L}/2}^{{\rm L}/2} dx
\hat{\cal H}_{\pm}^s(x)= \frac{1}{2} \hbar \int_{-{\rm L}/2}^{{\rm L}/2}dx
(\psi_{\pm}^{\dagger s} d_{\pm} \psi_{\pm}^s
- \psi_{\pm}^s d_{\pm}^{\star} \psi_{\pm}^{\dagger s}).
\]
Substituting Eq.~\ref{eq: vosem} into this expression, we get
\begin{equation}
\hat{\rm H}_{\pm} = \hat{\rm H}_{0,\pm} \mp
e_{\pm} b \hbar :\rho_{\pm}(0): + {\hbar} \frac{\rm L}{4\pi}
({\varepsilon}_{\pm}^{\rm F})^2,
\label{eq: hamil}
\end{equation}
where double dots indicate normal ordering with respect to
$|{\rm vac},A \rangle$ ,
\begin{eqnarray*}
\hat{\rm H}_{0,+} & = & \hbar \frac{2 \pi}{\rm L} \lim_{s \to 0}
\{ \sum_{k >[\frac{e_{+}b{\rm L}}{2 \pi}]} k a_k^{\dagger} a_k
|\lambda \varepsilon_{k,+}|^{-s} - \sum_{k \leq [\frac{e_{+}b{\rm L}}
{2 \pi}]} k a_k a_k^{\dagger} |\lambda \varepsilon_{k,+}|^{-s} \},\\
\hat{\rm H}_{0,-} & = & \hbar \frac{2 \pi}{\rm L} \lim_{s \to 0}
\{ \sum_{k>[\frac{e_{-}b{\rm L}}{2 \pi}]} k b_{k} b_{k}^{\dagger}
|\lambda \varepsilon_{k,-}|^{-s} - \sum_{k \leq [\frac{e_{-}b{\rm L}}
{2 \pi}]} k b_{k}^{\dagger} b_{k} |\lambda \varepsilon_{k,-}|^{-s} \}
\end{eqnarray*}
are free fermionic Hamiltonians, and
\begin{eqnarray*}
:\rho_{+} (0): & = & \lim_{s \to 0} \{ \sum_{k >[\frac{e_{+}b{\rm L}}
{2 \pi}]} a_k^{\dagger} a_k |\lambda \varepsilon_{k,+}|^{-s} -
\sum_{k \leq [\frac{e_{+}b{\rm L}}{2 \pi}]} a_k a_k^{\dagger}
|\lambda \varepsilon_{k,+}|^{-s} \}, \\
:\rho_{-} (0): & = & \lim_{s \to 0} \{ \sum_{k \leq [\frac{e_{-}b{\rm L}}
{2 \pi}]} b_{k}^{\dagger} b_{k} |\lambda \varepsilon_{k,-}|^{-s} -
\sum_{k>[\frac{e_{-}b{\rm L}}{2 \pi}]} b_{k} b_{k}^{\dagger}
|\lambda \varepsilon_{k,-}|^{-s} \}
\end{eqnarray*}
are charge operators for the positive and negative chirality fermion
fields respectively. The fermion momentum operators constructed
analogously are
\[
\hat{\rm P}_{\pm} = \hat{\rm H}_{0,\pm}.
\]
The operators $:\hat{\rm H}_{\pm}:$, $:\rho_{\pm}(0):$
and $\hat{\rm P}_{\pm}$ are
well defined when acting on finitely excited states which have only a
finite number of excitations relative to the Fock vacuum.
For the vacuum state,
\[
:\hat{\rm H}_{\pm}:|{\rm vac}; A;\pm \rangle =
:{\rho}_{\pm}(0):|{\rm vac}; A;\pm \rangle =0.
\]
Due to the normal ordering, the energy of the vacuum which is at the
same time the ground state of the fermionic Hamiltonians turns out
to be equal to zero (we neglect an infinite energy of the filled
levels below the Fermi surfaces ${\varepsilon}_{\pm}^{\rm F}$).
The vacuum state can be considered also as a state of the zero charge.
Any other state of the same charge will have some of the levels above
${\varepsilon}_{+}^{\rm F}$ (${\varepsilon}_{-}^{\rm F}$) occupied
and some levels below ${\varepsilon}_{+}^{\rm F}$ (${\varepsilon}_{-}
^{\rm F}$) unoccupied. It is convenient to use the vacuum state
$|{\rm vac}; A \rangle$ as a reference, describing the removal
of a particle of positive (negative) chirality from one of the levels
below ${\varepsilon}_{+}^{\rm F}$ (${\varepsilon}_{-}^{\rm F}$) as
the creation of a ``hole'' \cite{dirac64},\cite{feyn72}.
Particles in the levels above
${\varepsilon}_{+}^{\rm F}$ (${\varepsilon}_{-}^{\rm F}$) are still
called particles. If a particle of positive (negative) chirality
is excited from the level $m$ below the Fermi surface to the level
$n$ above the Fermi surface, then we say that a hole of positive
chirality with energy $(-{\hbar}{\varepsilon}_{m,+})$ and
momentum $(-{\hbar}\frac{2\pi}{\rm L} m)$ (or of negative chirality
with energy ${\hbar}{\varepsilon}_{m,-}$ and momentum
${\hbar}\frac{2\pi}{\rm L} m$) has been created as well as the
positive chirality particle with energy ${\hbar}{\varepsilon}_{n,+}$
and momentum ${\hbar}\frac{2\pi}{\rm L}n$ (or the negative chirality
one with energy $(-{\hbar}{\varepsilon}_{n,-})$ and momentum
$(-{\hbar}\frac{2\pi}{\rm L}n)$). The operators $a_k (k \leq
[\frac{e_{+}b{\rm L}}{2\pi}])$ and $b_k (k>[\frac{e_{-}b{\rm L}}{2\pi}])$
behave like creation operators for the positive and negative chirality
holes correspondingly.
In the charge operator a hole counts as $-1$, so that, for example,
any state with one particle and one hole as well as the vacuum state
has vanishing charge.
The number of particles and holes of positive and negative chirality
outside the vacuum state is given by the operators
\begin{eqnarray*}
{\rm N}_{+} & = & \lim_{s \to 0} \{ \sum_{k>[\frac{e_{+}b{\rm L}}
{2\pi}]} a_k^{\dagger} a_k |{\lambda}{\varepsilon}_{k,+}|^{-s}
+ \sum_{k \leq [\frac{e_{+}b{\rm L}}{2\pi}]}
a_k a_k^{\dagger} |{\lambda}{\varepsilon}_{k,+}|^{-s} \}, \\
{\rm N}_{-} & = & \lim_{s \to 0} \{ \sum_{k \leq [\frac{e_{-}b{\rm L}}
{2\pi}]} b_k^{\dagger} b_k |{\lambda}{\varepsilon}_{k,-}|^{-s}
+ \sum_{k>[\frac{e_{-}b{\rm L}}{2\pi}]}
b_k b_k^{\dagger} |{\lambda}{\varepsilon}_{k,-}|^{-s} \},
\end{eqnarray*}
which count both particle and hole as $+1$.
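For example, acting on the state $a_n^{\dagger} a_m |{\rm vac};A;+\rangle$
with one positive chirality particle and one hole
($n>[\frac{e_{+}b{\rm L}}{2\pi}] \geq m$), one finds
\[
{\rm N}_{+}\, a_n^{\dagger} a_m |{\rm vac};A;+\rangle =
2\, a_n^{\dagger} a_m |{\rm vac};A;+\rangle,
\hspace{5 mm}
:{\rho}_{+}(0):\, a_n^{\dagger} a_m |{\rm vac};A;+\rangle = 0,
\]
in agreement with the counting rules: the particle and the hole each
contribute $+1$ to ${\rm N}_{+}$, while in the charge the hole counts
as $-1$.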
Excited states are constructed by operating creation operators
on the vacuum. We start with $1$-particle states. Let us define the
states $|m; A;\pm \rangle$ as follows
$$
|m; A;+ \rangle \equiv \left\{
\begin{array}{cc}
a_m^{\dagger}|{\rm vac}; A;+
\rangle & {\rm for} \hspace{5 mm} m>[\frac{e_{+}b{\rm L}}{2\pi}], \\
a_m |{\rm vac}; A;+
\rangle & {\rm for} \hspace{5 mm} m \leq [\frac{e_{+}b{\rm L}}{2\pi}]
\end{array}
\right.
$$
and
$$
|m; A;- \rangle \equiv
\left\{ \begin{array}{cc}
b_m^{\dagger} |{\rm vac}; A;-
\rangle & {\rm for} \hspace{5 mm} m \leq [\frac{e_{-}b{\rm L}}{2\pi}],\\
b_m |{\rm vac}; A;- \rangle & {\rm for} \hspace{5 mm} m>[\frac{e_{-}b{\rm
L}}{2\pi}]. \end{array}
\right .
$$
The states $|m; A;\pm \rangle$ are orthonormalized,
\[
\langle m; A;\pm |n; A;\pm \rangle = \delta_{mn},
\]
and fulfil the completeness relation
\[
\sum_{m \in \cal Z} |m; A;\pm \rangle \cdot
\langle m; A;\pm| =1.
\]
It is easily checked that
\begin{eqnarray*}
:\hat{\rm H}_{\pm}: |m; A;\pm \rangle & = & {\hbar}{\varepsilon}
_{m,\pm} |m; A;\pm \rangle, \\
\hat{\rm P}_{\pm} |m; A;\pm \rangle & = & {\hbar}\frac{2\pi}{\rm L}
m |m; A;\pm \rangle, \\
:{\rho}_{\pm}(0): |m; A;\pm \rangle & = & \pm |m; A;\pm \rangle
\hspace{5 mm}
{\rm for}
\hspace{5 mm}
m > [\frac{e_{\pm}b{\rm L}}{2\pi}]
\end{eqnarray*}
and
\begin{eqnarray*}
:\hat{\rm H}_{\pm}: |m; A;\pm \rangle & = & - {\hbar}{\varepsilon}
_{m,\pm} |m; A;\pm \rangle, \\
\hat{\rm P}_{\pm} |m; A;\pm \rangle & = & -{\hbar} \frac{2\pi}{\rm L}
m |m; A;\pm \rangle, \\
:{\rho}_{\pm}(0): |m; A;\pm \rangle & = & \mp |m; A;\pm \rangle
\hspace{5 mm}
{\rm for}
\hspace{5 mm}
m \leq [\frac{e_{\pm}b{\rm L}}{2\pi}].
\end{eqnarray*}
We see that $|m; A;+ \rangle$ is a state with one particle of
positive chirality with energy ${\hbar}{\varepsilon}_{m,+}$ and
momentum ${\hbar}\frac{2\pi}{\rm L} m$ for $m>[\frac{e_{+}b{\rm L}}
{2\pi}]$ or a state with one hole of the same chirality with energy
$(-{\hbar}{\varepsilon}_{m,+})$ and momentum $(-\hbar \frac{2\pi}{\rm L}
m)$ for $m \leq [\frac{e_{+}b{\rm L}}{2\pi}]$. The negative chirality
state $|m; A;- \rangle$ is a state with one particle with energy
$(-\hbar {\varepsilon}_{m,-})$ and momentum $(-\hbar \frac{2\pi}{\rm L}
m)$ for $m \leq [\frac{e_{-}b{\rm L}}{2\pi}]$ or a state with one hole
with energy $\hbar {\varepsilon}_{m,-}$ and momentum $\hbar
\frac{2\pi}{\rm L} m$ for $m >[\frac{e_{-}b{\rm L}}{2\pi}]$. In any
case,
\[
{\rm N}_{\pm}|m; A;\pm \rangle = |m; A;\pm \rangle,
\]
that is why $|m; A;\pm \rangle$ are called $1$-particle states.
By applying $n$ creation operators to the vacuum states $|{\rm vac};
A;\pm \rangle$ we can get also $n$-particle states
\[
|m_1;m_2;...;m_n; A;\pm \rangle
\hspace{5 mm}
(m_1 < m_2 < ... <m_n),
\]
which are orthonormalized:
\[
\langle m_1;m_2;...;m_n; A;\pm |\overline{m}_1;\overline{m}_2;
...;\overline{m}_n; A;\pm \rangle =
{\delta}_{m_1 \overline{m}_1} {\delta}_{m_2 \overline{m}_2} ...
{\delta}_{m_n \overline{m}_n}.
\]
The completeness relation is written in the following form
\begin{equation}
\frac{1}{n!} \sum_{m_1 \in \cal Z} ... \sum_{m_n \in \cal Z}
|m_1;m_2;...;m_n; A;\pm \rangle \cdot
\langle m_1;m_2;...;m_n; A;\pm| =1.
\label{eq: polnota}
\end{equation}
Here the range of $m_i$ ($i=\overline{1,n}$) is not restricted by
the condition $(m_1<m_2<...<m_n)$, duplication of states being taken care
of by the $1/n!$ and the normalization. The $1$ on the right-hand side
of Eq.~\ref{eq: polnota} means the unit operator on the space of
$n$-particle states.
The case $n=0$ corresponds to the zero-particle states. They form a
one-dimensional space, all of whose elements are proportional to
the vacuum state.
The multiparticle Hilbert space is a direct sum of an infinite
sequence of the $n$-particle Hilbert spaces. The states of different
numbers of particles are defined to be orthogonal to each other.
The completeness relation in the multiparticle Hilbert space has the
form
\begin{equation}
\sum_{n=0}^{\infty} \frac{1}{n!} \sum_{m_1,m_2,...,m_n \in \cal Z}
|m_1;m_2;...;m_n; A;\pm \rangle \cdot
\langle m_1;m_2;...;m_n; A;\pm| = 1,
\label{eq: plete}
\end{equation}
where ``1'' on the right-hand side means the unit operator on the
whole multiparticle space.
For $n$-particle states,
\[
:\hat{\rm H}_{\pm}: |m_1;m_2;...;m_n; A;\pm \rangle =
\hbar \sum_{k=1}^{n} {\varepsilon}_{m_k,\pm} \cdot {\rm sign}
({\varepsilon}_{m_k,\pm}) |m_1;m_2;...;m_n; A;\pm \rangle
\]
and
\[
:{\rho}_{\pm}(0): |m_1;m_2;...;m_n; A;\pm \rangle =
\pm \sum_{k=1}^{n} {\rm sign}({\varepsilon}_{m_k,\pm})
|m_1;m_2;...;m_n; A;\pm \rangle.
\]
\newpage
\section{Calculation of Berry phases}
\label{sec: berry}
In the adiabatic approach \cite{schiff68}-\cite{zwanz}, the
dynamical
variables are divided into two sets, one which we call fast variables
and the other which we call slow variables. In our case, we treat the
fermions as fast variables and the gauge fields as slow variables.
Let ${\cal A}^1$ be a manifold of all static gauge field
configurations ${A_1}(x)$. On ${\cal A}^1$ a time-dependent
gauge field ${A_1}(x,t)$ corresponds to a path and a periodic gauge
field to a closed loop.
We consider the fermionic part of the second-quantized Hamiltonian
$:\hat{\rm H}_{\rm F}:=:\hat{\rm H}_{+}: + :\hat{\rm H}_{-}:$
which depends on $t$ through the background
gauge field $A_1$ and so changes very slowly with time. We consider
next the periodic gauge field ${A_1}(x,t) (0 \leq t <T)$ . After a
time $T$ the periodic field ${A_1}(x,t)$ returns to its original
value: ${A_1}(x,0) = {A_1}(x,T)$, so that $:\hat{\rm H}_{\pm}:(0)=
:\hat{\rm H}_{\pm}:(T)$ .
At each instant $t$ we define eigenstates for $:\hat{\rm H}_{\pm}:
(t)$ by
\[
:\hat{\rm H}_{\pm}:(t) |{\rm F}, A(t);\pm \rangle =
{\varepsilon}_{{\rm F},\pm}(t) |{\rm F}, A(t);\pm \rangle.
\]
The state $|{\rm F}=0, A(t);\pm \rangle \equiv |{\rm vac}; A(t);\pm \rangle$
is a ground state of $:\hat{\rm H}_{\pm}:(t)$ ,
\[
:\hat{\rm H}_{\pm}:(t) |{\rm vac}; A(t);\pm \rangle =0.
\]
The Fock states $|{\rm F}, A(t) \rangle = |{\rm F},A(t);+ \rangle
\otimes |{\rm F},A(t);- \rangle $
depend on $t$ only through
their implicit dependence on $A_1$. They are assumed to be
orthonormalized,
\[
\langle {\rm F^{\prime}}, A(t)|{\rm F}, A(t) \rangle =
\delta_{{\rm F},{\rm F^{\prime}}},
\]
and nondegenerate.
The time evolution of the wave function of our system (fermions
in a background gauge field) is clearly governed by the Schr\"{o}dinger
equation:
\[
i \hbar \frac{\partial \psi(t)}{\partial t} =
:\hat{\rm H}_{\rm F}:(t) \psi(t) .
\]
For each $t$, this wave function can be expanded in terms of the
``instantaneous'' eigenstates $|{\rm F}, A(t) \rangle$.
Let us choose ${\psi}_{\rm F}(0)=|{\rm F}, A(0) \rangle$, i.e.
the system is initially described by the eigenstate
$|{\rm F},A(0) \rangle$ . According to the adiabatic approximation,
if at $t=0$ our system starts in a stationary state $|{\rm F},A(0)
\rangle $ of $:\hat{\rm H}_{\rm F}:(0)$, then it will remain,
at any other instant of time $t$, in the corresponding eigenstate
$|{\rm F}, A(t) \rangle$ of the instantaneous Hamiltonian
$:\hat{\rm H}_{\rm F}:(t)$. In other words, in the adiabatic
approximation transitions to other eigenstates are neglected.
Thus, at some time $t$ later our system will be described up to
a phase by the same Fock state $|{\rm F}, A(t) \rangle $:
\[
\psi_{\rm F}(t) = {\rm C}_{\rm F}(t) \cdot |{\rm F},A(t) \rangle,
\]
where ${\rm C}_{\rm F}(t)$ is a yet undetermined phase.
To find the phase, we insert $\psi_{\rm F}(t)$ into the
Schr\"{o}dinger equation:
\[
\hbar \dot{\rm C}_{\rm F}(t) = -i {\rm C}_{\rm F}(t)
(\varepsilon_{{\rm F},+}(t) + \varepsilon_{{\rm F},-}(t))
- \hbar {\rm C}_{\rm F}(t)
\langle {\rm F},A(t)|\frac{\partial}{\partial t}|{\rm F},A(t) \rangle.
\]
Solving this equation, we get
\[
{\rm C}_{\rm F}(t) = \exp\{- \frac{i}{\hbar} \int_{0}^{t} d{t^{\prime}}
({\varepsilon}_{{\rm F},+}({t^{\prime}}) +
{\varepsilon}_{{\rm F},-}({t^{\prime}}) ) - \int_{0}^{t} d{t^{\prime}}
\langle {\rm F},A({t^{\prime}})|\frac{\partial}{\partial{t^{\prime}}}|
{\rm F},A({t^{\prime}}) \rangle \}.
\]
For $t=T$, $|{\rm F},A(T) \rangle =|{\rm F},A(0) \rangle$ (the
instantaneous eigenfunctions are chosen to be periodic in time)
and
\[
{\psi}_{\rm F}(T) = \exp\{i {\gamma}_{\rm F}^{\rm dyn} +
i {\gamma}_{\rm F}^{\rm Berry} \}\cdot {\psi}_{\rm F}(0),
\]
where
\[ {\gamma}_{\rm F}^{\rm dyn} \equiv - \frac{1}{\hbar}
\int_{0}^{T} dt \cdot ({\varepsilon}_{{\rm F},+}(t)
+ {\varepsilon}_{{\rm F},-}(t)),
\]
while
\begin{equation}
{\gamma}_{\rm F}^{\rm Berry} = {\gamma}_{\rm F,+}^{\rm Berry} +
{\gamma}_{\rm F,-}^{\rm Berry},
\label{eq: summa}
\end{equation}
\[
{\gamma}_{{\rm F},\pm}^{\rm Berry} \equiv \int_{0}^{T} dt \int_{-{\rm L}/2}^
{{\rm L}/2} dx \dot{A_1}(x,t) \langle {\rm F},A(t);\pm|i \frac{\delta}
{\delta A_1(x,t)}|{\rm F},A(t);\pm \rangle
\]
is Berry's phase \cite{berry84}.
If we define the $U(1)$ connections
\begin{equation}
{\cal A}_{{\rm F},\pm}(x,t) \equiv \langle {\rm F},A(t);\pm|i \frac{\delta}
{\delta A_1(x,t)}|{\rm F},A(t);\pm \rangle,
\label{eq: dvatri}
\end{equation}
then
\[
{\gamma}_{{\rm F},\pm}^{\rm Berry} = \int_{0}^{T} dt \int_{-{\rm L}/2}^
{{\rm L}/2} dx \dot{A}_1(x,t) {\cal A}_{{\rm F},\pm}(x,t).
\]
We see that upon parallel transport around a closed loop on
${\cal A}^1$ the Fock states $|{\rm F},A(t);\pm \rangle$ acquire an
additional phase which is the integrated exponential of ${\cal A}_{{\rm F},\pm}
(x,t)$. Whereas the dynamical phase ${\gamma}_{\rm F}^{\rm dyn}$
provides information about the duration of the evolution, the
Berry phase reflects the nontrivial holonomy of the Fock states
on ${\cal A}^1$.
However, a direct computation of the diagonal matrix elements of
$\frac{\delta}{\delta A_1(x,t)}$ in Eq.~\ref{eq: summa} requires a
globally single-valued basis for the eigenstates $|{\rm F},A(t);\pm \rangle$
which is not available. The connections ~\ref{eq: dvatri} can be
defined only locally on ${\cal A}^1$, in regions where
$[\frac{e_{+}b{\rm L}}{2 \pi}]$ is fixed. The values of $A_1$ in regions
of different $[\frac{e_{+}b{\rm L}}{2 \pi}]$ are connected by
topologically nontrivial gauge transformations.
If $[\frac{e_{+}b{\rm L}}{2 \pi}]$ changes, then
there is a nontrivial spectral flow, i.e. some of the energy levels
of the first quantized fermionic Hamiltonians cross zero and change
sign. This means that the definition of the Fock vacuum of the second
quantized fermionic Hamiltonian changes (see Eq.~\ref{eq: deset}
and ~\ref{eq: odinodin}). Since the creation and annihilation operators
$a^{\dagger}, a$ (and $b^{\dagger}, b$ ) are
continuous functionals of $A_1(x)$, the definition of all excited
Fock states $|{\rm F},A(t) \rangle$ is also discontinuous. The
connections ${\cal A}_{{\rm F},\pm}$ are not therefore well-defined
globally.
Their global characterization necessitates the usual introduction of
transition functions.
Furthermore, ${\cal A}_{{\rm F},\pm}$ are not invariant under
$A$--dependent
redefinitions of the phases of the Fock states: $|{\rm F},A(t);\pm \rangle
\rightarrow \exp\{-i {\chi}_{\pm}[A]\} |{\rm F},A(t);\pm \rangle$, and
transform like a $U(1)$ vector potential
\[
{\cal A}_{{\rm F},\pm} \rightarrow {\cal A}_{{\rm F},\pm} +
\frac{\delta {\chi}_{\pm}[A]}{\delta A_1}.
\]
For these reasons, to calculate ${\gamma}_{\rm F}^{\rm Berry}$ it
is more convenient to compute first the $U(1)$ curvature tensors
\begin{equation}
{\cal F}_{\rm F}^{\pm}(x,y,t) \equiv \frac{\delta}{\delta A_1(x,t)}
{\cal A}_{{\rm F},\pm}(y,t) - \frac{\delta}{\delta A_1(y,t)}
{\cal A}_{{\rm F},\pm}(x,t)
\label{eq: dvacet}
\end{equation}
and then deduce ${\cal A}_{{\rm F},\pm}$.
i) $n$-particle states $(n \geq 3)$.
For $n$-particle states $|m_1;m_2;...;m_n; A;\pm \rangle$
$(m_1<m_2<...<m_n)$, the ${\rm U}(1)$ curvature tensors are
\[
{\cal F}_{m_1,m_2,...,m_n}^{\pm}(x,y,t)
= i \sum_{k=0}^{\infty}
\frac{1}{k!} \sum_{\overline{m}_1, \overline{m}_2, ...,
\overline{m}_k \in \cal Z} \{ \langle m_1;m_2;...;m_n; A;\pm|
\frac{\delta}{\delta {A}_1(x,t)}
\]
\[
|\overline{m}_1;\overline{m}_2;
...;\overline{m}_k; A;\pm \rangle
\cdot \langle \overline{m}_1; \overline{m}_2; ...; \overline{m}_k;
A;\pm| \frac{\delta}{\delta A_1(y,t)}|
m_1;m_2;...;m_n; A;\pm \rangle - (x \leftrightarrow y) \}
\]
where the completeness condition ~\ref{eq: plete} is inserted.
Since
\[
\langle m_1;m_2;...;m_n; A;\pm |\frac{\delta
:\hat{\rm H}_{\pm}:}{\delta A_1(x,t)}|
\overline{m}_1;\overline{m}_2;...;\overline{m}_k; A;\pm \rangle
= {\hbar} \{ \sum_{i=1}^k {\varepsilon}_{\overline{m}_i,\pm} \cdot
{\rm sign}({\varepsilon}_{\overline{m}_i,\pm})
\]
\[
-\sum_{i=1}^n {\varepsilon}_{m_i,\pm} \cdot
{\rm sign}({\varepsilon}_{m_i,\pm}) \} \cdot
\langle m_1;m_2;...;m_n;A;\pm|\frac{\delta}{\delta A_1(x,t)}|
\overline{m}_1; \overline{m}_2;...;\overline{m}_k; A;\pm \rangle
\]
and $:\hat{\rm H}_{\pm}:$ are quadratic in the positive and negative
chirality creation and annihilation operators, the matrix elements
$\langle m_1;m_2;...;m_n; A;\pm|\frac{\delta}{\delta A_1(x,t)}|
\overline{m}_1;\overline{m}_2;...;\overline{m}_k; A;\pm \rangle$
and so the corresponding curvature tensors
${\cal F}_{m_1,m_2,...,m_n}^{\pm}$ and Berry phases
${\gamma}_{m_1,m_2,...,m_n;\pm}^{\rm Berry}$ vanish for all values
of $m_i (i=\overline{1,n})$ for $n \geq 3$.
ii) $2$-particle states.
For $2$-particle states $|m_1;m_2; A;\pm \rangle$ $(m_1<m_2)$,
only the vacuum state survives in the inserted completeness condition,
so that the curvature tensors ${\cal F}_{m_1m_2}^{\pm}$ take the form
\[
{\cal F}_{m_1m_2}^{\pm}(x,y,t) = \frac{i}{{\hbar}^2} \frac{1}
{({\varepsilon}_{m_1,\pm} \cdot {\rm sign}({\varepsilon}_{m_1,\pm}) +
{\varepsilon}_{m_2,\pm} \cdot {\rm sign}({\varepsilon}_{m_2,\pm}))^2}
\]
\[
\cdot \{ \langle m_1;m_2;A;\pm| \frac{\delta :\hat{\rm H}_{\pm}:}
{\delta A_1(y,t)}|{\rm vac}; A;\pm \rangle
\langle {\rm vac};A;\pm|\frac{\delta :\hat{\rm H}_{\pm}:}
{\delta A_1(x,t)}|m_1;m_2;A;\pm \rangle -
(x \leftrightarrow y) \}.
\]
With $:\hat{\rm H}_{\pm}:(t)$ given by Eq.~\ref{eq: hamil},
${\cal F}_{m_1m_2}^{\pm}$ are evaluated as
$$
{\cal F}_{m_1m_2}^{\pm}= \left \{
\begin{array}{cc}
0 & \mbox{for $m_1,m_2 >[\frac{e_{\pm}b{\rm L}}
{2\pi}] \hspace{3 mm} {\rm and} \hspace{3 mm} m_1,m_2 \leq
[\frac{e_{\pm}b{\rm
L}}{2\pi}]$},\\ \mp \frac{e_{\pm}^2}{2{\pi}^2} \frac{1}{(m_2-m_1)^2}
\sin\{\frac{2\pi}{\rm L}(m_2-m_1)(x-y)\} & \mbox{for
$m_1 \leq [\frac{e_{\pm}b{\rm L}}{2\pi}], m_2>[\frac{e_{\pm}b{\rm L}}
{2\pi}]$},
\end{array}\right.
$$
i.e. the curvatures are nonvanishing only for states with one
particle and one hole.
The corresponding connections are easily deduced as
\[
{\cal A}_{m_1m_2}^{\pm}(x,t) =
-\frac{1}{2} \int_{-{\rm L}/2}^{{\rm L}/2} dy
{\cal F}_{m_1m_2}^{\pm}(x,y,t) A_1(y,t).
\]
The Berry phases become
\[
{\gamma}_{m_1m_2,\pm}^{\rm Berry} = - \frac{1}{2} \int_{0}^{\rm T} dt
\int_{-{\rm L}/2}^{{\rm L}/2} dx \int_{-{\rm L}/2}^{{\rm L}/2} dy
\dot{A}_1(x,t) {\cal F}_{m_1m_2}^{\pm}(x,y,t) A_1(y,t).
\]
If we introduce the Fourier expansion for the gauge field
\[
A_1(x,t) =b(t) + \sum_{\stackrel{p \in \cal Z}{p \neq 0}}
e^{i\frac{2\pi}{\rm L} px} {\alpha}_p(t),
\]
then in terms of the gauge field Fourier components the Berry phases
take the form
\[
{\gamma}_{m_1m_2,\pm}^{\rm Berry} =
\mp \frac{e_{\pm}^2{\rm L}^2}{8{\pi}^2} \frac{1}{(m_2-m_1)^2}
\int_{0}^{\rm T} dt i ({\alpha}_{m_2-m_1} \dot{\alpha}_{m_1-m_2}
- {\alpha}_{m_1-m_2} \dot{\alpha}_{m_2-m_1})
\]
for $m_1 \leq [\frac{e_{\pm}b{\rm L}}{2\pi}],
m_2>[\frac{e_{\pm}b{\rm L}}{2\pi}]$,
vanishing for $m_1,m_2 >[\frac{e_{\pm}b{\rm L}}{2\pi}]$ and
$m_1,m_2 \leq [\frac{e_{\pm}b{\rm L}}{2\pi}]$.
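As a simple illustration (a hypothetical loop, not one considered in the
text), take a circular path ${\alpha}_{m_2-m_1}(t)=\rho e^{-i\omega t}$,
${\alpha}_{m_1-m_2}(t)=\rho e^{i\omega t}$ of period ${\rm T}=2\pi/\omega$.
Then the integrand is
$i({\alpha}_{m_2-m_1}\dot{\alpha}_{m_1-m_2}
- {\alpha}_{m_1-m_2}\dot{\alpha}_{m_2-m_1}) = -2\omega{\rho}^2$, and
\[
{\gamma}_{m_1m_2,\pm}^{\rm Berry} =
\mp \frac{e_{\pm}^2{\rm L}^2}{8{\pi}^2} \frac{1}{(m_2-m_1)^2}
(-4\pi{\rho}^2) =
\pm \frac{e_{\pm}^2{\rm L}^2}{2\pi} \frac{{\rho}^2}{(m_2-m_1)^2},
\]
proportional to the area $\pi{\rho}^2$ enclosed by the loop.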
Therefore, a parallel transportation of the states $|m_1;m_2;A;\pm
\rangle$ with two particles or two holes around a closed loop in
$({\alpha}_p,{\alpha}_{-p})$-space $(p>0)$ yields back the same states,
while the states with one particle and one hole are multiplied by
the phases ${\gamma}_{m_1m_2,\pm}^{\rm Berry}$.
For the Schwinger model when ${\rm N}=1$ and $e_{+}=e_{-}$
as well as for axial electrodynamics when ${\rm N}=-1$ and
$e_{+}=-e_{-}$, the nonvanishing
Berry phases for the positive and negative chirality $2$-particle states
are opposite in sign,
\[
{\gamma}_{m_1m_2,+}^{\rm Berry} = - {\gamma}_{m_1m_2,-}^{\rm Berry},
\]
so that for the states $|m_1;m_2;A \rangle =
|m_1;m_2;A;+ \rangle \otimes |m_1;m_2;A;- \rangle$
the total Berry phase is zero.
iii) $1$-particle states.
For $1$-particle states $|m;A;\pm \rangle$, the ${\rm U}(1)$ curvature
tensors are
\[
{\cal F}_{m}^{\pm}(x,y,t) = i
\sum_{\stackrel{\overline{m} \in \cal Z}{\overline{m} \neq m}}
\frac{1}{{\hbar}^2}
\frac{1}{({\varepsilon}_{\overline{m},\pm} \cdot {\rm sign}
({\varepsilon}_{\overline{m},\pm}) - {\varepsilon}_{m,\pm} \cdot
{\rm sign}({\varepsilon}_{m,\pm}))^2}
\]
\[
\cdot \{ \langle m;A;\pm|
\frac{\delta : \hat{\rm H}_{\pm}:}{\delta A_1(y,t)}
|\overline{m};A;\pm \rangle
\langle \overline{m};A;\pm|
\frac{\delta :\hat{\rm H}_{\pm}:} {\delta A_1(x,t)}
|m;A;\pm \rangle - (x \longleftrightarrow y) \}. \\
\]
By a direct calculation we easily get
\begin{eqnarray*}
{\cal F}_{m>[\frac{e_{\pm}b{\rm L}}{2\pi}]}^{\pm} & = &
\sum_{\overline{m}=m-[\frac{e_{\pm}b{\rm L}}{2\pi}]}^{\infty}
{\cal F}_{0\overline{m}}^{\pm}, \\
{\cal F}_{m \leq [\frac{e_{\pm}b{\rm L}}{2\pi}]}^{\pm} & = &
\sum_{\overline{m}= [\frac{e_{\pm}b{\rm L}}{2\pi}] - m+1}^{\infty}
{\cal F}_{0\overline{m}}^{\pm},
\end{eqnarray*}
where ${\cal F}_{0\overline{m}}^{\pm}$ are curvature tensors for the
$2$-particle states $|0;\overline{m};A;\pm \rangle$ $(\overline{m}>0)$.
The Berry phases acquired by the states $|m;A;\pm \rangle$ by their
parallel transportation around a closed loop in $({\alpha}_p,
{\alpha}_{-p})$-space $(p>0)$ are
\begin{eqnarray*}
{\gamma}_{\pm}^{\rm Berry}(m>[\frac{e_{\pm}b{\rm L}}{2\pi}]) & = &
\sum_{\overline{m}=m - [\frac{e_{\pm}b{\rm L}}{2\pi}]}^{\infty}
{\gamma}_{0\overline{m};\pm}^{\rm Berry}, \\
{\gamma}_{\pm}^{\rm Berry}(m \leq [\frac{e_{\pm}b{\rm L}}{2\pi}]) & = &
\sum_{\overline{m}=[\frac{e_{\pm}b{\rm L}}{2\pi}] -m+1}^{\infty}
{\gamma}_{0\overline{m};\pm}^{\rm Berry},
\end{eqnarray*}
where ${\gamma}_{0\overline{m};\pm}^{\rm Berry}$ are phases
acquired by the states $|0;\overline{m};A;\pm \rangle$ by the same
transportation.
For the ${\rm N}=\pm 1$ models, the total $1$-particle curvature
tensor ${\cal F}_m ={\cal F}_m^{+} + {\cal F}_m^{-}$ and total Berry
phase ${\gamma}^{\rm Berry} ={\gamma}_{+}^{\rm Berry} +
{\gamma}_{-}^{\rm Berry}$ vanish.
iv) vacuum states.
For the vacuum case, only $2$-particle states contribute to the sum
of the completeness condition, so the vacuum curvature tensors are
\[
{\cal F}_{\rm vac}^{\pm}(x,y,t) = - \frac{1}{2}
\sum_{\overline{m}_1; \overline{m}_2 \in \cal Z}
{\cal F}_{\overline{m}_1 \overline{m}_2}^{\pm}(x,y,t).
\]
Taking the sums, we get
\begin{equation}
{\cal F}_{\rm vac}^{\pm} =
\pm \frac{e_{\pm}^2}{2{\pi}}
( \frac{1}{2} \epsilon(x-y)
- \frac{1}{\rm L} (x-y) ).
\label{eq: dvasem}
\end{equation}
The total vacuum curvature tensor
\[
{\cal F}_{\rm vac} = {\cal F}_{\rm vac}^{+} + {\cal F}_{\rm vac}^{-}=
(1-{\rm N}^2) \frac{e_{+}^2}{2\pi} (\frac{1}{2} \epsilon(x-y) -
\frac{1}{\rm L} (x-y))
\]
vanishes for ${\rm N}=\pm 1$.
The corresponding ${\rm U}(1)$ connection is deduced as
\[
{\cal A}_{\rm vac}(x,t) = - \frac{1}{2} \int_{-{\rm L}/2}^{{\rm L}/2}
dy {\cal F}_{\rm vac}(x,y,t) A_1(y,t),
\]
so the total vacuum Berry phase is
\[
{\gamma}_{\rm vac}^{\rm Berry} = - \frac{1}{2} \int_{0}^{T} dt
\int_{-{\rm L}/2}^{{\rm L}/2} dx \int_{-{\rm L}/2}^{{\rm L}/2} dy
\dot{A_1}(x,t) {\cal F}_{\rm vac}(x,y,t) A_1(y,t).
\]
For ${\rm N}=0$ and in the limit ${\rm L} \to \infty$,
when the second term in ~\ref{eq: dvasem} may be neglected,
the $U(1)$ curvature tensor
coincides with that obtained in \cite{niemi86,semen87},
while the Berry phase becomes
\[
{\gamma}_{\rm vac}^{\rm Berry} = \int_{0}^{T} dt
\int_{- \infty}^{\infty} dx {\cal L}_{\rm nonlocal}(x,t),
\]
where
\[
{\cal L}_{\rm nonlocal}(x,t) \equiv - \frac{e_{+}^2}{8 {\pi}^2}
\int_{- \infty}^{\infty}
dy \dot{A_1}(x,t) \epsilon(x-y) A_1(y,t)
\]
is a non-local part of the effective Lagrange density of the CSM
\cite{sarad93}. The effective Lagrange density is a sum of the
ordinary Lagrange density of the CSM and the nonlocal part
${\cal L}_{\rm nonlocal}$. As shown in \cite{sarad93}, the effective
Lagrange density is equivalent to the ordinary one in the sense that
the corresponding preliminary Hamiltonians coincide on the constrained
submanifold ${\rm G} \approx 0$. This equivalence is valid at the
quantum level, too. If we start from the effective Lagrange density
and apply appropriately the Dirac quantization procedure, then we
come to a quantum theory which is exactly the quantum theory
obtained from the ordinary Lagrange density. We get therefore
that the Berry phase is an action and that the CSM can be defined
equivalently by both the effective action with the Berry phase
included and the ordinary one without the Berry phase.
In terms of the gauge field Fourier components, the connection
${\cal A}_{\rm vac}$ is rewritten as
\[
\langle {\rm vac};A(t)|\frac{d}{db(t)}|{\rm vac};A(t)\rangle =0,
\]
\[
\langle {\rm vac};A(t)|\frac{d}{d{\alpha}_{\pm p}(t)}|{\rm vac};A(t)\rangle
\equiv {\cal A}_{{\rm vac};\pm}(p,t)= \pm (1-{\rm N}^2)
\frac{e_{+}^2{\rm L}^2}{8{\pi}^2} \frac{1}{p} {\alpha}_{\mp p},
\]
so the nonvanishing vacuum curvature is
\[
{\cal F}_{\rm vac}(p) \equiv \frac{d}{d{\alpha}_{-p}}
{\cal A}_{{\rm vac};+} - \frac{d}{d{\alpha}_p}
{\cal A}_{{\rm vac};-} =
(1-{\rm N}^2) \frac{e_{+}^2{\rm L}^2}{4{\pi}^2} \frac{1}{p}.
\]
The total vacuum Berry phase becomes
\[
{\gamma}_{\rm vac}^{\rm Berry} = \int_{0}^{\rm T} dt
\sum_{p>0} {\cal F}_{\rm vac}(p) {\alpha}_p \dot{\alpha}_{-p}.
\]
For the ${\rm N} \neq \pm 1$ models where the local gauge symmetry
is known to be realized projectively \cite{sarad91},
the vacuum Berry phase is
non-zero. For ${\rm N}=\pm 1$ when the representation is unitary,
the curvature ${\cal F}_{\rm vac}(p)$ and the vacuum Berry phase
vanish.
The projective representation of the local gauge symmetry is
responsible for anomaly. In the full quantized theory of the
CSM when the gauge fields are also quantized the physical states
respond to gauge transformations from the zero topological class
with a phase \cite{sarad91}. This phase contributes to the
commutator of the Gauss law generators by a Schwinger term and
produces therefore an anomaly.
A connection of the nonvanishing vacuum Berry phase to the
projective representation can be shown in a more direct way.
Under the topologically trivial gauge transformations,
the gauge field Fourier components
${\alpha}_p, {\alpha}_{-p}$ transform as follows
\begin{eqnarray*}
{\alpha}_p & \stackrel{\tau}{\rightarrow} & {\alpha}_p - ip{\tau}_{-}(p),\\
{\alpha}_{-p} & \stackrel{\tau}{\rightarrow} & {\alpha}_{-p} -ip{\tau}_{+}(p),\\
\end{eqnarray*}
where ${\tau}_{\pm}(p)$ are smooth gauge parameters.
The nonlocal Lagrangian
\[
{\rm L}_{\rm nonlocal}(t) \equiv \int_{-{\rm L}/2}^{{\rm L}/2} dx
{\cal L}_{\rm nonlocal}(x,t) =
\sum_{p>0} {\cal F}_{\rm vac}(p)
i{\alpha}_{p} \dot{\alpha}_{-p}
\]
changes as
\[
{\rm L}_{\rm nonlocal}(t) \stackrel{\tau}{\rightarrow}
{\rm L}_{\rm nonlocal}(t) - 2{\pi} \frac{d}{dt} {\alpha}_1(A;{\tau}),
\]
where
\[
{\alpha}_1(A;{\tau}) \equiv - \frac{1}{4\pi}
\sum_{p>0} p{\cal F}_{\rm vac}(p) ({\alpha}_{-p} {\tau}_{-}
- {\alpha}_{p} {\tau}_{+})
\]
is just the $1$--cocycle occurring in the projective
representation of the gauge group. This exemplifies a connection
between the nonvanishing vacuum Berry phase and the fact that the local
gauge symmetry is realized projectively.
\newpage
\section{Conclusions}
\label{sec: con}
Let us summarize.
i) We have calculated explicitly the Berry phase and the corresponding
${\rm U}(1)$ connection and curvature for the fermionic vacuum and many
particle Fock states. For the ${\rm N} \neq \pm 1$ models, we get that
the Berry phase is non-zero for the vacuum, $1$-particle and $2$-particle
states with one particle and one hole. For all other many particle states
the Berry phase vanishes. This is caused by the form of the second
quantized fermionic Hamiltonian which is quadratic in the positive
and negative chirality creation and annihilation operators.
ii) For the ${\rm N}= \pm 1$ models without anomaly, i.e. for the SM and
axial electrodynamics, the Berry phases acquired by the negative and
positive chirality parts of the Fock states are opposite in sign
and cancel each other, so that
the total Berry phase for all Fock states is zero.
iii) A connection between the Berry phase and anomaly becomes more
explicit for the vacuum state. We have shown that for our model
the vacuum Berry phase contributes to the effective action, being
the additional part of the effective action which distinguishes it from
the ordinary one. Under the topologically trivial gauge transformations
the corresponding addition in the effective Lagrangian changes by a
total
time derivative of the gauge group $1$-cocycle occurring in the projective
representation. This demonstrates an interrelation between the Berry
phase, anomaly and effective action.
\newpage
\section{Introduction}
The symmetric unitary matrix model is
solved exactly in the double scaling limit,
using orthogonal polynomials
on the circle.\cite{p}
The partition function has the form
$\int dU\exp\{-\frac{N}{\lambda}{\rm tr}\, V(U)\}$,
where $U$ is an $N\times N$ unitary matrix
and ${\rm tr}\,V(U)$ is some well-defined function of $U$.
When $V(U)$ is self-adjoint, we call the model symmetric.\cite{b}
The simplest case is given by $V(U)=U+U^{\dag}$.
This unitary model has been studied in
connection with the large-$N$ approximation
to QCD in two dimensions ({\it one-plaquette model}).\cite{gw}
For this model the ``string equation'' is the
Painlev\'{e} II equation, and the ``discrete string equation'' is the
discrete Painlev\'{e} II equation.\cite{g}
When $V(U)$ is anti-self-adjoint, we call the model the anti-symmetric
model.
The simplest case is given by $V(U)=U-U^{\dag}$.
This is the theta term in two-dimensional QCD.\cite{kts}
It has a topological meaning.
The full non-reduced unitary model was first discussed
in \cite{m2}.
The full unitary model can be embedded in the
two-dimensional Toda Lattice hierarchy.
In this letter we shall try to reformulate
the full unitary matrix model
from the viewpoints of
integrable equations and string equations.
These two viewpoints are closely
connected in the description of this model.
We unify them and clarify the relation
between them.
This letter is organized as follows.
In section 2
we present discrete string equations
for the full unitary matrix model.
Here we consider only the simplest case.
From the Virasoro constraints,
the times $t_{1}$ and $t_{-1}$ are related
by a symmetry resembling complex conjugation.
Because of this symmetry, we can use a radial coordinate.
In section 3,
coupling the Toda equation and the discrete string equations,
we obtain a special case of the Painlev\'{e} III equation.
In section 4
we consider the reduced models, the symmetric and
the anti-symmetric model.
From the symmetric and the anti-symmetric model we can obtain the modified
Volterra equation and the discrete nonlinear Schr\"{o}dinger
equation, respectively.
We study the relation between the symmetric and the anti-symmetric model.
In a special case, we can transform the symmetric model
into the anti-symmetric model.
Using this map, we can obtain B\"{a}cklund transformation
of the modified
Volterra equation and the discrete nonlinear Schr\"{o}dinger
equation.
The last section is devoted to concluding remarks.
\setcounter{equation}{0}
\section{Unitary Matrix model}
It is well known that the partition function $\tau_{n}$
of the unitary matrix model can be presented as
a product of norms of the biorthogonal polynomial system.
Namely, let us introduce a scalar product of the form
\begin{equation}
<A,B>=\oint\frac{d\mu(z)}{2\pi i z}
A(z)B(z^{-1}),
\end{equation}
where
\begin{equation}
d\mu(z)=dz \exp\{-\sum_{m>0}(t_{m}z^{m}+t_{-m}z^{-m})\}.
\end{equation}
Let us define the system of polynomials biorthogonal
with respect to this scalar product
\begin{equation}
<\Phi_{n},\Phi_{k}^{*}>=h_{k}\delta_{nk}.
\label{or}
\end{equation}
Then, the partition function $\tau_{n}$ of the unitary matrix model
is equal to the product of $h_{n}$'s:
\begin{equation}
\tau_{n}=\prod_{k=0}^{n-1}h_{k},\;\;\;\tau_{0}=1.
\end{equation}
The polynomials are normalized as follows
(we should stress that superscript `*' does not mean the
complex conjugation):
\begin{equation}
\Phi_{n}=z^{n}+\cdots+S_{n-1},\;\;\Phi_{n}^{*}
=z^{n}+\cdots+S_{n-1}^{*},\;\;
S_{-1}=S_{-1}^{*}\equiv 1.
\label{2.4}
\end{equation}
Now it is easy to show that these polynomials satisfy the following
recurrence relations:
\begin{eqnarray}
\Phi_{n+1}(z)&=&z\Phi_{n}(z)+S_{n}z^{n}\Phi_{n}^{*}(z^{-1}),
\nonumber \\
\Phi_{n+1}^{*}(z^{-1})&=&z^{-1}\Phi_{n}^{*}(z^{-1})
+S_{n}^{*}z^{-n}\Phi_{n}(z),
\end{eqnarray}
and
\begin{equation}
\frac{h_{n+1}}{h_{n}}=1-S_{n}S_{n}^{*}.
\end{equation}
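The recurrence data are easy to verify numerically. The following sketch (our addition, not part of the original argument) builds the monic biorthogonal polynomials directly from quadrature moments of the weight $\exp\{-(t_{1}z+t_{-1}z^{-1})\}$ and checks the norm relation $h_{n+1}/h_{n}=1-S_{n}S_{n}^{*}$; all parameter values are arbitrary test choices.

```python
import numpy as np

def circle_moments(t1, tm1, mmax, K=512):
    # mu_m = oint dmu(z)/(2 pi i z) z^m for the weight exp(-(t1 z + tm1/z)),
    # computed by the trapezoid rule on |z| = 1 (spectrally accurate here)
    th = 2 * np.pi * np.arange(K) / K
    w = np.exp(-(t1 * np.exp(1j * th) + tm1 * np.exp(-1j * th)))
    return {m: np.mean(np.exp(1j * m * th) * w) for m in range(-mmax, mmax + 1)}

def biorthogonal_data(t1, tm1, nmax):
    # Returns S_n = Phi_{n+1}(0), S*_n = Phi*_{n+1}(0) and the norms h_n,
    # with the monic polynomials obtained from <Phi_n, z^j> = 0 for j < n
    mu = circle_moments(t1, tm1, 2 * nmax + 2)
    S, Ss, h = [], [], [mu[0]]                       # h_0 = <1, 1> = mu_0
    for n in range(1, nmax + 2):
        A = np.array([[mu[k - j] for k in range(n)] for j in range(n)])
        c = np.linalg.solve(A, -np.array([mu[n - j] for j in range(n)]))
        cs = np.linalg.solve(A.T, -np.array([mu[j - n] for j in range(n)]))
        S.append(c[0])                               # S_{n-1} = Phi_n(0)
        Ss.append(cs[0])                             # S*_{n-1} = Phi*_n(0)
        h.append(sum(c[k] * mu[k - n] for k in range(n)) + mu[0])  # <Phi_n, z^n>
    return np.array(S), np.array(Ss), np.array(h)

t1, tm1, nmax = 0.7, 0.4, 5                          # arbitrary test values
S, Ss, h = biorthogonal_data(t1, tm1, nmax)
# norm relation h_{n+1}/h_n = 1 - S_n S*_n, with S[n] = S_n and h[n] = h_n
err = max(abs(h[n + 1] / h[n] - (1 - S[n] * Ss[n])) for n in range(nmax + 1))
```

With these values the residual `err` is at machine-precision level.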
Note that $h_{n}$, $S_{n}$, $S_{n}^{*}$, $\Phi_{n}(z)$
and $\Phi_{n}^{*}$ depend parametrically on
$t_{1},t_{2},\cdots,$ and $t_{-1},t_{-2},\cdots,$ but for convenience
of notation we suppress this dependence.
Hereafter we call $t_{1},t_{2},\cdots,$ and $t_{-1},t_{-2},\cdots,$
time variables.
Using (\ref{or}) and integration by parts, we can obtain the following relations:
\begin{eqnarray}
&-&\oint\frac{d\mu(z)}{2\pi i z} V'(z)\Phi_{n+1}(z)\Phi_{n}^{*}(z^{-1})
\nonumber\\
=& & -\oint\frac{d\mu(z)}{2\pi i z}\frac{\partial \Phi_{n+1}(z)}
{\partial z} \Phi_{n}^{*}(z^{-1})
-\oint\frac{d\mu(z)}{2\pi i z} \Phi_{n+1}(z)
\frac{\partial \Phi_{n}^{*}(z^{-1})}{\partial z}
+\oint\frac{d\mu(z)}{2\pi i z}\frac{\Phi_{n+1}(z)\Phi_{n}^{*}(z^{-1})}{z}
\nonumber \\
=& & (n+1)(h_{n+1}-h_{n}),\label{se1}
\end{eqnarray}
and
\begin{eqnarray}
& &\oint\frac{d\mu(z)}{2\pi i z} z^{2}V'(z)\Phi_{n+1}^{*}(z^{-1})\Phi_{n}(z)
\nonumber\\
&=& \oint\frac{d\mu(z)}{2\pi i z} z^{2}\frac{\partial \Phi_{n+1}^{*}
(z^{-1})}{\partial z}\Phi_{n}(z)
+\oint\frac{d\mu(z)}{2\pi i z} z^{2}\Phi_{n+1}^{*}(z^{-1})
\frac{\partial \Phi_{n}(z)}{\partial z}
+\oint\frac{d\mu(z)}{2\pi i z} z\Phi_{n+1}^{*}(z^{-1})\Phi_{n}(z)
\nonumber \\
&=& (n+1)(h_{n+1}-h_{n}).
\label{se2}
\end{eqnarray}
(\ref{se1}) and (\ref{se2}) are string equations for
the full unitary matrix model.
If $t_{1}$ and $t_{-1}$
are free variables while
$t_{2}=t_{3}=\cdots=0$ and $t_{-2}=t_{-3}=\cdots=0$,
(\ref{se1}) and (\ref{se2})
become
\begin{equation}
(n+1)S_{n}S_{n}^{*}=t_{-1}(S_{n}S_{n+1}^{*}+S_{n}^{*}S_{n-1})
(1-S_{n}S_{n}^{*}),
\label{edp1}
\end{equation}
\begin{equation}
(n+1)S_{n}S_{n}^{*}=t_{1}(S_{n}^{*}S_{n+1}+S_{n}S_{n-1}^{*})
(1-S_{n}S_{n}^{*}).
\label{edp2}
\end{equation}
Next we introduce a useful relation.
Using (\ref{or}) and integration by parts, we can show
\begin{eqnarray}
& &\oint\frac{d\mu(z)}{2\pi i z}zV'(z)\Phi_{n}(z)\Phi_{n}^{*}(z^{-1})
\nonumber \\
&=&
\oint\frac{d\mu(z)}{2\pi i z}z\frac{\partial \Phi_{n}(z)}
{\partial z}\Phi_{n}^{*}(z^{-1})
+\oint\frac{d\mu(z)}{2\pi i z}z\Phi_{n}(z)
\frac{\partial \Phi_{n}^{*}(z^{-1})}{\partial z}
\nonumber \\
&=& nh_{n}-nh_{n}=0.
\label{1}
\end{eqnarray}
This corresponds to the Virasoro constraint:\cite{b}
\begin{equation}
L_{0}^{-cl}
=\sum_{k=-\infty}^{\infty}
kt_{k}\frac{\partial}{\partial t_{k}}.
\end{equation}
This relation enforces a symmetry, resembling complex conjugation,
between $t_{k}$ and $t_{-k}$.
It will be important in the next section.
If we set that $t_{1}$ and $t_{-1}$
are free variables while
$t_{2}=t_{3}=\cdots=0$ and $t_{-2}=t_{-3}=\cdots=0$,
from (\ref{1}) we get
\begin{equation}
t_{1}S_{n}S_{n-1}^{*}=t_{-1}S_{n}^{*}S_{n-1}.
\label{2}
\end{equation}
Using (\ref{2}), (\ref{edp1}) and (\ref{edp2}) can be written as
\begin{equation}
(n+1)S_{n}=(t_{1}S_{n+1}+t_{-1}S_{n-1})
(1-S_{n}S_{n}^{*}),
\label{edp3}
\end{equation}
\begin{equation}
(n+1)S_{n}^{*}=(t_{-1}S_{n+1}^{*}+t_{1}S_{n-1}^{*})
(1-S_{n}S_{n}^{*}).
\label{edp4}
\end{equation}
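The discrete string equations (\ref{edp3}), (\ref{edp4}) and the constraint (\ref{2}) can be tested numerically from the same quadrature construction of the biorthogonal polynomials. The sketch below is our illustration, with arbitrary test values of $t_{1}$, $t_{-1}$.

```python
import numpy as np

def S_coeffs(t1, tm1, nmax, K=512):
    # constant terms S_n = Phi_{n+1}(0), S*_n = Phi*_{n+1}(0) of the monic
    # biorthogonal polynomials for the weight exp(-(t1 z + tm1/z));
    # the leading entries S[0], Ss[0] encode S_{-1} = S*_{-1} = 1
    th = 2 * np.pi * np.arange(K) / K
    w = np.exp(-(t1 * np.exp(1j * th) + tm1 * np.exp(-1j * th)))
    mu = lambda m: np.mean(np.exp(1j * m * th) * w)
    S, Ss = [1.0], [1.0]
    for n in range(1, nmax + 3):
        A = np.array([[mu(k - j) for k in range(n)] for j in range(n)])
        S.append(np.linalg.solve(A, -np.array([mu(n - j) for j in range(n)]))[0])
        Ss.append(np.linalg.solve(A.T, -np.array([mu(j - n) for j in range(n)]))[0])
    return S, Ss          # shifted index: S[n + 1] = S_n

t1, tm1, nmax = 0.6, 0.35, 5
S, Ss = S_coeffs(t1, tm1, nmax)

def residual(n):
    # residuals of the two discrete string equations and of the
    # Virasoro-type relation t1 S_n S*_{n-1} = t_{-1} S*_n S_{n-1}
    r1 = (n + 1) * S[n + 1] - (t1 * S[n + 2] + tm1 * S[n]) * (1 - S[n + 1] * Ss[n + 1])
    r2 = (n + 1) * Ss[n + 1] - (tm1 * Ss[n + 2] + t1 * Ss[n]) * (1 - S[n + 1] * Ss[n + 1])
    r3 = t1 * S[n + 1] * Ss[n] - tm1 * Ss[n + 1] * S[n]
    return max(abs(r1), abs(r2), abs(r3))

err = max(residual(n) for n in range(1, nmax + 1))
```

All residuals vanish to numerical precision, confirming the equations for this weight.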
\setcounter{equation}{0}
\section{Toda equation and String equations}
Using the orthogonality conditions, it is also possible to obtain the
equations which describe the time dependence of $\Phi_{n}(z)$
and $\Phi_{n}^{*}(z^{-1})$.
Namely, differentiating (\ref{or}) with respect to times
$t_{1}$ and $t_{-1}$ gives the following evolution equations:
\begin{equation}
\frac{\partial \Phi_{n}(z)}{\partial t_{1}}
=-\frac{S_{n}}{S_{n-1}}\frac{h_{n}}{h_{n-1}}
(\Phi_{n}(z)-z\Phi_{n-1}),
\end{equation}
\begin{equation}
\frac{\partial \Phi_{n}(z)}{\partial t_{-1}}
=\frac{h_{n}}{h_{n-1}}\Phi_{n-1}(z),
\end{equation}
\begin{equation}
\frac{\partial \Phi_{n}^{*}(z^{-1})}{\partial t_{1}}
=\frac{h_{n}}{h_{n-1}}\Phi_{n-1}^{*}(z^{-1}),
\end{equation}
\begin{equation}
\frac{\partial \Phi_{n}^{*}(z^{-1})}{\partial t_{-1}}
=-\frac{S_{n}^{*}}{S_{n-1}^{*}}\frac{h_{n}}{h_{n-1}}
(\Phi_{n}^{*}(z^{-1})-z^{-1}\Phi_{n-1}^{*}).
\end{equation}
The compatibility condition gives the following nonlinear evolution equations:
\begin{equation}
\frac{\partial S_{n}}{\partial t_{1}}=-S_{n+1}\frac{h_{n+1}}{h_{n}},
\;\;\;
\frac{\partial S_{n}}{\partial t_{-1}}=S_{n-1}\frac{h_{n+1}}{h_{n}},
\label{11}
\end{equation}
\begin{equation}
\frac{\partial S^{*}_{n}}{\partial t_{1}}=S_{n+1}^{*}\frac{h_{n+1}}{h_{n}},
\;\;\;
\frac{\partial S^{*}_{n}}{\partial t_{-1}}=-S_{n-1}^{*}\frac{h_{n+1}}{h_{n}},
\label{22}
\end{equation}
\begin{equation}
\frac{\partial h_{n}}{\partial t_{1}}=S_{n}S_{n-1}^{*}h_{n},
\;\;\;
\frac{\partial h_{n}}{\partial t_{-1}}=S_{n}^{*}S_{n-1}h_{n}.
\end{equation}
Here we define $a_{n}$, $b_{n}$ and $b_{n}^{*}$:
\begin{equation}
a_{n}\equiv 1-S_{n}S_{n}^{*}=\frac{h_{n+1}}{h_{n}},
\end{equation}
\begin{equation}
b_{n}\equiv S_{n}S_{n-1}^{*},
\end{equation}
\begin{equation}
b_{n}^{*}\equiv S_{n}^{*}S_{n-1}.
\end{equation}
Notice that from the definitions $a_{n}$, $b_{n}$ and $b_{n}^{*}$
satisfy the following identity:
\begin{equation}
b_{n}b_{n}^{*}=(1-a_{n})(1-a_{n-1}).
\label{*1}
\end{equation}
It can be shown using (\ref{2}) that
\begin{equation}
t_{1}b_{n}=t_{-1}b_{n}^{*}.
\label{*2}
\end{equation}
In terms of $a_{n}$, $b_{n}$ and $b_{n}^{*}$,
(\ref{11}) and (\ref{22}) become the two-dimensional Toda equations:
\begin{equation}
\frac{\partial a_{n}}{\partial t_{1}}
=a_{n}(b_{n+1}-b_{n}),\;\;\;
\frac{\partial b_{n}}{\partial t_{-1}}
=a_{n}-a_{n-1},
\label{t1}
\end{equation}
and
\begin{equation}
\frac{\partial a_{n}}{\partial t_{-1}}
=a_{n}(b_{n+1}^{*}-b_{n}^{*}),\;\;\;
\frac{\partial b_{n}^{*}}{\partial t_{1}}
=a_{n}-a_{n-1}.
\label{t2}
\end{equation}
Using $a_{n}$, $b_{n}$ and $b_{n}^{*}$,
we rewrite (\ref{edp1}) and (\ref{edp2}) as
\begin{equation}
\frac{n+1}{t_{-1}}\frac{1-a_{n}}{a_{n}}
=b_{n+1}^{*}+b_{n}^{*},
\label{s2}
\end{equation}
and
\begin{equation}
\frac{n+1}{t_{1}}\frac{1-a_{n}}{a_{n}}
=b_{n+1}+b_{n}.
\label{s1}
\end{equation}
From (\ref{t1}) and (\ref{s1})
we eliminate $b_{n+1}$,
\begin{equation}
2b_{n}=\frac{1}{a_{n}}[\frac{n+1}{t_{1}}(1-a_{n})-
\frac{\partial a_{n}}{\partial t_{1}}].
\label{ww1}
\end{equation}
In the same way, from (\ref{t2}) and (\ref{s2})
we eliminate $b_{n+1}^{*}$,
\begin{equation}
2b_{n}^{*}=\frac{1}{a_{n}}[\frac{n+1}{t_{-1}}(1-a_{n})-
\frac{\partial a_{n}}{\partial t_{-1}}].
\label{ww2}
\end{equation}
Using (\ref{*1}) and (\ref{*2}),
(\ref{t1}) and (\ref{t2}) can be written
\begin{equation}
\frac{\partial b_{n}}{\partial t_{-1}}
=
(a_{n}-1)+\frac{t_{1}}{t_{-1}}\frac{b_{n}^{2}}{1-a_{n}},
\label{w1}
\end{equation}
\begin{equation}
\frac{\partial b_{n}^{*}}{\partial t_{1}}
=
(a_{n}-1)+\frac{t_{-1}}{t_{1}}\frac{(b_{n}^{*})^{2}}{1-a_{n}}.
\label{w2}
\end{equation}
Using (\ref{ww1}) and (\ref{w1})
to eliminate $b_{n}$,
we obtain a second-order equation for $a_{n}$:
\begin{eqnarray}
\frac{\partial ^{2}a_{n}}{\partial t_{1}\partial t_{-1}}
&=&
\frac{n+1}{t_{-1}a_{n}}\frac{\partial a_{n}}{\partial t_{1}}
-\frac{n+1}{t_{1}a_{n}}\frac{\partial a_{n}}{\partial t_{-1}}
-2a_{n}(a_{n}-1)
+\frac{(n+1)^{2}}{2t_{1}t_{-1}}
\frac{a_{n}-1}{a_{n}}
\nonumber \\
& &
+
\frac{1}{a_{n}}\frac{\partial a_{n}}{\partial t_{1}}
\frac{\partial a_{n}}{\partial t_{-1}}
+
\frac{1}{2}\frac{t_{1}}{t_{-1}}
\frac{1}{(a_{n}-1)a_{n}}
(\frac{\partial a_{n}}{\partial t_{1}})^{2}.
\label{k1}
\end{eqnarray}
In the same way, we eliminate $b_{n}^{*}$
using (\ref{ww2}) and (\ref{w2})
and obtain another equation for $a_{n}$:
\begin{eqnarray}
\frac{\partial ^{2}a_{n}}{\partial t_{1}\partial t_{-1}}
&=&
\frac{n+1}{t_{1}a_{n}}\frac{\partial a_{n}}{\partial t_{-1}}
-\frac{n+1}{t_{-1}a_{n}}\frac{\partial a_{n}}{\partial t_{1}}
-2a_{n}(a_{n}-1)
+\frac{(n+1)^{2}}{2t_{1}t_{-1}}
\frac{a_{n}-1}{a_{n}}
\nonumber \\
& &
+
\frac{1}{a_{n}}\frac{\partial a_{n}}{\partial t_{1}}
\frac{\partial a_{n}}{\partial t_{-1}}
+
\frac{1}{2}\frac{t_{-1}}{t_{1}}
\frac{1}{(a_{n}-1)a_{n}}
(\frac{\partial a_{n}}{\partial t_{-1}})^{2}.
\label{k2}
\end{eqnarray}
The equality of (\ref{k1}) and (\ref{k2})
implies that
\begin{equation}
t_{1}\frac{\partial a_{n}}{\partial t_{1}}
=
t_{-1}\frac{\partial a_{n}}{\partial t_{-1}}\,.
\end{equation}
This constraint can also be shown directly from
(\ref{2}), (\ref{t1}) and (\ref{t2}).
So $a_{n}$ is a function of the radial coordinate
\begin{equation}
x=t_{1}t_{-1},
\end{equation}
only.
Then from (\ref{k1}) and (\ref{k2})
we can obtain
\begin{equation}
\frac{\partial ^{2}a_{n}}{\partial x^{2}}
=
\frac{1}{2}(\frac{1}{a_{n}-1}+\frac{1}{a_{n}})
(\frac{\partial a_{n}}{\partial x})^{2}
-\frac{1}{x}\frac{\partial a_{n}}{\partial x}
-\frac{2}{x}a_{n}(a_{n}-1)
+\frac{(n+1)^{2}}{2x^{2}}
\frac{a_{n}-1}{a_{n}}.
\label{p3}
\end{equation}
This is an expression of the Painlev\'{e} V
equation (PV) with
\begin{equation}
\alpha_{V}=0,\;\;
\beta_{V}=-\frac{(n+1)^{2}}{2},\;\;
\gamma_{V}= 2,\;\;
\delta_{V}=0.
\end{equation}
(\ref{p3}) is related to the standard form
through
\begin{equation}
a_{n} \longrightarrow c_{n}=\frac{a_{n}}{a_{n}-1}.
\end{equation}
In terms of $c_{n}$, (\ref{p3}) becomes the Painlev\'{e} III
equation (P III) with
(see \cite{o})
\begin{equation}
\alpha_{III}=4(n+1),\;\;
\beta_{III}=-4n,\;\;
\gamma_{III}=4,\;\;
\delta_{III}=-4.
\end{equation}
\setcounter{equation}{0}
\section{Symmetric and Anti-symmetric model}
In this section we consider reduced unitary matrix models.
The following reductions of the time variables $t_{k}$ lead to
the symmetric and the anti-symmetric model:
\begin{equation}
t_{k}=t_{-k}=t^{+}_{k}\;\;\;k=1,2,\cdots,\;\;\;({\rm symmetric\;\;model})
\end{equation}
and
\begin{equation}
t_{k}=-t_{-k}=t^{-}_{k}\;\;\;k=1,2,\cdots,
\;\;\;({\rm anti-symmetric \;\;model})
\end{equation}
If $t_{1}^{+}$ is a free variable while $t_{2}^{+}=t_{3}^{+}
=\cdots=0$,
from (\ref{2}) $S_{n}=S_{n}^{*}$.
From (\ref{1}) and (\ref{2}) the string equation becomes
\begin{equation}
(n+1)S_{n}=t_{1}^{+}(S_{n+1}+S_{n-1})(1-S_{n}^{2}).
\label{sse}
\end{equation}
This is called the discrete Painlev\'{e} II (dP II) equation.
Appropriate continuous limit of (\ref{sse})
yields the Painlev\'{e} II (P II) equation.
From (\ref{11}) and (\ref{22}) we can obtain the modified Volterra
equation:\cite{m2}
\begin{equation}
\frac{\partial S_{n}}{\partial t_{1}^{+}}
=
-(1-S_{n}^{2})(S_{n+1}-S_{n-1}).
\label{mve}
\end{equation}
Appropriate continuous limit of (\ref{mve})
yields the modified KdV equation.
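Both (\ref{sse}) and (\ref{mve}) are easy to test numerically. In the sketch below (our illustration, with an arbitrary value of $t_{1}^{+}$) the $S_{n}$ are computed from quadrature moments of the symmetric weight, and the $t_{1}^{+}$-derivative in (\ref{mve}) is approximated by a central difference.

```python
import numpy as np

def S_sym(t, nmax, K=512):
    # S_n = Phi_{n+1}(0) for the symmetric weight exp(-t(z + 1/z));
    # the moments mu_m are real and even in m, so Phi_n = Phi*_n and S_n is real
    th = 2 * np.pi * np.arange(K) / K
    w = np.exp(-2 * t * np.cos(th))
    mu = lambda m: np.mean(np.cos(m * th) * w)
    out = []
    for n in range(1, nmax + 2):
        A = np.array([[mu(k - j) for k in range(n)] for j in range(n)])
        out.append(np.linalg.solve(A, -np.array([mu(n - j) for j in range(n)]))[0])
    return np.array(out)                    # out[n] = S_n

t, eps, nmax = 0.8, 1e-4, 5
S = S_sym(t, nmax)
# discrete Painleve II: (n+1) S_n = t (S_{n+1} + S_{n-1}) (1 - S_n^2)
dp2 = max(abs((n + 1) * S[n] - t * (S[n + 1] + S[n - 1]) * (1 - S[n] ** 2))
          for n in range(1, nmax))
# modified Volterra: dS_n/dt = -(1 - S_n^2)(S_{n+1} - S_{n-1}), central difference
dS = (S_sym(t + eps, nmax) - S_sym(t - eps, nmax)) / (2 * eps)
mv = max(abs(dS[n] + (1 - S[n] ** 2) * (S[n + 1] - S[n - 1])) for n in range(1, nmax))
```

The dP II residual `dp2` vanishes to machine precision, and `mv` is limited only by the finite-difference step.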
(\ref{sse}) and (\ref{mve}) can be written in the form
\begin{equation}
2S_{n+1}=\frac{1}{1-S_{n}^{2}}
(\frac{n+1}{t_{1}^{+}}S_{n}-
\frac{\partial S_{n}}{\partial t_{1}^{+}}),
\label{a}
\end{equation}
and
\begin{equation}
2S_{n-1}=\frac{1}{1-S_{n}^{2}}
(\frac{n+1}{t_{1}^{+}}S_{n}+
\frac{\partial S_{n}}{\partial t_{1}^{+}}).
\label{b}
\end{equation}
Writing (\ref{b}) as
\begin{equation}
2S_{n}=\frac{1}{1-S_{n+1}^{2}}
(\frac{n+2}{t_{1}^{+}}S_{n+1}+
\frac{\partial S_{n+1}}{\partial t_{1}^{+}}),
\label{c}
\end{equation}
and using (\ref{a}) to eliminate $S_{n+1}$, we obtain
a second-order ODE for $S_{n}$:
\begin{equation}
\frac{\partial^{2} S_{n}}{\partial t_{1}^{+2}}
=
-\frac{S_{n}}{1-S_{n}^{2}}
(\frac{\partial S_{n}}{\partial t_{1}^{+}})^{2}
-\frac{1}{t_{1}^{+}}\frac{\partial S_{n}}{\partial t_{1}^{+}}
+\frac{(n+1)^{2}}{t_{1}^{+2}}
\frac{S_{n}}{1-S_{n}^{2}}
-4S_{n}(1-S_{n}^{2}).
\end{equation}
It is important to keep in mind
that the relevant function is $1-S_{n}^{2}=a_{n}$.
$a_{n}$ satisfies
\begin{equation}
\frac{\partial ^{2}a_{n}}{\partial t_{1}^{+2}}
=
\frac{1}{2}(\frac{1}{a_{n}-1}+\frac{1}{a_{n}})
(\frac{\partial a_{n}}{\partial t_{1}^{+}})^{2}
-\frac{1}{t_{1}^{+}}\frac{\partial a_{n}}{\partial t_{1}^{+}}
-8a_{n}(a_{n}-1)
+\frac{2(n+1)^{2}}{t_{1}^{+2}}
\frac{a_{n}-1}{a_{n}}.
\label{p32}
\end{equation}
If we set $x=(t_{1}^{+})^{2}$, (\ref{p32})
is the same as (\ref{p3}).
Thus (\ref{p32}) is a special case of the
Painlev\'{e} III equation.
In conclusion, coupling the modified Volterra and
the dP II, we can obtain the P III.
The double limit
\begin{equation}
n\rightarrow\infty,\;\;\;t_{1}^{+}\rightarrow\infty,\;\;\;
\frac{t^{+2}_{1}}{n}=O(1),
\end{equation}
maps
P III (\ref{p32}) to P II.
Clearly, this kind of limit can be
discussed independently of the connection with
the modified Volterra and the modified KdV equation.
Next we consider the anti-symmetric model.
If $t_{1}^{-}$ is a free variable while $t_{2}^{-}=t_{3}^{-}
=\cdots=0$,
from (\ref{2})
\begin{equation}
S_{n}S_{n-1}^{*}=S_{n}^{*}S_{n-1}.
\label{2.5}
\end{equation}
From (\ref{1}) and (\ref{2}) the string equations
become
\begin{eqnarray}
(n+1)S_{n}&=&t_{1}^{-}(-S_{n+1}+S_{n-1})(1-S_{n}S_{n}^{*}),
\nonumber \\
(n+1)S_{n}^{*}&=&t_{1}^{-}(S_{n+1}^{*}-S_{n-1}^{*})(1-S_{n}S_{n}^{*}).
\label{as1}
\end{eqnarray}
On the other hand, from (\ref{11}) and (\ref{22})
we obtain the discrete nonlinear Schr\"{o}dinger (NLS) equation:\cite{m3}
\begin{eqnarray}
\frac{\partial S_{n}}{\partial t_{1}^{-}}
&=&
-(1-S_{n}S_{n}^{*})(S_{n+1}+S_{n-1}),
\nonumber \\
\frac{\partial S_{n}^{*}}{\partial t_{1}^{-}}
&=&
(1-S_{n}S_{n}^{*})(S_{n+1}^{*}+S_{n-1}^{*}).
\label{as2}
\end{eqnarray}
Using the same method as in the symmetric case,
i.e.\ coupling the discrete NLS equation and the string equations,
we can obtain the P III equation.
Through the transformation
\begin{equation}
z\rightarrow iz,\;\;\;it_{1}^{-}\rightarrow t_{1}^{+},
\label{tf}
\end{equation}
the anti-symmetric model is transformed into the symmetric model.
Then we get the B\"{a}cklund transformation
from the discrete NLS to the modified Volterra equation:
\begin{equation}
S_{n}\longrightarrow (i)^{n+1}S_{n}.
\end{equation}
However, if we restrict $t_{1}^{-}$ to real values,
we cannot transform the anti-symmetric model
into the symmetric model.
We change variables $a_{n}\rightarrow u_{n}=\ln a_{n}$.
Then (\ref{t1}) and (\ref{t2}) become
\begin{equation}
\frac{\partial^{2} u_{n}}{\partial t_{1}\partial t_{-1}}
=
e^{u_{n+1}}-2e^{u_{n}}+e^{u_{n-1}}.
\label{te}
\end{equation}
In the anti-symmetric model
from (\ref{2.4}) and (\ref{2.5})
we can get
\begin{eqnarray}
S_{n}&=&S_{n}^{*},\;\;\;\;a_{n}=1-S_{n}^{2},\;\;\;\;\;\;\;(n={\rm odd}),
\nonumber \\
S_{n}&=&-S_{n}^{*},\;\;\;\;a_{n}=1+S_{n}^{2},\;\;\;\;\;\;\;(n={\rm even}).
\label{sid}
\end{eqnarray}
In the case that $t_{1}^{-}$ is real,
we can see the oscillation of $a_{n}$.
This phenomenon can be seen only in the anti-symmetric model.
Here we consider the continuum limit near the anti-symmetric model.
We are interested in
$S_{n}^{2}=\epsilon g_{n}$, $\epsilon\rightarrow 0$
and $n\rightarrow \infty$.
We assume
$t_{1}=-t_{-1}+2\epsilon/n$ and define $g_{n+1}-g_{n}=\epsilon g_{n}'$.
Then the continuum limit yields
\begin{equation}
u\equiv u_{n}=-u_{n+1}+\epsilon
(\pm g'_{n}+g^{2}_{n})+O(\epsilon^{2}),
\end{equation}
where
$\pm$ corresponds to $n=$odd and $n=$even respectively.
So in the continuum limit
(\ref{te}) becomes the well-known 1D sinh-Gordon equation
\begin{equation}
\frac{\partial^{2} u}{\partial t_{1}\partial t_{-1}}
=
-2\sinh 2u,
\end{equation}
where $t_{1}=-t_{-1}$.
Here we introduce the radial coordinate
\begin{equation}
r=\sqrt{-t_{1}t_{-1}}.
\end{equation}
$u$ obeys an ODE of the form
\begin{equation}
\frac{{\rm d}^{2} u}{{\rm d} r^{2}}
+\frac{1}{r}\frac{{\rm d} u}{{\rm d}r}
=
2\sinh 2u.
\end{equation}
This is the P III with
\begin{equation}
\alpha_{III}=0,\;\;
\beta_{III}=0,\;\;
\gamma_{III}=1,\;\;
\delta_{III}=-1.
\end{equation}
This equation is also obtained from the two-state Toda field equation.\cite{n}
Because of the oscillation of $a_{n}$, in the continuum limit
$u_{n}$ appears to have two states.
Finally, we consider the relation between
the symmetric and anti-symmetric model
from the determinant form.
The partition function of the symmetric model is
\begin{equation}
\tau_{N}^{+}={\rm det}_{ij}I_{i-j}(t_{1}^{+}),
\end{equation}
where $I_{m}$ is the modified Bessel function of
order $m$.\cite{ed}
In the same way, we can calculate the partition function of
the anti-symmetric model:
\begin{equation}
\tau_{N}^{-}={\rm det}_{ij}J_{i-j}(t_{1}^{-}),
\end{equation}
where $J_{m}$ is the Bessel function of
order $m$.
(\ref{tf}) is also the transformation between
the Bessel and the modified Bessel function.
(\ref{sid}) comes from the oscillation of the Bessel function.
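The Bessel-determinant formula for the symmetric partition function can be checked against the Heine-type Toeplitz determinant of quadrature moments. In the sketch below (our illustration) the weight is normalized as $\exp\{-t_{1}^{+}(z+z^{-1})\}$, so the Bessel argument comes out as $2t_{1}^{+}$; conventions differing by such rescalings are common. The series implementation of $I_{m}$ and the parameter values are our own choices.

```python
import numpy as np
from math import factorial

def bessel_I(m, x, terms=60):
    # modified Bessel function I_m(x) via its power series (ample accuracy here)
    m = abs(m)
    return sum((x / 2) ** (2 * k + m) / (factorial(k) * factorial(k + m))
               for k in range(terms))

def tau_moments(N, t, K=512):
    # tau_N = det(mu_{i-j}) with mu_m = moments of exp(-t(z + 1/z)) on |z| = 1
    th = 2 * np.pi * np.arange(K) / K
    w = np.exp(-2 * t * np.cos(th))
    mu = lambda m: np.mean(np.cos(m * th) * w)
    return np.linalg.det(np.array([[mu(i - j) for j in range(N)] for i in range(N)]))

t, N = 0.9, 4
lhs = tau_moments(N, t)
# mu_m = (-1)^m I_m(2t); the sign factors cancel in the determinant, leaving
# tau_N = det I_{i-j}(2t)
rhs = np.linalg.det(np.array([[bessel_I(i - j, 2 * t) for j in range(N)]
                              for i in range(N)]))
rel_err = abs(lhs - rhs) / abs(rhs)
```

The two determinants agree to full precision, consistent with the stated formula.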
\setcounter{equation}{0}
\section{Concluding remarks}
We have tried to reformulate
the full unitary matrix model
from the viewpoints of
integrable equations and string equations.
Coupling the Toda equation and the string equations,
we obtain the P III equation.
Because of the Virasoro constraint,
$t_{1}$ and $t_{-1}$ possess a symmetry
resembling complex conjugation,
which allows us to use the radial coordinate.
This P III also describes the phase transition
between the weak and strong coupling regions.
Next we consider the relation among the symmetric, anti-symmetric
model and the P III equation.
If $t_{1}^{-}$ is a purely imaginary number, the anti-symmetric model can be transformed into the symmetric model.
Using this map we construct the B\"{a}cklund
transformation from the discrete nonlinear Schr\"{o}dinger equation
to the modified Volterra equation .
This map is also the transformation between the
Bessel and the modified Bessel function.
If we restrict $t_{1}^{-}$ to real values,
the symmetric and the anti-symmetric models are genuinely different.
\section{Introduction}\label{Int}
\neu{Int-1}
Let $\sigma$ be a rational, polyhedral cone. It induces a (normal) affine toric
variety $Y_\sigma$ which may have singularities. We would like to investigate
its deformation theory.
The vector space $T^1_Y$ of infinitesimal deformations is multigraded, and
its homogeneous pieces can be determined by combinatorial formulas developed
in \cite{T1}.\\
If $Y_\sigma$ only has an isolated Gorenstein singularity, then we can say
even more (cf.\ \cite{Tohoku}, \cite{T2}):
$T^1$ is concentrated in one single multidegree,
the corresponding homogeneous piece
allows an elementary geometric description in terms of Minkowski
summands of a certain lattice polytope, and it is even possible (cf.\ \cite{versal})
to obtain the entire versal deformation of $Y_\sigma$.\\
\par
\neu{Int-2}
The first aim of the present paper is to provide a geometric interpretation
of the $T^1$-formula for arbitrary toric singularities in every multidegree.
This can be done again in terms of Minkowski summands of certain polyhedra.
However, they neither need to be compact, nor do their vertices have to be
contained in the lattice anymore (cf.\ \zitat{T1}{7}).\\
In \cite{Tohoku} we have studied so-called toric deformations,
which exist only in negative (i.e.\ $\in -\sigma^{\scriptscriptstyle\vee}$) multidegrees. They are
genuine deformations with smooth parameter space, and they are characterized
by the fact that their total space is still toric.
Now, having a new description of $T^1_Y$, we will describe in Theorem \zitat{Gd}{3} the
Kodaira-Spencer map in these terms.\\
Moreover, using partial modifications of our singularity $Y_\sigma$, we
extend in \zitat{Gd}{5}
the construction of genuine deformations to non-negative degrees.
Despite the fact that the total spaces are no longer toric, we can still
describe them and their Kodaira-Spencer map combinatorially.\\
\par
\neu{Int-3}
Afterwards, we focus on three-dimensional, toric Gorenstein singularities. As
already mentioned, everything is known in the isolated case. However, as soon
as $Y_\sigma$ contains one-dimensional singularities (which then have to be of
transversal type A$_k$), the situation changes dramatically.
In general,
$T^1_Y$ is spread into infinitely many multidegrees.
Using our geometric description of the $T^1$-pieces, we detect
in \zitat{3G}{3} all
non-trivial ones and determine their dimension (which will be one in most
cases). The easiest example of that
kind is the cone over the weighted projective plane $I\!\!P(1,2,3)$
(cf.\ \zitat{3G}{4}).\\
At least at the moment, it seems to be impossible to describe the entire versal
deformation; it is an infinite-dimensional space. However, the
infinitesimal deformations corresponding to the one-dimensional homogeneous
pieces of $T^1_Y$ are unobstructed, and we lift them in \zitat{3G}{5} to genuine
one-parameter families. Since the corresponding multidegrees are in general
non-negative, this can be done using the construction introduced in
\zitat{Gd}{5}. See section \zitat{3G}{8} for a corresponding sequel to the
example $I\!\!P(1,2,3)$.\\
Those one-parameter families form a kind of skeleton of the entire versal deformation. The
most important open questions are the following: Which of them belong
to a common irreducible component of the base space? And, how could those
families be combined to find a general fiber (a smoothing of $Y_\sigma$)
of this component? The answers to these questions would provide important
information about three-dimensional flips.\\
\par
\section{Visualizing $T^1$}\label{T1}
\neu{T1-1} {\em Notation:}
As usual when dealing with toric varieties, denote by $N$, $M$ two mutually
dual lattices (i.e.\ finitely generated, free abelian groups), by
$\langle\,,\,\rangle:N\times M\to Z\!\!\!Z$ their perfect pairing, and by
$N_{I\!\!R}$, $M_{I\!\!R}$ the corresponding $I\!\!R$-vector spaces obtained by extension of
scalars.\\
Let $\sigma\subseteq N_{I\!\!R}$ be the polyhedral cone with apex in $0$ given
by the fundamental generators $a^1,\dots,a^M\in N$.
They are assumed to be primitive, i.e.\ they are not proper multiples
of other elements from $N$. We will write $\sigma=\langle a^1,\dots,a^M\rangle$.\\
The dual cone
$\sigma^{\scriptscriptstyle\vee}:=\{r\in M_{I\!\!R}\,|\; \langle\sigma,r\rangle\geq 0\}$
is given by the inequalities assigned to $a^1,\dots,a^M$.
Intersecting $\sigma^{\scriptscriptstyle\vee}$ with the lattice $M$ yields a
finitely generated semigroup.
Denote by $E\subseteq\sigma^{\scriptscriptstyle\vee}\cap M$ its minimal generating set,
the so-called Hilbert basis. Then, the affine toric variety
$Y_\sigma:=\mbox{Spec}\,I\!\!\!\!C[\sigma^{\scriptscriptstyle\vee}\cap M]\subseteq\,I\!\!\!\!C^E$ is given by
equations assigned to the linear dependencies among elements of $E$. See
\cite{Oda} for a detailed introduction into the subject of toric varieties.\\
\par
\neu{T1-2}
Most of the relevant rings and modules for $Y_\sigma$ are $M$-(multi)graded.
So are the modules $T^i_Y$, which are important for describing infinitesimal
deformations and their obstructions. Let $R\in M$, then in \cite{T1} and \cite{T2} we
have defined the sets
\[
E_j^R:=\{ r\in E\,|\; \langle a^j,r\rangle<\langle a^j,R\rangle\}
\quad
(j=1,\dots,M)\,.
\]
They provide the main tool for building a complex
$\mbox{span}(E^R)_{\bullet}$ of free Abelian groups with the usual differentials
via
\[
\mbox{span}(E^R)_{-k} := \!\!\bigoplus_{\begin{array}{c}
\tau<\sigma \mbox{ face}\\ \mbox{dim}\, \tau=k \end{array}}
\!\!\!\!\!\mbox{span}(E^R_{\tau})\quad
\mbox{with} \quad
\renewcommand{\arraystretch}{1.5}
\begin{array}[t]{rcl}
E_0^R &:=& \bigcup_{j=1}^M E_j^R\; ,\mbox{ and}\\
E^R_{\tau} &:=& \bigcap_{a^j \in \tau} E_j^R \; \mbox{ for faces }
\tau < \sigma\,.
\end{array}
\]
{\bf Theorem:} (cf.\ \cite{T1}, \cite{T2})
{\em
For $i=1$ and, if $Y_\sigma$ is additionally smooth in codimension two, also for $i=2$,
the homogeneous pieces of $T^i_Y$ in degree $-R$ are
\[
T^i_Y(-R)=H^i\Big(\mbox{\em span}(E^R)_\bullet^\ast\otimes_{Z\!\!\!Z}\,I\!\!\!\!C\Big)\,.
\vspace{-1ex}
\]
}
\par
In particular, to obtain $T^1_Y(-R)$, we need to determine the vector spaces
$\mbox{span}_{\,I\!\!\!\!C}E_j^R$ and $\mbox{span}_{\,I\!\!\!\!C}E_{jk}^R$, where
$a^j$, $a^k$ span a two-dimensional face of $\sigma$. The first one
is easy to get:
\[
\mbox{span}_{\,I\!\!\!\!C}E_j^R =
\left\{
\begin{array}{ll}
0 & \mbox{if } \langle a^j ,R\rangle \leq 0\\
(a^j )^\bot & \mbox{if } \langle a^j ,R\rangle =1\\
M_{\,I\!\!\!\!C} & \mbox{if } \langle a^j ,R\rangle \geq 2\, .
\end{array}
\right.
\]
The latter is always contained in
$(\mbox{span}_{\,I\!\!\!\!C}E_j^R)\cap(\mbox{span}_{\,I\!\!\!\!C}E_k^R)$ with codimension
between $0$ and $2$. As we will see in the upcoming example, its actual size
reflects the infinitesimal deformations
of the two-dimensional cyclic quotient singularity assigned to
the plane cone spanned by $a^j$, $a^k$.
(These singularities are exactly the transversal types of the two-codimensional
ones of $Y_\sigma$.)\\
\par
\neu{T1-3}
{\bf Example:}
If $Y(n,q)$ denotes the two-dimensional
quotient of $\,I\!\!\!\!C^2$ by the $^{\displaystyle Z\!\!\!Z}\!\!/_{\!\!\displaystyle nZ\!\!\!Z}$-action
via
$\left(\!\begin{array}{cc}\xi& 0\\ 0& \xi^q\end{array}\!\right)$
($\xi$ is a primitive $n$-th root of unity),
then $Y(n,q)$ is a toric variety and may be given by the cone $\sigma=\langle(1,0);
(-q,n)\rangle\subseteq I\!\!R^2$.
The set
$E\subseteq \sigma^{\scriptscriptstyle\vee}\cap Z\!\!\!Z^2$ consists of the lattice points
$r^0,\ldots,r^w$ along the compact faces of the boundary of
$\mbox{conv}\big((\sigma^{\scriptscriptstyle\vee}\setminus\{0\})\capZ\!\!\!Z^2\big)$. There are
integers $a_v\geq 2$ such that
$r^{v-1}+r^{v+1}=a_v\,r^v$ for $v=1,\dots,w-1$. They
may be obtained by expanding
$n/(n-q)$ into a negative continued fraction
(cf.\ \cite{Oda}, \S (1.6)).\\
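For concreteness, the Hilbert basis $E$ and the coefficients $a_v$ can be computed mechanically. The brute-force search below (our illustration, practical only for small $n$) recovers $E$ for $Y(7,3)$ and checks the relation $r^{v-1}+r^{v+1}=a_v\,r^v$ against the negative continued fraction expansion of $n/(n-q)$.

```python
def hj_expansion(a, b):
    # negative (Hirzebruch-Jung) continued fraction: a/b = c1 - 1/(c2 - 1/(...))
    out = []
    while b > 0:
        c = -(-a // b)                       # ceiling of a/b
        out.append(c)
        a, b = b, c * b - a
    return out

def hilbert_basis(n, q):
    # minimal generators of sigma^v cap Z^2 for sigma = <(1,0), (-q,n)>,
    # i.e. of {(x, y) : x >= 0, n y - q x >= 0}; brute force in a box
    pts = [(x, y) for x in range(n + 1) for y in range(n + 1)
           if (x, y) != (0, 0) and n * y - q * x >= 0]
    ptset = set(pts)
    gens = [p for p in pts
            if not any((p[0] - s[0], p[1] - s[1]) in ptset for s in pts if s != p)]
    return sorted(gens, key=lambda p: p[0] / p[1])   # order along the boundary

n, q = 7, 3
E = hilbert_basis(n, q)                      # r^0, ..., r^w
a = []
for v in range(1, len(E) - 1):
    sx = E[v - 1][0] + E[v + 1][0]
    sy = E[v - 1][1] + E[v + 1][1]
    av = sy // E[v][1]                       # r^{v-1} + r^{v+1} = a_v r^v
    assert (sx, sy) == (av * E[v][0], av * E[v][1]) and av >= 2
    a.append(av)
hj = hj_expansion(n, n - q)                  # should reproduce the a_v
```

For $Y(7,3)$ this yields $E=\{(0,1),(1,1),(2,1),(7,3)\}$ and $(a_1,a_2)=(2,4)$, matching $7/4=[2,4]$.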
Assume $w\geq 2$, let $a^1=(1,0)$ and $a^2=(-q,n)$.
Then, there are only two sets $E_1^R$ and $E_2^R$ involved, and the previous theorem
states
\[
T^1_Y(-R) = \left( \left. ^{\displaystyle (\mbox{span}_{\,I\!\!\!\!C}E^R_1)\cap (\mbox{span}_{\,I\!\!\!\!C}E^R_2)}\!
\right/ \! {\displaystyle \mbox{span}_{\,I\!\!\!\!C}(E_1^R\cap E_2^R)}\right)^\ast\,.
\]
Only three different types of $R\inZ\!\!\!Z^2$ provide a non-trivial contribution
to $T^1_Y$:
\begin{itemize}
\item[(i)] $R=r^1$ (or analogously $R=r^{w-1}):\;$
$\mbox{span}_{\,I\!\!\!\!C}E^R_1 =(a^1)^\bot$,
$\mbox{span}_{\,I\!\!\!\!C}E^R_2 = \,I\!\!\!\!C^2 \;(\mbox{or }(a^2)^\bot,\,\mbox{if } w=2)$,
and $\mbox{span}_{\,I\!\!\!\!C}E^R_{12}=0$.
Hence, $\mbox{dim}\, T^1(-R)=1$ (or $=0$, if $w=2$).
\item[(ii)] $R=r^v$ $(2\le v\le w-2)$:\quad
$\mbox{span}_{\,I\!\!\!\!C}E^R_1 = \mbox{span}_{\,I\!\!\!\!C}E^R_2 = \,I\!\!\!\!C^2\,$, and
$\mbox{span}_{\,I\!\!\!\!C}E^R_{12}=0$.
Hence, we obtain $\mbox{dim}\, T^1(-R)=2$.
\item[(iii)] $R=p\cdot r^v$ ($1\le v\le w-1$, $\;2\le p<a_v$ for $w\ge 3$;
or $v=1=w-1$, $\;2\le p\le a_1$ for $w=2$):\quad
$\mbox{span}_{\,I\!\!\!\!C}E^R_1 = \mbox{span}_{\,I\!\!\!\!C}E^R_2 = \,I\!\!\!\!C^2\,$, and
$\mbox{span}_{\,I\!\!\!\!C}E^R_{12}=\,I\!\!\!\!C\cdot R\,$.
In particular, $\mbox{dim}\, T^1(-R)=1$.
\vspace{1ex}
\end{itemize}
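The three cases above can be checked by a direct rank computation: for fixed $R$ one evaluates $\dim(\mbox{span}\,E^R_1\cap\mbox{span}\,E^R_2)-\dim\mbox{span}(E^R_1\cap E^R_2)$. The sketch below (our illustration, not part of the original text) hard-codes the data of $Y(7,3)$, where $w=3$ and $(a_1,a_2)=(2,4)$.

```python
import numpy as np

# data for Y(7, 3): sigma = <(1,0), (-3,7)>, Hilbert basis E = {r^0, ..., r^3}
E = [(0, 1), (1, 1), (2, 1), (7, 3)]
a1, a2 = (1, 0), (-3, 7)

def rank(vectors):
    return np.linalg.matrix_rank(np.array(vectors)) if vectors else 0

def dim_T1(R):
    # dim T^1(-R) = dim(span E_1^R cap span E_2^R) - dim span(E_1^R cap E_2^R),
    # with E_j^R = {r in E : <a^j, r> < <a^j, R>}
    pair = lambda u, v: u[0] * v[0] + u[1] * v[1]
    E1 = [r for r in E if pair(a1, r) < pair(a1, R)]
    E2 = [r for r in E if pair(a2, r) < pair(a2, R)]
    both = [r for r in E1 if r in E2]
    dim_cap = rank(E1) + rank(E2) - rank(E1 + E2)   # dim(U cap V)
    return dim_cap - rank(both)

# case (i):  R = r^1 = (1,1) and R = r^{w-1} = (2,1) should give dimension 1;
# case (iii): R = p r^2 with p = 2, 3 (since a_2 = 4) should give dimension 1;
# all other degrees tested below should give 0
results = {R: dim_T1(R) for R in [(1, 1), (2, 1), (4, 2), (6, 3), (3, 2), (0, 1)]}
```

The computed dimensions agree with cases (i) and (iii) of the example.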
\neu{T1-4}
{\bf Definition:}
{\em For two polyhedra $Q', Q''\subseteq I\!\!R^n$ we define their Minkowski sum
as the polyhedron
$Q'+Q'':= \{p'+p''\,|\; p'\in Q', p''\in Q''\}$. Obviously, this notion also makes
sense for translation classes of polyhedra in arbitrary affine spaces.}\\
\par
\begin{center}
\unitlength=0.40mm
\linethickness{0.4pt}
\begin{picture}(330.00,40.00)
\put(0.00,0.00){\line(1,0){20.00}}
\put(20.00,0.00){\line(1,1){20.00}}
\put(40.00,20.00){\line(0,1){20.00}}
\put(40.00,40.00){\line(-1,0){20.00}}
\put(20.00,40.00){\line(-1,-1){20.00}}
\put(0.00,20.00){\line(0,-1){20.00}}
\put(80.00,10.00){\line(1,0){20.00}}
\put(100.00,10.00){\line(0,1){20.00}}
\put(100.00,30.00){\line(-1,-1){20.00}}
\put(140.00,30.00){\line(0,-1){20.00}}
\put(140.00,10.00){\line(1,1){20.00}}
\put(160.00,30.00){\line(-1,0){20.00}}
\put(200.00,10.00){\line(1,0){20.00}}
\put(260.00,10.00){\line(0,1){20.00}}
\put(310.00,10.00){\line(1,1){20.00}}
\put(60.00,20.00){\makebox(0,0)[cc]{$=$}}
\put(120.00,20.00){\makebox(0,0)[cc]{$+$}}
\put(240.00,20.00){\makebox(0,0)[cc]{$+$}}
\put(180.00,20.00){\makebox(0,0)[cc]{$=$}}
\put(290.00,20.00){\makebox(0,0)[cc]{$+$}}
\end{picture}
\end{center}
Every polyhedron $Q$ is decomposable into the Minkowski sum
$Q=Q^{\mbox{\footnotesize c}}+Q^{\infty}$ of a (compact) polytope $Q^{\mbox{\footnotesize c}}$
and the so-called cone of unbounded directions $Q^{\infty}$.
The latter one is uniquely determined by $Q$,
whereas the compact summand is not. However,
we can take for $Q^{\mbox{\footnotesize c}}$ the minimal one - given as the convex hull
of the vertices of $Q$ itself.
If $Q$ is already compact, then $Q^{\mbox{\footnotesize c}}=Q$ and $Q^{\infty}=0$.
\vspace{1ex}\\
{\em
A polyhedron $Q'$ is called a Minkowski summand of $Q$ if there is a $Q''$ such
that $Q=Q'+Q''$ and if, additionally, $(Q')^{\infty}= Q^{\infty}$.}\\
In particular, Minkowski summands always have the same cone of unbounded directions and,
up to dilatation (the factor $0$ is allowed), the same compact edges as the
original polyhedron.\\
\par
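{\bf Example:}
As a simple illustration of this notion, consider the unit square $Q=[0,1]^2$.
Since $Q$ is compact, we have $Q^{\infty}=0$, and every Minkowski summand is a
polytope whose edges are dilatations of edges of $Q$. Hence, up to translation,
the Minkowski summands of $Q$ are exactly the rectangles
\[
Q'=[0,\lambda_1]\times [0,\lambda_2] \quad \mbox{with}\quad
0\leq \lambda_1,\lambda_2\leq 1\,,
\]
with complementary summand $Q''=[0,1-\lambda_1]\times[0,1-\lambda_2]$;
degenerate summands such as segments ($\lambda_i=0$) or a single point are
allowed.\\
\par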
\neu{T1-5}
The {\em setup for the upcoming sections} is the following:
Consider the cone $\sigma\subseteq N_{I\!\!R}$ and fix some element $R\in M$.
Then $\A{R}{}:= [R=1]:= \{a\in N_{I\!\!R}\,|\; \langle a,R\rangle =1\}\subseteq N_{I\!\!R}$ is
an affine space; if $R$ is primitive, then it comes with a lattice
$\G{R}{}:= [R=1]\cap N$. The assigned vector space is $\A{R}{0}:=[R=0]$; it is always
equipped with the lattice $\G{R}{0}:= [R=0]\cap N$.
We define the cross cut of $\sigma$ in degree $R$ as the polyhedron
\[
Q(R):= \sigma\cap [R=1]\subseteq \A{R}{}\,.
\]
It has the
cone of unbounded directions $Q(R)^{\infty}=\sigma\cap \A{R}{0}\subseteq N_{I\!\!R}$.
The compact part $Q(R)^{\mbox{\footnotesize c}}$ is given by its vertices
$\bar{a}^j:=a^j/\langle a^j,R\rangle$, with $j$
meeting $\langle a^j,R\rangle\geq 1$.
A trivial but nevertheless important observation is the following:
The vertex $\bar{a}^j $ is a lattice point (i.e.\ $\bar{a}^j \in \G{R}{}$),
if and only if $\langle a^j , R \rangle =1$.\\
Fundamental generators of $\sigma$ contained in
$R^\bot$ can still be ``seen'' as edges in $Q(R)^{\infty}$, but those with
$\langle \bullet, R\rangle <0$ are ``invisible'' in $Q(R)$. In particular, we can
recover the cone $\sigma$ from $Q(R)$ if and only if $R\in \sigma^{\scriptscriptstyle\vee}$.\\
\par
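{\bf Example:}
As an illustration, let $\sigma\subseteq N_{I\!\!R}=I\!\!R^3$ be generated by
$a^1=(0,0,1)$, $a^2=(1,0,1)$, $a^3=(1,1,1)$, and $a^4=(0,1,1)$, and take
$R:=[0,0,1]\in M$. Then $\langle a^j,R\rangle =1$, hence $\bar{a}^j=a^j$ for
all $j$, and $Q(R)=\sigma\cap[R=1]$ is the unit square with all four vertices
being lattice points; moreover, $Q(R)^{\infty}=\sigma\cap[R=0]=0$. Since
$R\in\mbox{\rm int}(\sigma^{\scriptscriptstyle\vee})$, the cone $\sigma$ can be recovered from
$Q(R)$.\\
\par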
\neu{T1-6}
Denote by $d^1,\dots,d^N\in R^\bot\subseteq N_{I\!\!R}$ the compact edges of $Q(R)$.
Similar to \cite{versal}, \S 2, we assign to each compact 2-face
$\varepsilon<Q(R)$ its sign vector $\underline{\varepsilon}\in \{0,\pm 1\}^N$ by
\[
\varepsilon_i := \left\{
\begin{array}{cl}
\pm 1 & \mbox{if $d^i$ is an edge of $\varepsilon$}\\
0 & \mbox{otherwise}
\end{array} \right.
\]
such that the oriented edges $\varepsilon_i\cdot d^i$ fit into a cycle along the boundary
of $\varepsilon$. This determines $\underline{\varepsilon}$ up to sign, and any choice will do.
In particular, $\sum_i \varepsilon_i d^i =0$.\\
\par
{\bf Definition:}
{\em
For each $R\in M$ we define the vector spaces
\vspace{-2ex}
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{rcl}
V(R) &:=& \{ (t_1,\dots,t_N)\, |\; \sum_i t_i \,\varepsilon_i \,d^i =0\;
\mbox{ for every compact 2-face } \varepsilon <Q(R)\}\\
W(R) &:=& I\!\!R^{\#\{\mbox{\footnotesize $Q(R)$-vertices not belonging to $N$}\}}\,.
\end{array}
\vspace{-2ex}
\]}
Measuring the dilatation of each compact edge, the cone
$C(R):=V(R)\cap I\!\!R^N_{\geq 0}$ parametrizes exactly the Minkowski summands
of positive multiples of $Q(R)$.
Hence, we will call elements of $V(R)$ ``generalized Minkowski summands'';
they may have edges of negative length.
(See \cite{versal}, Lemma (2.2) for a discussion of the compact case.)
The vector space $W(R)$ provides
coordinates $s_j $ for each vertex $\bar{a}^j \in Q(R)\setminus N$, i.e.\
$\langle a^j ,R\rangle \geq 2$.\\
\par
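{\bf Example:}
Take for $\sigma$ the cone over the unit square, i.e.\ the cone generated by
$(0,0,1)$, $(1,0,1)$, $(1,1,1)$, $(0,1,1)$, and let $R=[0,0,1]$. Then $Q(R)$
is the unit square with the $N=4$ compact edges $d^1=(1,0)$, $d^2=(0,1)$,
$d^3=(-1,0)$, $d^4=(0,-1)$ (written inside $R^\bot$). The only compact 2-face
is $\varepsilon=Q(R)$ itself with $\underline{\varepsilon}=(1,1,1,1)$, hence
\[
V(R)=\{\underline{t}\in I\!\!R^4\,|\; \sum\nolimits_i t_i\,d^i=0\}
=\{\underline{t}\,|\; t_1=t_3,\; t_2=t_4\}
\]
is two-dimensional, and $C(R)$ parametrizes the rectangles among the Minkowski
summands. Since all vertices of $Q(R)$ belong to the lattice, $W(R)=0$.\\
\par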
\neu{T1-7}
To each compact edge $d^{jk}=\overline{\bar{a}^j \bar{a}^k }$ we
assign a set of equations $G_{jk}$ which act on elements of
$V(R)\oplus W(R)$. These sets are of one of the following three types:
\begin{itemize}
\item[(0)]
$G_{jk}=\emptyset$,
\item[(1)]
$G_{jk} = \{ s_j -s_k=0\}$ provided both coordinates exist in $W(R)$,
set $G_{jk}=\emptyset$ otherwise, or
\item[(2)]
$G_{jk} = \{t_{jk}-s_j=0 ,\; t_{jk}-s_k=0\}$,
dropping equations that do not make sense.
\end{itemize}
Restricting $V(R)\oplus W(R)$ to the (at most) three coordinates
$t_{jk}$, $s_j$, $s_k$,
the actual choice of $G_{jk}$ is made such that these equations yield a
subspace of dimension $1+\mbox{dim}\,T^1_{\langle a^j,a^k\rangle}(-R)$.
Notice that the dimension of $T^1(-R)$ for the two-dimensional quotient singularity
assigned to the plane cone $\langle a^j,a^k\rangle$ can be obtained
from Example \zitat{T1}{3}.\\
\par
{\bf Theorem:}
{\em
The infinitesimal deformations of $Y_\sigma$ in degree $-R$ equal
\[
T^1_Y(-R)=
\{ (\underline{t},\,\underline{s})\in V_{\,I\!\!\!\!C}(R)\oplus W_{\,I\!\!\!\!C}(R)\,|\;
(\underline{t},\,\underline{s}) \mbox{ fulfills the equations } G_{jk}\}
\;\big/\; \,I\!\!\!\!C\cdot (\underline{1},\, \underline{1})\,.
\vspace{-1ex}
\]
}
In some sense, the vector space $V(R)$ (encoding Minkowski summands)
may be considered the main tool to describe infinitesimal deformations.
The elements of $W(R)$ can (depending on the type of the $G_{jk}$'s)
be either additional parameters, or they provide conditions excluding
Minkowski summands not having some prescribed type.\\
If $Y$ is smooth in codimension two, then $G_{jk}$ is always of type (2).
In particular, the variables $\underline{s}$ are completely determined by the
$\underline{t}$'s, and we obtain the\\
\par
{\bf Corollary:}
{\em If $Y$ is smooth in codimension two, then
$T^1_Y(-R)$ is contained in $V_{\,I\!\!\!\!C}(R) \big/ \,\,I\!\!\!\!C\cdot (\underline{1})$. It is built from those
$\underline{t}$ such that $t_{jk}=t_{kl}$ whenever $d^{jk}$, $d^{kl}$
are compact edges with a common non-lattice vertex $\bar{a}^k$ of $Q(R)$.\\
Thus, $T^1_Y(-R)$ equals the set of equivalence classes of those Minkowski
summands of $I\!\!R_{\geq 0}\cdot Q(R)$ that preserve up to homothety the stars
of non-lattice vertices of $Q(R)$.
}\\
\par
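{\bf Example:}
For the cone $\sigma$ over the unit square (generated by $(0,0,1)$, $(1,0,1)$,
$(1,1,1)$, $(0,1,1)$) with $R=[0,0,1]$, the variety
$Y_\sigma\cong\{xy-zw=0\}\subseteq\,I\!\!\!\!C^4$ is the cone over a quadric surface;
it is smooth in codimension two, since all edges of the square are primitive.
Moreover, all vertices of $Q(R)$ are lattice points, so the equations
$t_{jk}=t_{kl}$ impose no conditions, and
\[
T^1_Y(-R)=V_{\,I\!\!\!\!C}(R)\big/\,I\!\!\!\!C\cdot(\underline{1})\cong \,I\!\!\!\!C\,,
\]
since $V_{\,I\!\!\!\!C}(R)$ is two-dimensional, spanned by the two segment summands
$[0,1]\times\{0\}$ and $\{0\}\times[0,1]$ of the square. This recovers the
well-known one-dimensional $T^1$ of the quadric cone.\\
\par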
\neu{T1-8}
{\bf Proof:}\quad (of previous theorem)\\
{\em Step 1:}\quad
From Theorem \zitat{T1}{2} we know that $T^1_Y(-R)$ equals the
complexification of the cohomology of the complex
\[
N_{I\!\!R}\rightarrow
\oplus_j \left(\mbox{span}_{I\!\!R} E_j^R \right)^\ast
\rightarrow
\oplus_{\langle a^j ,a^k \rangle <\sigma}
\left(\mbox{span}_{I\!\!R} E^R_{jk} \right)^\ast\,.
\]
According to \zitat{T1}{2}, elements of
$\oplus_j \left(\mbox{span}_{I\!\!R} E_j^R \right)^\ast$
can be represented by a family of
\[
b^j \in N_{I\!\!R}\; \mbox{ (if } \langle a^j ,R\rangle \geq 2)\quad
\mbox{ and }
\quad b^j \in N_{I\!\!R}\big/I\!\!R\cdot a^j\; \mbox{ (if }
\langle a^j ,R\rangle =1).
\]
Dividing by the image of $N_{I\!\!R}$ means to shift this family by common
vectors $b\in N_{I\!\!R}$.
On the other hand, the family $\{b^j \}$ has to map onto $0$ in the complex,
i.e.\ for each compact edge
$\overline{\bar{a}^j \bar{a}^k }<Q$ the functions $b^j $ and
$b^k $ are
equal on $\mbox{span}_{I\!\!R}E_{jk}^R$. Since
\[
(a^j ,a^k )^\bot \subseteq \mbox{span}_{I\!\!R}E_{jk}^R \subseteq
(\mbox{span}_{I\!\!R}E_j^R) \cap (\mbox{span}_{I\!\!R}E_k^R)\,,
\]
we immediately obtain the necessary condition
$b^j -b^k \in I\!\!R a^j + I\!\!R a^k $.
However, the actual behavior of $\mbox{span}_{I\!\!R}E_{jk}^R$
will require a closer look (in the third step).\\
\par
{\em Step 2:}\quad
We introduce new ``coordinates'':
\begin{itemize}
\item
$\bar{b}^j := b^j -\langle b^j , R \rangle \,\bar{a}^j \in R^\bot$,
being well defined even in the case $\langle a^j , R \rangle =1$;
\item
$s_j :=-\langle b^j , R\rangle$
for $j$ meeting $\langle a^j , R \rangle \geq 2$ (inducing an element
of $W(R)$).
\end{itemize}
The shift of the $b^j$ by an element $b\in N_{I\!\!R}$ (i.e.\
$(b^j )'=b^j +b$) appears in these new coordinates as
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{rcl}
(\bar{b}^j )' &=& (b^j )' - \langle (b^j )', R \rangle \,\bar{a}^j
\;=\; b^j +b - \langle b^j ,R \rangle \,\bar{a}^j -
\langle b,R \rangle\,\bar{a}^j
\vspace{-0.5ex}\\
&=& \bar{b}^j + b-\langle b,R \rangle \,\bar{a}^j \,,\\
s_j '&=& -\langle (b^j )',R \rangle
\;=\; s_j -\langle b,R\rangle\,.
\end{array}
\]
In particular, an element $b\in R^\bot$ does not change the $s_j$,
but shifts the points $\bar{b}^j$ inside the hyperplane $R^\bot$. Hence,
the set of the $\bar{b}^j $ should be considered modulo translation
inside $R^\bot$ only.\\
On the other hand, the condition $b^j -b^k \in I\!\!R a^j + I\!\!R a^k $
changes into
$\bar{b}^j -\bar{b}^k \in I\!\!R \bar{a}^j + I\!\!R \bar{a}^k $ or even
$\bar{b}^j -\bar{b}^k \in I\!\!R (\bar{a}^j - \bar{a}^k )$ (consider the values
of $R$). Hence, the $\bar{b}^j $'s form the vertices of an at least
generalized Minkowski summand of $Q(R)$. Modulo translation, this summand
is completely described by the dilatation factors $t_{jk}$ obtained from
\[
\bar{b}^j -\bar{b}^k = t_{jk}\cdot (\bar{a}^j - \bar{a}^k )\,.
\]
Now, the remaining part of
the action of $b\in N_{I\!\!R}$ comes down to an action of
$\langle b,R\rangle\in I\!\!R$ only:
\[
\begin{array}{rcl}
t_{jk}' &=& t_{jk} - \langle b,R \rangle \quad \mbox{ and}\\
s_j ' &=& s_j - \langle b,R \rangle \,, \mbox{ as we already know}.
\end{array}
\]
Up to now, we have found that
$T^1_Y(-R)\subseteq V_{\,I\!\!\!\!C}(R)\oplus W_{\,I\!\!\!\!C}(R)/(\underline{1},\underline{1})$.\\
\par
{\em Step 3:}\quad
Actually, the elements $b^j $ and $b^k $ have to be equal on
$\mbox{span}_{I\!\!R} E^R_{jk}$, which may be a larger space than just
$(a^j ,a^k )^\bot$.
To measure the difference we consider the factor
$\mbox{span}_{I\!\!R} E^R_{jk}\big/ (a^j ,a^k )^\bot$
contained in the two-dimensional vector space
$M_{I\!\!R}\big/ (a^j ,a^k )^\bot =\mbox{span}_{I\!\!R}(a^j ,a^k )^\ast$.
Since this factor coincides with the set $\mbox{span}_{I\!\!R}E^{\bar{R}}_{jk}$
assigned to the two-dimensional cone
$\langle a^j ,a^k \rangle \subseteq \mbox{span}_{I\!\!R}(a^j ,a^k )$,
where $\bar{R}$
denotes the image of $R$ in $\mbox{span}_{I\!\!R}(a^j ,a^k )^\ast$,
we may assume
that $\sigma=\langle a^1, a^2\rangle$ (i.e.\ $j=1,\,k=2$)
represents a two-dimensional cyclic
quotient singularity. In particular, we only need to discuss the three cases
(i)-(iii) from Example \zitat{T1}{3}:\\
In (i) and (ii) we have $\mbox{span}_{I\!\!R} E^R_{12}=0$, i.e.\ no additional
equation is needed. This means $G_{12}=\emptyset$
is of type (0). On the other hand, if $T^1_Y=0$, then the
vector space $I\!\!R^3_{(t_{12},s_1,s_2)}\big/I\!\!R\cdot (\underline{1})$ has to be killed
by identifying the three variables $t_{12}$, $s_1$, and $s_2$;
we obtain type (2).\\
Case (iii) provides $\mbox{span}_{I\!\!R} E^R_{12}=I\!\!R\cdot R$. Hence, as an
additional condition we obtain that $b^1$ and $b^2$ have to be equal on $R$.
By the definition of $s_j$ this means $s_1=s_2$, and $G_{12}$ has
to be of type (1).
\hfill$\Box$\\
\par
\section{Genuine deformations}\label{Gd}
\neu{Gd-1}
In \cite{Tohoku} we have studied so-called toric deformations in a given
multidegree $-R\in M$. They are genuine deformations in the sense that they are
defined over smooth parameter spaces;
they are characterized by the fact that the total spaces
together with the embedding of the special fiber still belong to the toric
category. Despite the fact that they look so special, toric deformations seem to
cover a large part of the versal deformation of $Y_\sigma$. They exist
only in negative degrees (i.e.\ $R\in\sigma^{\scriptscriptstyle\vee}\cap M$), but here they
form a kind of skeleton. If $Y_\sigma$ is an isolated toric Gorenstein
singularity, then toric deformations even provide all irreducible components
of the versal deformation (cf.\ \cite{versal}).\\
After a quick reminder of the idea of this construction, we
describe the Kodaira-Spencer map of toric deformations in terms of the new
$T^1_Y$-formula presented in \zitat{T1}{2}. It is followed by the investigation
of non-negative degrees: If $R\notin\sigma^{\scriptscriptstyle\vee}\cap M$, then we are still
able to construct genuine deformations of $Y_\sigma$; but they are no longer toric.\\
\par
\neu{Gd-2}
Let $R\in\sigma^{\scriptscriptstyle\vee}\cap M$. Then, following \cite{Tohoku} \S 3,
toric $m$-parameter deformations of $Y_\sigma$ in degree $-R$ correspond
to splittings of $Q(R)$ into a Minkowski sum
\vspace{-0.5ex}
\[
Q(R) \,=\, Q_0 + Q_1 + \dots +Q_m
\vspace{-1ex}
\]
meeting the following conditions:
\begin{itemize}
\item[(i)]
$Q_0\subseteq \A{R}{}$ and $Q_1,\dots,Q_m\subseteq \A{R}{0}$ are polyhedra with $Q(R)^\infty$
as their common cone of unbounded directions.
\item[(ii)]
Each supporting hyperplane $t$ of $Q(R)$
defines faces
$F(Q_0,t),\dots, F(Q_m,t)$ of the indicated polyhedra; their Minkowski sum
equals $F\big(Q(R),t\big)$.
With at most one exception (depending on $t$), these faces should contain
lattice vertices, i.e.\ vertices belonging to $N$.
\end{itemize}
{\bf Remark:}
In \cite{Tohoku} we have distinguished between the case of primitive
and non-primitive
elements $R\in M$: If $R$ is not primitive (i.e.\ a proper multiple of some
element of $M$), then $\A{R}{}$ does not
contain lattice points at all. In particular, condition (ii) just means that
$Q_1,\dots,Q_m$ have to be lattice polyhedra.\\
On the other hand, for primitive $R$, the $(m+1)$ summands $Q_i$ have
equal rights
and may be put into the same space $\A{R}{}$. Then, their Minkowski sum has to
be interpreted inside this affine space.\\
\par
If a Minkowski decomposition is given, {\em how do we obtain the assigned toric
deformation?}\\
Defining $\tilde{N}:= N\oplus Z\!\!\!Z^m$ (and $\tilde{M}:=M\oplus Z\!\!\!Z^m$),
we have to embed the summands as $(Q_0,\,0)$,
$(Q_1,\,e^1),\dots, (Q_m,\,e^m)$ into the vector space
$\tilde{N}_{I\!\!R}$; $\{e^1,\dots,e^m\}$ denotes
the canonical basis of $Z\!\!\!Z^m$. Together with $(Q(R)^\infty,\,0)$, these
polyhedra generate
a cone ${\tilde{\sigma}}\subseteq \tilde{N}$ containing $\sigma$ via
$N\hookrightarrow \tilde{N}$, $a\mapsto (a;\langle a,R\rangle,\dots,\langle a,R\rangle)$.
Actually, $\sigma$ equals ${\tilde{\sigma}}\cap N_{I\!\!R}$, and we obtain an inclusion
$Y_\sigma\hookrightarrow X_{{\tilde{\sigma}}}$ between the associated toric varieties.\\
On the other hand, $[R,0]:\tilde{N}\to Z\!\!\!Z$ and $\mbox{pr}_{Z\!\!\!Z^m}:\tilde{N}\to Z\!\!\!Z^m$ induce
regular
functions $f:X_{{\tilde{\sigma}}}\to \,I\!\!\!\!C$ and $(f^1,\dots,f^m):X_{{\tilde{\sigma}}}\to \,I\!\!\!\!C^m$,
respectively. The resulting map $(f^1-f,\dots,f^m-f):X_{{\tilde{\sigma}}}\to \,I\!\!\!\!C^m$ is flat
and has $Y_\sigma\hookrightarrow X_{{\tilde{\sigma}}}$ as special fiber.\\
\par
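{\bf Example:}
For the cone $\sigma$ over the unit square (so $Y_\sigma\cong\{xy-zw=0\}$) and
$R=[0,0,1]$, the decomposition
\[
Q(R)\,=\,[0,1]^2\,=\,\big([0,1]\times\{0\}\big) + \big(\{0\}\times[0,1]\big)
\]
into two lattice segments satisfies (i) and (ii) with $m=1$. The cone
${\tilde{\sigma}}\subseteq \tilde{N}_{I\!\!R}=I\!\!R^4$ is generated by the vertices
$(0,0,1;0)$ and $(1,0,1;0)$ of $(Q_0,\,0)$ together with $(0,0,0;1)$ and
$(0,1,0;1)$ of $(Q_1,\,e^1)$. These four generators form a basis of the
lattice $\tilde{N}$, hence $X_{{\tilde{\sigma}}}\cong\,I\!\!\!\!C^4$, and the map $f^1-f$
yields, in suitable coordinates, the classical one-parameter smoothing
$xy-zw=t$ of the quadric cone.\\
\par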
\neu{Gd-3}
Let $R\in\sigma^{\scriptscriptstyle\vee}\cap M$ and $Q(R) = Q_0 + \dots +Q_m$ be a decomposition
satisfying (i) and (ii) mentioned above. Denote by $(\bar{a}^j)_i$ the vertex
of $Q_i$ induced from $\bar{a}^j\in Q(R)$, i.e.\
$\bar{a}^j=(\bar{a}^j)_0 + \dots + (\bar{a}^j)_m$.\\
\par
{\bf Theorem:}
{\em
The Kodaira-Spencer map of the corresponding toric deformation
$X_{{\tilde{\sigma}}}\to \,I\!\!\!\!C^m$ is
\[
\varrho: \,I\!\!\!\!C^m\,=\, T_{\,I\!\!\!\!C^m,0} \longrightarrow T^1_Y(-R)
\subseteq V_{\,I\!\!\!\!C}(R)\oplus W_{\,I\!\!\!\!C}(R)\big/\,I\!\!\!\!C\cdot (\underline{1},\underline{1})
\]
sending $e^i $ onto the pair $[Q_i ,\,\underline{s}^i ]\in V(R)\oplus W(R)$
($i=1,\dots,m$) with
\[
s^i_j :=\left\{ \begin{array}{ll}
0 & \mbox{ if the vertex } (\bar{a}^j)_i \mbox{ of } Q_i
\mbox{ belongs to the lattice } N\\
1 & \mbox{ if } (\bar{a}^j)_i \mbox{ is not a lattice point.}
\end{array}\right.
\]
}
\par
{\bf Remark:}
Setting $e^0:=-(e^1+\dots +e^m )$, we obtain
$\varrho (e^0)= [Q_0,\, \underline{s}^0]$ with $\underline{s}^0$ defined similar to $\underline{s}^i $
in the previous theorem.\\
\par
\neu{Gd-4}
{\bf Proof} (of previous theorem):
We would like to derive the above formula for the Kodaira-Spencer map from the
more technical one presented in \cite{Tohoku}, Theorem (5.3).
Under additional use of \cite{T2} (6.1),
the latter one describes $\varrho(e^i)\in
T^1_Y(-R)=H^1\big(\mbox{span}_{\,I\!\!\!\!C}(E^R)_\bullet^\ast\big)$
in the following way:\\
Let $E=\{r^1,\dots,r^w\}\subseteq \sigma^{\scriptscriptstyle\vee}\cap M$. Its elements
may be lifted via $\tilde{M}\longrightarrow\hspace{-1.5em}\longrightarrow M$
to $\tilde{r}^v\in{\tilde{\sigma}}^{\scriptscriptstyle\vee}\cap\tilde{M}$ ($v=1,\dots,w$); denote the $i$-th
entry of the $Z\!\!\!Z^m$-part of $\tilde{r}^v$ by $\eta^v_i$.
Then, given elements $v^j\in \mbox{span}\, E_j^R$, we may represent them
as $v^j=\sum_v q^j_v\,r^v$ ($q^j\in Z\!\!\!Z^{E_j^R}$), and $\varrho(e^i)$
assigns to $v^j$ the integer $-\sum_v q^j_v\,\eta_i^v$.
Using our notation from \zitat{T1}{8} for $\varrho(e^i)$, this means that
$b^j$ sends elements $r^v\in E_j^R$ onto $-\eta_i^v\in Z\!\!\!Z$. \\
By construction of ${\tilde{\sigma}}$, we have inequalities
\[
\Big\langle \big( (\bar{a}^j)_0,\,0\big),\, \tilde{r}^v \Big\rangle \geq 0
\quad\mbox{ and }\quad
\Big\langle \big( (\bar{a}^j)_i,\,e^i\big),\, \tilde{r}^v \Big\rangle \geq 0
\quad (i=1,\dots,m)
\]
summing up to $\big\langle \bar{a}^j,\, r^v \big\rangle =
\big\langle \big( \bar{a}^j,\,\underline{1}\big),\, \tilde{r}^v \big\rangle \geq 0$.
On the other hand, the fact $r^v\in E_j^R$ is equivalent to
$\big\langle \bar{a}^j,\, r^v \big\rangle <1$. Hence, whenever
$(\bar{a}^j)_i\in Q_i$ belongs to the lattice, the
corresponding inequality ($i=0,\dots,m$) becomes an equality.
With at most one exception, this always has to be the case. Hence,
\[
\Big\langle (\bar{a}^j)_i,\, r^v \Big\rangle + \eta_i^v
= \; \left\{
\begin{array}{ll}
0& \mbox{ if } (\bar{a}^j)_i \in N\\
\langle \bar{a}^j,\, r^v \rangle & \mbox{ if } (\bar{a}^j)_i \notin N
\end{array}
\right.
\quad(i=1,\dots,m)
\]
meaning that $b^j= (\bar{a}^j)_i\,$ or
$\,b^j= (\bar{a}^j)_i - \bar{a}^j$, respectively.
By the
definitions of $\bar{b}^j$ and $s_j$ given in \zitat{T1}{8}, we are done.
\hfill$\Box$\\
\par
\neu{Gd-5}
Now we treat the case of non-negative degrees; let $R\in M\setminus \sigma^{\scriptscriptstyle\vee}$.
The easiest way to solve a problem is to change the question until there is no problem
left. We can do so by changing our cone $\sigma$ into some $\tau^R$ such that the
degree $-R$ becomes negative. We define
\[
\tau:=\tau^R:= \sigma \cap \,[R\geq 0]\quad
\mbox{ that is } \quad
\tau^{\scriptscriptstyle\vee}=\sigma^{\scriptscriptstyle\vee}+I\!\!R_{\geq 0}\cdot R\,.
\]
The cone $\tau$ defines an affine toric variety $Y_\tau$. Since
$\tau\subseteq\sigma$, it comes with a map $g:Y_\tau\to Y_\sigma$, i.e.\
$Y_\tau$ is an open part of a modification of $Y_\sigma$. The important
observation is
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{rcccl}
\tau \cap \,[R=0] &=& \sigma\cap\, [R=0] &=& Q(R)^\infty\quad \mbox{ and}\\
\tau \cap \,[R=1] &=& \sigma\cap\, [R=1] &=& Q(R)\;,
\end{array}
\]
implying $T^1_{Y_\tau}(-R)=T^1_{Y_\sigma}(-R)$ by Theorem \zitat{T1}{7}. Moreover,
even the genuine toric deformations $X_{\tilde{\tau}}\to\,I\!\!\!\!C^m$ of $Y_\tau$ carry over to
$m$-parameter (non-toric) deformations $X\to\,I\!\!\!\!C^m$ of $Y_\sigma$:\\
\par
{\bf Theorem:}
{\em
Each Minkowski decomposition $Q(R) = Q_0 + Q_1 + \dots +Q_m$
satisfying (i) and (ii) of \zitat{Gd}{2}
provides an $m$-parameter deformation $X\to\,I\!\!\!\!C^m$ of $Y_\sigma$. Via some
birational map $\tilde{g}:X_{\tilde{\tau}}\to X$ it is compatible with the
toric deformation $X_{\tilde{\tau}}\to \,I\!\!\!\!C^m$ of $Y_\tau$ presented in
\zitat{Gd}{2}.
\[
\dgARROWLENGTH=0.4em
\begin{diagram}
\node[2]{\,I\!\!\!\!C^m}
\arrow[3]{e,t}{\mbox{\footnotesize id}}
\node[3]{\,I\!\!\!\!C^m}\\
\node{X_{\tilde{\tau}}}
\arrow{ne}
\arrow[3]{e,t}{\tilde{g}}
\node[3]{X}
\arrow[2]{e}
\arrow{ne}
\node[2]{Z_{{\tilde{\sigma}}}}\\[2]
\node{Y_\tau}
\arrow[2]{n}
\arrow[3]{e,t}{g}
\node[3]{Y_\sigma}
\arrow[2]{n}
\arrow[2]{ne}
\end{diagram}
\]
The total space $X$ is not toric anymore, but via birational maps it sits between
$X_{\tilde{\tau}}$ and some affine toric variety $Z_{{\tilde{\sigma}}}$
still containing $Y_\sigma$ as a closed subset.
}\\
\par
\neu{Gd-6}
{\bf Proof:}
First, we construct $\tilde{N}$, $\tilde{M}$, and ${\tilde{\tau}}\subseteq\tilde{N}_{I\!\!R}$ by the recipe
of \zitat{Gd}{2}. In particular, $N$ is contained in $\tilde{N}$, and the projection
$\pi:\tilde{M}\to M$ sends $[r;g_1,\dots,g_m]$ onto $r+(\sum_i g_i)\,R$.
Defining ${\tilde{\sigma}}:= {\tilde{\tau}} + \sigma$
(hence ${\tilde{\sigma}}^{\scriptscriptstyle\vee}={\tilde{\tau}}^{\scriptscriptstyle\vee}\cap \pi^{-1}(\sigma^{\scriptscriptstyle\vee})$), we obtain
the commutative diagram
\[
\dgARROWLENGTH=0.5em
\begin{diagram}
\node{\,I\!\!\!\!C[{\tilde{\tau}}^{\scriptscriptstyle\vee}\cap \tilde{M}]}
\arrow{s,r}{\pi}
\node[3]{\,I\!\!\!\!C[{\tilde{\sigma}}^{\scriptscriptstyle\vee}\cap \tilde{M}]}
\arrow{s,r}{\pi}
\arrow[3]{w}\\
\node{\,I\!\!\!\!C[\tau^{\scriptscriptstyle\vee}\cap M]}
\node[3]{\,I\!\!\!\!C[\sigma^{\scriptscriptstyle\vee}\cap M]}
\arrow[3]{w}
\end{diagram}
\]
with surjective vertical maps. The canonical elements
$e_1,\dots,e_m\in Z\!\!\!Z^m\subseteq\tilde{M}$ together with $[R,0]\in\tilde{M}$ are preimages
of $R\in M$. Hence, the corresponding monomials
$x^{e_1},\dots,x^{e_m},x^{[R,0]}$ in the semigroup algebra
$\,I\!\!\!\!C[{\tilde{\tau}}^{\scriptscriptstyle\vee}\cap \tilde{M}]$ (called $f^1,\dots,f^m,f$ in \zitat{Gd}{2})
map onto $x^R\in\,I\!\!\!\!C[\tau^{\scriptscriptstyle\vee}\cap M]$ which is not regular on $Y_\sigma$.
We define $Z_{\tilde{\sigma}}$ as the affine toric variety assigned to ${\tilde{\sigma}}$ and
$X$ as
\[
X:=\mbox{Spec}\,B \quad \mbox{ with } \quad
B:=\,I\!\!\!\!C[{\tilde{\sigma}}^{\scriptscriptstyle\vee}\cap\tilde{M}][f^1-f,\dots,f^m-f]\subseteq
\,I\!\!\!\!C[{\tilde{\tau}}^{\scriptscriptstyle\vee}\cap\tilde{M}]\,.
\]
That means, $X$ arises from $X_{\tilde{\tau}}$ by eliminating all variables except
those lifted from $Y_\sigma$ or the deformation parameters themselves.
By construction of $B$, the vertical algebra homomorphisms $\pi$ induce
a surjection $B\longrightarrow\hspace{-1.5em}\longrightarrow \,I\!\!\!\!C[\sigma^{\scriptscriptstyle\vee}\cap M]$.\\
\par
{\em Lemma:} Elements of $\,I\!\!\!\!C[{\tilde{\tau}}^{\scriptscriptstyle\vee}\cap\tilde{M}]$ may uniquely be written
as sums
\vspace{-1ex}
\[
\sum_{(v_1,\dots,v_m)\in I\!\!N^m} c_{v_1,\dots,v_m}\cdot
(f^1-f)^{v_1}\cdot\dots\cdot (f^m-f)^{v_m}
\vspace{-1.5ex}
\]
with $c_{v_1,\dots,v_m}\in\,I\!\!\!\!C[{\tilde{\tau}}^{\scriptscriptstyle\vee}\cap\tilde{M}]$
such that $s-e_i\notin {\tilde{\tau}}^{\scriptscriptstyle\vee}$ ($i=1,\dots,m$) for any
of its monomial terms $x^s$. Moreover, those sums belong to the subalgebra $B$,
if and only if their coefficients $c_{v_1,\dots,v_m}$ do.
\vspace{1ex}\\
{\em Proof:
(a) Existence.}
Let $s-e_i\in{\tilde{\tau}}^{\scriptscriptstyle\vee}$ for some $s,i$. Then, with
$s^\prime:=s-e_i+[R,0]$ we obtain
\vspace{-1ex}
\[
x^s = x^{s^\prime} + x^{s-e_i}\,(x^{e_i}-x^{[R,0]}) =
x^{s^\prime} + x^{s-e_i}\,(f^i-f)\,.
\vspace{-1ex}
\]
Since $e_i=1$ and $[R,0]=0$ when evaluated on $(Q_i,e^i)\subseteq{\tilde{\tau}}$,
this process eventually stops.
\vspace{1ex}\\
{\em (b) $B$-Membership.}
For the previous reduction step we have to show that if
$x^s\in\,I\!\!\!\!C[{\tilde{\sigma}}^{\scriptscriptstyle\vee}\cap\tilde{M}]$, then the same holds for $x^{s^\prime}$ and
$x^{s-e_i}$.
Since $\pi(s^\prime)=\pi(s)\in\sigma^{\scriptscriptstyle\vee}$, this is clear for $s^\prime$.
It remains to
check that $\pi(s-e_i)\in\sigma^{\scriptscriptstyle\vee}$.
Let $a\in\sigma$ be an arbitrary test element; we distinguish two cases:\\
Case 1: $\langle a,R\rangle\geq 0$. Then $a$ belongs to the subcone
$\tau$, and $\pi(s-e_i)\in \tau^{\scriptscriptstyle\vee}$ yields
$\langle a, \pi(s-e_i)\rangle\geq 0$.\\
Case 2: $\langle a,R\rangle\leq 0$. This fact implies
$\langle a, \pi(s-e_i)\rangle = \langle a, s\rangle - \langle a,R\rangle
\geq \langle a, s\rangle \geq 0$.
\vspace{1ex}\\
{\em (c) Uniqueness.} Let $p:=\sum c_{v_1,\dots,v_m}\cdot
(f^1-f)^{v_1}\cdot\dots\cdot (f^m-f)^{v_m}$ (meeting the above
conditions) be equal to $0$ in $\,I\!\!\!\!C[{\tilde{\tau}}^{\scriptscriptstyle\vee}\cap\tilde{M}]$. Using the projection
$\pi:\tilde{M}\to M$, everything becomes $M$-graded. Since the factors
$(f^i-f)$ are homogeneous (of degree $R$), we may assume this fact also
for $p$, hence for its coefficients $c_{v_1,\dots,v_m}$.
\vspace{0.5ex}\\
{\em Claim:} These coefficients are just monomials. Indeed, if
$s,s^\prime\in{\tilde{\tau}}^{\scriptscriptstyle\vee}$ had the same image via $\pi$, then we
could assume that some $e_i$-coordinate of $s^\prime$
would be smaller than that
of $s$. Hence, $s-e_i$ would still be equal to $s$ on $(Q_0,0)$ and on
any $(Q_j,e^j)$ ($j\neq i$), but still greater than or equal to $s^\prime$
on $(Q_i,e^i)$. This would imply $s-e_i\in{\tilde{\tau}}^{\scriptscriptstyle\vee}$,
contradicting our assumption for $p$.
\vspace{0.5ex}\\
Say $c_{v_1,\dots,v_m}=\lambda_{v_1,\dots,v_m}\,x^\bullet$;
we use the projection $\tilde{M}\to Z\!\!\!Z^m$ to carry $p$ into the ring
$\,I\!\!\!\!C[Z\!\!\!Z^m]=\,I\!\!\!\!C[y_1^{\pm 1},\dots,y_m^{\pm 1}]$. The elements
$x^\bullet$, $f^i$, $f$ map onto $y^\bullet$, $y_i$, and $1$,
respectively. Hence, $p$ turns into
\vspace{-1ex}
\[
\bar{p}=\sum_{(v_1,\dots,v_m)\in I\!\!N^m} \lambda_{v_1,\dots,v_m}\cdot
y^\bullet\cdot(y_1-1)^{v_1}\cdot\dots\cdot (y_m-1)^{v_m}\,.
\vspace{-1ex}
\]
By induction through $I\!\!N^m$, we obtain that vanishing of $\bar{p}$
implies the vanishing of its coefficients: Replace $y_i-1$ by $z_i$,
and take partial derivatives.
\hfill$(\Box)$\\
\par
Now, we can easily see that $X\to\,I\!\!\!\!C^m$ is flat and has $Y_\sigma$ as
special fiber:
The previous lemma means that for $k=0,\dots,m$ we have inclusions
\[
{\displaystyle B}\big/_{\displaystyle (f^1-f,\dots,f^k-f)}
\quad\raisebox{-0.5ex}{$\hookrightarrow\;$}\quad
{\displaystyle \,I\!\!\!\!C[{\tilde{\tau}}^{\scriptscriptstyle\vee}\cap\tilde{M}]\,}\big/_{\displaystyle (f^1-f,\dots,f^k-f)}\,.
\]
The values $k < m$ yield that $(f^1-f,\dots,f^m-f)$ forms a regular
sequence even in the subring $B$, meaning that $X\to\,I\!\!\!\!C^m$ is flat.
With $k=m$ we obtain that the surjective map
$B/(f^1-f,\dots,f^m-f)\to\,I\!\!\!\!C[\sigma^{\scriptscriptstyle\vee}\cap M]$ is also injective.
\hfill$\Box$\\
\par
\section{Three-dimensional toric Gorenstein singularities}\label{3G}
\neu{3G-1}
By \cite{Ish}, Theorem (7.7),
toric Gorenstein singularities always arise from the following construction:
Assume we are given a {\em lattice polytope $P\subseteq I\!\!R^n$}.
We embed the whole space
(including $P$) into height one of $N_{I\!\!R}:=I\!\!R^n\oplus I\!\!R$ and take for
$\sigma$ the
cone generated by $P$; denote by $M_{I\!\!R}:=(I\!\!R^n)^\ast\oplus I\!\!R$
the dual space and by $N$, $M$ the natural lattices.
Our polytope $P$ may be recovered from $\sigma$ as
\[
P\, =\, Q(R^\ast)\subseteq\A{R^\ast}{}\quad
\mbox{ with} \quad R^\ast:=[\underline{0},1]\in M\,.
\hspace{-3em}
\raisebox{-35mm}{
\unitlength=0.5mm
\linethickness{0.6pt}
\begin{picture}(130,80)
\thinlines
\put(0.00,30.00){\line(1,0){80.00}}
\put(0.00,30.00){\line(5,3){42.00}}
\put(42.00,55.33){\line(1,0){80.00}}
\put(122.00,55.33){\line(-5,-3){42.00}}
\put(10.00,5.00){\line(3,5){15.00}}
\put(33.00,43.00){\line(3,5){19.67}}
\put(10.00,5.00){\line(1,1){25.00}}
\put(38.00,35.00){\line(1,1){27.00}}
\put(10.00,5.00){\line(5,3){42.00}}
\put(60.00,35.00){\line(5,3){35.00}}
\put(10.00,5.00){\line(2,1){50.00}}
\put(87.00,46.00){\line(2,1){35.00}}
\put(10.00,5.00){\line(4,3){33.00}}
\put(79.00,54.00){\line(4,3){30.00}}
\put(57.00,42.00){\makebox(0,0)[cc]{$P$}}
\put(91.00,16.00){\makebox(0,0)[cc]{$\mbox{cone}(P)$}}
\put(33.00,43.00){\circle*{2.00}}
\put(38.00,35.00){\circle*{2.00}}
\put(60.00,35.00){\circle*{2.00}}
\put(87.00,46.00){\circle*{2.00}}
\put(79.00,54.00){\circle*{2.00}}
\thicklines
\put(32.67,43.00){\line(2,-3){5.33}}
\put(38.00,35.00){\line(1,0){22.00}}
\put(60.00,35.00){\line(5,2){27.00}}
\put(87.00,46.00){\line(-1,1){8.00}}
\put(79.00,54.00){\line(-4,-1){46.00}}
\end{picture}
}
\]
The fundamental generators $a^1,\dots,a^M\in\G{R^\ast}{}$ of $\sigma$
coincide with the vertices of $P$. (This involves a slight abuse of notation;
we use the same symbol $a^j$ for both $a^j\in Z\!\!\!Z^n$ and $(a^j,1)\in N$.)\\
If $\overline{a^j a^k}$ forms an edge of the polytope,
we denote by $\ell(j,k)\in Z\!\!\!Z$ its ``length'' induced from the
lattice structure $Z\!\!\!Z^n\subseteq I\!\!R^n$. Every edge provides a
two-codimensional singularity of $Y_\sigma$ with transversal type
A$_{\ell(j,k)-1}$. In particular, $Y_\sigma$ is smooth in codimension
two if and only if all edges of $P$ are primitive, i.e.\ have length
$\ell=1$.\\
\par
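{\bf Example:}
For instance, take $P=\mbox{conv}\{(0,0),\,(2,0),\,(0,1)\}\subseteq I\!\!R^2$.
The edge from $(0,0)$ to $(2,0)$ has lattice length $\ell=2$, whereas the two
remaining edges are primitive. Hence, the corresponding three-dimensional toric
Gorenstein singularity $Y_\sigma$ has transversal type A$_1$ along the
two-codimensional orbit assigned to the long edge and is smooth in codimension
two everywhere else.\\
\par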
\neu{3G-2}
As usual, we fix some element $R\in M$. From \zitat{T1}{6} we know what the
vector spaces $V(R)$ and $W(R)$ are; we introduce the subspace
\[
V^\prime(R):=\{\underline{t}\in V(R)\,|\; t_{jk}\neq 0 \mbox{ implies }
1\leq \langle a^j,R \rangle = \langle a^k,R \rangle \leq \ell(j,k)\}
\]
representing those Minkowski summands of $Q(R)$ that contract to a point every
compact edge {\em not} meeting the condition
$\langle a^j,R \rangle = \langle a^k,R \rangle \leq \ell(j,k)$.\\
\par
{\bf Theorem:}
{\em
For $T^1_Y(-R)$, there are two different types of $R\in M$ to distinguish:
\begin{itemize}
\item[(i)]
If $R\leq 1$ on $P$ (or equivalently $\langle a^j,R\rangle\leq 1$ for
$j=1,\dots,M$), then $T^1_Y(-R)=V_{\,I\!\!\!\!C}(R)\big/(\underline{1})$. Moreover,
concerning Minkowski summands, we may replace the polyhedron
$Q(R)$ by its compact part $P\cap [R=1]$ (being a face of $P$).
\item[(ii)]
If $R$ does not satisfy the previous condition, then
$T^1_Y(-R)=V^\prime(R)$.
\vspace{1ex}
\end{itemize}
}
{\bf Proof:}
The first case follows from Theorem \zitat{T1}{7} just because $W(R)=0$.
For (ii), let us assume there are vertices $a^j$ contained in the affine
half space $[R\geq 2]$.
They are mutually connected inside this half space via paths along edges of $P$.\\
The two-dimensional cyclic quotient singularities corresponding to edges
$\overline{a^j a^k}$ of $P$ are Gorenstein themselves. In the language of
Example \zitat{T1}{3} this means $w=2$, and we obtain
\[
\mbox{dim}\, T^1_{\langle a^j,a^k \rangle} (-R)\,=\,
\left\{ \begin{array}{ll}
1 & \mbox{ if } \langle a^j,R \rangle = \langle a^k,R \rangle
=2,\dots,\ell(j,k)
\quad\mbox{(case (iii) in \zitat{T1}{3})}\\
0 & \mbox{ otherwise.}
\end{array}\right.
\]
In particular, $T^1_{\langle a^j,a^k \rangle} (-R)$ cannot be two-dimensional,
and (using the notation of \zitat{T1}{7})
the equations $s_j -s_k=0$ belong to $G_{jk}$ whenever
$\langle a^j,R \rangle ,\, \langle a^k,R \rangle\geq 2$.
This means for elements of
\[
T^1_Y\subseteq \Big(V_{\,I\!\!\!\!C}(R)\oplus W_{\,I\!\!\!\!C}(R)\Big)\Big/ \,I\!\!\!\!C\cdot (\underline{1},\underline{1})
\]
that all entries
of the $W_{\,I\!\!\!\!C}(R)$-part have to be mutually equal; after dividing by
$\,I\!\!\!\!C\cdot (\underline{1},\underline{1})$, they may even be assumed to be zero.
Moreover, if not both $\langle a^j,R \rangle$ and $\langle a^k,R \rangle$
equal one, vanishing of $T^1_{\langle a^j,a^k \rangle} (-R)$
implies that $G_{jk}$ also contains the equation $t_{jk} -s_\bullet=0$.
\hfill$\Box$\\
\par
{\bf Corollary:}
{\em
Condition \zitat{Gd}{2}(ii) to build genuine deformations becomes easier for
toric Gorenstein singularities: $Q_1,\dots,Q_m$ just have to be lattice
polyhedra.
\vspace{-1ex}}\\
\par
{\bf Proof:}
If $R\leq 1$ on $P$, then $Q(R)$ itself is a lattice polyhedron. Hence,
condition (ii) automatically comes down to this simpler form.\\
In the second case, there is some $W(R)$-part involved in $T^1_Y(-R)$.
On the one hand, it
indicates via the Kodaira-Spencer map which
vertices of which polyhedron $Q_i$ belong to the lattice. On the other,
we have observed in the previous proof that the entries of $W(R)$
are mutually equal. This implies exactly our claim.
\hfill$\Box$\\
\par
\neu{3G-3}
In accordance with the title of the section, we focus now on {\em plane lattice polygons
$P\subseteq I\!\!R^2$}. The vertices $a^1,\dots,a^M$ are arranged in a cycle.
We denote by $d^j:=a^{j+1}-a^j\in \G{R^\ast}{0}$ the edge going from $a^j$
to $a^{j+1}$, and by $\ell(j):=\ell(j,j+1)$ its length ($j\in Z\!\!\!Z/M\,Z\!\!\!Z$).\\
Let $s^1,\dots,s^M$ be the fundamental generators of the dual cone $\sigma^{\scriptscriptstyle\vee}$
such that $\sigma\cap(s^j)^\bot$ equals the face spanned by
$a^j, a^{j+1}\in\sigma$. In particular, skipping the last coordinate of
$s^j$ yields the (primitive) inner normal vector at the edge $d^j$ of $P$.
\vspace{-1ex}\\
\par
{\bf Remark:}
Just for convenience of those who prefer living in $M$ instead of $N$, we
show how to see the integers $\ell(j)$ in the dual world:
Choose a fundamental generator $s^j$ and denote by $r, r^\prime\in M$
the closest (to $s^j$) elements from the Hilbert bases of the two adjacent
faces of $\sigma^{\scriptscriptstyle\vee}$, respectively. Then, $\{R^\ast,s^j\}$
together with either $r$ or $r^\prime$ form a basis of the lattice $M$, and
$(r+r^\prime)-\ell(j)\,R^\ast$ is a positive multiple of $s^j$.
See the figure in \zitat{3G}{7}.\\
\par
In the very special case of plane lattice polygons (or three-dimensional toric
Gorenstein singularities), we can describe $T^1_Y$ and the genuine deformations (for
fixed $R\in M$) explicitly. First, we can easily spot the degrees carrying
infinitesimal deformations:
\vspace{-1ex}\\
\par
{\bf Theorem:}
{\em
In general (see the upcoming exceptions), $T^1_Y(-R)$ is non-trivial only for
\begin{itemize}
\item[(1)]
$R=R^\ast\,$ with $\,\mbox{\em dim}\,T^1_Y(-R)=M-3$;
\item[(2)]
$R= qR^\ast$ ($q\geq 2)\,$ with
$\,\mbox{\em dim}\,T^1_Y(-R)= \mbox{\em max}\,\{0\,;\;
\#\{j\,|\; q\leq \ell(j)\}-2\,\}$, and
\item[(3)]
$R=qR^\ast - p\,s^j\,$ with $\,2\leq q\leq \ell(j)$ and
$p\inZ\!\!\!Z$ sufficiently large such that $R\notin\mbox{\rm int}(\sigma^{\scriptscriptstyle\vee})$.
In this case, $T^1_Y(-R)$ is one-dimensional.
\end{itemize}
Additional degrees
\vspace{-1ex}
exist only in the following two (overlapping) exceptional cases:
\begin{itemize}
\item[(4)]
Assume $P$ contains a pair of parallel edges $d^j$, $d^k$, both longer
than every other edge. Then $\mbox{\rm dim}\,T^1_Y(-q\,R^\ast)=1$ for
$\mbox{\rm max}\{\ell(l)\,|\;l\neq j,k\}<q\leq
\mbox{\rm min}\{\ell(j),\ell(k)\}$.
\vspace{-1ex}
\item[(5)]
Assume $P$ contains a pair of parallel edges $d^j$, $d^k$ with distance
$d$ ($d:=\langle a^j, s^k\rangle = \langle a^k,s^j\rangle$).
If $\ell(k)>d \;(\geq \mbox{\rm max}\{\ell(l)\,|\;l\neq j,k\})$, then
$\mbox{\rm dim}\,T^1_Y(-R)=1$ for
$R=qR^\ast +p\,s^j$ with
$1\leq q\leq\ell(j)$ and $1\leq p\leq \big(\ell(k)-q\big)/d$.
\end{itemize}
}
\par
The cases (1), (2), (4), and (5) yield at most finitely many
(negative) $T^1_Y$-degrees. Type (3) consists of $\ell(j)\!-\!1$ infinite series
for each vertex $a^j\in P$;
up to maybe the leading elements ($R$ might sit on
$\partial\sigma^{\scriptscriptstyle\vee}$), they contain only non-negative degrees.\\
\par
{\bf Proof:}
The previous claims are straight consequences of Theorem \zitat{3G}{2}.
Hence, the following short remark should be sufficient: The condition
$\langle a^j, R\rangle =\langle a^{j+1},R\rangle$ means
$d^j\in R^\bot$. Moreover, if $R\notin Z\!\!\!Z\cdot R^\ast$, then there is at most
one edge (or a pair of parallel ones) having this property.
\hfill$\Box$\\
\par
\neu{3G-4}
{\bf Example:}
A typical example of a non-isolated, three-dimensional toric Gorenstein singularity
is the cone over the weighted projective space $I\!\!P(1,2,3)$. We will use it to
demonstrate our calculations of $T^1$ as well as the upcoming construction of
genuine one-parameter families.
$P$ has the vertices $(-1,-1)$, $(2,-1)$, $(-1,1)$, i.e.\ $\sigma$ is generated
from
\[
a^1 =(-1,-1;1)\,,\quad a^2=(2,-1;1)\,,\quad a^3=(-1,1;1)\,.
\]
Since our singularity is a cone over a projective variety, $\sigma^{\scriptscriptstyle\vee}$ appears
as a cone over some lattice polygon, too. Actually, in our example, $\sigma$ and
$\sigma^{\scriptscriptstyle\vee}$ are even isomorphic. We obtain
\[
\sigma^{\scriptscriptstyle\vee}=\langle s^1,s^2,s^3\rangle
\quad \mbox{with}\quad
s^1=[0,1;1]\,,\; s^2=[-2,-3;1]\,,\; s^3=[1,0;1]\,.
\]
The Hilbert basis $E\subseteq \sigma^{\scriptscriptstyle\vee}\capZ\!\!\!Z^3$ consists of these three
fundamental generators together with
\[
R^\ast=[0,0;1]\,, \quad v^1=[-1,-2;1]\,,\quad v^2=[0,-1;1]\,,\quad w=[-1,-1;1]\,.
\vspace{1ex}
\]
\begin{center}
\unitlength=0.7mm
\linethickness{0.4pt}
\begin{picture}(146.00,65.00)
\put(10.00,20.00){\circle*{1.00}}
\put(10.00,30.00){\circle*{1.00}}
\put(10.00,40.00){\circle*{1.00}}
\put(10.00,50.00){\circle*{1.00}}
\put(10.00,60.00){\circle*{1.00}}
\put(20.00,20.00){\circle*{1.00}}
\put(20.00,30.00){\circle*{1.00}}
\put(20.00,40.00){\circle*{1.00}}
\put(20.00,50.00){\circle*{1.00}}
\put(20.00,60.00){\circle*{1.00}}
\put(30.00,20.00){\circle*{1.00}}
\put(30.00,30.00){\circle*{1.00}}
\put(30.00,40.00){\circle*{1.00}}
\put(30.00,50.00){\circle*{1.00}}
\put(30.00,60.00){\circle*{1.00}}
\put(40.00,20.00){\circle*{1.00}}
\put(40.00,30.00){\circle*{1.00}}
\put(40.00,40.00){\circle*{1.00}}
\put(40.00,50.00){\circle*{1.00}}
\put(40.00,60.00){\circle*{1.00}}
\put(50.00,20.00){\circle*{1.00}}
\put(50.00,30.00){\circle*{1.00}}
\put(50.00,40.00){\circle*{1.00}}
\put(50.00,50.00){\circle*{1.00}}
\put(50.00,60.00){\circle*{1.00}}
\put(110.00,20.00){\circle*{1.00}}
\put(110.00,30.00){\circle*{1.00}}
\put(110.00,40.00){\circle*{1.00}}
\put(110.00,50.00){\circle*{1.00}}
\put(110.00,60.00){\circle*{1.00}}
\put(120.00,20.00){\circle*{1.00}}
\put(120.00,30.00){\circle*{1.00}}
\put(120.00,40.00){\circle*{1.00}}
\put(120.00,50.00){\circle*{1.00}}
\put(120.00,60.00){\circle*{1.00}}
\put(130.00,20.00){\circle*{1.00}}
\put(130.00,30.00){\circle*{1.00}}
\put(130.00,40.00){\circle*{1.00}}
\put(130.00,50.00){\circle*{1.00}}
\put(130.00,60.00){\circle*{1.00}}
\put(140.00,20.00){\circle*{1.00}}
\put(140.00,30.00){\circle*{1.00}}
\put(140.00,40.00){\circle*{1.00}}
\put(140.00,50.00){\circle*{1.00}}
\put(140.00,60.00){\circle*{1.00}}
\put(20.00,50.00){\line(0,-1){20.00}}
\put(20.00,30.00){\line(1,0){30.00}}
\put(50.00,30.00){\line(-3,2){30.00}}
\put(110.00,20.00){\line(1,2){20.00}}
\put(130.00,60.00){\line(1,-1){10.00}}
\put(140.00,50.00){\line(-1,-1){30.00}}
\put(15.00,25.00){\makebox(0,0)[cc]{$a^1$}}
\put(55.00,25.00){\makebox(0,0)[cc]{$a^2$}}
\put(25.00,55.00){\makebox(0,0)[cc]{$a^3$}}
\put(104.00,15.00){\makebox(0,0)[cc]{$s^2$}}
\put(146.00,50.00){\makebox(0,0)[cc]{$s^3$}}
\put(130.00,65.00){\makebox(0,0)[cc]{$s^1$}}
\put(116.00,42.00){\makebox(0,0)[cc]{$w$}}
\put(124.00,27.00){\makebox(0,0)[cc]{$v^1$}}
\put(134.00,37.00){\makebox(0,0)[cc]{$v^2$}}
\put(128.00,46.00){\makebox(0,0)[cc]{$R^\ast$}}
\put(30.00,8.00){\makebox(0,0)[cc]{$\sigma=\mbox{cone}\,(P)$}}
\put(125.00,8.00){\makebox(0,0)[cc]{$\sigma^{\scriptscriptstyle\vee}$}}
\end{picture}
\vspace{-2ex}
\end{center}
In particular, $Y_\sigma$ has embedding dimension $7$.
The edges of $P$ have length $\ell(1)=3$, $\ell(2)=1$, and $\ell(3)=2$.
Hence, $Y_\sigma$ contains one-dimensional singularities of transversal type
A$_2$ and A$_1$.\\
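As a quick cross-check of the quoted edge lengths: the lattice length of an edge is the gcd of the coordinate differences of its endpoints. The following sketch (helper names are ours, not part of the text) recomputes $\ell(1),\ell(2),\ell(3)$ for the vertices of this example:

```python
from math import gcd

def edge_lengths(vertices):
    """Lattice length ell(j) of each edge a^j -> a^{j+1}:
    the gcd of the coordinate differences of its endpoints."""
    M = len(vertices)
    out = []
    for j in range(M):
        dx = vertices[(j + 1) % M][0] - vertices[j][0]
        dy = vertices[(j + 1) % M][1] - vertices[j][1]
        out.append(gcd(abs(dx), abs(dy)))
    return out

# vertices of P in cyclic order a^1, a^2, a^3
print(edge_lengths([(-1, -1), (2, -1), (-1, 1)]))  # [3, 1, 2]
```

The output reproduces $\ell(1)=3$, $\ell(2)=1$, $\ell(3)=2$ stated above.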
According to the previous theorem, $Y_\sigma$ admits only
infinitesimal deformations of the third type. Their degrees come in three series:
\begin{itemize}
\item[($\alpha$)]
$2R^\ast-p_\alpha\,s^3$ with $p_\alpha\geq 1$. Even the leading
element $R^\alpha=[-1,0,1]$ is not contained in~$\sigma^{\scriptscriptstyle\vee}$.
\item[($\beta$)]
$2R^\ast-p_\beta\,s^1$ with $p_\beta\geq 1$. The leading element
equals $R^\beta=v^2=[0,-1,1]$ and sits on the boundary of $\sigma^{\scriptscriptstyle\vee}$.
\item[($\gamma$)]
$3R^\ast-p_\gamma\,s^1$ with $p_\gamma\geq 2$. The leading
element is $R^\gamma=[0,-2,1]\notin\sigma^{\scriptscriptstyle\vee}$.
\end{itemize}
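The leading elements of the three series and their position relative to $\sigma^{\scriptscriptstyle\vee}$ can be verified by pairing with the generators $a^1,a^2,a^3$ of $\sigma$ (an element of $M_{I\!\!R}$ lies in $\sigma^{\scriptscriptstyle\vee}$ iff all pairings are non-negative). A small sketch with our own helper names:

```python
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def mul(c, v): return tuple(c * x for x in v)
def pair(a, R): return sum(x * y for x, y in zip(a, R))

Rstar, s1, s3 = (0, 0, 1), (0, 1, 1), (1, 0, 1)
gens = [(-1, -1, 1), (2, -1, 1), (-1, 1, 1)]   # a^1, a^2, a^3

Ralpha = sub(mul(2, Rstar), s3)            # 2R* - s^3
Rbeta  = sub(mul(2, Rstar), s1)            # 2R* - s^1
Rgamma = sub(mul(3, Rstar), mul(2, s1))    # 3R* - 2s^1

def position(R):
    vals = [pair(a, R) for a in gens]
    if min(vals) < 0:
        return "outside"
    return "boundary" if min(vals) == 0 else "interior"

print(Ralpha, position(Ralpha))  # (-1, 0, 1) outside
print(Rbeta,  position(Rbeta))   # (0, -1, 1) boundary
print(Rgamma, position(Rgamma))  # (0, -2, 1) outside
```

This matches the claims above: $R^\alpha, R^\gamma\notin\sigma^{\scriptscriptstyle\vee}$, while $R^\beta=v^2$ sits on $\partial\sigma^{\scriptscriptstyle\vee}$.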
\begin{center}
\unitlength=0.70mm
\linethickness{0.4pt}
\begin{picture}(80.00,50.50)
\put(15.00,10.00){\circle*{1.00}}
\put(15.00,20.00){\circle*{1.00}}
\put(15.00,30.00){\circle*{1.00}}
\put(15.00,40.00){\circle*{1.00}}
\put(15.00,50.00){\circle*{1.00}}
\put(25.00,10.00){\circle*{1.00}}
\put(25.00,20.00){\circle*{1.00}}
\put(25.00,30.00){\circle*{1.00}}
\put(25.00,40.00){\circle*{1.00}}
\put(25.00,50.00){\circle*{1.00}}
\put(35.00,10.00){\circle*{1.00}}
\put(35.00,20.00){\circle*{1.00}}
\put(35.00,30.00){\circle*{1.00}}
\put(35.00,40.00){\circle*{1.00}}
\put(35.00,50.00){\circle*{1.00}}
\put(45.00,10.00){\circle*{1.00}}
\put(45.00,20.00){\circle*{1.00}}
\put(45.00,30.00){\circle*{1.00}}
\put(45.00,40.00){\circle*{1.00}}
\put(45.00,50.00){\circle*{1.00}}
\put(15.00,10.00){\line(1,2){20.00}}
\put(35.00,50.00){\line(1,-1){10.00}}
\put(45.00,40.00){\line(-1,-1){30.00}}
\put(90.00,30.00){\makebox(0,0)[cc]{$\sigma^{\scriptscriptstyle\vee}\subseteq M_{I\!\!R}$}}
\put(25.00,40.00){\circle{2.00}}
\put(35.00,30.00){\circle{2.00}}
\put(35.00,20.00){\circle{2.00}}
\put(21.00,43.00){\makebox(0,0)[cc]{$R^\alpha$}}
\put(39.00,27.00){\makebox(0,0)[cc]{$R^\beta$}}
\put(39.00,17.00){\makebox(0,0)[cc]{$R^\gamma$}}
\end{picture}
\vspace{-2ex}
\end{center}
\par
\neu{3G-5}
Each degree belonging to type (3)
(i.e.\ $R=qR^\ast-p\,s^j$ with $2\leq q \leq \ell(j)$) provides an
infinitesimal deformation. Showing that they are unobstructed, by describing
how they
lift to genuine one-parameter deformations, poses no problem: Just split
the polygon $Q(R)$ into a Minkowski sum meeting conditions
(i) and (ii) of \zitat{Gd}{2}, then construct ${\tilde{\tau}}$, ${\tilde{\sigma}}$, and
$(f^1-f)$ as in \zitat{Gd}{2} and \zitat{Gd}{5}. However, we prefer to present
the result for our special case all at once by using new coordinates.\\
Let $P\subseteq \A{R^\ast}{}=I\!\!R^2\times\{1\}\subseteq I\!\!R^3=N_{I\!\!R}$ be a lattice
polygon as
in \zitat{3G}{3}, let $R=qR^\ast-p\,s^j$ be as just mentioned. Then
$\sigma, \tau\subseteq N_{I\!\!R}$ are the cones over $P$ and $P\cap [R\geq 0]$,
respectively, and the one-parameter family in degree $-R$ is obtained as follows:\\
\par
{\bf Proposition:}
{\em
The cone ${\tilde{\tau}}\subseteq N_{I\!\!R}\oplusI\!\!R=I\!\!R^4$ is generated by the elements
\begin{itemize}
\item[(i)]
$(a,0)-\langle a,R\rangle\, (\underline{0},1)$, if $a\in P\cap [R\geq 0]$ runs through
the vertices from the $R^\bot$-line until $a^j$,
\item[(ii)]
$(a,0)-\langle a,R\rangle \,(d^j/\ell(j),1)$, if $a\in P\cap [R\geq 0]$ runs
from $a^{j+1}$ until the $R^\bot$-line again, and
\item[(iii)]
$(\underline{0},1)$ and $(d^j/\ell(j),1)$.
\end{itemize}
The vector space $N_{I\!\!R}$ containing $\sigma$
sits in $N_{I\!\!R}\oplusI\!\!R$ as $N_{I\!\!R}\times\{0\}$. Via this embedding, one obtains
${\tilde{\sigma}}={\tilde{\tau}}+\sigma$ as usual. The monomials $f$ and $f^1$ are given by their
exponents $[R,0], [R,1]\in M\oplusZ\!\!\!Z$, respectively.
}
\begin{center}
\unitlength=0.7mm
\linethickness{0.4pt}
\begin{picture}(200.00,66.00)
\thicklines
\put(10.00,60.00){\line(1,-5){4.67}}
\put(14.67,37.00){\line(6,-5){22.33}}
\put(37.00,18.33){\line(3,-1){37.00}}
\put(74.00,6.00){\line(6,1){40.00}}
\put(114.00,12.67){\line(3,4){27.33}}
\put(141.00,49.00){\line(0,1){13.00}}
\put(141.00,62.00){\circle*{2.00}}
\put(141.00,49.00){\circle*{2.00}}
\put(114.00,13.00){\circle*{2.00}}
\put(74.00,6.00){\circle*{2.00}}
\put(37.00,18.00){\circle*{2.00}}
\put(15.00,36.00){\circle*{2.00}}
\put(96.00,4.00){\makebox(0,0)[cc]{$s^j$}}
\put(70.00,2.00){\makebox(0,0)[cc]{$a^j$}}
\put(119.00,8.00){\makebox(0,0)[cc]{$a^{j+1}$}}
\put(158.00,66.00){\makebox(0,0)[cc]{$R^\bot$}}
\put(36.00,63.00){\makebox(0,0)[cc]{$P\subseteq \A{R^\ast}{}$}}
\put(46.00,33.00){\makebox(0,0)[cc]{$P\cap [R\geq 0]$}}
\put(88.00,12.00){\makebox(0,0)[cc]
{\raisebox{0.5ex}{$\frac{q}{\ell(j)}$}$\, \overline{a^j a^{j+1}}$}}
\thinlines
\put(5.00,40.00){\line(6,1){146.00}}
\multiput(74.00,6.00)(2.04,12.24){4}{\line(1,6){1.53}}
\multiput(81.67,53.00)(6.755,-11.258){4}{\line(3,-5){5.066}}
\put(150.00,33.00){\makebox(0,0)[tl]{\parbox{11em}{
In $\A{R^\ast}{}$, the mutually parallel
lines $R^\bot$ and $a^j a^{j+1}$ have distance $q/p$ from each other.}}}
\end{picture}
\end{center}
Geometrically, one can think about ${\tilde{\tau}}$ as generated by the interval $I$
with vertices as in (iii) and by the polygon $P^\prime$
obtained as follows: ``Tighten'' $P\cap[R\geq 0]$ along $R^\bot$ by a cone with
base $q/\ell(j)\cdot \overline{a^j a^{j+1}}$ and some top on the $R^\bot$-line;
take $-\langle \bullet, R\rangle$ as an additional, fourth coordinate.
Then, $[R^\ast,0]$ is still $1$ on $P^\prime$ and equals $0$ on $I$.
Moreover, $[R,0]$ vanishes on $I$ and on the $R^\bot$-edge of $P^\prime$;
$[R,1]$ vanishes on the whole $P^\prime$.\\
\par
{\bf Proof:}
We change coordinates. If $g:=\mbox{gcd}(p,q)$ denotes the ``length'' of $R$,
then we can find an $s\in M$ such that $\{s,\,R/g\}$ forms a basis of
$M\cap (d^j)^\bot$. Adding some $r\in M$ with
$\langle d^j/\ell(j),r\rangle =1$
($r$ from Remark \zitat{3G}{3} will do) yields a $Z\!\!\!Z$-basis for the whole
lattice $M$. We consider the following commutative diagram:
\[
\dgARROWLENGTH=0.3em
\begin{diagram}
\node{N}
\arrow[4]{e,tb}{(s,r,R/g)}{\sim}
\arrow{s,l}{(\mbox{\footnotesize id},\,0)}
\node[4]{Z\!\!\!Z^3}
\arrow{s,r}{(\mbox{\footnotesize id},\,g\cdot\mbox{\footnotesize pr}_3)}\\
\node{N\oplusZ\!\!\!Z}
\arrow[4]{e,tb}{([s,0],\,[r,0],\,[R/g,0],\,[R,1])}{\sim}
\node[4]{Z\!\!\!Z^3\oplusZ\!\!\!Z}
\end{diagram}
\]
The left hand side contains the data being relevant for our proposition.
Carrying them to the right yields:
\begin{itemize}
\item
$[0,0,g]\in (Z\!\!\!Z^3)^\ast$ as the image of $R$;
\item
$[0,0,g,0], [0,0,0,1]\in (Z\!\!\!Z^4)^\ast$ as the images of $[R,0]$ and
$[R,1]$, respectively;
\item
$\tau$ becomes a cone with affine cross cut
\vspace{-1ex}
\[
Q([0,0,g])=\mbox{conv}\Big(\big(\langle a,s\rangle/\langle a,R\rangle;\,
\langle a,r\rangle/\langle a,R\rangle;\,1/g\big)\,\Big|\;
a\in P\cap [R\geq 0]\Big)\,;
\vspace{-1.2ex}
\]
\item
$I$ changes into the unit interval $(Q_1,1)$ reaching from $(0,0,0,1)$ to
$(0,1,0,1)$;
\item
finally, $\mbox{cone}(P^\prime)$ maps onto the cone spanned by the convex hull
$(Q_0,0)$ of the points
$\big(\langle a,s\rangle/\langle a,R\rangle;\,
\langle a,r\rangle/\langle a,R\rangle;\,1/g;\,0\big)$ for $a\in P\cap [R\geq 0]$
on the $a^j$-side and\\
$\big(\langle a,s\rangle/\langle a,R\rangle;\,
\langle a,r\rangle/\langle a,R\rangle-1;\,1/g;\,0\big)$ for $a$ on the
$a^{j+1}$-side, respectively.
\end{itemize}
Since $Q([0,0,g])$ equals the Minkowski sum of the interval
$Q_1\subseteq \A{[0,0,g]}{0}$
and the polygon $Q_0\subseteq\A{[0,0,g]}{}$, we are done by \zitat{Gd}{2}.
\hfill$\Box$\\
\par
\neu{3G-6}
To see how the original equations of the singularity $Y_\sigma$ will be
perturbed, it is useful to study first the dual cones
${\tilde{\tau}}^{\scriptscriptstyle\vee}$ or ${\tilde{\sigma}}^{\scriptscriptstyle\vee}={\tilde{\tau}}^{\scriptscriptstyle\vee}\cap\pi^{-1}(\sigma^{\scriptscriptstyle\vee})$:
\vspace{-1ex}\\
\par
{\bf Proposition:}
{\em
If $s\in\sigma^{\scriptscriptstyle\vee}\cap M$, then the $(M\oplusZ\!\!\!Z)$-element
\[
S:= \left\{ \begin{array}{ll}
[s,\,0] & \mbox{ if } \langle d^j,s\rangle\geq 0\\
{}[s,\, -\langle d^j/\ell(j),\,s\rangle]
& \mbox{ if } \langle d^j,s\rangle\leq 0
\end{array} \right.
\]
is a lift of $s$ into ${\tilde{\sigma}}^{\scriptscriptstyle\vee}\cap (M\oplusZ\!\!\!Z)$.
(Notice that it does not depend on $p,q$, but only on $j$.)
Moreover, if $s^v$ runs through the edges of $P\cap[R\geq 0]$, the elements
$S^v$ together with $[R,0]$ and $[R,1]$ form the fundamental generators of
${\tilde{\tau}}^{\scriptscriptstyle\vee}$.
\vspace{-1ex}
}\\
\par
{\bf Proof:} Since we know ${\tilde{\tau}}$ from the previous proposition, the
calculations are straightforward and will be omitted.
\hfill$\Box$\\
\par
\neu{3G-7}
Recall from \zitat{T1}{1} that $E$ denotes the
minimal set generating the semigroup $\sigma^{\scriptscriptstyle\vee}\cap M$.
To any $s\in E$ there is an assigned
variable $z_s$, and $Y_\sigma\subseteq \,I\!\!\!\!C^E$ is given by binomial equations
arising from linear relations among elements of $E$.
Everything will be clear by considering an
example: A linear relation such as $s^1+2s^3=s^2+s^4$ transforms into
$z_1\,z_3^2=z_2\,z_4$.\\
The fact that $\sigma$ defines a Gorenstein variety (i.e.\ $\sigma$ is a cone
over a lattice polytope) implies that $E$ consists
only of $R^\ast$ and elements of
$\partial\sigma^{\scriptscriptstyle\vee}$ including the fundamental generators $s^v$. If
$E\cap\partial\sigma^{\scriptscriptstyle\vee}$ is ordered clockwise, then any two adjacent elements
form together with $R^\ast$ a $Z\!\!\!Z$-basis of the three-dimensional lattice $M$.\\
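For the example of \zitat{3G}{4}, this basis property can be checked directly: each pair of adjacent elements of $E\cap\partial\sigma^{\scriptscriptstyle\vee}$ together with $R^\ast$ has determinant $\pm 1$. A sketch (the boundary ordering and helper names are ours):

```python
def det3(u, v, w):
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
          - u[1] * (v[0] * w[2] - v[2] * w[0])
          + u[2] * (v[0] * w[1] - v[1] * w[0]))

Rstar = (0, 0, 1)
# E ∩ ∂σ∨ for the example of 3G-4, ordered along the boundary:
#           s^1        w          s^2         v^1        v^2       s^3
boundary = [(0, 1, 1), (-1, -1, 1), (-2, -3, 1), (-1, -2, 1), (0, -1, 1), (1, 0, 1)]

dets = [det3(Rstar, boundary[i], boundary[(i + 1) % 6]) for i in range(6)]
print(dets)  # every entry is ±1, so each adjacent pair plus R* is a Z-basis
```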
In particular, any three sequenced elements of $E\cap\partial\sigma^{\scriptscriptstyle\vee}$
provide a unique linear relation among them and $R^\ast$.
(We met this fact already in Remark \zitat{3G}{3}; there $r$, $s^j$, and
$r^\prime$ were those elements.)
The resulting ``boundary'' equations do not generate the ideal of
$Y_\sigma\subseteq\,I\!\!\!\!C^E$. Nevertheless, for describing a deformation
of $Y_\sigma$, it is sufficient to know about perturbations of this subset only.
Moreover, if one has to avoid boundary equations ``overlapping'' a certain spot
on $\partial\sigma^{\scriptscriptstyle\vee}$, then it will even be possible to drop up to
two of them from the list.
\vspace{1ex}
\begin{center}
\unitlength=0.6mm
\linethickness{0.4pt}
\begin{picture}(122.00,68.00)
\put(19.00,55.00){\line(-1,-3){11.67}}
\put(7.33,20.00){\line(4,-1){42.67}}
\put(50.00,9.33){\line(3,1){22.00}}
\put(72.00,16.67){\line(1,1){15.00}}
\put(19.00,55.00){\line(5,3){20.00}}
\put(7.00,20.00){\circle*{2.00}}
\put(13.00,37.00){\circle*{2.00}}
\put(24.00,16.00){\circle*{2.00}}
\put(46.00,68.00){\makebox(0,0)[cc]{$\dots$}}
\put(88.00,39.00){\makebox(0,0)[cc]{$\vdots$}}
\put(38.00,42.00){\makebox(0,0)[cc]{$R^\ast$}}
\put(45.00,39.00){\circle*{2.00}}
\multiput(7.00,20.00)(10.968,5.484){8}{\line(2,1){8.226}}
\put(92.00,62.50){\circle*{2.00}}
\put(94.00,68.00){\makebox(0,0)[cc]{$R$}}
\put(7.00,40.00){\makebox(0,0)[cc]{$r$}}
\put(20.00,8.00){\makebox(0,0)[cc]{$r^\prime$}}
\put(3.00,13.00){\makebox(0,0)[cc]{$s^j$}}
\put(13.00,58.00){\makebox(0,0)[cc]{$s^{j-1}$}}
\put(50.00,3.00){\makebox(0,0)[cc]{$s^{j+1}$}}
\put(122.00,21.00){\makebox(0,0)[cc]{$\sigma^{\scriptscriptstyle\vee}$}}
\put(62.00,58.00){\makebox(0,0)[cc]{$[d^j\geq 0]$}}
\put(78.00,46.00){\makebox(0,0)[cc]{$[d^j\leq 0]$}}
\end{picture}
\end{center}
{\bf Theorem:}
{\em
The one-parameter deformation of $Y_\sigma$ in degree $-(q\,R^\ast-p\,s^j)$
is completely determined by the following perturbations:
\begin{itemize}
\item[(i)]
(Boundary) equations involving only variables that are induced from
$[d^j\geq 0]\subseteq\sigma^{\scriptscriptstyle\vee}$ remain unchanged. The same statement holds
for $[d^j\leq 0]$.
\item[(ii)]
The boundary equation
$z_r\,z_{r^\prime}-z_{R^\ast}^{\ell(j)}\,z_{s^j}^k=0$
assigned to the triple $\{r,s^j,r^\prime\}$
is perturbed
into $\big(z_r\,z_{r^\prime}-z_{R^\ast}^{\ell(j)}\,z_{s^j}^k\big)
- t\,z_{R^\ast}^{\ell(j)-q}\,z_{s^j}^{k+p}=0$. Divide everything by
$z^k_{s^j}$ if $k<0$.
\vspace{2ex}
\end{itemize}
}
\par
{\bf Proof:}
Restricted to either $[d^j\geq 0]$ or $[d^j\leq 0]$, the map $s\mapsto S$
lifting $E$-elements into ${\tilde{\sigma}}^{\scriptscriptstyle\vee}\cap(M\oplusZ\!\!\!Z)$ is linear. Hence, any linear
relation remains true, and part (i) is proven.\\
For the second part, we consider the boundary relation
$r+r^\prime=\ell(j)\,R^\ast+k\,s^j$ with a suitable $k\inZ\!\!\!Z$.
By Proposition \zitat{3G}{6}, the
summands involved lift to the elements $[r,0]$, $[r^\prime,1]$, $[R^\ast,0]$,
and $[s^j,0]$, respectively. In particular, the relation breaks down and has to be
replaced by
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{rcl}
[r,0]+[r^\prime,1]&=&
[R,1] + \big(\ell(j)-q\big)\, [R^\ast,0] + (k+p)\, [s^j,0]
\quad \mbox{ and}\\
\ell(j)\,[R^\ast,0]+k\,[s^j,0] &=&
[R,0] + \big(\ell(j)-q\big)\, [R^\ast,0] + (k+p)\, [s^j,0]\,.
\end{array}
\]
The monomials corresponding to $[R,1]$ and $[R,0]$ are $f^1$ and $f$, respectively.
They are {\em not} regular on the total space $X$, but their difference
$t:=f^1-f$ is. Hence, the difference of the monomial versions of both
equations yields the result.\\
Finally, we should remark that (i) and (ii) cover all boundary equations except
those overlapping the intersection of $\partial\sigma^{\scriptscriptstyle\vee}$ with
$\overline{R^\ast R}$.
\hfill$\Box$\\
\par
\neu{3G-8}
We return to Example \zitat{3G}{4} and discuss the one-parameter
deformations occurring in degree $-R^\alpha$, $-R^\beta$, and $-R^\gamma$,
respectively:
\vspace{1ex}\\
{\em Case $\alpha$:}\quad
$R^\alpha=[-1,0,1]=2R^\ast-s^3$ means $j=3$, $q=\ell(3)=2$, and $p=1$. Hence,
the line $R^\bot$ has distance $q/p=2$ from its parallel through $a^3$ and
$a^1$. In particular, $\tau=\langle a^1, c^1, c^3, a^3\rangle$ with
$c^1=(1,-1,1)$ and $c^3=(3,-1,3)$.
\begin{center}
\unitlength=0.8mm
\linethickness{0.4pt}
\begin{picture}(100.00,47.00)
\put(20.00,15.00){\circle*{1.00}}
\put(20.00,25.00){\circle*{1.00}}
\put(20.00,35.00){\circle*{1.00}}
\put(30.00,15.00){\circle*{1.00}}
\put(30.00,25.00){\circle*{1.00}}
\put(40.00,15.00){\circle*{1.00}}
\put(50.00,15.00){\circle*{1.00}}
\put(20.00,35.00){\line(0,-1){20.00}}
\put(20.00,15.00){\line(1,0){30.00}}
\put(50.00,15.00){\line(-3,2){30.00}}
\put(40.00,10.00){\line(0,1){30.00}}
\put(44.00,41.00){\makebox(0,0)[cc]{$R^\bot$}}
\put(44.00,24.00){\makebox(0,0)[cc]{$c^3$}}
\put(15.00,10.00){\makebox(0,0)[cc]{$a^1$}}
\put(37.00,12.00){\makebox(0,0)[cc]{$c^1$}}
\put(55.00,10.00){\makebox(0,0)[cc]{$a^2$}}
\put(15.00,40.00){\makebox(0,0)[cc]{$a^3$}}
\put(90.00,25.00){\makebox(0,0)[cc]{$\tau\subseteq\sigma$}}
\end{picture}
\vspace{-2ex}
\end{center}
We construct the generators of ${\tilde{\tau}}$ by the recipe of Proposition \zitat{3G}{5}:
$a^3$ treated via (i) and $a^1$ treated via (ii) yield the same element
$A:=(-1,1,1,-2)$; from the $R^\bot$-line we obtain $C^1:=(1,-1,1,0)$ and
$C^3:=(3,-1,3,0)$; finally (iii) provides $X:=(0,0,0,1)$ and $Y:=(0,-1,0,1)$.
Hence, ${\tilde{\tau}}$ is the cone over the pyramid with plane base $X\,Y\,C^1\,C^3$
and $A$ as top. (The relation between the vertices of the quadrangle
equals $3C^1+2X=C^3+2Y$.)
Moreover, ${\tilde{\sigma}}$ equals ${\tilde{\sigma}}={\tilde{\tau}}+I\!\!R_{\geq 0}a^2$ with $a^2:=(a^2,0)$.
Since $A+2X+2a^2=C^3$ and $A+2Y+2a^2=3C^1$, ${\tilde{\sigma}}$ is a simplex generated by
$A$, $X$, $Y$, and $a^2$.\\
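The relations just quoted among the generators of ${\tilde{\tau}}$ and ${\tilde{\sigma}}$ follow from plain coordinate arithmetic; a minimal sketch (helper names are ours):

```python
def add(*vs): return tuple(map(sum, zip(*vs)))
def mul(c, v): return tuple(c * x for x in v)

# generators for case alpha
A  = (-1, 1, 1, -2)
C1 = (1, -1, 1, 0)
C3 = (3, -1, 3, 0)
X  = (0, 0, 0, 1)
Y  = (0, -1, 0, 1)
a2 = (2, -1, 1, 0)   # (a^2, 0)

assert add(mul(3, C1), mul(2, X)) == add(C3, mul(2, Y))   # relation in the base quadrangle
assert add(A, mul(2, X), mul(2, a2)) == C3                # C^3 is redundant in sigma~
assert add(A, mul(2, Y), mul(2, a2)) == mul(3, C1)        # 3C^1 is redundant in sigma~
print("all relations hold")
```

In particular, ${\tilde{\sigma}}$ is indeed generated by $A$, $X$, $Y$, and $a^2$ alone.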
Denoting the variables assigned to $s^1, s^2, s^3, R^\ast, v^1, v^2, w \in E
\subseteq \sigma^{\scriptscriptstyle\vee}\cap M$ by $Z_1$, $Z_2$, $Z_3$, $U$, $V_1$, $V_2$, and
$W$, respectively, there are six boundary equations:
\vspace{-1ex}
\[
Z_3WZ_1-U^3\,=\, Z_1Z_2-W^2\,=\,
WV_1-UZ_2\,=\, Z_2V_2-V_1^2\,=\, V_1Z_3-V_2^2\,=\,
V_2Z_1-U^2\,=\,0\,.
\vspace{-1ex}
\]
Only the four latter ones are covered by Theorem \zitat{3G}{7}. They will be
perturbed into
\vspace{-1ex}
\[
WV_1-UZ_2 \,=\,Z_2V_2-V_1^2\,=\, V_1Z_3-V_2^2\,=\,
V_2Z_1-U^2-t_\alpha Z_3\,=\,0\,.
\vspace{1ex}
\]
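Each of the six unperturbed boundary equations encodes a linear relation among the Hilbert-basis elements of $\sigma^{\scriptscriptstyle\vee}\cap M$; these relations can be verified coordinate-wise (helper names are ours):

```python
def add(*vs): return tuple(map(sum, zip(*vs)))
def mul(c, v): return tuple(c * x for x in v)

s1, s2, s3 = (0, 1, 1), (-2, -3, 1), (1, 0, 1)
Rstar, v1, v2, w = (0, 0, 1), (-1, -2, 1), (0, -1, 1), (-1, -1, 1)

# linear relation behind each binomial boundary equation
relations = [
    (add(s3, w, s1), mul(3, Rstar)),   # Z3*W*Z1 = U^3
    (add(s1, s2),    mul(2, w)),       # Z1*Z2 = W^2
    (add(w, v1),     add(Rstar, s2)),  # W*V1 = U*Z2
    (add(s2, v2),    mul(2, v1)),      # Z2*V2 = V1^2
    (add(v1, s3),    mul(2, v2)),      # V1*Z3 = V2^2
    (add(v2, s1),    mul(2, Rstar)),   # V2*Z1 = U^2
]
print(all(lhs == rhs for lhs, rhs in relations))  # True
```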
\par
{\em Case $\beta$:}\quad
$R^\beta=[0,-1,1]=2R^\ast-s^1$ means $j=1$, $\ell(1)=3$, $q=2$, and $p=1$.
Hence,
$R^\bot$ still has distance $2$, but now from the line $a^1a^2$.
\vspace{-2ex}
\begin{center}
\unitlength=0.8mm
\linethickness{0.4pt}
\begin{picture}(100.00,47.00)
\put(20.00,15.00){\circle*{1.00}}
\put(20.00,25.00){\circle*{1.00}}
\put(20.00,35.00){\circle*{1.00}}
\put(30.00,15.00){\circle*{1.00}}
\put(30.00,25.00){\circle*{1.00}}
\put(40.00,15.00){\circle*{1.00}}
\put(50.00,15.00){\circle*{1.00}}
\put(20.00,35.00){\line(0,-1){20.00}}
\put(20.00,15.00){\line(1,0){30.00}}
\put(50.00,15.00){\line(-3,2){30.00}}
\put(10.00,35.00){\line(1,0){50.00}}
\put(65.00,35.00){\makebox(0,0)[cc]{$R^\bot$}}
\put(15.00,10.00){\makebox(0,0)[cc]{$a^1$}}
\put(55.00,10.00){\makebox(0,0)[cc]{$a^2$}}
\put(20.00,40.00){\makebox(0,0)[cc]{$a^3$}}
\put(90.00,25.00){\makebox(0,0)[cc]{$\tau=\sigma$}}
\end{picture}
\vspace{-2ex}
\end{center}
We obtain ${\tilde{\tau}}=\langle (-1,-1,1,-2); (0,-1,1,-2); (-1,1,1,0);
(0,0,0,1); (1,0,0,1) \rangle$.\\
The boundary equation corresponding to
Theorem \zitat{3G}{7}(ii) is $Z_3WZ_1-U^3=0$; it perturbs into
$Z_3WZ_1-U^3-t_\beta UZ_1=0$.\\
\par
{\em Case $\gamma$:}\quad
$R^\gamma=[0,-2,1]=3R^\ast-2s^1$ means $j=1$, $q=\ell(1)=3$, and $p=2$.
\vspace{-2ex}
\begin{center}
\unitlength=0.8mm
\linethickness{0.4pt}
\begin{picture}(100.00,47.00)
\put(20.00,15.00){\circle*{1.00}}
\put(20.00,25.00){\circle*{1.00}}
\put(20.00,35.00){\circle*{1.00}}
\put(30.00,15.00){\circle*{1.00}}
\put(30.00,25.00){\circle*{1.00}}
\put(40.00,15.00){\circle*{1.00}}
\put(50.00,15.00){\circle*{1.00}}
\put(20.00,35.00){\line(0,-1){20.00}}
\put(20.00,15.00){\line(1,0){30.00}}
\put(50.00,15.00){\line(-3,2){30.00}}
\put(10.00,30.00){\line(1,0){50.00}}
\put(65.00,30.00){\makebox(0,0)[cc]{$R^\bot$}}
\put(15.00,10.00){\makebox(0,0)[cc]{$a^1$}}
\put(55.00,10.00){\makebox(0,0)[cc]{$a^2$}}
\put(20.00,40.00){\makebox(0,0)[cc]{$a^3$}}
\put(90.00,25.00){\makebox(0,0)[cc]{$\tau\subseteq\sigma$}}
\end{picture}
\vspace{-2ex}
\end{center}
Here, we have ${\tilde{\tau}}=\langle (-1,-1,1,-3); (-2,1,2,0); (-1,2,4,0);
(0,0,0,1); (1,0,0,1) \rangle$, and the previous boundary equation provides
$Z_3WZ_1-U^3-t_\gamma Z_1^2=0$.\\
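For all three cases, the exponents in the perturbation term $t\,z_{R^\ast}^{\ell(j)-q}\,z_{s^j}^{k+p}$ of Theorem \zitat{3G}{7}(ii) can be recomputed from the boundary relation $r+r'=\ell(j)\,R^\ast+k\,s^j$; a sketch with our own helpers (the neighbour data $r,r'$ is read off from the boundary of $\sigma^{\scriptscriptstyle\vee}$ in \zitat{3G}{4}):

```python
def add(u, v): return tuple(a + b for a, b in zip(u, v))
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def mul(c, v): return tuple(c * x for x in v)

Rstar, s1, s3 = (0, 0, 1), (0, 1, 1), (1, 0, 1)
w, v2 = (-1, -1, 1), (0, -1, 1)

def k_of(r, rp, ell, sj):
    """Integer k in the boundary relation r + r' = ell*R* + k*s^j."""
    diff = sub(add(r, rp), mul(ell, Rstar))
    for k in range(-5, 6):
        if diff == mul(k, sj):
            return k
    raise ValueError("no integer k")

# (r, r', s^j, ell(j), q, p) for the three series
cases = {"alpha": (v2, s1, s3, 2, 2, 1),
         "beta":  (s3, w,  s1, 3, 2, 1),
         "gamma": (s3, w,  s1, 3, 3, 2)}

for name, (r, rp, sj, ell, q, p) in cases.items():
    k = k_of(r, rp, ell, sj)
    # perturbation: t * U^(ell-q) * z_{s^j}^(k+p), cleared by z^k if k < 0
    print(name, "k =", k, " U-exponent:", ell - q, " z-exponent:", k + p)
```

This reproduces the perturbation terms above: $t_\alpha Z_3$ ($k=0$), $t_\beta UZ_1$ ($k=-1$, after clearing $Z_1^{-1}$), and $t_\gamma Z_1^2$.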
\par
\section{ Introduction }
There has been a strong revival of interest, recently, in the physics
of magnetic vortices in type II and high-temperature superconductors
\cite{reviews}. Most research efforts have been devoted to phenomena
relating to the nature of the mixed phase of a superconductor in some
externally applied magnetic field and supercurrent. Issues connected
with the pinning of the flux lines by defects have been widely studied.
We \cite{ieju}, as well as Ao and Thouless \cite{aoth} and Stephen
\cite{stephen}, have addressed the problem of the quantum dynamics of
vortices in the absence of an external field but in the presence of
an externally driven supercurrent, quantum dissipation and pinning.
This leads to the decay of a supercurrent, or a residual zero-temperature
resistance in the superconductor. Whilst most of the dissipation seems
to be ascribed to vortices tunneling in the sample from the edge, an
interesting novel possibility also explored by us in depth is that of
a residual resistance arising from spontaneous vortex-antivortex pair
creation in the bulk of a thin film. This is the mesoscopic counterpart
of electron-positron pair production in two-dimensional (2D) quantum
electrodynamics (QED) in the presence of static e.m. fields, which in a
superconductor arise from the static and velocity-dependent components of
the Magnus force acting on the vortices. Exploiting this analogy with QED,
a powerful ``relativistic'' quantum field theory approach has been
set up to study vortex nucleation in the 2D geometry in the presence of
quantum dissipation and of pinning potentials. The central result is that
the nucleation rate $\Gamma$ has a strong exponential dependence on the
number current density $J$, given by
\begin{equation}
\Gamma{\propto}\eta^{1/2}\eta_{eff}J^{-1}
\exp\{-\eta_{eff}{\cal E}_{0R}^2/4\pi J^2\}
\label{rate0}
\end{equation}
\noindent
Here $\eta_{eff}$ is an effective viscosity coefficient as renormalised by
the magnetic-like part of the Magnus force, and ${\cal E}_{0R}$ is the rest-
or nucleation-energy of a single vortex as renormalized by screened
Coulomb interactions and (fake) Landau-level corrections. This
exponential dependence would make the vortex nucleation (folded, e.g.,
into the sample's resistance) observable in a rather narrow range of
$J$-values. Thus, normally the superconductor is essentially resistance-free.
However, the high values of $J$ that can be reached in the high-$T_c$
materials make the possibility of observing pair creation in static fields
within reach for thin films. One particular feature that would uniquely
relate the residual resistance to the phenomenon of spontaneous vortex-pair
creation is the presence of {\em oscillations} in the $J$-dependence of
$\Gamma(J)$ in case a {\em periodic} pinning potential is artificially
created in the film. These oscillations are in fact strictly connected to
the pinning-lattice spacing $d=2\pi/k$ of the periodic potential (we assume
a square lattice), e.g.
\begin{equation}
U({\bf q}(t))=U_0 \sum_{a=1}^2 \left [ 1 - \cos \left ( kq_a(t)
\right ) \right ]
\label{potent}
\end{equation}
\noindent
acting on the nucleating vortex-pairs described by a coordinate ${\bf q}$.
The problem of quantum dissipation for a particle moving in a periodic
potential has some interesting features in its own right
\cite{schmid,ghm,fizw}. It is characterised by a localization phase
transition driven by dissipation; accordingly, two phases can occur
depending on whether the dissipation coefficient \cite{cale} $\eta$ is
greater (confined phase) or smaller (mobile phase) than a critical
value $\eta_c=k^2/2\pi=2\pi/d^2$. This localization transition is described
by a Kosterlitz-type renormalization group (RG) approach, yet with some
important differences that will be recalled below. We have implemented
the RG approach for the evaluation of the dependence of the spontaneous
nucleation rate of vortex-antivortex pairs on the external parameters for
our own quantum dynamical system. A remnant of the dissipation-driven
phase transition is observed and the pair production rate $\Gamma$ can
be derived in both phases by means of a frequency-space RG procedure
leading to observable current-oscillations if $\eta > \eta_c$.
\section{ RG approach to dissipative localization transition }
First, we briefly recall the RG description of the localization
transition driven by quantum dissipation \cite{fizw}. The effective
action for a particle diffusing in a periodic potential and subject to
quantum dissipation of the Caldeira-Leggett type \cite{cale} is, in
Fourier frequency space:
\begin{equation}
{\cal S}=\int_0^{\tau}{\cal L}({\bf q})=\tau
\sum_n \{ \frac{1}{2}m\omega_n^2+\frac{1}{2}\eta |\omega_n| \}
\bar{q}_a(\omega_n)\bar{q}_a(-\omega_n)+\int_0^{\tau} dt U({\bf q})
\label{action0}
\end{equation}
\noindent
where $m$ is the mass of the quantum particle and $\eta$ the
phenomenological friction coefficient. In the low-frequency limit the
effects of inertia can be neglected and the problem would acquire the same
phenomenology as for the sine-Gordon model (in (0+1)-dimensions),
except for the peculiar $\omega_n$-dependence of the propagator reflecting
the broken time-reversal symmetry of quantum dissipation. When the RG
procedure is applied to Eq. (\ref{action0}) a renormalization of the
potential amplitude $U_0$ occurs, but not of the friction coefficient
$\eta$ since only local operators in the time variable can be generated
within a RG transformation. In terms of the dimensionless parameters
($\Omega$ is a large frequency-cutoff) ${\cal U}=U_0/\Omega$ and
$\alpha=2\pi\eta/k^2$, the RG recursion relations read
\begin{equation}
\frac{d{\cal U}}{d\ell}=\left ( 1-\frac{1}{\alpha} \right ) {\cal U}
+ \cdots, \qquad
\frac{d\alpha}{d\ell}=0
\label{recrel}
\end{equation}
\noindent
with $e^{-\ell}$ the frequency-scale renormalization parameter. These have
the simple solution
\begin{equation}
{\cal U}(\ell)={\cal U}(0)e^{(1-\eta_c/\eta)\ell}, \qquad
\alpha(\ell)=\alpha(0)
\label{rgflow}
\end{equation}
\noindent
displaying the localization transition for $\eta=\eta_c=k^2/2\pi=2\pi/d^2$.
The potential's amplitude vanishes
under a RG change of time scale for $\eta < \eta_c$, but for
$\eta > \eta_c$ it tends to diverge and the RG procedure must be
interrupted. Unlike in the Kosterlitz RG scheme, this cannot be done
unequivocally in the present situation, for there is no true characteristic
correlation time or frequency owing to the fact that one never moves away
from the neighbourhood of the critical point $\eta_c$. An alternative
strategy for the confined phase is to resort to a variational treatment
\cite{fizw}, which dynamically generates a correlation time.
In this procedure the action of Eq. (\ref{action0}) is replaced by a
trial Gaussian form (neglecting inertia)
\begin{equation}
{\cal S}_{tr}=\frac{\eta}{4\pi} \int_0^{\tau} dt \int_{-\infty}^{+\infty} dt'
\left ( \frac{{\bf q}(t)-{\bf q}(t')}{t-t'} \right )^2 + \frac{1}{2} M^2
\int_0^{\tau} dt {\bf q}(t)^2
\label{actiontr}
\end{equation}
\noindent
where $M^2$ is determined by minimising self-consistently the free energy
$F_{tr}+\langle S-S_{tr} \rangle_{tr}$. This leads to the equation
\begin{equation}
M^2=U_0k^2 \exp \left \{ -\frac{k^2}{2\tau} \sum_n \frac{1}{\eta|\omega_n|
+M^2} \right \}
\end{equation}
\noindent
having a solution $M^2{\neq}0$ only in the confined phase ($\eta > \eta_c$),
since introducing the cutoff $\Omega$ in the (continuous) sum over frequency
modes $\omega_n=2{\pi}n/\tau$, the equation for $M^2$ leads to (for
$M^2\rightarrow 0$)
\begin{equation}
M^2=\eta\Omega \left ( \frac{2\pi U_0}{\Omega} \frac{\eta_c}{\eta}
\right )^{\eta/(\eta-\eta_c)}{\equiv}\eta\Omega\mu
\label{mass}
\end{equation}
\noindent
This spontaneously generated ``mass'' interrupts the divergent
renormalization of the periodic potential amplitude $U_0$, which in the
RG limit $\ell{\rightarrow}{\infty}$ tends to
\begin{equation}
U_0(\ell)=U_0 \left ( \frac{e^{-\ell}+\mu}{1+\mu} \right )^{\eta_c/\eta}
{\rightarrow}U_0 \left ( \frac{\mu + 1/n^{*}}
{\mu + 1} \right )^{\eta_c/\eta}
\end{equation}
\noindent
Here, we have put $\Omega=2\pi n^{*}/\tau=n^{*}\omega_1$ and
$\mu=M^2/\Omega\eta$.
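As a numerical check on Eq. (\ref{mass}), the gap equation can be iterated directly. In the continuum approximation of the frequency sum, the self-consistency condition becomes $M^2 = U_0 k^2\,[M^2/(\eta\Omega+M^2)]^{\eta_c/\eta}$ with $k^2=2\pi\eta_c$, whose small-$M^2$ root reproduces Eq. (\ref{mass}). The sketch below is ours, with arbitrary illustrative parameter values:

```python
import math

def mass_gap(U0, eta, eta_c, Omega, n_iter=500):
    """Fixed-point iteration of the variational gap equation in the
    continuum approximation, M^2 = U0*k^2*(M^2/(eta*Omega+M^2))**(eta_c/eta),
    with k^2 = 2*pi*eta_c.  A nonzero fixed point exists only in the
    confined phase, eta > eta_c."""
    k2 = 2.0 * math.pi * eta_c
    M2 = eta * Omega                      # start the iteration at the cutoff scale
    for _ in range(n_iter):
        M2 = U0 * k2 * (M2 / (eta * Omega + M2)) ** (eta_c / eta)
    return M2

# Illustrative parameters (ours, not from the text): confined phase, eta = 2*eta_c.
eta_c, eta, Omega, U0 = 1.0, 2.0, 100.0, 0.5
M2_iter = mass_gap(U0, eta, eta_c, Omega)
# Closed form of Eq. (mass): M^2 = eta*Omega*((2*pi*U0/Omega)*(eta_c/eta))**(eta/(eta-eta_c))
M2_closed = eta * Omega * ((2.0 * math.pi * U0 / Omega) * eta_c / eta) ** (eta / (eta - eta_c))
```

For $M^2\ll\eta\Omega$ the iterated and closed-form values agree to a few parts in $10^4$.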
\section{ RG treatment of vortex-antivortex pair-creation in the presence
of a periodic pinning potential }
We begin by recalling the need for a relativistic description of the
process. This leads \cite{ieju} to a Schwinger-type formula for the decay
of the ``vacuum'', represented by a thin superconducting film in which static
e.m.-like fields arise when a supercurrent is switched on. The quantum
fluctuations of these fields are vortex-antivortex pairs, nucleating at a
rate given by
\begin{equation}
\frac{\Gamma}{L^2}=\frac{2}{L^2T} Im \int_{\epsilon}^{\infty}
\frac{d\tau}{\tau} e^{-{\cal E}_0^2\tau} \int
{\cal D}q(t) \exp\{ -\int_0^{\tau} dt {\cal L}_E \}
\label{rate}
\end{equation}
\noindent
where $L^2T$ is the space-time volume of the sample and ${\cal E}_0$ the
vortex-nucleation energy (suitably renormalised by vortex-screening effects).
Also
\begin{eqnarray}
{\cal L}_E&=&\frac{1}{2}m_{\mu}\dot{q}_{\mu}\dot{q}_{\mu}-\frac{1}{2}i
\dot{q}_{\mu}F_{\mu\nu}q_{\nu} + V({\bf q}) \nonumber \\
&+&\sum_k \left \{ \frac{1}{2}m_k\dot{\bf x}_k^2
+\frac{1}{2}m_k\omega_k^2 \left( {\bf x}_k+\frac{c_k}{m_k\omega_k^2}{\bf q}
\right )^2 \right \}
\label{lagran}
\end{eqnarray}
\noindent
is the Euclidean single-particle relativistic Lagrangian, incorporating the
pinning potential $V({\bf q})=2{\cal E}_0U({\bf q})$ and the Caldeira-Leggett
mechanism \cite{cale}. In the absence of the pinning potential, the
relativistic action is quadratic and the path integral in Eq. (\ref{rate})
can be evaluated exactly. The leading term in the expression for $\Gamma$
follows from the lowest pole in the $\tau$-integral and this can be obtained
exactly in the (non-relativistic) limit in which
$m_1=m_2=\frac{\gamma}{2}{\rightarrow}0$, with
$\frac{1}{\gamma}={\cal E}_0/m{\rightarrow}{\infty}$ playing the role of the
square of the speed of light. The result \cite{ieju} is Eq. (\ref{rate0}).
We now come to the evaluation of $\Gamma$ in the presence of the periodic
potential, which calls for the RG approach of Section 2. Integrating out the
Euclidean ``time''-like component $q_3(t)$, we reach a formulation in which
the electric-like and the magnetic-like Magnus field components are
disentangled. In terms of Fourier components, dropping the magnetic-like part
and for $\gamma{\rightarrow}0$:
\begin{equation}
\int_0^{\tau} dt {\cal L}_E({\bf q})=\tau\sum_{n\neq 0} \{ \frac{1}{2}\eta
|\omega_n| - E^2{\delta}_{a1} \} \bar{q}_a(\omega_n) \bar{q}_a(-\omega_n)
+\int_0^{\tau} dt V({\bf q})
\label{lagranr}
\end{equation}
\noindent
with $E=2\pi J$ the electric-like field due to the supercurrent density $J$.
We have shown \cite{ieju} that the only role of the magnetic-like field is
to renormalize the nucleation energy and the friction coefficient, hence our
problem amounts to an effective one-dimensional system in the presence of
${\bf E}$ and dissipation. The evaluation of the Feynman Path Integral (FPI)
proceeds by means of integrating out the zero-mode, $\bar{q}_0$, as well as
the high-frequency modes $\bar{q}_n$ with $n>1$, since again the leading
term for $\Gamma$ in Eq. (\ref{rate}) comes from the divergence of the FPI
associated with the lowest mode coupling to ${\bf E}$. The effect of
$\bar{q}_n$ with $n > 1$ is taken into account through the frequency-shell
RG method of Section 2, leading to a renormalization of the amplitude
$V_0=2{\cal E}_0U_0$ of the (relativistic) pinning potential. The
renormalization has to be carried out from the outer shell of radius $\Omega$
to $\omega_1=2\pi/\tau$. In the mobile phase ($\eta < \eta_c$) this implies
$e^{\ell}=\Omega\tau/2\pi=n^{*}$ in Eq. (\ref{rgflow}), with (from the leading
pole of the FPI) $\tau=\pi\eta/E^2$ (${\rightarrow}\infty$ for relatively
weak currents). In the more interesting confined phase ($\eta > \eta_c$) we
must integrate out the massive $n > 1$ modes with a Lagrangian
${\cal L}(\bar{q}_n)=\tau \left ( \frac{1}{2}\eta |\omega_n|+\frac{1}{2}
M^2-E^2 \right ) \bar{q}_n\bar{q}_n^{*}$. This leads to an additional,
entropy-like renormalization of the activation energy ${\cal E}_{0R}$, beside
the renormalization of $V_0$. We are therefore left with the integration
over the modes $\bar{q}_0$ and $\bar{q}_1$, with a renormalised potential
\begin{eqnarray}
&&\int_0^{\tau} dt V_R(q_0,q_1(t)) = V_0\tau - V_{0R}\int_0^{\tau} dt
\cos ( k(q_0+q_1(t)) ) \nonumber \\
&&{\simeq} V_0\tau - V_{0R}\tau J_0(2k|\bar{q}_1|)\cos ( kq_0 )
\label{potentr}
\end{eqnarray}
\noindent
Here, $J_0$ is the Bessel function and the renormalised amplitude $V_{0R}$ is
\begin{eqnarray}
V_{0R}= \left \{ \begin{array}{ll}
V_0 \left ( \frac{\Omega\tau}{2\pi} \right )^{-\eta_c/\eta}
& \mbox{if $\eta < \eta_c$} \\
V_0 \left ( \frac{\mu +1/n^{*}}{ \mu +1} \right )^{\eta_c/\eta}
& \mbox{if $\eta > \eta_c$}
\end{array} \right.
\label{amplitr}
\end{eqnarray}
\noindent
In Eq. (\ref{potentr}) the phase of the $\bar{q}_1$ mode has been integrated
out, allowing us to integrate out the $\bar{q}_0$-mode exactly; this leads
to the expression
\begin{equation}
\frac{\Gamma}{2L^2}= Im \int_{\epsilon}^{\infty} d{\tau} {\cal N}(\tau)
e^{-({\cal E}_{0R}^2+V_0)\tau}
\int_0^{\infty} d|\bar{q}_1|^2 e^{-(\pi\eta-E^2\tau)|\bar{q}_1|^2}
I_0 \left (
V_{0R}\tau J_0(2k|\bar{q}_1|) \right )
\label{rate1}
\end{equation}
\noindent
where $I_0$ is the modified Bessel function. It is clear that the singularity
from the $\bar{q}_1$-integral occurs at $\tau=\pi\eta/E^2$; evaluating the
normalization factor ${\cal N}(\tau)$, we finally arrive at
\begin{eqnarray}
&&\Gamma=\Gamma_0K(J) \label{final} \\
&&K(J)=e(1+\mu) \left ( 1+\frac{\mu\Omega\eta}{8\pi^2 J^2} \right )
I_0 \left ( \frac{V_{0R}\eta}{4\pi J^2} J_0(2k{\ell}_N) \right ) \nonumber
\end{eqnarray}
\noindent
where $\Gamma_0$ is given by Eq. (\ref{rate0}); there is a further
renormalization ${\cal E}_{0R}^2{\rightarrow}{\cal E}_{0R}^2+V_0$ and
we have set $E=2\pi J$. $\ell_N$ is a nucleation length, which is in first
approximation given by
\begin{equation}
{\ell}_N^2{\simeq}\frac{ {\cal E}_{0R}^2}{4\pi^2 J^2}
-\frac{V_{0R}}{4\pi^2 J^2} \left | J_0 \left ( k
\frac{ {\cal E}_{0R} } {\pi J} \right ) \right |
\label{nuclen}
\end{equation}
\noindent
and corresponds physically to the distance a vortex and antivortex
pair must travel to acquire the nucleation energy ${\cal E}_{0R}$.
The presence of the $J_0(2k{\ell}_N)$ argument in the correction factor
$K(J)$ due to the pinning lattice thus gives rise to oscillations in
$\Gamma (J)$ (hence in the sample's resistance) through the parameter
$2k{\ell}_N=4\pi{\ell}_N/d$. Vortex nucleation is therefore
sensitive to the corrugation of the pinning substrate. However, these
oscillations should be observable only in the confined phase, $\eta > \eta_c$,
where interrupted-renormalization prevents the prefactor in front of the
$J_0(x)$ oscillating function from becoming too small for relatively
small current densities.
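The oscillations of the correction factor $K(J)$ can be made explicit numerically. In the sketch below (ours; all parameter values are illustrative assumptions), $J_0$ and $I_0$ are evaluated from their standard integral representations so that only the Python standard library is needed:

```python
import math

def J0(x, n=2000):
    """Bessel J0 via (1/pi) * Int_0^pi cos(x sin t) dt (trapezoid rule)."""
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(x * math.sin(math.pi)))
    s += sum(math.cos(x * math.sin(i * h)) for i in range(1, n))
    return s * h / math.pi

def I0(x, n=2000):
    """Modified Bessel I0 via (1/pi) * Int_0^pi exp(x cos t) dt."""
    h = math.pi / n
    s = 0.5 * (math.exp(x) + math.exp(-x))
    s += sum(math.exp(x * math.cos(i * h)) for i in range(1, n))
    return s * h / math.pi

def nucleation_length(J, E0R, V0R, k):
    """First-approximation nucleation length lN of Eq. (nuclen)."""
    lN2 = (E0R ** 2 - V0R * abs(J0(k * E0R / (math.pi * J)))) \
        / (4.0 * math.pi ** 2 * J ** 2)
    return math.sqrt(max(lN2, 0.0))

def K_factor(J, E0R, V0R, k, mu, Omega, eta):
    """Correction factor K(J) of Eq. (final); oscillates through J0(2*k*lN)."""
    lN = nucleation_length(J, E0R, V0R, k)
    return (math.e * (1.0 + mu)
            * (1.0 + mu * Omega * eta / (8.0 * math.pi ** 2 * J ** 2))
            * I0(V0R * eta / (4.0 * math.pi * J ** 2) * J0(2.0 * k * lN)))
```

Scanning `K_factor` over a range of current densities $J$ then traces the oscillations in $\Gamma(J)$ through the parameter $2k\ell_N$.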
\section*{Acknowledgments}
N.S. would like to thank H.~Kanno,
H.~Kunitomo and M.~Sato for valuable discussions.
N.S. is supported by JSPS Research Fellowship for Young
Scientists (No.~06-3758).
The work of K.I. is supported in part by
the Grant-in-Aid for Scientific Research from
the Ministry of Education (No.~08740188 and No.~08211209).
\section{Introduction}\label{sec:intro}
The theoretical possibility that strange quark matter may be absolutely stable
with respect to iron, that is, the energy per baryon is below 930 MeV, has been
pointed out by Bodmer\ (1971), Terazawa\ (1979), and Witten\ (1984). This
so-called strange matter hypothesis constitutes one of the most startling
possibilities of the behavior of superdense nuclear matter, which, if true,
would have implications of fundamental importance for cosmology, the early
universe, its
\begin{figure}[tb]
\begin{center}
\leavevmode
\mbox{\psfig{figure=qlcomp.bb,width=4.5in,height=3.1in}}
\caption[Relative densities of quarks and leptons in cold, beta-stable,
electrically charge neutral quark matter as a function of mass density.] {\em
{Relative densities of quarks and leptons, $n_i/n$, where $n$ denotes the
total quark density, in cold, beta-stable, electrically charge neutral
quark-star matter as a function of energy density ($B^{1/4}=145$ MeV)
(Kettner et al.\ 1995a).}}
\label{fig:1.5}
\end{center}
\end{figure} evolution to the present day, astrophysical compact objects, and
laboratory physics (for an overview, see Madsen and Haensel\ (1991), and
Table\ 1). Even to the present day there is no sound
\begin{figure}[tb]
\begin{center}
\leavevmode
\mbox{\psfig{figure=gapcru.bb,width=4.5in,height=3.1in}}
\caption[Gap width versus electrostatic crust potential, for different
temperatures]{\em {Gap width, $R_{\rm gap}$, versus electrostatic crust potential,
$eV_{\rm crust}$. The labels refer to temperature (in MeV).}}
\label{fig:vvsgap}
\end{center}
\end{figure}
scientific basis on which one can either confirm or reject the
hypothesis, so that it remains a serious possibility of fundamental
significance for various phenomena.
\begin{table}[tb]
\begin{center}
\begin{tabular}{|l|l|} \hline
Phenomenon &References \\ \hline
Centauro cosmic ray events &Chin et al.\ (1979), Bjorken et al.\ (1979),\\
&Witten\ (1984)\\
High-energy gamma ray sources &Jaffe\ (1977), Baym et al.\ (1985) \\
Strange matter hitting the earth: & \\
~~~~strange meteors &De R{\'{u}}jula et al.\ (1984) \\
~~~~nuclearite-induced earthquakes &De R{\'{u}}jula et al.\ (1984) \\
~~~~strange nuggets in cosmic rays &Terazawa\ (1991,1993) \\
Strange matter in supernovae &Michel\ (1988), Benvenuto et al.\ (1989),\\
&Horvath et al.\ (1992) \\
Strange star (pulsar) phenomenology &Alcock et al.\ (1986), Haensel et al.\
(1986), \\
&Alcock et al.\ (1988), Glendenning\ (1990), \\
&Glendenning et al.\ (1992)\\
Strange dwarfs &Glendenning et al.\ (1995a),\\
&Glendenning et al.\ (1995b) \\
Strange planets &Glendenning et al.\ (1995a),\\
&Glendenning et al.\ (1995b) \\
Burning of neutron stars to strange stars
&Olinto\ (1987), Horvath et al.\ (1988),\\
&Frieman et al.\ (1989)\\
Gamma-ray bursts &Alcock et al.\ (1986), Horvath et al.\ (1993)\\
Cosmological aspects of strange matter
&Witten\ (1984), Madsen et al.\ (1986),\\
&Madsen\ (1988), Alcock et al.\ (1988) \\
Strange matter as compact energy source &Shaw et al.\ (1989) \\
Strangelets in nuclear collisions
&Liu et al.\ (1984), Greiner et al.\ (1987),\\
&Greiner et al.\ (1988)\\
\hline
\end{tabular}
\caption[Strange Matter Phenomenology]{\em {Overview of strange matter
phenomenology}}\label{tab:over}
\end{center}
\end{table}
On theoretical scale arguments, strange quark matter is as plausible a
\begin{figure}[tb]
\begin{center}
\leavevmode
\mbox{\psfig{figure=pvse_s.bb,width=4.5in,height=3.1in}}
\caption[Equation of state~ of a strange star surrounded by a nuclear crust with inner
crust density lower than neutron drip, $\epsilon_{\rm crust} \leq \epsilon_{\rm drip}$.]{\em {Equation of state~ of a
strange star surrounded by a nuclear crust. $P_{\rm drip}(\epsilon_{\rm drip})$ denotes the
pressure at the maximum possible inner crust density determined by neutron
drip, $\epsilon_{\rm crust} = 0.24~MeV/fm^3$. Any inner crust value smaller than that
is possible. As an example, we show the equation of state~ for $\epsilon_{\rm crust} =10^{-4}~
MeV/fm^3$.}}
\label{fig:eos}
\end{center}
\end{figure} ground state as the confined state of hadrons (Witten, 1984; Farhi
and Jaffe, 1984; Glendenning, 1990). Unfortunately it seems unlikely that QCD
calculations will be accurate enough in the
\begin{table}[tb]
\begin{center}
\begin{tabular}{|l|l|} \hline
Experiment &References \\ \hline
Cosmic ray searches for strange nuggets: & \\
~~~~~balloon-borne experiments &Saito\ (1990, 1995) \\
~~~~~MACRO &MACRO\ (1992) \\
~~~~~IMB &De R{\'{u}}jula et al.\ (1983) \\
~~~~~tracks in ancient mica &De R{\'{u}}jula et al.\ (1984) \\
&Price\ (1984) \\
Rutherford backscattering of $^{238}$U and $^{208}$Pb
&Br{\"{u}}gger et al.\ (1989) \\
Heavy-ion experiments at BNL: E864, E878, E882-B, &Thomas et al.\ (1995) \\
$~$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~E886, E-888, E896-A & \\
Heavy-ion experiments at CERN: NA52 &Thomas et al.\ (1995) \\
\hline
\end{tabular}
\caption[Search experiments for strange matter]
{\em {Overview of search experiments for strange matter}}\label{tab:labexp}
\end{center}
\end{table} foreseeable future to give a definitive prediction on the stability
of strange matter, and one is left with experiment, Table\ 2, and astrophysical
tests, as performed here, to either confirm or reject the hypothesis.
One striking implication of the hypothesis would be that pulsars, which are
conventionally interpreted as rotating neutron stars, almost certainly would be
rotating strange stars (strange pulsars) (Witten, 1984; Haensel, Zdunik, and
Schaeffer,\ 1986; Alcock, Farhi, and Olinto,\ 1986; Glendenning,\ 1990). Part
of this paper deals with an investigation of the properties of such objects.
In addition to this, we develop the complete sequence of strange stars with
nuclear crusts, which ranges from the compact members, with properties similar
to those of neutron stars, to white dwarf-like objects (strange dwarfs), to
planetary-like strange matter objects, and discuss their
\begin{figure}[tb]
\begin{center}
\leavevmode
\mbox{\psfig{figure=mr_145o.bb,width=4.1in,height=3.5in}}
\caption[Mass versus radius of strange-star configurations with nuclear crust.]
{\em {Mass versus radius of strange-star configurations with nuclear crusts
(solid curve) and gravitationally bound stars (dotted). (NS=neutron star,
SS=strange star, wd=white dwarf, sd=strange dwarf.) The cross denotes the
termination point of the strange-dwarf sequence ($\epsilon_{\rm crust}=\epsilon_{\rm drip}$). The
dots and vertical arrows refer to the maximum- and minimum-mass star of
each sequence.}}
\label{fig:1}
\end{center}
\end{figure} stability against acoustical vibrations (Glendenning, Kettner, and
Weber,\ 1995a,b; Kettner, Weber, Weigel, and Glendenning,\ 1995b). The
properties with respect to which strange-matter stars differ from their
non-strange counterparts are discussed, and observable signatures of strange
stars are pointed out.
\goodbreak
\section{Quark-lepton composition of strange matter}\label{sec:qlc}
The relative quark/lepton composition of quark-star matter at zero temperature
is shown in Fig.\ 1. All quark flavor states that become populated at the
densities shown are taken into account. (Strange and charm quark masses of
respectively 0.15 GeV and 1.2 GeV are assumed.) Since stars in their lowest
energy state are electrically charge neutral to very high precision
(Glendenning, 1985), any net positive quark charge must be balanced by leptons.
In general, as can be seen in Fig.\ 1, there is little need for leptons,
since charge neutrality can be achieved essentially among the quarks
themselves. The concentration of electrons is largest at the lower densities
of Fig.\ 1 due to the finite $s$-quark mass which leads to a deficit of net
negative quark charge, and at densities beyond which the $c$-quark state
becomes populated which increases the net positive quark charge.
\goodbreak
\section{Nuclear crusts on strange stars}\label{sec:ncss}
The presence of electrons in strange quark matter is crucial for the possible
existence of a nuclear crust on such objects. As shown by Alcock, Farhi, and
Olinto (1986), and Kettner, Weber, Weigel, and Glendenning (1995a), the
electrons, because they are bound to strange matter by the Coulomb force rather
than the strong force, extend several hundred fermi beyond the surface of the
strange star. Associated with this electron displacement is an electric dipole
layer which can support, out of contact with the surface of the strange star, a
crust of nuclear material, which it polarizes (Alcock, Farhi, and Olinto,\
1986). The maximal possible density at the base of the crust (inner crust
density) is determined by neutron drip, which occurs at about $4.3\times
10^{11}~{\rm g/cm}^3$. (Free neutrons in the star cannot exist. These would be
dissolved into quark matter as they gravitate into the strange core. Therefore
the maximum density of the crust is strictly limited by neutron drip.)
The determination of the electrostatic electron potential at the surface of a
strange star performed by Alcock, Farhi, and Olinto (1986) has been extended to
finite temperatures only recently (Kettner, Weber, Weigel, and Glendenning,\
1995a). The results obtained there for the gap between the surface of the
star's strange core and the base of the inner crust are shown in Fig.\ 2. A
minimum value of $R_{\rm gap}\sim 200$ fm was established by Alcock, Farhi, and
Olinto (1986) as the lower bound on $R_{\rm gap}$ necessary to guarantee the crust's
security against strong interactions with the strange-matter core. For this
value one finds from Fig.\ 2 that a hot strange pulsar with $T\sim 30$ MeV can
only carry nuclear crusts whose electrostatic potential at the base is rather
small, $eV_{\rm crust}\stackrel{\textstyle <}{_\sim} 0.1$ MeV. Crust potentials in
\begin{table}[tbh]
\begin{center}
\begin{tabular}{|l|l|l|} \hline
Features of strange quark-matter stars ~~~ &observable~~ &definite signal \\
\hline \hline
$\bullet$ Strange Stars: & & \\
Small rotational periods, $P<1$ msec &yes$\,^\dagger$ &possibly \\
$\bullet$ Light, planetary-like objects &yes &no \\
$\bullet$ Strange Dwarfs (white-dwarf-like) &yes &to be studied$\,^*$
\\
$\bullet$ Cooling behavior &yes &possibly \\
$\bullet$ Glitches &yes &to be studied$\,^*$ \\
$\bullet$ Post-glitch behavior &yes &to be studied$\,^*$\\
\hline
\end{tabular}
\caption[Features of strange quark-matter stars]{\em {Features of strange
quark-matter stars.($\;^\dagger$Until recently rotational periods of $P\sim
1$ millisecond were the borderline of detectability.~$^*$Presently under
investigation.)}}\label{tab:feat}
\end{center}
\end{table} the range of $8$--$12$ MeV, which are expected for a crust at
neutron drip density (Alcock, Farhi, and Olinto,\ 1986), are only possible for
core temperatures of $T\stackrel{\textstyle <}{_\sim} 5$ MeV. Therefore we conclude that only strange
stars with rather low temperatures (on the nuclear scale) can carry the densest
possible crusts.
\goodbreak
\section{Equation of state of strange stars with crust}\label{sec:eos}
The somewhat complicated situation of the structure of a strange star with
crust described above can be represented by a proper choice of equation of state~
(Glendenning and Weber,\ 1992), which consists of two parts (Fig.\ 3). At
densities below neutron drip it is represented by the low-density equation of state~ of
charge-neutral nuclear matter, for which we use the Baym-Pethick-Sutherland
equation of state. The star's strange-matter core is described by the bag model (Freedman
and McLerran,\ 1977; Farhi and Jaffe,\ 1984; Glendenning and Weber,\ 1992;
Kettner, Weber, Weigel, and Glendenning,\ 1995a).
\goodbreak
\section{Properties of strange-matter stars}\label{sec:psec}
\goodbreak
\subsection{Complete sequences of strange-matter stars}
Since the nuclear crusts surrounding the cores of strange stars are bound by
the gravitational force rather than confinement, the mass-radius relationship
of strange-matter stars with crusts is qualitatively similar to the one of
purely gravitationally bound stars -- i.e., neutron stars and white dwarfs --
as illustrated in Fig.\ 4. The strange-star sequence is computed for the
maximal possible inner crust density, $\epsilon_{\rm crust}=\epsilon_{\rm drip}$. Of course there are
other possible sequences of strange stars with any smaller value of inner crust
density. Their properties were discussed by Glendenning, Kettner and Weber
(1995a,b). From the maximum-mass star (dot), the central density decreases
monotonically through the sequence in each case. The neutron-star sequence is
computed for a representative model for the equation of state~ of neutron star matter, the
relativistic Hartree-Fock equation of state~ (HFV of Weber and Weigel, 1989), which has been
combined at subnuclear densities with the Baym-Pethick-Sutherland equation of state. Hence
the white dwarfs shown in Fig.\ 4 are computed for the latter. (For an
overview of the bulk properties of neutron stars, constructed for a
representative collection of modern nuclear equations of state, we refer to
Weber and Glendenning (1992, 1993a,b).) Those gravitationally bound stars with
radii $\stackrel{\textstyle <}{_\sim} 200$ km and $\stackrel{\textstyle >}{_\sim} 3000$ km represent stable neutron stars and
white dwarfs, respectively. The fact that strange stars with crust possess
smaller radii than neutron stars leads to smaller rotational mass shedding
(Kepler) periods $P_{\rm K}$, as indicated by the classical expression
$P_{\rm K}=2\pi\sqrt{R^3/M}$. (We recall that mass shedding sets an absolute limit
on rapid rotation.) Of course the general relativistic expression for $P_{\rm K}$,
given by (Glendenning and Weber,\ 1992; Glendenning, Kettner, and Weber,\
1995a)
\begin{eqnarray}
P_{\rm K} \equiv \frac{2\, \pi}{\Omega_{\rm K}} \, , ~{\rm with} ~~ \Omega_{\rm K} =
\omega +\frac{\omega^\prime}{2\psi^\prime} + e^{\nu -\psi} \sqrt{
\frac{\nu^\prime}{\psi^\prime} + \Bigl(\frac{\omega^\prime}{2
\psi^\prime}e^{\psi-\nu}\Bigr)^2} \; ,
\label{eq:okgr}
\end{eqnarray} which is to be applied to neutron and strange stars, is considerably more
complicated. However the qualitative dependence of $P_{\rm K}$ on mass and radius
remains valid (Glendenning and Weber,\ 1994). So one finds that, due to the
smaller radii of strange stars, the complete sequence of such objects (and not
just those close to the mass peak, as is the case for neutron stars) can
sustain extremely rapid rotation (Glendenning, Kettner, and Weber,\ 1995a). In
particular, a strange star with a typical pulsar mass of $\sim 1.45\,M_{\odot}$ can
rotate at (general relativistic) Kepler periods as small as $P \simeq
0.5~\rm msec$, depending on crust thickness and bag constant (Glendenning and
Weber,\ 1992; Glendenning, Kettner, and Weber,\ 1995a). This is to be compared
with $P_{\rm K}\sim 1~\rm msec$ obtained for neutron stars of the same mass (Weber and
Glendenning,\ 1993a,b).
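The classical estimate $P_{\rm K}=2\pi\sqrt{R^3/M}$ quoted above is easy to evaluate. The following sketch (ours; $G$ restored in cgs units, and the $\sim 9$ km radius is an illustrative assumption) shows that a $1.45\,M_{\odot}$ strange star lands in the sub-millisecond regime, of the same order as the general relativistic value $P\simeq 0.5$ msec quoted above:

```python
import math

G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33   # solar mass, g

def kepler_period_s(mass_msun, radius_km):
    """Newtonian mass-shedding estimate P_K = 2*pi*sqrt(R^3/(G*M)), i.e. the
    orbital period at the star's equator; the relativistic Eq. (1) corrects it."""
    R = radius_km * 1.0e5      # km -> cm
    M = mass_msun * M_SUN
    return 2.0 * math.pi * math.sqrt(R ** 3 / (G * M))

# A 1.45 M_sun strange star with an assumed radius of ~9 km:
P = kepler_period_s(1.45, 9.0)   # sub-millisecond
```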
The minimum-mass configuration of the strange-star sequence (labeled `a' in
Fig.\ 4) has a mass of about $M_{\rm min} \sim 0.017\, M_{\odot}$ (about 17 Jupiter
masses). More than that, we find stable strange-matter stars that can be even
by orders of magnitude lighter than this star, depending on the chosen value of
inner crust density (Glendenning, Kettner, and Weber,\ 1995a,b). If abundant
enough in our Galaxy, such low-mass strange stars, whose masses and radii
resemble those of ordinary planets (hence one may call such objects strange
planets, or strange MACHOS) could be seen by the gravitational microlensing
searches that are being performed presently. Strange stars located to the
right of `a' consist of small strange cores ($R_{\rm core}\stackrel{\textstyle <}{_\sim} 3$ km) surrounded by
a thick nuclear crust (made up of white dwarf material). We thus call such
objects strange dwarfs. Their cores have shrunk to zero at `d'. What is left
is an ordinary white dwarf with a central density equal to the inner crust
density of the former strange dwarf (Glendenning, Kettner, and Weber,\
1995a,b). A detailed stability analysis of strange stars against radial
oscillations (Kettner, Weber, Weigel, and Glendenning,\ 1995a,b) shows that the
strange dwarfs between `b' and `d' in Fig.\ 4 are unstable against the
fundamental eigenmode. Hence such objects cannot exist stably in nature.
However all other stars of this sequence ($\epsilon_{\rm crust}=\epsilon_{\rm drip}$) are stable against
oscillations. So, in contrast to neutron stars and white dwarfs, the branches
of strange stars and strange dwarfs are stably connected with each other
(Glendenning, Kettner, and Weber,\ 1995a,b). So far our discussion was
restricted to inner crust densities equal to neutron drip. For the case
$\epsilon_{\rm crust}<\epsilon_{\rm drip}$, we refer to Glendenning, Kettner, and Weber\ (1995a).
\subsection{Glitch behavior of strange pulsars}
\label{ssec:glitch}
A crucial astrophysical test, which the strange-quark-matter hypothesis must
pass in order to be viable, is whether strange quark stars can give rise to the
observed phenomena of pulsar glitches. In the crust quake model an oblate solid
nuclear crust in its present shape slowly comes out of equilibrium with the
forces acting on it as the rotational period changes, and fractures when the
built-up stress exceeds the sheer strength of the crust material. The period
and rate of change of period slowly heal to the trend preceding the glitch as
the coupling between crust and core re-establishes their co-rotation.
The only existing investigation which deals with the calculation of the
thickness, mass and moment of inertia of the nuclear solid crust that can exist
on the surface of a rotating, general relativistic strange quark star has been
performed by Glendenning and Weber\ (1992). Their calculated mass-radius
relationship for strange stars with a nuclear crust, whose maximum density is
the neutron drip density, is shown in Fig.\ 5.
\begin{figure}[tb]
\begin{center}
\parbox[t]{6.5cm}
{\leavevmode \mbox{\psfig{figure=rm.bb,width=6.0cm,height=7.0cm,angle=90}}
{\caption[Radius as a function of mass of a strange star with crust, and
radius of the strange star core for inner crust density equal to neutron
drip, for non-rotating stars. The bag constant is $B^{1/4}=160$ MeV. The solid
dots refer to the maximum-mass model of the sequence]{\em {Radius as a
function of mass of a non-rotating strange star with crust (Glendenning
et al.\ 1992).}}\label{fig:radss}}} \ \hskip1.4cm \
\parbox[t]{6.5cm}
{\leavevmode \mbox{\psfig{figure=icit.bb,width=6.0cm,height=7.0cm,angle=90}}
{\caption[The ratio $I_{\rm crust}/I_{\rm total}$ as a function of star mass. Rotational
frequencies are shown as a fraction of the Kepler frequency. The solid dots
refer to the maximum-mass models. The bag constant is $B^{1/4}=160$ MeV]{\em
{The ratio $I_{\rm crust}/I_{\rm total}$ as a function of star mass. Rotational
frequencies are shown as a fraction of the Kepler frequency, $\Omega_{\rm K}$
(Glendenning et al.\ 1992).}}
\label{fig:cm160}}}
\end{center}
\end{figure} The radius of the strange quark core, denoted $R_{\rm drip}$, is shown
by the dashed line, $R_{\rm surf}$ displays the star's surface. (A value for the bag
constant of $B^{1/4}=160$ MeV for which 3-flavor strange matter is absolutely
stable has been chosen. This choice represents weakly bound strange matter with
an energy per baryon $\sim 920$ MeV, and thus corresponds to strange quark
matter being absolutely bound with respect to $^{56}{\rm Fe}$). The radius of
the strange quark core is proportional to $M^{1/3}$ which is typical for
self-bound objects. This proportionality is only modified near that stellar
mass where gravity terminates the stable sequence.
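The $M^{1/3}$ scaling follows directly from the near-incompressibility of self-bound quark matter: at roughly constant density, $R=(3M/4\pi\rho)^{1/3}$. A minimal sketch (ours; the density value is an assumption chosen for illustration):

```python
import math

M_SUN = 1.989e33   # g
RHO_SQM = 4.0e14   # g/cm^3; assumed (roughly constant) strange-matter density

def core_radius_km(mass_msun, rho=RHO_SQM):
    """R = (3M/(4*pi*rho))**(1/3): the M**(1/3) law of a self-bound,
    nearly incompressible quark core (Newtonian, constant density)."""
    M = mass_msun * M_SUN
    return (3.0 * M / (4.0 * math.pi * rho)) ** (1.0 / 3.0) / 1.0e5

# Doubling the mass scales the radius by 2**(1/3) ~ 1.26 -- the opposite
# trend to a gravitationally bound white dwarf, which shrinks with mass.
r_half, r_one = core_radius_km(0.5), core_radius_km(1.0)
```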
The moment of inertia of the hadronic crust, $I_{\rm crust}$, that can be carried by
a strange star as a function of star mass for a sample of rotational
frequencies of $\Omega=\Omega_{\rm K},\Omega_{\rm K}/2$ and 0 is shown in Fig.\ 6. Because of the
relatively small crust mass of the maximum-mass models of each sequence, the
ratio $I_{\rm crust}/I_{\rm total}$ is smallest for them (solid dots in Fig.\ 6). The less
massive the strange star the larger its radius (Fig.\ 5) and therefore the
larger both $I_{\rm crust}$ as well as $I_{\rm total}$. The dependence of $I_{\rm crust}$ and
$I_{\rm total}$ on $M$ is such that their ratio $I_{\rm crust}/I_{\rm total}$ is a monotonically
decreasing function of $M$. One sees that there is only a slight difference
between $I_{\rm crust}$ for $\Omega=0$ and $\Omega=\Omega_{\rm K}/2$.
Of considerable relevance for the question of whether strange stars can
exhibit glitches in rotation frequency, one sees that $I_{\rm crust}/I_{\rm total}$
varies between $10^{-3}$ and $\sim 10^{-5}$ at the maximum mass.
If the angular momentum of the pulsar is conserved in the quake
then the relative frequency change and moment of inertia change are equal,
and one arrives at (Glendenning and Weber,\ 1992)
\begin{eqnarray}
{{\Delta \Omega}\over{\Omega}} \; = \;
{{|\Delta I|}\over {I_0}} \; > \;
{{|\Delta I|}\over {I}} \; \equiv \; f \;
{I_{\rm crust}\over I}\; \sim \; (10^{-5} - 10^{-3})\, f \; ,
~{\rm with} \quad 0 < f < 1\; .
\label{eq:delomeg}
\end{eqnarray} Here $I_0$ denotes the moment of inertia of that part of the star whose
frequency is changed in the quake. It might be that of the crust only, or some
fraction, or all of the star. The factor $f$ in Eq.\ (2) represents the
fraction of the crustal moment of inertia that is altered in the quake, i.e.,
$f \equiv |\Delta I|/ I_{\rm crust}$. Since the observed glitches have relative
frequency changes $\Delta \Omega/\Omega = (10^{-9} - 10^{-6})$, a change in the
crustal moment of inertia of $f\stackrel{\textstyle <}{_\sim} 0.1$ would cause a giant glitch even in
\begin{figure}[tb]
\begin{center}
\leavevmode
\mbox{\psfig{figure=fig_cool.ps.bb,width=12.0cm,height=9.0cm,angle=-90}}
\caption{\em {Left panel: Cooling of neutron stars with pion (solid
curves) or kaon condensates (dotted curve). Right panel: Cooling of
$M=1.8\,M_\odot$ strange stars with crust. The cooling curves of lighter
strange stars, e.g. $M\stackrel{\textstyle >}{_\sim} 1\,M_\odot$, differ only insignificantly from
those shown here. Three different assumptions about a possible superfluid
behavior of strange quark matter are made: no superfluidity (solid),
superfluidity of all three flavors (dotted), and superfluidity of up and
down flavors only (dashed). The vertical bars denote luminosities of
observed pulsars.
\label{fig:cool}}}
\end{center}
\end{figure}
the least favorable case (for more details, see Glendenning and Weber,\ 1992).
Moreover, we find that the observed range of the fractional change in the
spin-down rate, $\dot \Omega$, is consistent with the crust having the small
moment of inertia calculated and the quake involving only a small fraction $f$
of that, just as in Eq.\ (2). For this purpose we write (Glendenning and
Weber,\ 1992) \begin{eqnarray} { {\Delta \dot\Omega}\over{\dot\Omega } } \; = \; { {\Delta
\dot\Omega / \dot\Omega} \over {\Delta \Omega / \Omega } } \, { {|\Delta I
|}\over{I_0} } \; = \; { {\Delta \dot\Omega / \dot\Omega} \over {\Delta
\Omega / \Omega } } \; f \; {I_{\rm crust}\over {I_0} } \; > \; (10^{-1}\; {\rm
to} \; 10) \; f \; ,
\label{eq:omdot}
\end{eqnarray} where use of Eq.\ (2) has been made. Equation (3) yields a small $f$
value, i.e., $f < (10^{-4} \; {\rm to} \; 10^{-1})$, in agreement with $f\stackrel{\textstyle <}{_\sim}
10^{-1}$ established just above. Here measured values of the ratio $(\Delta
\Omega/\Omega)/(\Delta\dot\Omega/\dot\Omega) \sim 10^{-6}$ to $10^{-4}$ for the
Crab and Vela pulsars, respectively, have been used. So we arrive at the
important finding that the nuclear crust mass that can envelop a strange
matter core can be sufficiently large that the relative changes in
$\Omega$ and $\dot\Omega$ obtained for strange stars with crust in the
framework of the crust quake model are consistent with the observed values, in
contrast to claims expressed in the literature.
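The arithmetic behind Eq.\ (3) can be checked in a few lines. The sketch below is illustrative only: the crustal moment-of-inertia ratio \verb|I_CRUST_OVER_I0| is an assumed order-of-magnitude value (of the size computed by Glendenning and Weber,\ 1992), not a fitted number.

```python
# Order-of-magnitude check of Eq. (3):
#   Delta(Omega_dot)/Omega_dot = (1/r) * f * (I_crust / I_0),
# with r = (Delta Omega/Omega)/(Delta Omega_dot/Omega_dot) ~ 1e-6 (Crab)
# to 1e-4 (Vela).  I_CRUST_OVER_I0 = 1e-5 is an assumed illustrative value
# of the order obtained by Glendenning and Weber (1992).
I_CRUST_OVER_I0 = 1e-5

def spindown_prefactor(r):
    """Coefficient multiplying f on the right-hand side of Eq. (3)."""
    return I_CRUST_OVER_I0 / r

crab = spindown_prefactor(1e-6)   # ~ 10
vela = spindown_prefactor(1e-4)   # ~ 0.1
# The prefactor spans the quoted range (1e-1 to 10), so Eq. (3) indeed
# constrains f to be small, f < (1e-4 to 1e-1).
```

With this assumed ratio, the prefactor reproduces the quoted $(10^{-1}\ {\rm to}\ 10)\,f$ range for the Crab and Vela values of $r$.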
\section{Cooling behavior of neutron stars and strange stars}
The left panel of Fig.\ 7 shows a numerical simulation of the thermal evolution
of neutron stars. The neutrino emission rates are determined by the modified
and direct Urca processes, and the presence of a pion or kaon condensate. The
baryons are treated as superfluid particles. Hence the neutrino emissivities
are suppressed by an exponential factor of $\exp(-\Delta/kT)$, where $\Delta$
is the width of the superfluid gap (see Schaab, Weber, Weigel, and
\begin{figure}[tb]
\begin{center}
\leavevmode
\mbox{\psfig{figure=atomic_matter.ps.bb,width=4.5in,height=3.1in,angle=-90}}
\caption[Graphical illustration of all possible stable nuclear objects if
nuclear matter (i.e., iron) is the most stable form of matter.] {\em Graphical
illustration of all possible stable nuclear objects if nuclear matter (i.e.,
iron) is the most stable form of matter. Note the huge range, referred to as the
nuclear desert, which is void of any stable nuclear systems.}
\end{center}
\end{figure}
\begin{figure}[tb]
\begin{center}
\leavevmode
\mbox{\psfig{figure=sqm.ps.bb,width=4.5in,height=3.1in,angle=-90}}
\caption[Graphical illustration of all possible stable nuclear objects if
strange quark matter is more stable than nuclear matter.] {\em Same as Fig.\ 8
but for strange quark matter as the most stable configuration of matter.
Various stable strange-matter objects are shown. In sharp contrast to Fig.\
8, the nuclear desert does not exist anymore but is filled with a variety of
different stable strange-matter objects, ranging from strangelets at the
small baryon number end to strange dwarfs at the large baryon number end. The
strange counterparts of ordinary atomic nuclei are denoted strange nuggets,
those of neutron stars (pulsars) are referred to as compact strange stars
(see text for details). Observational implications are indicated at the
top.}
\end{center}
\end{figure} Glendenning\ (1996) for details). Due to the dependence of the
direct Urca process and the onset of meson condensation on star mass, stars
that are too light for these processes to occur (i.e., $M<1\,M_{\odot}$) are
restricted to standard cooling via modified Urca. Enhanced cooling via the
other three processes results in a sudden drop of the star's surface
temperature about 10 to $10^3$ years after birth, depending on the
thickness of the ionic crust. As one sees, agreement with the observed data is
achieved only if different masses for the underlying pulsars are assumed. The
right panel of Fig.\ 7 shows cooling simulations of strange quark stars. The
curves differ with respect to assumptions made about a possible superfluid
behavior of the quarks. Because of the higher neutrino emission rate in
non-superfluid quark matter, such quark stars cool most rapidly (as long as
cooling is core dominated). In this case one does not get agreement with most
of the observed pulsar data. The only exception is pulsar PSR 1929+10.
Superfluidity among the quarks reduces the neutrino emission rate, which delays
cooling (Schaab, Weber, Weigel, and Glendenning,\ 1996). This moves the cooling
curves into the region where most of the observed data lie.
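The size of the $\exp(-\Delta/kT)$ suppression of the neutrino emissivities quoted above is easy to quantify. The gap value in the sketch below is purely illustrative (superfluid gaps in quark matter are model dependent and are an assumption here):

```python
import math

K_B = 8.617e-11  # Boltzmann constant in MeV/K

def suppression(gap_mev, temperature_k):
    """exp(-Delta / kT) suppression factor of the neutrino emissivity."""
    return math.exp(-gap_mev / (K_B * temperature_k))

# Illustrative values: a gap of 0.1 MeV at an interior temperature of 1e9 K
factor = suppression(0.1, 1e9)   # kT ~ 0.086 MeV, modest suppression
# At 1e8 K the same gap already suppresses the emissivity by several
# orders of magnitude, which is what delays the cooling in the
# superfluid scenarios shown in Fig. 7.
```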
Subject to the inherent uncertainties in the behavior of strange quark matter
as well as superdense nuclear matter, at present it appears much too premature
to draw any definitive conclusions about the true nature of observed pulsars.
Nevertheless, should a continued future analysis in fact confirm a considerably
faster cooling of strange stars relative to neutron stars, this would provide a
definitive signature (together with rapid rotation) for the identification of a
strange star. Specifically, the prompt drop in temperature at the very early
stages of a pulsar, say within the first 10 to 50 years after its formation,
could offer a good signature of strange stars (Pizzochero, 1991). This
feature, provided it withstands a more rigorous analysis of the microscopic
properties of quark matter, could become particularly interesting if continued
observation of SN 1987A would reveal the temperature of the possibly existing
pulsar at its center.
\section{Summary}
This work deals with an investigation of the properties of the complete
sequences of strange-matter stars that carry nuclear crusts. Some striking
features of such objects are summarized in Table\ 3. Figures 8 and 9 stress
the implications of strange quark matter as the most stable form of matter
graphically. The following items are particularly noteworthy:
\begin{enumerate}
\item The complete sequence of compact strange stars can sustain
      extremely rapid rotation, not just the stars close to the mass peak,
      as is the case for neutron stars!
\item If the strange matter hypothesis is correct, the observed white dwarfs
      and planets could contain strange-matter cores in their centers. The
      baryon numbers of their cores are $\stackrel{\textstyle <}{_\sim} 2 \times
      10^{55}$!
\item The strange stellar configurations would populate a vast region in the
mass-radius plane of collapsed stars that is entirely void of stars if
strange quark matter is not the absolute ground state of strongly interacting
matter!
\item If the new classes of stars mentioned in (2) and (3) exist
abundantly enough in our Galaxy, the presently performed
gravitational microlensing experiments could see them all!
\item We find that the moment of inertia of the crust on a strange star can
account for both the observed relative frequency changes of pulsars
(glitches) as well as the relative change in spin-down rate!
\item Due to the uncertainties in the behavior of superdense
nuclear as well as strange matter, no definitive conclusions about
the true nature (strange or conventional) of observed pulsars can be
drawn from cooling simulations yet. As yet, they could be made of
strange quark matter as well as of conventional nuclear matter.
\end{enumerate}
Of course, there remain various interesting aspects of strange pulsars, strange
dwarfs and strange planets that need to be worked out in detail. From their
analysis one may hope to arrive at definitive conclusions about the behavior of
superdense nuclear matter and, specifically, the true ground state of strongly
interacting matter. Clarifying the latter item is of fundamental importance
for the early universe, its evolution to the present day, massive stars,
and laboratory physics.
\medskip
{\bf Acknowledgment:}
\doe
\section{References}
\hang\noindent Alcock, C., Farhi, E., Olinto, A. V.: 1986, Astrophys.\ J.\ {\bf 310},
p.\ 261.
\hang\noindent Alcock, C., Olinto, A. V.: 1988, Ann.\ Rev.\ Nucl.\ Part.\ Sci.\
{\bf 38}, p.\ 161.
\hang\noindent Baym, G., Pethick, C., Sutherland, P.: 1971, Astrophys.\ J.\ {\bf 170},
p.\ 299.
\hang\noindent Baym, G., Kolb, E. W., McLerran, L., Walker, T. P., Jaffe, R. L.:
1985, Phys.\ Lett.\ {\bf 160B}, p.\ 181.
\hang\noindent Benvenuto, O. G., Horvath, J. E.: 1989, Phys.\ Rev.\ Lett.\ {\bf 63},
p.\ 716.
\hang\noindent Bjorken, J. D., McLerran, L.: 1979, Phys.\ Rev.\ D {\bf 20}, p.\ 2353.
\hang\noindent Br{\"{u}}gger, M., L{\"{u}}tzenkirchen, K., Polikanov, S., Herrmann, G.,
Overbeck, M., Trautmann, N., Breskin, A., Chechik, R., Fraenkel, Z.,
Smilansky, U.: 1989, Nature {\bf 337}, p.\ 434.
\hang\noindent Chin, S. A., Kerman, A. K.: 1979, Phys.\ Rev.\ Lett.\ {\bf 43}, p.\ 1292.
\hang\noindent De R{\'{u}}jula, A., Glashow, S. L., Wilson, R. R., Charpak, G.: 1983,
Phys.\ Rep.\ {\bf 99}, p.\ 341.
\hang\noindent De R{\'{u}}jula, A., Glashow, S. L.: 1984, Nature {\bf 312}, p.\ 734.
\hang\noindent Farhi, E., Jaffe, R. L.: 1984, Phys.\ Rev.\ D {\bf 30}, p.\ 2379.
\hang\noindent Freedman, B. A., McLerran, L. D.: 1977, Phys.\ Rev.\ D {\bf 16},
p.\ 1130; {\bf 16}, p.\ 1147; {\bf 16}, p.\ 1169.
\hang\noindent Frieman, J. A., Olinto, A. V.: 1989, Nature {\bf 341}, p.\ 633.
\hang\noindent Glendenning, N. K.: 1985, Astrophys.\ J.\ {\bf 293}, p.\ 470.
\hang\noindent Glendenning, N. K.: 1990, Mod.\ Phys.\ Lett.\ {\bf A5}, p.\ 2197.
\hang\noindent Glendenning, N. K., Weber, F.: 1992, Astrophys.\ J.\ {\bf 400}, p.\ 647.
\hang\noindent Glendenning, N. K., Weber, F.: 1994, Phys.\ Rev.\ D {\bf 50}, p.\ 3836.
\hang\noindent Glendenning, N. K., Kettner, Ch., Weber, F.: 1995a, Astrophys.\ J.\
{\bf 450}, p.\ 253.
\hang\noindent Glendenning, N. K., Kettner, Ch., Weber, F.: 1995b, Phys.\ Rev.\ Lett.\
{\bf 74}, p.\ 3519.
\hang\noindent Greiner, C., Koch, P., St{\"{o}}cker, H.: 1987, Phys.\ Rev.\ Lett.\
{\bf 58}, p.\ 1825.
\hang\noindent Greiner, C., Rischke, D.-H., St{\"{o}}cker, H., Koch, P.: 1988,
Phys.\ Rev.\ D {\bf 38}, p.\ 2797.
\hang\noindent Haensel, P., Zdunik, J. L., Schaeffer, R.: 1986, Astron.\ Astrophys.\
{\bf 160}, p.\ 121.
\hang\noindent Horvath, J. E., Benvenuto, O. G.: 1988, Phys.\ Lett.\ {\bf 213B}, p.\
516.
\hang\noindent Horvath, J. E., Benvenuto, O. G., Vucetich, H.: 1992, Phys.\ Rev.\ D
{\bf 45}, p.\ 3865.
\hang\noindent Horvath, J. E., Vucetich, H., Benvenuto, O. G.: 1993, Mon.\ Not.\ R.\
Astr.\ Soc.\ {\bf 262}, p.\ 506.
\hang\noindent Jaffe, R. L.: 1977, Phys.\ Lett.\ {\bf 38}, p.\ 195.
\hang\noindent Kettner, Ch., Weber, F., Weigel, M. K., Glendenning, N. K.: 1995a,
Phys.\ Rev.\ D {\bf 51}, p.\ 1440.
\hang\noindent Kettner, Ch., Weber, F., Weigel, M. K., Glendenning, N. K.: 1995b,
Proceedings of the International Symposium on Strangeness and Quark Matter,
eds. G. Vassiliadis, A. D. Panagiotou, S. Kumar, and J. Madsen, World
Scientific, p.\ 333.
\hang\noindent Liu, H.-C., Shaw, G. L.: 1984, Phys.\ Rev.\ D {\bf 30}, p.\ 1137.
\hang\noindent MACRO collaboration: 1992, Phys.\ Rev.\ Lett.\ {\bf 69}, p.\ 1860.
\hang\noindent Madsen, J., Heiselberg, H., Riisager, K.: 1986, Phys.\ Rev.\ D {\bf 34},
p.\ 2947.
\hang\noindent Madsen, J.: 1988, Phys.\ Rev.\ Lett.\ {\bf 61}, p.\ 2909.
\hang\noindent Olinto, A. V.: 1987, Phys.\ Lett.\ {\bf 192B}, p.\ 71.
\hang\noindent Pizzochero, P.: 1991, Phys.\ Rev.\ Lett.\ {\bf 66}, p.\ 2425.
\hang\noindent Price, P. B.: 1984, Phys.\ Rev.\ Lett.\ {\bf 52}, p.\ 1265.
\hang\noindent Saito, T., Hatano, Y., Fukuda, Y., Oda, H.: 1990, Phys.\ Rev.\ Lett.\
{\bf 65}, p.\ 2094.
\hang\noindent Saito, T.: 1995, Proceedings of the International Symposium on Strangeness
and Quark Matter, eds. G. Vassiliadis, A. D. Panagiotou, S. Kumar, and J.
Madsen, World Scientific.
\hang\noindent Schaab, Ch., Weber, F., Weigel, M. K., Glendenning, N. K.: 1996, Nucl.\
Phys.\ {\bf A605}, p.\ 531.
\hang\noindent Shaw, G. L., Shin, M., Dalitz, R. H., Desai, M.: 1989, Nature {\bf 337},
p.\ 436.
\hang\noindent Terazawa, H.: 1991, J.\ Phys.\ Soc.\ Japan, {\bf 60}, p.\ 1848.
\hang\noindent Terazawa, H.: 1993, J.\ Phys.\ Soc.\ Japan, {\bf 62}, p.\ 1415.
\hang\noindent Thomas, J., Jacobs, P.: 1995, {\it A Guide to the High Energy Heavy Ion
Experiments}, UCRL-ID-119181.
\hang\noindent Weber, F., Weigel, M. K.: 1989, Nucl.\ Phys.\ {\bf A505}, p.\ 779.
\hang\noindent Weber, F., Glendenning, N. K.: 1992, Astrophys.\ J.\ {\bf 390}, p.\ 541.
\hang\noindent Weber, F., Glendenning, N. K.: 1993a, Proceedings of the Nankai Summer
School, ``Astrophysics and Neutrino Physics'', ed. by D. H. Feng, G. Z. He,
and X. Q. Li, World Scientific, Singapore, p.\ 64--183.
\hang\noindent Weber, F., Glendenning, N. K.: 1993b, Proceedings of the First Symposium
on Nuclear Physics in the Universe, ed. by M. W. Guidry and M. R.
Strayer, IOP Publishing Ltd, Bristol, UK, p.\ 127.
\hang\noindent Witten, E.: 1984, Phys.\ Rev.\ D {\bf 30}, p.\ 272.
\end{document}
\section{Introduction}
Since the first observations of large rapidity gap events in deep-inelastic
scattering at HERA
\cite{firstZEUS,firstH1}, diffractive interactions have attracted much attention.
In this Conference, 31 reports (about half theoretical and half experimental)
have been presented to the working group on
diffraction, of which 15 were presented in sessions held in common with the
working groups on structure functions, on photoproduction and on final states.
Two discussion sessions were devoted mainly to the interpretation of
the inclusive measurements.
Reports were also presented on the DESY Workshop on the future of HERA \cite{halina},
and on Monte Carlo simulations of diffractive processes \cite{solano}.
The experimental results concern mainly HERA, but also experiments at the
Tevatron collider.
The present summary consists of two parts, devoted respectively to inclusive
measurements
\footnote{presented by V. Del Duca and P. Marage} and to exclusive vector meson
production \footnote{presented by E. Gallo}.
\section{DDIS Inclusive Measurements}
\subsection {Introduction}\label{sec:intr}
Diffractive interactions,
sketched in Fig. \ref{diffproc}, are attributed to the exchange of a colour singlet
system, the pomeron, and are characterised by the presence in the
final state of a large rapidity gap, without particle emission, between two
systems \mbox{$X$}\ and \mbox{$Y$}\ of masses \mbox{$M_X$}\ and \mbox{$M_Y$}\ much smaller than the
total hadronic mass \mbox{$W$}.
The final state system \mbox{$Y$}\ is a proton (elastic scattering) or an excited
state of higher mass (proton dissociation).
\begin{figure}[h]
\begin{center}
\epsfig{file=diffkin.epsf,width=0.35\textwidth}
\end{center}
\caption {Diffractive interaction in photo-- or electroproduction
$\gamma^{(*)} p \rightarrow XY$.}
\label{diffproc}
\end{figure}
The cross section for diffractive deep inelastic scattering (DDIS)
is defined using four variables
(in addition to the mass \mbox{$M_Y$}\ of the system \mbox{$Y$}),
$e.g.$ \mbox{$Q^2$}\ (the negative four-momentum squared of the exchanged
photon), the Bjorken scaling variable \mbox{$x$}, the total hadronic mass \mbox{$W$},
and the square of the pomeron four-momentum $t$.
It is also useful to define the variables \mbox{$x_{I\!\! P}$}\ and $\beta$
\begin{equation}
\mbox{$x_{I\!\! P}$} = \frac{\mbox{$Q^2$}+\mbox{$M_X$}^2-t}{\mbox{$Q^2$}+\mbox{$W^2$}-\mbox{$M_p$}^2}, \ \ \ \beta = \frac{\mbox{$Q^2$}}{\mbox{$Q^2$}+\mbox{$M_X$}^2-t} ,
\label{eq:eqPMI}
\end{equation}
which are related to \mbox{$x$}\ by the relation $x = \beta \cdot \mbox{$x_{I\!\! P}$} $.
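The relation $x = \beta \cdot \mbox{$x_{I\!\! P}$}$ follows directly from the definitions above and can be verified numerically; the kinematic values in the sketch below are illustrative HERA-like numbers, not measurements:

```python
# Cross-check of the DDIS kinematic variables:
#   x_pom = (Q2 + MX2 - t) / (Q2 + W2 - mp2),
#   beta  = Q2 / (Q2 + MX2 - t),
# so that beta * x_pom = Q2 / (Q2 + W2 - mp2) = x (Bjorken x).
MP2 = 0.938**2  # proton mass squared in GeV^2

def ddis_variables(q2, mx2, w2, t):
    x_pom = (q2 + mx2 - t) / (q2 + w2 - MP2)
    beta = q2 / (q2 + mx2 - t)
    x_bj = q2 / (q2 + w2 - MP2)
    return x_pom, beta, x_bj

# Illustrative values: Q2 = 12 GeV^2, MX2 = 100 GeV^2, W = 100 GeV, t = -0.5 GeV^2
x_pom, beta, x_bj = ddis_variables(12.0, 100.0, 100.0**2, -0.5)
assert abs(beta * x_pom - x_bj) < 1e-12
```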
Inclusive DIS is usually parameterised in terms of two structure functions,
$F_i = F_i(x, Q^2)$, with $i=1,2$. DDIS needs in principle four
structure functions, which can be written in such a way that two
are similar to the ones of inclusive DIS, and the other two are
the coefficients of terms proportional to the $t$ variable,
and may be neglected since $t$ is small~\cite{chpww}. Thus, in analogy with
inclusive DIS the cross section for DDIS is written as
\begin{equation}
{d^4\sigma(e + p \rightarrow e + X + Y)
\over dx dQ^2 dx_{I\!\! P} dt} =
{4\pi \alpha^2\over x Q^4}\, \left[1-y+{y^2\over 2(1+R^D)}\right]\,
F_2^{D(4)}(x, Q^2; x_{I\!\! P}, t)\, ,\label{nove}
\end{equation}
with $y = \mbox{$Q^2$} / (x\, s)$ the electron energy loss, $s$ the total
$(e, p)$ centre of mass energy, and
\begin{equation}
R^D(x, Q^2; x_{I\!\! P}, t) = {1\over 2x} {F_2^{D(4)}
\over F_1^{D(4)}} - 1\, .\label{ratio}
\end{equation}
$R$ has not been measured yet in inclusive DIS,
and $R^D$ will be set to 0 in what follows. Therefore
the diffractive cross section (\ref{nove}) is directly related to
$F_2^{D(4)}$~\footnote{In the theoretical literature the diffractive
structure function $F_2^{D(4)}$ is often called
${dF_2^D \over dx_{I\!\! P} dt}$.}.
If factorization of the collinear
singularities works in this case as it does in inclusive DIS, then
the diffractive structure function may be written in terms of
a parton density of the pomeron~\cite{ber},
\begin{equation}
F_2^{D(4)}(x, Q^2; x_{I\!\! P}, t) = \sum_a
\int^{x_{I\!\! P}}_x d\zeta {df^D_{a/p}(\zeta,\mu; x_{I\!\! P},t)
\over dx_{I\!\! P} dt} \hat F_{2,a}\left({x\over\zeta}, Q^2, \mu\right)\,
,\label{diffact}
\end{equation}
with $\zeta$ the parton momentum fraction within the proton,
$x_{I\!\! P}$ the momentum fraction of the pomeron, $\mu$ the factorization
scale, and the sum extending over
quarks and gluons; the parton structure functions $\hat F_{2,a}$ are
computable in perturbative QCD. The integral of the diffractive parton
density over $t$ is the fracture function~\cite{trent}
\begin{equation}
\int_{-\infty}^0 dt
{df^D_{a/p}(\zeta,\mu; x_{I\!\! P},t) \over dx_{I\!\! P} dt} =
M_{pp}^a(\zeta,\mu; x_{I\!\! P})\, .\label{frac}
\end{equation}
The structure function $F_2^{D(3)}$ is obtained by integration of
$F_2^{D(4)}$ over the \mbox{$t$}\ variable. It is thus related to the
fracture function by
\begin{equation}
F_2^{D(3)}(x, Q^2; x_{I\!\! P}) = \sum_a \int^{x_{I\!\! P}}_x d\zeta
M_{pp}^a(\zeta,\mu; x_{I\!\! P}) \hat F_{2,a}\left({x\over\zeta}, Q^2,
\mu\right)\, .\label{newdiff}
\end{equation}
Next, let us assume that Regge factorization holds for the diffractive
parton density, namely that it can be factorized into a flux
of pomeron within the proton and a parton density within the
pomeron~\cite{is,dland}:
\begin{equation}
{df^D_{a/p}(\zeta,\mu; x_{I\!\! P},t) \over dx_{I\!\! P} dt} =
{f_{p I\!\! P}(x_{I\!\! P},t) \over x_{I\!\! P}}
f_{a/I\!\! P}\left({\zeta\over x_{I\!\! P}},\mu; t\right)\, ,\label{regg}
\end{equation}
with flux
\begin{equation}
f_{p I\!\! P}(x_{I\!\! P},t) = {|\beta_{p I\!\! P}(t)|^2\over 8\pi^2}
x_{I\!\! P}^{1-2\alpha(t)}\, ,\label{flux}
\end{equation}
the pomeron-proton coupling $\beta_{p I\!\! P}(t)$ and the trajectory
$\alpha(t)$ \footnote{For simplicity the trajectory is supposed to be
linear~\cite{dl2}, however it is also possible to consider models
with a non-linear trajectory~\cite{jenk}.} being obtained from fits to elastic
hadron-hadron cross sections at small $t$ ~\cite{chpww,dl2},
\begin{eqnarray}
\beta_{p I\!\! P}(t) &=& \beta_{\bar{p} I\!\! P}(t) \simeq 4.6\,{\rm mb}^{1/2}\,
e^{1.9\,{\rm GeV}^{-2}\,t}\, ,\label{five}\\
\alpha(t) &\simeq& 1.08 + 0.25\,{\rm GeV}^{-2}\, t\, .\nonumber
\end{eqnarray}
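For orientation, the pomeron flux above can be evaluated with these Donnachie--Landshoff parameters; the sketch below (flux in units of ${\rm mb}\,{\rm GeV}^{-2}$) is illustrative only, and the sample $(\mbox{$x_{I\!\! P}$},t)$ points are assumptions:

```python
import math

# Pomeron flux f(x_pom, t) = |beta(t)|^2 / (8 pi^2) * x_pom^(1 - 2 alpha(t))
# with the Donnachie-Landshoff parameters quoted above (t in GeV^2):
def coupling(t):
    return 4.6 * math.exp(1.9 * t)      # mb^(1/2)

def trajectory(t):
    return 1.08 + 0.25 * t

def pomeron_flux(x_pom, t):
    return coupling(t)**2 / (8.0 * math.pi**2) * x_pom**(1.0 - 2.0 * trajectory(t))

# Because alpha(t) > 1/2 at small |t|, the exponent 1 - 2*alpha(t) is
# negative and the flux grows as x_pom decreases:
flux_small_x = pomeron_flux(0.001, -0.1)
flux_large_x = pomeron_flux(0.01, -0.1)
```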
Substituting the diffractive parton density (\ref{regg}) into the
structure function (\ref{diffact}), we obtain
\begin{equation}
F_2^{D(4)}(x, Q^2; x_{I\!\! P}, t) = f_{p I\!\! P}
(x_{I\!\! P},t)\, F_2^{I\!\! P}(\beta, Q^2; t)\, ,\label{elev}
\end{equation}
with the pomeron structure function
\begin{equation}
F_2^{I\!\! P}(\beta, Q^2; t) = \sum_a \int_{\beta}^1 d\beta'
f_{a/I\!\! P}\left(\beta',\mu; t\right)
\hat F_{2,a}\left({\beta\over\beta'}, Q^2, \mu\right)\, ,\label{pomf}
\end{equation}
with $\beta' = \zeta/x_{I\!\! P}$ and $\beta$ the fraction
of the pomeron momentum carried by the struck parton.
When the outgoing proton momentum is measured as the fraction $x_L$ of the
incident proton momentum, one has $x_L \simeq 1 - \mbox{$x_{I\!\! P}$}$.\\
Several reports at this Conference and numerous discussions dealt with the
procedures of diffractive cross section measurement, the factorisation properties of the
structure function and the possibility to extract parton distributions for the
exchange system.
\subsection {Cross section measurements}
The H1~\cite{H193} and ZEUS~\cite{ZEUSI} experiments have measured the cross section
for diffractive deep inelastic scattering in the data taken in 1993, by selecting
events with a large rapidity gap in the forward part of their main calorimeters.
The non-diffractive and the proton dissociation diffractive contributions
were subtracted using Monte Carlo simulations.
Within the limited statistics, both experiments found that the results were compatible
with a factorisation of the structure function \mbox{$F_2^{D(3)}$}\ of the form
\begin{equation}
\mbox{$F_2^{D(3)}$} (\mbox{$Q^2$}, \beta, \mbox{$x_{I\!\! P}$}) = \frac {1} {\mbox{$x_{I\!\! P}$}^n} \ A (\mbox{$Q^2$}, \beta),
\label{eq:eqPMIII}
\end{equation}
where the \mbox{$x_{I\!\! P}$}~dependence could be interpreted as proportional to a pomeron flux
in the proton, in agreement with eq.~(\ref{elev}) integrated over $t$.
The exponent $n$ is related to the effective pomeron trajectory
by $n = 2\ \alpha(t) - 1$, as in eq.~(\ref{flux}),
with $\alpha(t)$ given in eq.~(\ref{five}).
The following \mbox{$t$}~averaged \mbox{$\bar \alpha$}\ values were obtained by H1 and ZEUS, respectively:
\begin{eqnarray}
\mbox{$\bar \alpha$} = 1.10 \pm 0.03 \pm 0.04 \ \ \ {\rm H1} \\
\mbox{$\bar \alpha$} = 1.15 \pm 0.04 \ ^{+0.04}_{-0.07} \ \ \ {\rm ZEUS} .
\end{eqnarray}
\begin{figure}
\special{psfile=kowalski1.ps
hscale=50 vscale=45 hoffset=10 voffset=-265}
\unitlength1cm
\begin{picture}(5,5.5)
\thicklines
\end{picture}
\caption{\label{fig:kowalskiI}
{ZEUS Coll. (log $M_X$ method [14]):
Example of a fit for the determination of the nondiffractive
background.
The solid lines
show the extrapolation of the nondiffractive background as
determined from the fit of the diffractive and nondiffractive
components to the data (dotted line).
}}
\end{figure}
\begin{figure}
\special{psfile=kowalski2.ps
hscale=45 vscale=45 hoffset= 40 voffset=-315}
\unitlength1cm
\begin{picture}(7,8.8)
\thicklines
\end{picture}
\caption{\label{fig:kowalskiII}
{ZEUS Coll. (log $M_X$ method [14]):
The differential cross sections $d\sigma^{diff}
(\gamma^* p \to X N)/dM_X$. The inner error bars show the
statistical errors and the full bars the statistical and
systematic errors added in quadrature.
The curves show the results
from fitting all cross sections to the form $d\sigma^{diff}/dM_X
\propto (W^2)^{(2\overline{\alphapom}-2)}$ with a common value of
$\overline{\alphapom}$.}}
\end{figure}
The ZEUS Collaboration has presented at this Conference a different method to
extract the diffractive contribution (1993 data)~\cite{ZEUSII}.
The (uncorrected) $\log \mbox{$M_X$}^2$ distributions, in bins of \mbox{$Q^2$}\ and $W$, are
parameterised as the sum of an exponentially falling contribution at high \mbox{$M_X$},
attributed to non-diffractive interactions,
and of a constant contribution at low \mbox{$M_X$}, attributed to diffraction
(see Fig. \ref{fig:kowalskiI}).
With this ``operational definition'' of diffraction (with $\mbox{$M_Y$}\ \raisebox{-0.5mm}{$\stackrel{<}{\scriptstyle{\sim}}$}\ 4$ \mbox{$\rm GeV/c^2$}),
no Monte Carlo simulations are used.
The $W$ dependence of the diffractive cross section gives
(see Fig. \ref{fig:kowalskiII}):
\begin{equation}
\mbox{$\bar \alpha$} = 1.23 \pm 0.02 \pm 0.04, \ \ \beta = 0.1 - 0.8.
\end{equation}
The difference from the previously published result is attributed by ZEUS to the
uncertainties in the Monte Carlo procedure for background subtraction.
\begin{figure}
\vspace{-2cm}
\begin{center}
\leavevmode
\hbox{%
\epsfxsize = 3.5in
\epsffile{barberis2.ps}}
\end{center}
\vspace{-2.5cm}
\caption
{ZEUS Coll. (LPS data [15]):
The structure function $F^{D(3)}_{2}$, plotted as a function
of $x_{I\!P}$ in 5 bins in $\beta$ at a central value $Q^2=12\ \mbox{${\rm GeV}^2$}$.
The errors are statistical only.
The solid line corresponds to a fit in the form of eq. (12).
\label{fig:barberis}}
\end{figure}
The ZEUS Collaboration has also presented in this Conference preliminary results
obtained with the Leading Proton Spectrometer (1994 data)~\cite{barberis}.
In this case, the scattered proton is unambiguously tagged, and the kinematics are
reconstructed using its momentum measurement. A fit of the \mbox{$x_{I\!\! P}$}\ dependence
(see Fig. \ref{fig:barberis})
for $\av{\mbox{$M_X$}^2} = 100$ \mbox{${\rm GeV}^2$}\ gives:
\begin{equation}
\mbox{$\bar \alpha$} = 1.14 \pm 0.04 \pm 0.08, \ \ \beta = 0.04 - 0.5.
\end{equation}
The ZEUS LPS has also provided the first inclusive measurement of the diffractive
$t$ dependence at HERA, parameterised as
\begin{equation}
\frac {{\rm d}\sigma} {{\rm d}t} \propto e^{-b|t|},
\ \ b = 5.9 \pm 1.3 \ ^{+1.1}_{-0.7}\ \mbox{${\rm GeV}^{-2}$}.
\end{equation}
\begin{figure}[ht]
\begin{center}
\epsfig{figure=newman1.epsf,width=1.25\textwidth}
\vspace{-0.8cm}
\caption{H1 Coll. [16]: (a) \protect{$\mbox{$x_{I\!\! P}$} \cdot F_2^{D(3)}(\beta,Q^2,\mbox{$x_{I\!\! P}$})$} integrated
over the range $\mbox{$M_Y$} < 1.6\ {\rm GeV}$ and $|t|\,<\,1$ GeV$^2$; (b) the
$\beta$ dependence of $n$ when $F_2^{D(3)}$ is fitted to the form
\protect{$F_2^{D(3)}= A(\beta,Q^2)/ {\mbox{$x_{I\!\! P}$}}^{n(\beta)}$}; (c) the $Q^2$
dependence of $n$ when $F_2^{D(3)}$ is fitted to the form
\protect{$F_2^{D(3)}= A(\beta,Q^2)/ {\mbox{$x_{I\!\! P}$}}^{n(Q^2)}$}.
The experimental errors are statistical and systematic added in quadrature.}
\label{fig:newmanI}
\end{center}
\end{figure}
Finally, the H1 Collaboration has reported preliminary results on \mbox{$F_2^{D(3)}$}\ from
the 1994 data~\cite{newman}.
The use of the forward detectors allows the selection of events with no activity
in the pseudorapidity range $3.2 < \eta < 7.5$ ($\mbox{$M_Y$} < 1.6$ \mbox{$\rm GeV/c^2$}).
With this extended kinematical domain and a tenfold increase in statistics
compared to the 1993 data, 43 bins in \mbox{$Q^2$}\ and $\beta$ are defined.
A clear breaking of factorisation is observed:
in the form of parameterisation (\ref{eq:eqPMIII}), the data suggest that
the $n$ exponent is independent of \mbox{$Q^2$}\, but they require a definite $\beta$ dependence,
with \mbox{$\bar \alpha$}\ ranging from $\simeq 0.98$ for $\av{\beta} \simeq 0.1$ to
$\simeq 1.12$ for $\av{\beta} \ \raisebox{-0.5mm}{$\stackrel{>}{\scriptstyle{\sim}}$}\ 0.4$
(see Fig. \ref{fig:newmanI}).
Experimentally, for similar values of $\av{\beta}$, the H1 results thus favour a
smaller value of \mbox{$\bar \alpha$}\ than the ($\log \mbox{$M_X$}^2$) ZEUS analysis.
Detailed comparisons and discussions between the two experiments should in the
future provide more information concerning the source of this difference.
\subsection {Factorisation Breaking and Parton Distributions}
\label{sec:break}
The source of the factorisation breaking observed by H1 has been discussed in
several communications and during the round tables.
N. N. Nikolaev underlined particularly that the pomeron is not a particle, and that
in a QCD inspired approach, factorisation is not expected to hold~\cite{nikolaev}.
The possible contribution in the selected samples of different exchanges,
in particular of pomeron and $f$ and $a_2$ trajectories, was particularly
emphasised~\cite{jenk,nikolaev,landshoff,Eilat,pred,stirling}.
These trajectories have different energy dependences and thus different $\mbox{$x_{I\!\! P}$}^{n}$ behaviours.
They have also different partonic contents, and thus different functions
$A (\mbox{$Q^2$}, \beta)$ describe the interaction with the photon.
Even if each contribution were factorisable, their combination would thus not be
expected to allow a factorisable effective parameterisation.\\
However,
if it is possible to select a domain where pomeron exchange dominates, and
if the factorization picture outlined in sect.~\ref{sec:intr} holds,
it is possible to fit the data on single hard diffraction to extract the
parton densities in the pomeron~\footnote{It is not clear whether the fits
should be extended to data from hadron-hadron scattering or from
photoproduction because of additional
factorization-breaking contributions~\cite{cfs}.}.
First we note that the pomeron, being an object with the quantum numbers of
the vacuum, has $C = 1$ and is isoscalar~\cite{dl2}~\footnote{A
$C =-1$ contribution would be indication of an odderon exchange~\cite{pred}.}.
The former property implies that
$f_{q/I\!\! P}(\beta) = f_{\bar{q}/I\!\! P}(\beta)$ for any quark $q$
and the latter that $f_{u/I\!\! P}(\beta) = f_{d/I\!\! P}(\beta)$.
Therefore it is necessary to determine only the up and strange quark densities
and the gluon density. Since the parton structure function is,
\begin{equation}
\hat F_{2,a}\left({\beta\over\beta'}, Q^2, \mu\right) = e_a^2
\delta\left(1 - {\beta\over\beta'}\right) + O(\alpha_s)\, ,\label{partf}
\end{equation}
with $e_a$ the quark charge and $a$ running over the quark flavors,
the pomeron structure function (\ref{pomf}) becomes,
\begin{equation}
F_2^{I\!\! P}(\beta, Q^2; t) = {10\over 9} \beta f_{u/I\!\! P}(\beta,Q^2; t) +
{2\over 9} \beta f_{s/I\!\! P}(\beta,Q^2; t) + O(\alpha_s)\, ,\label{dod}
\end{equation}
where the gluon density contributes to the $O(\alpha_s)$ term through the
DGLAP evolution.\\
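The coefficients $10/9$ and $2/9$ above follow from summing the squared quark charges, using $f_{q/I\!\! P} = f_{\bar{q}/I\!\! P}$ ($C=+1$) and $f_{u/I\!\! P} = f_{d/I\!\! P}$ (isoscalarity); a quick exact-arithmetic check:

```python
from fractions import Fraction

# Squared charges of the light quarks
E2 = {"u": Fraction(4, 9), "d": Fraction(1, 9), "s": Fraction(1, 9)}

# C = +1 implies f_q = f_qbar, so each flavour counts twice;
# isoscalarity implies f_u = f_d, so u and d share one density.
coeff_u = 2 * (E2["u"] + E2["d"])   # multiplies beta * f_u
coeff_s = 2 * E2["s"]               # multiplies beta * f_s
assert coeff_u == Fraction(10, 9) and coeff_s == Fraction(2, 9)
```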
\begin{figure}[ht]
\begin{center}
\epsfig{figure=newman2.epsf,width=\textwidth}
\end{center}
\vspace{-0.4cm}
\caption{H1 Coll. [16]: (a) \protect{${\tilde{F}}_{2}^{D}(\beta,Q^2)$} as
a function of $Q^2$ for different $\beta$ values. The superimposed
lines correspond to the best fit to a linear dependence on $\ln Q^2$
(continuous) and $\pm 1\sigma$ (dashed);
(b) \protect{${\tilde{F}}_{2}^{D}(\beta,Q^2)$} as a function of $\beta$ for
different $Q^2$ values, with a best fit to a constant $\beta$ dependence;
(c) DGLAP QCD comparison of the $(\beta,Q^2)$ dependence of
${\tilde{F}}_{2}^{D}$ assuming
only quarks at the starting scale of $Q_0^2=2.5\,{\rm GeV^2}$; (d) DGLAP QCD
comparison of the $(\beta,Q^2)$ dependence of ${\tilde{F}}_{2}^{D}$ assuming
both quarks and gluons at the starting scale.}
\label{fig:newmanII}
\end{figure}
The H1 Collaboration has studied the evolution of the structure function
\mbox{$F_2^{D(3)}$}, integrated over \mbox{$x_{I\!\! P}$}\ in the range $0.0003 < \mbox{$x_{I\!\! P}$}\ < 0.05$ (in practice,
most of the data are for $\mbox{$x_{I\!\! P}$}\ < 0.02$):
\begin{equation}
\mbox{${\tilde{F}_2^D}$}(\mbox{$Q^2$},\beta) = \int \mbox{$F_2^{D(3)}$}(\mbox{$Q^2$},\beta,\mbox{$x_{I\!\! P}$}) \ {\rm d}\mbox{$x_{I\!\! P}$}.
\end{equation}
It is observed
(see Fig. \ref{fig:newmanII})
that $\mbox{${\tilde{F}_2^D}$}(\mbox{$Q^2$},\beta$) shows
no $\beta$ dependence at fixed \mbox{$Q^2$}\, but increases with \mbox{$Q^2$}\ for fixed $\beta$ values,
up to large $\beta$.
If interpreted in a partonic framework, this behaviour, strikingly different from that
of ordinary hadrons, is suggestive of an important gluonic contribution at large
$\beta$.
More specifically, the H1 Collaboration assumed the possibility to perform
a QCD analysis of this evolution of \mbox{${\tilde{F}_2^D}$}\ using the DGLAP equations (with
no inhomogeneous term) to extract parton densities in the exchange.
At $\mbox{$Q^2$} = 5\ \mbox{${\rm GeV}^2$}$, a leading gluon component is obtained.
When the corresponding parton densities are input in Monte Carlo simulations of
exclusive processes (sect.~\ref{sec:vddone}),
consistent results are obtained \cite{theis,tap,alice}.
This procedure was discussed during the round tables and in several
contributions.
In the absence of factorisation theorems (see ~\cite{berera}), it can be
questioned whether the parton distribution functions are universal, and whether
they obey a DGLAP evolution.
A specific problem, due to the fact that the pomeron is not a particle, is that
momentum sum rules need not be valid.
However, it was noticed that the contribution of several Regge trajectories
may not affect the validity of a common QCD-DGLAP evolution, in so far as these
exchanges can all be given a partonic interpretation~\cite{stirling}.
On the other hand, it was also argued~\cite{nikolaev,bartels} that, even if one
accepts the concept of parton density functions in the diffractive exchange,
the DGLAP evolution should not be valid at high $\beta$, because of charm
threshold effects and because of the specific and different \mbox{$Q^2$}\ evolutions
of the longitudinal and transverse contributions.
\subsection{Parton Distributions and Jet Production}
\label{sec:vddone}
Jet production in diffractive interactions has yielded the first
hint of a partonic structure of the pomeron~\cite{is}. We have seen
in sect.~\ref{sec:break} how the quark densities may be directly
related to the pomeron structure function.
The gluon density may also be directly measured by using data
on diffractive charm or jet production. For the latter,
the final state of the hard scattering $jet_1 + jet_2 + X$ consists
at the lowest order $O(\alpha_s)$ of two partons only, generated in
quark-exchange and Compton-scattering diagrams for quark-initiated
hard processes, and in photon-gluon fusion diagrams for the gluon-initiated
ones. The parton momentum fraction $x$ in the proton may be
computed from the jet kinematic variables, $x = (E/P^0) \exp(2\bar{\eta})$,
with $E$ and $P^0$ the electron and proton energies and $\bar{\eta} =
(\eta_{j_1}+\eta_{j_2})/2$ the rapidity boost of the jet system. The
momentum fraction $x_{I\!\! P}$ may be obtained
from the invariant mass of the
system recoiling against the proton, $x_{I\!\! P}\simeq M_{ej_1j_2}^2/s$.
If we neglect the strange quark density~\footnote{The strange quark
density might be measured by adding to the fit data on charged-current
charm production in DDIS~\cite{chpww}.}, the data on $F_2^D$ and diffractive
jet production suffice to measure the parton densities in the pomeron, and
may be used to link the value of the momentum sum,
$\sum_a \int_0^1 dx\, x f_{a/I\!\! P}(x)$, to the gluon content of the
pomeron~\cite{msr}.
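The reconstruction of $x$ and $x_{I\!\! P}$ from the jet variables is straightforward arithmetic. The following Python sketch (an illustration added here, with HERA-like but invented event kinematics rather than numbers taken from the data) implements the two formulae quoted above.

```python
import math

def parton_x(E_e, E_p, eta_j1, eta_j2):
    # x = (E/P0) * exp(2*eta_bar), with eta_bar the mean jet rapidity
    eta_bar = 0.5 * (eta_j1 + eta_j2)
    return (E_e / E_p) * math.exp(2.0 * eta_bar)

def x_pomeron(M_ejj, sqrt_s):
    # x_IP ~ M^2(e j1 j2) / s, from the mass recoiling against the proton
    return M_ejj ** 2 / sqrt_s ** 2

# Invented, HERA-like inputs (27.5 GeV leptons on 820 GeV protons):
x = parton_x(27.5, 820.0, -0.5, 0.3)     # ~0.027
xp = x_pomeron(30.0, 300.0)              # 0.01
```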
On the other hand, if the gluon density is determined from $F_2^D$ alone,
as outlined in sect.~\ref{sec:break}, one can
make predictions for the data on diffractive charm or jet production.
Indeed, using the data on $F_2^D$ and the Monte Carlo RAPGAP~\cite{jung},
based on the factorization (\ref{elev}), the H1 Collaboration~\cite{newman}
finds that the parton densities $f_{a/I\!\! P}$ are dominated by a very
hard gluon~\footnote{There are models~\cite{cfs,buch} which predicted the
dominance of a single
hard gluon. In these models the color is neutralized by the exchange of
one or more soft gluons.} (sect.~\ref{sec:break}).
Having measured the densities $f_{a/I\!\! P}$, the H1
Collaboration~\cite{theis} finds that the jet rapidity and
transverse momentum distributions in diffractive dijet production
are in good agreement with the prediction from RAPGAP. In addition,
the data on energy flow in diffractive photoproduction also
seem to support a gluon-dominated structure of the pomeron~\cite{tap}.
In addition, by examining the thrust we may probe the amount of
gluon radiation in diffractive dijet production. The thrust axis is
defined as the axis in the parton center-of-mass system
along which the energy flow is maximal. The value of the thrust, $T$,
then measures the fraction of energy along this axis. For a back-to-back
dijet event $T=1$; the amount by which $T$ differs from unity thus gives an
indication of the amount of emitted gluon radiation.
The data~\cite{alice} indeed show the presence of hard gluon radiation,
the thrust being even smaller than in dijet production in $e^+e^-$
annihilation.
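For a handful of partons the thrust just defined can be evaluated exactly using the standard equivalent form $T=\max_{\epsilon_i=\pm1}|\sum_i\epsilon_i\vec p_i|/\sum_i|\vec p_i|$. The brute-force Python sketch below (our illustration, with invented momenta) shows how a third, radiated parton pulls $T$ below the back-to-back value of 1.

```python
import itertools, math

def thrust(momenta):
    # T = max over hemisphere assignments eps_i = +-1 of
    #     |sum_i eps_i * p_i| / sum_i |p_i|   (p_i are 3-momenta)
    norm = sum(math.sqrt(px * px + py * py + pz * pz) for px, py, pz in momenta)
    best = 0.0
    for signs in itertools.product((1, -1), repeat=len(momenta)):
        s = [sum(e * p[k] for e, p in zip(signs, momenta)) for k in range(3)]
        best = max(best, math.sqrt(s[0] ** 2 + s[1] ** 2 + s[2] ** 2))
    return best / norm

assert abs(thrust([(10, 0, 0), (-10, 0, 0)]) - 1.0) < 1e-12  # back-to-back: T = 1
T3 = thrust([(10, 0, 0), (-7, 4, 0), (-3, -4, 0)])           # extra emission: T < 1
```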
Finally, we mention several angular-correlation analyses recently proposed.
Namely, the
azimuthal-angle distribution in diffractive dijet production~\cite{bartels};
the final-state electron-proton azimuthal-angle distribution in the lab
frame~\cite{stir}; the azimuthal-angle distribution of the initial electron
in the photon-proton frame~\cite{nacht}. The respective measurements, if
carried out, should allow to further probe the pomeron structure, and to
discriminate between different models.
\subsection {Other Measurements}
In addition to these diffractive DIS measurements, the HERA Collaborations
also contributed reports on several other topics.
The H1 Collaboration presented results on diffraction in
photoproduction~\cite{newman}, with a measurement of its decomposition in
vector meson production, photon dissociation, proton dissociation and
double dissociation. In particular, the \mbox{$M_X$}\ distribution is consistent
with soft pomeron exchange.
The ZEUS experiment reported the observation of a charged current diffractive
candidate event in the 1994 data~\cite{zarnecki}, and discussed the design of a
special trigger used in the 1995 data taking.
ZEUS also presented the observation of DIS events with a high energy neutron
produced at very small angle with respect to the proton direction,
detected in their neutron counter~\cite{jmartin}.
These events, attributed to pion exchange, account for about 10\% of the
total DIS rate, independent of $x$ and \mbox{$Q^2$}.
Finally, the E665 muon experiment at Fermilab reported on the ratio of diffractive
to inelastic scattering~\cite{Wittek}.
\section{Exclusive Vector Meson Production}
\subsection{Introduction}\label{intro:vm}
Exclusive vector meson production at HERA\footnote{Combined session
with Working Group 2, on Photoproduction Interactions.}
is a very interesting process to study
the transition from a non-perturbative description of the pomeron, the 'soft'
pomeron, to the hard perturbative pomeron.
The process that we study
is shown in figure \ref{fig1:vm}a and corresponds to the reaction
\begin{equation}
ep \rightarrow e V N,
\label{eq1:vm}
\end{equation}
where $V$ is a vector meson ($\rho,\omega,\phi,\rho^\prime,J/\psi,...$) and $N$ is either
the final state proton which remains intact in the interaction (elastic production)
or an excited state in case of proton dissociation.
The cross section for the elastic process has been calculated by several authors. In the
'soft' pomeron picture of Donnachie-Landshoff \cite{dl},
the photon fluctuates into a $q \bar q$ pair,
which then interacts with the proton by exchanging a pomeron.
The cross section $\sigma(\gamma p \rightarrow Vp)$ is expected to increase slowly with the $\gamma p$
center of mass energy $W$; the exponential dependence on $t$, the square of the
four momentum transfer at the proton vertex, is expected to become steeper as $W$ increases
(shrinkage).
In models \cite{ryskin,brodsky,nemchik} based on perturbative QCD,
the pomeron is treated as a perturbative two-gluon system: the cross section is then related to the
square of the gluon density in the proton and a strong dependence of the
cross section on $W$ is expected. The prediction,
taking into account the HERA measurements of the gluon density at
low $x_{\rm Bjorken}$, is that the cross section should have
at low $x$ a dependence of the type $W^{0.8-0.9}$.
In order to be able to apply perturbative QCD, a 'hard' scale
has to be present in the process.
The scales involved in process (\ref{eq1:vm}) are the mass of the quarks in the vector meson $V$,
the photon virtuality $Q^2$ and $t$.
In the following we
summarize results on vector meson production obtained by the H1 and ZEUS Collaborations
at HERA, in the energy range $W =40-150 ~{\rm GeV}$,
starting from $Q^2 \simeq 0$ and increasing the scales in the process.
We also review
results at high $t$, which were presented for the first time at this conference.
Results on vector meson production with diffractive proton dissociation events were also discussed.
\subsection{Vector mesons at $Q^2 \simeq 0$}\label{mass:vm}
Elastic vector meson production at $Q^2 \simeq 0$ has been studied by both Collaborations
\cite{sacchi,schiek}. The cross section for $\rho,\omega,\phi,J/\psi$ production
is plotted versus $W$
in fig.~\ref{fig1:vm}b (from \cite{h1rho}), where the results obtained at HERA ($W=50-200$ GeV)
are compared to those
of fixed target experiments ($W \simeq 10$ GeV).
At $Q^2 \simeq 0$, the cross section is mainly due to transversely polarized photons.
The Donnachie-Landshoff model (DL), where one expects for the
$\sigma(\gamma p \rightarrow Vp)$ cross section a dependence
of the type $\simeq W^{0.22}$,
reproduces the energy dependence for the light vector mesons (see lines in the figure).
In the same way the dependence of the total photoproduction cross section is well reproduced
by the soft pomeron model \cite{landshoff} (see figure).
In contrast, this model fails to describe the strong $W$ dependence observed
in the $J/\psi$ data. Note that this steep $W$ dependence,
$\sigma(\gamma p \rightarrow Vp) \simeq W^{0.8}$, is
implied even within the restricted $W$ range covered by the HERA data alone.
In this case the hard scale is
provided by the charm mass.
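The contrast between the two energy behaviours is already large over the HERA range alone. As a quick numerical check (ours, using only the exponents quoted above):

```python
# Growth of sigma(gamma p -> V p) from W = 50 to 200 GeV for the
# soft-pomeron (~W^0.22) and hard (~W^0.8) dependences quoted in the text.
W_lo, W_hi = 50.0, 200.0
soft = (W_hi / W_lo) ** 0.22   # ~1.4: slow rise (rho, omega, phi)
hard = (W_hi / W_lo) ** 0.8    # ~3.0: steep rise (J/psi)
```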
Elastic $J/\psi$ production is a very important process to determine the gluon density in the
proton, as the cross section is proportional to the square of this density.
The gluon density can be measured in the range
$5 \times 10^{-4} < x \simeq \frac{(m_V^2+Q^2+|t|)}{W^2} < 5 \times 10^{-3}$.
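For elastic $J/\psi$ photoproduction the quoted $x$ range follows directly from this relation; a small Python sketch (ours) makes the arithmetic explicit.

```python
def x_probed(m_V, Q2, abs_t, W):
    # x ~ (m_V^2 + Q^2 + |t|) / W^2: gluon momentum fraction probed
    return (m_V ** 2 + Q2 + abs_t) / W ** 2

m_jpsi = 3.097                              # GeV
x_hi = x_probed(m_jpsi, 0.0, 0.0, 50.0)     # ~4e-3 at the low-W end
x_lo = x_probed(m_jpsi, 0.0, 0.0, 200.0)    # ~2.4e-4 at the high-W end
```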
An improved calculation for this process was presented \cite{amartin}:
although the normalization of
the cross section is known theoretically with a precision of $30\%$, the shape of the $W$ dependence is very
sensitive to different parton density parametrizations in the proton.
Open charm production is also a very sensitive probe
of the gluon density, and the perturbative calculation has no ambiguities in the normalization; however it is
experimentally more difficult.
\begin{center}
\begin{figure}
\hbox{
\psfig{figure=procvm_fig1.ps,height=2.0in}
\psfig{figure=procvm_fig2.ps,height=2.5in}}
\caption{(a) Feynman graph of vector meson production at HERA;
(b) Cross section versus $W$ for vector meson production at $Q^2 \simeq 0, t \simeq 0$.}
\label{fig1:vm}
\end{figure}
\end{center}
\subsection{Vector mesons at high $Q^2$}\label{qtwo:vm}
Results on vector meson production at high $Q^2$ have been presented by H1 \cite{clerbaux}.
The cross section for the process $(\gamma^*p \rightarrow \rho p)$ with the H1
1994 data is shown in fig.~\ref{fig2:vm}a, together with the ZEUS 1993 results \cite{zeusrho},
in the $\gamma^* p$ centre of mass energy $W$ of $40$ to $140$ GeV, and at
$Q^2=10,20~{\rm GeV^2}$.
At these values of $Q^2$, the $\sigma(\gamma^*p \rightarrow Vp)$ cross section is dominated by
longitudinally polarized photons.
Comparing the H1 data to the NMC data at
$W \simeq 10~{\rm GeV}$, a dependence of the type $\sigma \simeq W^{0.6}$
at $Q^2=10~{\rm GeV^2}$
for the cross section is obtained.
The $t$ dependence of the reaction $\gamma^* p \rightarrow \rho p$
for $\mbox{$|t|$} < 0.5 ~{\rm GeV^2}$ is well reproduced by
an exponential distribution ${\rm exp}(-b\mbox{$|t|$})$, with $b \simeq 7~{\rm GeV^{-2}}$
(see table \ref{tab:vm}).
In the framework of Regge theory, the shrinkage of the elastic peak can be
written as $b(W^2)= b(W^2=W_0^2)+ 2\alpha^\prime \ln (W^2/W^2_0)$, where
$\alpha^\prime$ is the slope of the pomeron trajectory.
Comparing the H1 1994 data with
the NMC data, the parameter $\alpha^\prime$
which is obtained is in agreement with that expected from a soft
pomeron trajectory ($\alpha^{\prime}=0.25$ \mbox{${\rm GeV}^{-2}$}).
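Numerically, evolving a fixed-target slope up to HERA energies with the soft value $\alpha^\prime=0.25~{\rm GeV^{-2}}$ shifts $b$ by about $2~{\rm GeV^{-2}}$. The sketch below (ours, with an illustrative NMC-like starting slope rather than a fitted value) evaluates the shrinkage formula.

```python
import math

def b_slope(W, W0, b0, alpha_prime=0.25):
    # Regge shrinkage: b(W^2) = b(W0^2) + 2 * alpha' * ln(W^2 / W0^2)
    return b0 + 2.0 * alpha_prime * math.log(W ** 2 / W0 ** 2)

# Illustrative starting slope b0 = 5 GeV^-2 at W0 = 10 GeV (NMC-like):
b_hera = b_slope(W=75.0, W0=10.0, b0=5.0)   # rises by ~2 GeV^-2, to ~7
```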
The H1 1994 and the ZEUS 1993 data are compatible within the errors. However,
while the H1 $\rho$ data suggest that we are in a transition region between soft and hard
processes, the ZEUS 1993 data show
a stronger $W$ dependence of the cross section when compared to NMC ($\sigma \simeq W^{0.8}$)
and a flatter $t$ distribution, $b \simeq 5~{\rm GeV^{-2}}$ (tab.~\ref{tab:vm}).
These last two results suggest a hard pomeron exchange.
More data will make it possible to reduce the uncertainties due to the non-resonant
background and to the proton dissociation background, and to study
the $W$ dependence in the region covered by the HERA data alone.
The ratio of the dissociative over elastic $\rho$ cross section was measured \cite{clerbaux}
to be $0.59 \pm 0.12 \pm 0.12$, with no significant $Q^2$ dependence (in the range
between $8$ and $36~\rm{GeV^2}$) or $W$ dependence (in the range between $60$ and
$180~\rm{GeV}$).
$J/\psi$ production was presented by both collaborations \cite{clerbaux,stanco}: while the ratio of the cross sections
$\sigma(J/\Psi)/\sigma(\rho)$ is of the order of $10^{-3}-10^{-2}$ at $Q^2 \simeq 0$,
this ratio becomes close to 1 at $Q^2>10~{\rm GeV^{2}}$ (see figs.~\ref{fig2:vm}a
and \ref{fig2:vm}b, from \cite{h1rho}), as predicted
by perturbative QCD \cite{strikman}.
\begin{figure}
\hbox{
\psfig{figure=barbara1.eps,height=2in}
\psfig{figure=barbara2.eps,height=2in}}
\caption{Cross section versus $W$ for vector meson production at $Q^2 \simeq 0$ and at high $Q^2$, for $\rho$ (a) and $J/\Psi$ (b) production.}
\label{fig2:vm}
\end{figure}
\subsection{Vector mesons at high $\mbox{$|t|$}$}\label{t:vm}
The installation in 1995 of a new detector \cite{piotr}
in the ZEUS experiment, at $44~{\rm m}$ from the interaction point in the direction of the outgoing positron,
made it possible to tag the scattered positron and to access vector mesons at $Q^2 < 0.01 ~{\rm GeV^2}$,
$1 < |t| < 4.5 ~{\rm GeV^{2}}$ and in the $W$ range
$80 < W< 100 ~{\rm GeV}$. About 600 $\rho$ candidates and 80 $\phi$ candidates were found:
from Monte Carlo studies it was seen that at $t>1~{\rm GeV}^2$, the main contribution is
proton dissociative production.
The ratio of the cross sections $\sigma(\phi)/\sigma(\rho)$ was measured in two ranges of $t$
($ 1.2 < |t| < 2 ~{\rm GeV^2}$, $ 2 < |t| < 4.5 ~{\rm GeV^2})$, and was found to be
$0.16\pm 0.025(stat.) \pm 0.02 (sys.)$ and $0.18 \pm 0.05 (stat.) \pm 0.04(sys.)$ respectively,
close to the value of $2/9$ predicted by SU(3) flavour symmetry. This value is significantly higher
than the value obtained at $t \simeq 0, ~Q^2 \simeq 0$ and is instead compatible with the value obtained \cite{clerbaux,zeusphi} at
$Q^2 \simeq 12 ~{\rm GeV^2}$,
suggesting that at high $|t|$ perturbative QCD plays an important role.
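The SU(3) value of $2/9$ is fixed by the quark charges alone: with $\rho^0=(u\bar u-d\bar d)/\sqrt 2$ and $\phi=s\bar s$, the photon couplings give $\sigma(\phi)/\sigma(\rho)=e_s^2/[(e_u-e_d)^2/2]$. The fragment below (ours) checks this arithmetic exactly.

```python
from fractions import Fraction

# Photon-meson couplings from quark charges (SU(3) flavour symmetry):
#   rho0 = (u ubar - d dbar)/sqrt(2)  ->  amplitude ~ (e_u - e_d)/sqrt(2)
#   phi  = s sbar                     ->  amplitude ~ e_s
e_u, e_d, e_s = Fraction(2, 3), Fraction(-1, 3), Fraction(-1, 3)
amp2_rho = (e_u - e_d) ** 2 / 2        # the 1/2 is the |1/sqrt(2)|^2 factor
amp2_phi = e_s ** 2
ratio = amp2_phi / amp2_rho            # Fraction(2, 9)
```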
Vector meson production at high $\mbox{$|t|$}$ has been suggested \cite{ivanov}
as a very nice field to study the hard pomeron,
since the cross sections are calculable in pQCD.
\begin{table}[t]
\caption{Results on the value $b$ (in ${\rm GeV}^{-2}$) fitted to the exponential $t$ distributions
for elastic and proton dissociation vector meson production at HERA. The first error is statistical, the second systematic.
\label{tab:vm}}
\vspace{0.4cm}
\footnotesize
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& & $b$(H1) & $b$(ZEUS) \\ \hline
$Q^2=0,|t|<0.5~{\rm GeV}^2$, elastic & $\rho$ & $10.9 \pm 2.4 \pm 1.1$ & $9.9 \pm 1.2\pm 1.4$ \\
$Q^2=0,0.07<|t|<0.4~{\rm GeV}^2$ & & & $9.6\pm0.8\pm1.2$ (LPS) \\ \hline
$Q^2=0,|t|<0.5~{\rm GeV}^2$, elastic & $\omega$ & & $10.6\pm 1.1\pm 1.4$ \\
$Q^2=0,|t|<0.5~{\rm GeV}^2$, elastic & $\phi$ & & $7.3\pm1.0\pm0.8$ \\
$Q^2=0,|t|<1~{\rm GeV}^2$, elastic & $J/\Psi$ & $4.0\pm 0.2\pm 0.2$ & \\ \hline
$Q^2\simeq 10,|t|<0.5~{\rm GeV}^2$, elastic & $\rho$ & $7.0 \pm 0.8 \pm 0.6$ & $5.1^{+1.2}_{-0.9} \pm 1.0$ \\
$Q^2\simeq 10,|t|<0.8~{\rm GeV}^2$, elastic & $J/\Psi$ & $3.8 \pm 1.2^{+2.0}_{-1.6}$ & \\ \hline
$Q^2=0,0.04<|t|<0.45~{\rm GeV}^2$, p-dissociation & $\rho$ & & $5.3 \pm 0.8\pm 1.1$ (LPS) \\
$Q^2=10,|t|<0.8~{\rm GeV}^2$, p-dissociation & $\rho$ & $2.1 \pm 0.7 \pm 0.4 $ & \\
$Q^2=10,|t|<1~{\rm GeV}^2$, p-dissociation & $J/\Psi$ & $1.6 \pm 0.3 \pm 0.1 $ & \\ \hline
$Q^2=0,1<|t|<4.5~{\rm GeV}^2$ , p-dissociation & $\rho$ & & $ \simeq2.5 $ \\ \hline
\end{tabular}
\end{center}
\end{table}
\normalsize
\subsection{Outlook}
A summary of the $t$ slopes presented at this conference is given in table \ref{tab:vm}.
The results marked with LPS were obtained using the ZEUS Leading Proton Spectrometer
\cite{sacchi}, which detects the scattered proton and measures its momentum; the LPS makes it
possible to tag a clean sample of elastic events and to measure $t$ directly.
The parameter $b$ is proportional, in diffractive processes, to the square of the radius of the
interaction, which decreases with the mass of the quark or the photon virtuality, as confirmed by the
results in the table.
Note that the result in inclusive diffractive deep inelastic events obtained by the ZEUS experiment using the
LPS is $b=5.9 \pm 1.3^{+1.1}_{-0.7}~{\rm GeV^{-2}}$,
for a mean value of the mass of the final hadronic system of $10~{\rm GeV}$ \cite{barberis}.
Results for proton dissociation events were also presented at this conference.
In $\rho^0$ photoproduction
events with proton dissociation, the exponential slope is $b\simeq 5~{\rm GeV^{-2}}$ at $t \simeq 0$ \cite{sacchi}
and becomes flatter, $b \simeq 2.5 ~{\rm GeV^{-2}}$, at high $|t|$ \cite{piotr}.
In summary, vector meson production at HERA is a rich field for studying the interplay between soft
and hard interactions in diffractive processes.
More luminosity, together with the forward proton spectrometers installed in both
the H1 and ZEUS experiments, will allow more precise measurements.
\section*{Acknowledgments}
We wish to thank all the colleagues who participated in this parallel session,
our colleagues convenors of the shared sessions, the secretariat of the session,
and the organizers and the secretariat of DIS96 for the warm hospitality.
VDD would like to thank PPARC and the Travel and Research Committee of the
University of Edinburgh for the support.
EG wants to thank G.~Barbagli, M.~Arneodo and A.~Levy for a careful reading of
part of the manuscript.
\section*{References}
\section{Introduction}
In order to make quantitative predictions in perturbative QCD, it is
essential to work to (at least) next-to-leading order (NLO). However, this
is far from straightforward because for all but the simplest quantities,
the necessary phase-space integrals are too difficult to do analytically,
making numerical methods essential. But the individual integrals are
divergent, and only after they have been regularized and combined is the
result finite. The usual prescription, dimensional regularization,
involves working in a fractional number of dimensions, making analytical
methods essential.
To avoid this dilemma, one must somehow set up the calculation such that
the singular parts can be treated analytically, while the full complexity
of the integrals can be treated numerically. Efficient techniques have
been set up to do this, at least to NLO, during the last few years.
A new general algorithm was recently presented,\cite{CS} which can be used
to compute arbitrary jet cross sections in arbitrary processes. It is
based on two key ingredients: the {\em subtraction method\/} for cancelling
the divergences between different contributions; and the {\em dipole
factorization theorems} (which generalize the usual soft and collinear
factorization theorems) for the universal (process-independent) analytical
treatment of individual divergent terms.
These are sufficient to write a general-purpose
Monte Carlo program in which any jet quantity can be calculated simply by
making the appropriate histogram in a user routine.
In this contribution we give a brief summary of these two ingredients (more
details and references to other general methods can be found in
Refs.\cite{CS}$^{\!-\,}$\cite{CSrh}) and show numerical results for the
specific case of jets in deep-inelastic lepton-hadron scattering (DIS).
\section{The Subtraction Method}
The general structure of a QCD cross section in NLO is
$\sigma = \sigma^{LO} + \sigma^{NLO} ,$
where the leading-order (LO) cross section $\sigma^{LO}$ is obtained by
integrating the fully
exclusive Born cross section $d\sigma^{B}$ over the phase space for the
corresponding jet quantity. We suppose that this LO calculation involves
$m$ partons, and write:
\begin{equation}
\label{sLO}
\sigma^{LO} = \int_m d\sigma^{B} \;.
\end{equation}
At NLO, we receive contributions from real and virtual processes (we assume
that the ultraviolet divergences of the virtual term are already
renormalized):
\begin{equation}
\label{sNLO}
\sigma^{NLO}
= \int_{m+1} d\sigma^{R} + \int_{m} d\sigma^{V} \;.
\end{equation}
As is well known, each of these is separately divergent, although their sum
is finite. These divergences are regulated by working in $d=4-2\epsilon$
dimensions, where they are replaced by singularities in $1/\epsilon$. Their
cancellation only becomes manifest once the separate phase space integrals
have been performed.
The essence of the subtraction method is to use the {\em exact\/} identity
\begin{equation}
\label{sNLO1}
\sigma^{NLO} = \int_{m+1} \left[ d\sigma^{R} - d\sigma^{A} \right]
+ \int_m d\sigma^{V} + \int_{m+1} d\sigma^{A} \;,
\end{equation}
which is obtained by subtracting and adding back the `approximate' (or
`fake') cross section contribution $d\sigma^{A}$, which has to fulfil two
main properties.
Firstly, it must exactly match the singular behaviour (in $d$ dimensions)
of $d\sigma^{R}$ itself. Thus it acts as a {\em local\/} counterterm for
$d\sigma^{R}$ and one can safely perform the limit $\epsilon \to 0$ under the
integral sign in the first term on the right-hand side of
Eq.~(\ref{sNLO1}).
Secondly, $d\sigma^{A}$ must be analytically integrable (in $d$ dimensions)
over the one-parton subspace leading to the
divergences. Thus we can rewrite the integral in the last term of
Eq.~(\ref{sNLO1}), to obtain
\begin{equation}
\label{sNLO2}
\sigma^{NLO} = \int_{m+1} \left[ \left( d\sigma^{R} \right)_{\epsilon=0}
- \left( d\sigma^{A} \right)_{\epsilon=0} \;\right] +
\int_m
\left[ d\sigma^{V} + \int_1 d\sigma^{A} \right]_{\epsilon=0}\;\;.
\end{equation}
Performing the analytic integration $\int_1 d\sigma^{A}$, one obtains $\epsilon$-pole
contributions that can be combined with those in $d\sigma^{V}$, thus
cancelling all the divergences. Equation~(\ref{sNLO2}) can be easily
implemented in a `partonic Monte Carlo' program that generates
appropriately weighted partonic events with $m+1$ final-state partons and
events with $m$ partons.
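The logic of Eq.~(\ref{sNLO1}) can be mimicked in a one-dimensional toy model (this is our illustration, not the actual $d$-dimensional construction): let the `real' contribution be $\int_\delta^1 dx\, f(x)/x$ with a cutoff $\delta$ standing in for dimensional regularization, and let the `virtual' contribution carry the compensating $f(0)\ln\delta$. Subtracting the local counterterm $f(0)/x$ makes the integrand finite, so the regulator can be removed under the integral sign, and both organisations of the calculation agree numerically.

```python
import math

def f(x):                       # toy stand-in for a Born-like factor
    return math.cos(x)

def real_plus_virtual(delta, n=200000):
    # Cutoff form: divergent 'real' integral over [delta, 1], plus a
    # 'virtual' term f(0)*ln(delta) standing in for the epsilon pole.
    h = (1.0 - delta) / n
    real = h * sum(f(delta + (i + 0.5) * h) / (delta + (i + 0.5) * h)
                   for i in range(n))
    return real + f(0.0) * math.log(delta)

def subtracted(n=200000):
    # Subtraction form: (f(x) - f(0))/x is finite as x -> 0, so the
    # regulator can be removed before integrating.
    h = 1.0 / n
    return h * sum((f((i + 0.5) * h) - f(0.0)) / ((i + 0.5) * h)
                   for i in range(n))

a, b = real_plus_virtual(1e-4), subtracted()   # both ~ -0.2398
```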
\section{The Dipole Formalism}
The fake cross section
$d\sigma^{A}$ can be constructed in a
fully process-independent way, by using the factorizing properties of gauge
theories. Specifically, in the soft and collinear limits, which give rise
to the divergences,
the factorization theorems can be used to write the
cross section as the contraction of the Born cross section with universal
soft and collinear factors (provided that colour and spin correlations are
retained). However, these theorems are only valid in the exactly singular
limits, and great care should be used in extrapolating them away from these
limits.
In particular, a careful treatment of momentum conservation is
required. Care has also to be
taken in order to avoid double counting the soft and collinear divergences in
their overlapping region (e.g.~when a gluon is both soft and collinear to
another parton).
The use of the dipole factorization theorem introduced in Ref.~\cite{CSlett}
allows one to overcome these difficulties in a straightforward way.
The dipole factorization formulae
relate the singular behaviour of ${\cal M}_{m+1}$, the tree-level matrix element
with $m+1$ partons, to ${\cal M}_{m}$. They
have the following symbolic structure:
\begin{equation}
\label{Vsim}
|{\cal M}_{m+1}(p_1,...,p_{m+1})|^2 =
|{\cal M}_{m}({\widetilde p}_1,...,{\widetilde p}_{m})|^2
\otimes {\bom V}_{ij}
+ \dots \;\;.
\end{equation}
The dots on the right-hand side stand for contributions that are not singular
when $p_i\cdot p_j \to 0$.
The dipole splitting functions ${\bom V}_{ij}$ are universal
(process-independent) singular factors that
depend on the momenta and quantum numbers of the $m$ partons in the tree-level
matrix element $|{\cal M}_{m}|^2$. Colour and helicity correlations are denoted by
the symbol $\otimes$. The set ${\widetilde p}_1,...,{\widetilde p}_{m}$
of modified momenta on the right-hand side of Eq.~(\ref{Vsim})
is defined starting from the original $m+1$ parton momenta in such a way that
the $m$ partons in $|{\cal M}_{m}|^2$ are physical, that is,
they are on-shell and energy-momentum conservation is
implemented exactly.
The detailed expressions for these parton momenta and for the dipole splitting
functions are given in Ref.~\cite{CS}.
Equation~(\ref{Vsim}) provides a {\em single\/} formula that
approximates the real matrix element $|{\cal M}_{m+1}|^2$
for an arbitrary process, in {\em all\/} of its singular limits. These limits
are approached smoothly, avoiding double counting
of overlapping soft and collinear singularities. Furthermore, the precise
definition of the $m$ modified
momenta allows an {\em exact\/} factorization
of the $m+1$-parton phase space, so that the universal dipole splitting
function can be integrated once and for~all.
However, this factorization, which is valid for the total phase space,
is not by itself sufficient to provide a universal fake cross
section, since the phase space of $d\sigma^{A}$ should depend
on the particular jet
observable being considered. The fact that the $m$ parton momenta are
physical provides a simple way to implement this dependence.
We construct $d\sigma^{A}$ by adding the dipole contributions on the
right-hand side of Eq.~(\ref{Vsim}) and for
each
contribution
we calculate the jet observable not
from the original $m+1$ parton momenta, but from the corresponding $m$
parton momenta, ${\widetilde p}_1,...,{\widetilde p}_{m}$. Since these are
fixed during the analytical integration, it can be performed without any
knowledge of the jet observable.
\vspace{0.3cm}
\noindent{\bf 4 $\;\,$ Final Results}
\vspace{0.2cm}
Referring to Eq.~(\ref{sNLO2}), the final procedure is then straightforward.
The calculation of any jet quantity to NLO consists of an $m+1$-parton
integral and an $m$-parton integral. These can be performed separately
using standard Monte Carlo methods.
For the $m+1$-parton integral, a phase-space point is generated and the
corresponding real matrix element
in $d\sigma^R$ is
calculated. These are passed to a user
routine, which can analyse the event in any way and histogram any
quantities of interest. Next, for each dipole term (there are
about $m(m^2-1)/2$ of them)
in $d\sigma^A$,
the set of $m$ parton momenta is derived from
the same phase-space point
and the corresponding dipole contribution is
calculated. These are also given to the user routine. They are such that
for any singular $m+1$-parton configuration, one or more of the $m$-parton
configurations becomes indistinguishable from it, so that they fall in the
same bin of any histogram. Simultaneously, the real
matrix element
and dipole term will have equal and opposite weights, so that the total
contribution to that histogram bin is finite. Thus the first integral of
Eq.~(\ref{sNLO2}) is finite.
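The bookkeeping of the dipole terms is simple combinatorics: one term per emitter pair $(i,j)$ and spectator $k$ among the $m+1$ real-emission partons, giving $\binom{m+1}{2}(m-1)=m(m^2-1)/2$ terms. A short check (ours, counting final-state configurations only and ignoring the extra terms needed for incoming or identified partons):

```python
from itertools import combinations

def dipole_terms(n_partons):
    # One dipole per emitter pair (i, j) and spectator k not in {i, j}
    partons = range(n_partons)
    return [((i, j), k)
            for i, j in combinations(partons, 2)
            for k in partons if k not in (i, j)]

m = 4                                       # Born partons; real emission has m+1
terms = dipole_terms(m + 1)
assert len(terms) == m * (m ** 2 - 1) // 2  # 30 terms for m = 4
```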
The $m$-parton integral
in Eq.~(\ref{sNLO2})
has a simpler structure: it is identical
to the LO integration in Eq.~(\ref{sLO}), but with the Born term
replaced by the finite sum of the virtual matrix element
in $d\sigma^V$
and the analytical integral of the dipole contributions
in $d\sigma^A$.
In addition to the above considerations, there are slight extra
complications for processes involving incoming partons, like DIS, or
identified outgoing partons, like fragmentation-function calculations.
However, these can be overcome in an analogous way, as discussed in
Ref.~\cite{CS}.
For the specific case of jets in DIS, we have
implemented the algorithm as a Monte Carlo program,
which can be obtained
from the world wide web, at
\verb+http://surya11.cern.ch/users/seymour/nlo/+. In Fig.~\ref{fig}a
we show as an example the differential jet rate as a function of jet
resolution parameter, $f_{cut}$,
using the $k_\perp$ jet algorithm~\cite{ktalg}. We see that the NLO
corrections are
generally small and positive, except at very small
$f_{cut}$.
In Fig.~\ref{fig}b, we show the variation
of the
jet rate at a fixed $f_{cut}$ with factorization and
renormalization scales. The scale dependence is considerably smaller at NLO.
A Monte Carlo program based on a different method is presented
in~Ref.~\cite{mepjet}.
\begin{figure}
\centerline{\epsfig{figure=rome_01.ps,height=4.5cm}\hfill
\epsfig{figure=rome_02.ps,height=4.5cm}}
\caption[]{ Jet cross sections in $ep$ collisions at HERA energies
(${\sqrt s}= 300~{\rm GeV}$).
(a) The distribution of resolution parameter $f_{cut}$ at which
DIS events are resolved into $(2+1)$ jets according to the
$k_\perp$ jet algorithm. Curves are LO (dashed) and NLO
(solid) using factorization and renormalization scales
equal to $Q^2$, and the MRS D$-'$ distribution functions.
Both curves are normalized to the LO cross section.
(b) The rate of events with exactly $(2+1)$ jets at
$f_{cut}=0.25$ with variation of renormalization
(solid) and factorization (dashed) scales. Normalization
is again the LO cross section with fixed factorization
scale.
\label{fig}}
\end{figure}
\vspace{-0.6cm}
\noindent{\bf 5 $\;\,$ Conclusion}
\vspace{0.2cm}
The subtraction method provides an {\em exact\/} way to calculate arbitrary
quantities in a given process using a general purpose Monte Carlo program.
The dipole formalism provides a way to construct such a program from
process-independent components. Recent applications have included jets in
DIS. More details of the program, and its results, will be given
elsewhere.
\vspace{0.3cm}
\noindent{\bf Acknowledgments.}
This research is supported in part by EEC Programme
{\it Human Capital and Mobility}, Network {\it Physics at High
Energy Colliders}, contract CHRX-CT93-0357 (DG 12 COMA).
\vspace{0.3cm}
\noindent{\bf References}
\vspace{-0.1cm}
\section{Introduction}
This meeting saw many beautiful experimental
results presented, the overwhelming majority of
which support
the correctness of our basic understanding of particle physics.
Many of the puzzles
and data which did not fit
into our picture from last year's conferences have
become less compelling,
leaving a wealth of data forming a consistent picture. The first
observations of $W$ pairs from LEP were
presented, along with new measurements of the $W$, $Z$, and top
quark masses. The errors on
all of these masses are significantly reduced
from previous values. Numerous electroweak precision measurements
were presented, along with new measurements of
$\alpha(M_Z^2)$ and $\alpha_s(M_Z^2)$.
In this note, I give a (very
subjective!) lightning review of some of the highlights
of the meeting. Unfortunately, there are many exciting
and important results which I will not be able to cover.
This has been truly a productive year for particle physics.
\section{Precision Measurements of Masses}
\subsection{The $Z$ Mass}
The mass of the $Z$ boson is usually taken as an
input parameter in studies of electroweak physics.
At the $Z$ resonance,
the error on the $Z$ mass is directly related to the
precision with which the beam energy is measured.
Previous measurements have taken into account the
phases of the moon and the weight of Lake Geneva
on the ring. The latest measurement incorporates
the time schedules
of the TGV trains
(which generate vagabond currents in the beam)
and leads to a measurement
with errors\cite{tc}
\begin{eqnarray}
\Delta M_Z&=&\pm 1.5~MeV \\ \nonumber
\Delta \Gamma_Z&=& \pm 1.7~MeV~\quad .
\end{eqnarray}
These errors yield a new
combined LEP result from the $Z$ lineshape,~\cite{tc}
\begin{eqnarray}
M_Z&=& 91.1863\pm .0020~GeV
\\
\nonumber
\Gamma_Z&=& 2.4946\pm .0027~GeV\quad .
\qquad\qquad {\rm LEP}
\end{eqnarray}
The $Z$ mass is down $4~MeV$ from the previous measurement.
This shift is due almost entirely to understanding the effects
of the trains!
\subsection{The $W$ Mass}
The LEP experiments have presented
preliminary measurements of the $W$ pair
production cross section. $W^+W^-$ pairs have been observed in the
$q {\overline q} q {\overline q}$,
$q {\overline q} l \nu$, and $ l \nu l \nu$ decays modes, with the
number
of $W$ pairs increasing daily. Because of the sharp threshold behaviour
of the production cross section, $\sqrt{s}
\sim 161~GeV$ is the optimal
energy at which to measure the $W$ mass, and the $M_W$ dependence of
the cross section at this point is relatively insensitive to new physics
effects. The combined result from the $4$ LEP experiments at
$\sqrt{s}=161.3 \pm .2~GeV$ is,~\cite{gb}
\begin{equation}
\sigma(e^+e^-\rightarrow W^+W^-)=3.6\pm .7~pb
\quad .
\qquad\qquad {\rm LEP}
\end{equation}
Assuming the validity of the Standard Model, this gives a
new measurement of the $W$ mass,\cite{gb}
\begin{equation}
M_W=80.4\pm.3\pm.1~GeV
\quad . \qquad\qquad {\rm LEP}
\end{equation}
Since the error is dominated by statistics, it
should be reduced considerably with further running.
The data presented correspond to $3~ pb^{-1}$ per experiment.
$W^+W^-$ pair production at LEP will also be used to measure
deviations of the $W^+W^-Z$ and
$W^+W^-\gamma$ couplings from their
Standard Model values and OPAL presented preliminary
limits on these couplings
as a ``proof of principle''.\cite{gb}
These limits are not yet competitive with those obtained
at the Tevatron.
The D0 collaboration presented a new measurement of the $W$ mass
from the
transverse mass spectrum of $W\rightarrow e \nu$, \cite{ak}
\begin{equation}
M_W=80.37\pm .15~ GeV\quad . \qquad \qquad {\rm D0}
\end{equation}
This error is considerably smaller than previous CDF and D0 $W$ mass
measurements.
These results contribute to
a new world average,\cite{md}
\begin{equation}
M_W=80.356\pm .125~GeV\quad . \qquad \qquad {\rm WORLD}
\end{equation}
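As a rough cross-check of how such averages are formed, the sketch below combines only the two $W$-mass values quoted above by inverse-variance weighting. It is illustrative only: the published world average folds in additional measurements (e.g. earlier CDF results), so exact agreement is not expected.

```python
# Inverse-variance weighted combination of the two W-mass values quoted
# in the text.  Illustrative only: the published world average of
# 80.356 +/- 0.125 GeV includes additional measurements.

def weighted_average(measurements):
    """measurements: iterable of (mean, sigma) pairs in GeV."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    total = sum(weights)
    mean = sum(w * m for w, (m, _) in zip(weights, measurements)) / total
    return mean, total ** -0.5

# LEP threshold result: statistical and systematic errors in quadrature
lep = (80.4, (0.3**2 + 0.1**2) ** 0.5)
d0  = (80.37, 0.15)          # D0 transverse-mass result

mass, error = weighted_average([lep, d0])
```

The combination is dominated by the more precise D0 point, landing within a few tens of MeV of the quoted world average.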
\subsection{The Top Quark Mass}
The top quark has moved from being a newly discovered particle to
a
mature particle whose properties can be studied in
detail. CDF and D0 each have more than
$100~pb^{-1}$ of data which means that about $500~t
{\overline t}$ pairs have been produced
in each experiment. Together, the experiments have
identified
around $13$ di-lepton, $70$ lepton
plus jets, and $60$ purely hadronic
top events and the top
quark cross section and mass have been measured in many
channels.
The cross sections and masses obtained from the
various channels are in good agreement and
the combined results from CDF and D0
at $\sqrt{s}=1.8~TeV$ are, \cite{bw,top}
\begin{eqnarray}
\sigma_{t {\overline t}}&=& 6.4_{-1.2}
^{+1.3}~pb \nonumber \\
M_T&=& 175 \pm 6~GeV\quad . \qquad \qquad
{\rm CDF,D0}
\end{eqnarray}
The error on $M_T$ of $\pm 6~GeV$ is a factor of $2$ smaller than that
reported in February 1995, due both
to greater statistics and to improved analysis techniques.
The dominant source of error
remains the jet energy correction.
There has been
considerable theoretical effort devoted to computing the
top quark cross section in QCD beyond the leading order. In
order to sum the soft gluon effects (which are numerically
important), the non-perturbative regime must be confronted,
leading to some differences between the various calculations.\cite{eb}
The theoretical cross section is slightly higher than the
experimental value, but is in reasonable agreement.
The direct measurement of $M_T$ can be compared with the
indirect
result inferred from precision electroweak
measurements at LEP and SLD,\cite{md,pl}
\begin{equation}
M_T=179\pm 7^{+16}_{-19}~GeV\quad .\qquad\qquad {\rm INDIRECT}
\end{equation}
(The second error results from varying the Higgs mass between
$60$ and $1000$ GeV with the central value taken as $300~GeV$.)
This is truly an impressive agreement between the direct
and indirect measurements!
Measurements of the top quark properties can be used to probe
new physics. For example, by measuring the branching ratio of
$t\rightarrow W b$ (and assuming $3$ generations
of quarks plus unitarity),
the $t-b$ element of the
Kobayashi-Maskawa matrix can be measured,\cite{bw,ts}
\begin{equation}
|V_{tb}| =.97\pm.15\pm.07\quad .\qquad\qquad
{\rm CDF}
\end{equation}
\section{Precision Electroweak Measurements}
There were many results from precision
electroweak measurements presented at this meeting, most of
which are in spectacular agreement with the predictions
of the Standard Model. (See the talks by
P.~Langacker~\cite{pl} and
M.~Demarteau~\cite{md} for tables of electroweak measurements
and the comparisons with Standard Model predictions).
Here, I will discuss two of those measurements,
\begin{eqnarray}
R_b&\equiv &{\Gamma(Z\rightarrow b {\overline b})
\over \Gamma(
Z\rightarrow {\rm hadrons})}
\nonumber \\
A_b&\equiv &
{2 g_V^bg_A^b\over
[(g_V^b)^2+(g_A^b)^2]}
\qquad .
\end{eqnarray}
Both of these measurements differ from the Standard Model
predictions
and are particularly interesting theoretically
since they involve the couplings of the third generation quarks.
In many non-standard
models, the effects of new physics would first show
up in the couplings of gauge bosons to the $b$ and $t$ quarks.
A year ago, the value of $R_b$ was about $3\sigma$
above the Standard Model prediction. At this meeting new results
were presented by the SLC collaboration and by the $4$ LEP
experiments.
Numerous improvements in the analyses have been
made, including measuring many of the charm decay rates directly
instead of inputting values from other experiments.
The ALEPH and SLD experiments
have employed a new analysis technique
utilizing a lifetime and mass tag. This technique allows them
to obtain $b$ quark samples which are $\sim 97\%$ pure,
while maintaining relatively high efficiencies.
This purity is considerably larger than
that obtained in previous studies of $R_b$.
The new ALEPH~\cite{ab}
and SLD~\cite{ew} results are right on the nose of
the Standard Model prediction,
\begin{equation}
R_b=\left\{
\begin{array}{ll}
.21582\pm.00087{\rm(stat)}
&
{\rm ALEPH}
\\
.2149\pm.0033 {\rm (stat)}\pm.0021 {\rm(syst)}
\pm.00071~(R_c) &
{\rm SLD}
\\
.2156\pm .0002
\qquad .
&
{\rm SM}
\end{array}
\right.
\end{equation}
(The theory error results from varying $M_H$).\cite{pl}
Incorporating all measurements leads to a new world
average, \cite{md,pl}
\begin{equation}
R_b=.2178\pm .0011
\quad ,
\qquad\qquad {\rm WORLD}
\end{equation}
which is $1.8\sigma$ above the Standard Model.
Advocates of supersymmetric models remind us that it is
difficult to obtain effects larger than $2\sigma$ in these
models, so the
possibility that $R_b$ may indicate new physics remains, although
the case for it has certainly been weakened.
(The value of $R_c$ is now within $1\sigma$ of the Standard
Model prediction.)
The only electroweak precision measurement which
is in serious disagreement with the Standard Model
prediction is $A_b$,
which is sensitive to the axial vector coupling of the
$b$ quark. The new SLD result
obtained using a lepton sample,\cite{gm}
\begin{equation}
A_b=.882\pm.068 {\rm (stat)}\pm
.047 {\rm (syst)}\qquad \qquad {\rm SLD}
\end{equation}
leads to a revised world average,
\begin{equation}
A_b=.867\pm .022\quad, \qquad\qquad {\rm WORLD}
\end{equation}
about $3\sigma$ below the Standard Model prediction of $A_b=.935$.
There are, however,
assumptions involved in comparing the SLD and LEP
numbers which may help resolve this
discrepancy.\cite{jh}
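The size of this discrepancy can be checked directly from the definition of $A_b$ given earlier. The sketch below evaluates the tree-level Standard Model prediction from the $b$-quark couplings $g_V^b = T_3 - 2Q\sin^2\theta_W$ and $g_A^b = T_3$; the input value $\sin^2\theta_W = 0.2315$ is an illustrative assumption, not a number quoted in the text.

```python
# Tree-level Standard Model A_b from the b-quark couplings, using the
# definition quoted in the text.  sin^2(theta_W) = 0.2315 is an
# illustrative input.

sin2theta = 0.2315
T3, Q = -0.5, -1.0 / 3.0        # b-quark isospin and charge

gV = T3 - 2 * Q * sin2theta
gA = T3
A_b_sm = 2 * gV * gA / (gV**2 + gA**2)

# Pull of the world average against the prediction
pull = (0.867 - A_b_sm) / 0.022
```

This reproduces the quoted prediction $A_b = .935$ and a deficit of roughly $3\sigma$ for the world average.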
The LEP and SLD electroweak precision measurements can also be
used to infer a preferred value for the Higgs mass
(including also the direct measurement of $M_T$ as an input),
\cite{md}
\begin{equation}
M_H=149^{+148}_{-82}~GeV
\quad . \qquad {\rm INDIRECT}
\end{equation}
This limit is driven by $R_b$ and $A_{LR}$.
Since the observables depend only logarithmically
on $M_H$, there are large errors, but it is interesting
that a relatively light value of $M_H$ seems to
be preferred.
Such a light Higgs boson mass is
predicted in supersymmetric
theories.
The electromagnetic coupling constant can also be
extracted from electroweak precision measurements,
\begin{equation}
{1\over \alpha_{EM}(M_Z^2)}=128.894\pm .090
\qquad .
\end{equation}
This leads to an error of $\delta\sin^2\theta_W=.00023$,
which is roughly the same size as the experimental error.
This emphasizes the need for a more precise
measurement of $\alpha_{EM}$.
\section{QCD and Measurements of $\alpha_s$}
At the summer meetings a year ago, it seemed that the values of
$\alpha_s(M_Z^2)$ as extracted from
lattice calculations and low energy experiments were smaller
than the values extracted from measurements at the $Z$ pole. This led to
numerous speculations of the possibilities for new physics to
cause this effect. At this meeting the CCFR collaboration presented a new
measurement of $\alpha_s(M_Z^2)$
obtained by fitting the $Q^2$ dependence of
the $\nu$ deep inelastic structure functions,
$F_2$ and $xF_3$,
\cite{pss}
\begin{equation}
\alpha_s(M_Z^2)=.119\pm.0015
{\rm (stat)}\pm.0035 {\rm (syst)}
\pm .004 {\rm (scale)}
\qquad .
\qquad {\rm CCFR}
\end{equation}
This value is higher than the
previous values of $\alpha_s(M_Z^2)$ extracted from
deep inelastic scattering experiments.
We can compare with the value extracted from the lineshape
at LEP~\cite{pl}
\begin{equation}
\alpha_s(M_Z^2)=.123\pm.004\qquad\qquad {\rm LEP}
\end{equation}
to see that there does not seem to be any systematic discrepancy
between the values of $\alpha_s(M_Z^2)$ measured at different energies.
A world average for $\alpha_s(M_Z^2)$ (not including
the new CCFR point) can be found, \cite{md}
\begin{equation}
\alpha_s(M_Z^2)=.121\pm.003\pm.002\qquad . \qquad\qquad {\rm WORLD}
\end{equation}
Most of the extracted values of $\alpha_s(M_Z^2)$ are within
$1\sigma$ of this value.\cite{pl}
The inclusive jet cross sections measured at the Tevatron
continue to show an excess of events
at high $E_T$ when compared with the theoretical
predictions.\cite{etjet} When corrections are made
for differences in the rapidity coverage, etc.,
between the detectors,
the CDF and D0 data on inclusive
jet cross sections are in agreement.\cite{ebg}
The data can be partially explained by adjusting the gluon structure
function at large $x$,\cite{hl}
although considerable theoretical work remains to
be done before this effect is completely understood.
\section{$\nu$ Puzzles}
The deficit of solar neutrinos from the Homestake mine, Kamiokande,
SAGE, and GALLEX
experiments remains a puzzle, as
it is not easily explained by adjustments to the
solar model. These results could be
interpreted in terms of oscillations.\cite{hs}
The LSND
collaboration presented positive
evidence for the oscillation ${\overline{
\nu_\mu}}\leftrightarrow {\overline{\nu_e}}$.~\cite{hk}
They now have $22$ events
with an expected background of $4.6\pm.6$. Their claim is
that the excess events are consistent with the
oscillation hypothesis.
Hopefully, an upgraded KARMEN detector will be able to clarify the
LSND results.\cite{sm}
\section{The $\tau$ lepton, $b$ and $c$ quarks}
This summary would not be complete without mentioning the
$\tau$, $b$ and $c$. Although each of these
particles was discovered some years ago, interesting new
results on lifetimes, branching ratios, and mixing
angles continue to be reported. See the reviews by
H.~Yamamoto~\cite{hy} and P.~Sphicas.\cite{ps}
\section{New Physics}
There were many talks at this meeting
devoted to searches for physics beyond the Standard Model.
They can best be summarized by stating that
there is no experimental evidence for such physics.
Many theorists' favorite candidate for
physics beyond the Standard Model is supersymmetry
and there were a large number of parallel talks with limits on the SUSY
spectrum (see the reviews by W.~Merritt\cite{wm}
and M.~Schmitt\cite{ms}).
In many cases, the limits are in the
interesting $100-200~GeV$ range and seriously
restrict models with supersymmetry at the electroweak scale.
Considerable attention has been paid to a single CDF event
with an $e^+e^-\gamma\gamma$ in the final state, along with
missing energy. This event is particularly clean and lends
itself to various supersymmetric interpretations. At this meeting,
however, the $E_T^{\rm miss}$ distribution in the $\gamma\gamma$
spectrum was presented by the
CDF collaboration and there is no additional evidence
(besides this one event) for unexplained physics
in this channel.\cite{dt}
\section{Conclusions}
The theoretical predictions and experimental data discussed
at this meeting form a coherent picture in which the
predictions of the standard $SU(3)\times
SU(2)\times U(1)$ model have been validated many, many times.
We need to remind ourselves, however, that this is not the end
of particle physics and that
there are large areas in which we continue to be almost
totally
ignorant. There remain many unanswered questions:
"How is the electroweak symmetry broken?", "Why are there three
generations of quarks and leptons?", "Why do the coupling constants
and masses
have their measured values?" ....~The list goes on and on and our
questions can only be answered by future experiments.
\section*{Acknowledgments}
I am grateful to all the speakers who so generously
shared their transparencies and knowledge with me.
This manuscript has been authored under contract number
DE-AC02-76CH00016 with the U.S. Department of Energy.
Accordingly, the U.S. Government retains a non-exclusive, royalty-free
license to publish or reproduce the published form of this contribution, or allow others to do so,
for U.S. Government purposes.
\section*{References}
\section{Introduction}
Supersymmetry provides an elegant framework
in which physics at the electroweak scale
can be decoupled from Planck scale physics.
The electroweak scale arises dynamically as the effective scale
of supersymmetry breaking in the visible sector.
The breaking of supersymmetry must be transmitted
from a breaking sector to the visible sector through
a messenger sector.
Most phenomenological studies of low energy
supersymmetry implicitly assume that messenger
sector interactions are of gravitational strength.
The intrinsic scale of supersymmetry breaking is
then necessarily of order $\sqrt{F} \sim 10^{11}$ GeV,
giving an electroweak scale of ${G_F^{-1/2}} \sim F / M_p$.
While gravitational strength interactions represent a
lower limit, it is certainly possible that the messenger
scale, $M$, is anywhere between the Planck and just above
the electroweak scale,
with supersymmetry broken at
an intermediate scale, ${G_F^{-1/2}} \sim F / M$.
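The scales quoted above follow from trivial arithmetic; the sketch below (with an illustrative Planck mass as input) verifies that $\sqrt{F} \sim 10^{11}$ GeV with gravitational-strength messengers indeed places $F/M_p$ near the electroweak scale.

```python
# Order-of-magnitude check: sqrt(F) ~ 1e11 GeV with Planck-scale
# messengers gives F / M_p near the electroweak scale.
# The Planck mass value is an illustrative input.

sqrt_F = 1e11                 # GeV, intrinsic SUSY-breaking scale
M_planck = 1.2e19             # GeV (illustrative)

ew_scale = sqrt_F**2 / M_planck   # ~ G_F^{-1/2}
```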
If the messenger scale is well below the Planck scale,
it is likely that the usual gauge interactions of the
standard model play some role in the messenger sector.
This is because standard model gauginos couple
at the renormalizable level only through gauge interactions.
If the Higgs bosons received masses
predominantly from non-gauge interactions in the messenger sector,
with only a small contribution from
gauge interactions,
the standard model gauginos would be unacceptably
light compared with the electroweak scale.\footnote{
The argument for light gauginos in the absence of
standard model gauge interactions within a messenger
sector well below the Planck scale
only applies if the gauginos
are elementary degrees of freedom.
If the standard model gauge group is magnetic at or below
the messenger scale the gauginos could in principle
receive a large mass from operators suppressed by the
confining magnetic scale.}
It is therefore interesting to consider theories in which
the standard model gauge interactions act as messengers of supersymmetry
breaking \cite{lsgauge,hsgauge,dnmodels}.
This mechanism occurs if supersymmetry is realized non-linearly
in some sector which transforms under the standard model
gauge group.
Supersymmetry breaking in the visible sector spectrum
then arises as a radiative correction.
In this paper we consider the superpartner spectroscopy and
important phenomenological signatures that result
from gauge-mediated supersymmetry breaking.
Since within this ansatz
the gauge interactions transmit supersymmetry breaking,
the standard model soft masses arise in proportion to
gauge charges squared.
This leads to a sizeable hierarchy among the superpartner masses
according to gauge quantum numbers.
In addition, for a large class of models, there are a number
of relations and sum rules among the superpartner masses.
Electroweak symmetry breaking is driven by negative radiative
corrections to the up-type Higgs mass squared from the large top quark
Yukawa coupling and large stop masses
\cite{dnmodels}.
With the constraint of electroweak symmetry breaking
the minimal model of gauge-mediation is highly constrained
and very predictive,
with the superpartner spectrum depending primarily
on two parameters -- the overall scale and $\tan \beta$.
In addition, there is a logarithmic dependence
of various mass relations on the messenger scale.
The precise form of the low lying superpartner spectrum
determines the signatures that can be observed at a high
energy collider.
With gauge-mediated supersymmetry breaking, either
a neutralino or slepton is the lightest standard model
superpartner.
The signature for supersymmetry is then either the traditional
missing energy or heavy charged particle production.
In a large class of models the general form of the cascade decays
to the lightest standard model superpartner is largely fixed
by the ansatz of gauge-mediation.
In addition,
for a low enough supersymmetry breaking scale, the lightest
standard model superpartner can decay to its partner plus
the Goldstino component of the gravitino within the
detector.
In the next subsection the natural lack of flavor
changing neutral currents with gauge-mediated supersymmetry
breaking is discussed.
The minimal model of gauge-mediated supersymmetry breaking
(MGM) and its variations are presented in section 2.
A renormalization group analysis of the minimal model
is performed in section 3,
with the constraint of proper radiative electroweak symmetry
breaking enforced.
Details of the resulting superpartner and Higgs boson spectra
are discussed.
Mass relations and sum rules are identified which can distinguish
gauge mediation from other theories for the soft terms.
Some mass relations allow a logarithmically sensitive probe of
the messenger scale.
In section 4 variations of the minimal model are studied.
With larger messenger sector representations
the lightest standard model superpartner is naturally a slepton.
Alternatively, additional sources for Higgs sector masses can lead
in some instances to a Higgsino as the lightest standard model
superpartner.
The phenomenological consequences of gauge-mediated supersymmetry
breaking are given in section 5.
The supersymmetric contribution to
${\rm Br}(b \rightarrow s \gamma)$ in the minimal model, and resulting
bound on the overall scale for the superpartners, are quantified.
The collider signatures for superpartner production in both
the minimal model, and models with larger messenger sector
representations, are also detailed.
In the latter case, the striking signature of heavy charged
particles exiting the detector can result, rather than
the traditional missing energy.
The signatures resulting from decay of the lightest standard
model superpartner to its partner plus the Goldstino are
also reviewed.
In section 6 we conclude with a few summary remarks
and a comment about tuning.
The general expression for scalar and gaugino masses in a large
class of models is given in appendix A.
A non-minimal model is presented in appendix B which
demonstrates an approximate $U(1)_R$ symmetry, and has
exotic scalar and gaugino mass relations, even though
it may be embedded in a GUT theory.
Finally, in appendix C the couplings of the Goldstino
component of the gravitino
are reviewed.
In addition to the general expressions for the decay
rate of the lightest standard model superpartner
to its partner plus the Goldstino,
the severe suppression of the branching ratio to Higgs boson final
states in the minimal model is quantified.
\subsection{Ultraviolet Insensitivity}
\label{UVinsensitive}
Low energy supersymmetry removes power law sensitivity
to ultraviolet physics.
In four dimensions with ${\cal N}=1$ supersymmetry the parameters
of the low energy theory are, however, renormalized.
Infrared physics can therefore be logarithmically sensitive
to effects in the ultraviolet.
The best example of this is the value of the weak
mixing angle at the electroweak scale in supersymmetric grand unified
theories \cite{dimgeo}.
Soft supersymmetry breaking terms in the low energy theory
also evolve logarithmically with scale.
The soft terms therefore remain ``hard'' up to the
messenger scale at which they are generated.
If the messenger sector interactions are of gravitational strength,
the soft terms are sensitive to ultraviolet physics all the
way to the Planck or compactification scale.
In this case patterns within the soft terms might give an
indirect window to the Planck scale \cite{peskin}.
However, the soft terms are then also sensitive
to flavor violation at all scales.
Flavor violation at any scale can then in principle
lead to unacceptable flavor violation at the electroweak
scale \cite{hcr}.
This is usually avoided by postulating precise relations
among squark masses at the high scale, such as
universality \cite{dimgeo} or proportionality.
Such relations, however, do not follow from any unbroken
symmetry, and are
violated by Yukawa interactions.
As a result the relations only hold at a single scale.
They are ``detuned'' under renormalization
group evolution, and can be badly violated in extensions
of the minimal supersymmetric standard model (MSSM).
For example, in grand unified theories, large flavor violations
can be induced by running between the Planck and
GUT scales \cite{hcr}.
Elaborate flavor symmetries may be imposed to limit
flavor violations with a Planck scale messenger
sector \cite{flavorsym}.
Sensitivity to the far ultraviolet is removed in theories
with a messenger scale well below the Planck scale.
In this case the soft terms are ``soft'' above the
messenger scale.
Relations among the soft parameters are then not
``detuned'' by ultraviolet physics.
In particular there can exist a sector which
is responsible for the flavor structure of the Yukawa matrix.
This can arise from a hierarchy
of dynamically generated scales \cite{flavordyn}, or
from flavor symmetries, spontaneously broken
by a hierarchy of expectation values \cite{Yukvev}.
If the messenger sector for supersymmetry breaking is well
below the scale at which the Yukawa hierarchies are generated,
the soft terms can be insensitive to the flavor sector.
Naturally small flavor violation can result without
specially designed symmetries.
Gauge-mediated supersymmetry breaking gives an elegant
realization of a messenger sector below the Planck scale
with ``soft'' soft terms and very small flavor violation.
The direct flavor violation induced
in the squark mass matrix at the messenger scale
is GIM suppressed compared with
flavor conserving squark masses
by ${\cal O}(m_f^2/M^2)$, where $m_f$ is a fermion mass.
The largest flavor violation is generated by renormalization
group evolution between the messenger and electroweak
scales.
This experiences a GIM suppression of
${\cal O}(m_f^2/\tilde{m}^2)\ln(M/\tilde{m})$,
where $\tilde{m}$ is a squark mass, and is
well below current experimental bounds.
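The size of this GIM suppression is easy to estimate numerically; in the sketch below all mass inputs are illustrative choices, not values from the text.

```python
# Rough size of the radiatively generated flavor violation,
# O(m_f^2 / m_sq^2) * ln(M / m_sq).  All mass inputs are illustrative.
from math import log

m_f  = 4.5      # GeV, b-quark mass
m_sq = 500.0    # GeV, squark mass
M    = 1e5      # GeV, messenger scale

suppression = (m_f / m_sq)**2 * log(M / m_sq)
```

For these inputs the suppression factor is a few times $10^{-4}$, illustrating why the induced flavor violation is safely small.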
If the messenger sector fields transforming under the
standard model gauge group have the same quantum numbers
as visible sector fields, flavor violating
mixing can take place through Yukawa couplings.
This generally leads to large flavor violating soft
terms \cite{Yukflavor}.
These dangerous mixings are easily avoided by discrete
symmetries.
For example, in the minimal model discussed in the next section,
if the messenger fields are even under $R$-parity,
no mixing occurs.
In more realistic models in which the messenger fields
are embedded directly in the supersymmetry breaking
sector, messenger sector gauge symmetries responsible for
supersymmetry breaking forbid flavor violating mixings.
The natural lack of flavor violation is a significant advantage
of gauge-mediated supersymmetry breaking.
\section{The Minimal Model of Gauge-Mediated Supersymmetry Breaking}
The standard model gauge interactions act as messengers of
supersymmetry breaking if fields within the supersymmetry
breaking sector transform under the standard model gauge group.
Integrating out the messenger sector fields gives rise to
radiatively generated soft terms within the visible sector,
as discussed below.
The messenger fields should fall into vector representations
at the messenger scale
in order to obtain a mass well above the electroweak scale.
In order not to disturb
the successful prediction of gauge coupling
unification within the MSSM \cite{dimgeo} at lowest order,
it is sufficient (although not necessary \cite{martin})
that the messenger sector fields transform as complete
multiplets of any grand unified gauge group which contains the
standard model.
If the messenger fields remain elementary degrees of freedom
up to the unification scale, the further requirement of
perturbative unification may be imposed on the
messenger sector.
For supersymmetry breaking at a low scale,
these constraints allow up to four flavors of
${\bf 5} + \overline{\bf 5}$ of $SU(5)$,
a single ${\bf 10} + \overline{\bf 10}$ of $SU(5)$, or
a single ${\bf 16} + \overline{\bf 16}$ of $SO(10)$.
With the assumptions outlined above, these are the
discrete choices for the standard model representations in the
messenger sector.
In the following subsection the minimal model of gauge-mediated
supersymmetry breaking is defined.
In the subsequent subsection variations of the
minimal model are introduced.
\subsection{The Minimal Model}
\label{minimalsection}
\sfig{gauginoloop}{fig1.eps}
{One-loop messenger sector supergraph which gives rise to visible sector
gaugino masses.}
The minimal model of gauge-mediated supersymmetry breaking
(which preserves the successful predictions of perturbative
gauge unification) consists of messenger fields which transform
as a single flavor of ${\bf 5} + \overline{\bf 5}$ of $SU(5)$,
i.e. there are $SU(2)_L$ doublets
$\ell$ and $\bar{\ell}$, and $SU(3)_C$ triplets $q$ and $\bar{q}$.
In order to introduce supersymmetry breaking into the messenger
sector, these fields may be coupled to a gauge
singlet spurion, $S$, through the superpotential
\begin{equation}
W = \lambda_2 S \ell \bar{\ell} + \lambda_3 S q \bar{q}
\label{SQQbar}
\end{equation}
The scalar expectation value of $S$ sets the overall scale
for the messenger sector, and the auxiliary component, $F$,
sets the supersymmetry breaking scale.
For $F \neq 0$ the messenger spectrum is not supersymmetric,
\begin{eqnarray}
m_b &=& M \sqrt{ 1 \pm {\Lambda \over M} } \nonumber \\
m_f &=& M
\end{eqnarray}
where $M = \lambda S$ and $\Lambda = F/S$.
The parameter $\Lambda /M$ sets the scale for the fractional
splitting between bosons and fermions.
Avoiding electroweak and color breaking in the messenger
sector requires $M > \Lambda$.
In the models of Ref. \cite{dnmodels} the field $S$ is an
elementary singlet which couples through a secondary
messenger sector to the supersymmetry breaking sector.
In more realistic models the messenger fields are
embedded directly in the supersymmetry breaking sector.
This may be accomplished within a model of dynamical supersymmetry
breaking by identifying an unbroken global symmetry with the
standard model gauge group.
In the present context,
the field $S$ should be thought of as a spurion which
represents the dynamics which break supersymmetry.
The physics discussed in this paper does not depend on the
details of the dynamics represented by the spurion.
Because (\ref{SQQbar}) amounts to tree level breaking, the
messenger spectrum satisfies the sum rule
${\cal S}Tr~m^2 = 0$.
With a dynamical supersymmetry breaking sector, this sum rule
need not be satisfied.
The precise value of ${\cal S}Tr~m^2$ in the messenger sector, however,
does not significantly affect the radiatively generated
visible sector soft parameters discussed below.
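The tree-level sum rule quoted above follows immediately from the messenger spectrum given earlier: the two complex scalars sit at $m_\pm^2 = M^2 \pm \Lambda M$ against a Dirac fermion at $M$. The sketch below checks this for one messenger flavor, with illustrative input values.

```python
# Tree-level supertrace check for one messenger flavor:
# complex scalars at m_+^2, m_-^2 and a Dirac fermion of mass M.
# Numerical inputs are illustrative.

M = 1.0e5        # GeV, messenger scale
Lam = 0.5e5      # GeV, Lambda = F/S  (must satisfy M > Lambda)

m_plus_sq = M**2 * (1 + Lam / M)
m_minus_sq = M**2 * (1 - Lam / M)

# STr m^2 = bosonic minus fermionic degrees of freedom:
# two real dof at each scalar mass, four at the fermion mass.
str_m2 = 2 * m_plus_sq + 2 * m_minus_sq - 4 * M**2
```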
\sfig{scalarloop}{fig2.eps}
{Two-loop messenger sector supergraph which gives rise to visible sector
scalar masses. The one-loop subgraph gives rise to visible sector
gaugino wave function renormalization. Other graphs related by
gauge invariance are not shown.}
Integrating out the non-supersymmetric messengers gives rise
to effective operators, which lead to supersymmetry breaking
in the visible sector.
Gaugino masses arise at one-loop from the operator
\begin{equation}
\int d^2\theta~\ln S\, W^\alpha W_\alpha ~+~ h.c.
\label{SSWDV}
\end{equation}
as shown in Fig.~\ref{gauginoloop}.
In superspace this operator amounts to a shift of the
gauge couplings in the presence of the background spurion.
Inserting a single spurion auxiliary component gives a
gaugino mass.
For $F \ll \lambda S^2$ the gaugino masses
are \cite{dnmodels}
\begin{equation}
m_{\lambda_i}(M) = \frac{\alpha_i(M)}{4\pi}~ \Lambda
\label{gauginomass}
\end{equation}
where $\Lambda = F /S$ and
GUT normalized gauge couplings are assumed
($\alpha_1 = \alpha_2 = \alpha_3$
at the unification scale).\footnote{The standard model normalization
of hypercharge is related to the GUT normalization
by $\alpha^{\prime} = (3/5) \alpha_1$.}
The dominant loop momenta in Fig.~\ref{gauginoloop} are ${\cal O}(M)$, so
(\ref{gauginomass})
amounts to a boundary condition for
the gaugino masses at the messenger scale.
Visible sector scalar masses arise at two-loops from the
operator
\begin{equation}
\int d^4 \theta~ \ln(S^{\dagger} S) ~ \Phi^{\dagger} e^V \Phi
\label{SSPP}
\end{equation}
as shown in Fig. \ref{scalarloop}.
In superspace this operator represents wave function
renormalization from the background spurion.
Inserting two powers of the auxiliary component of the
spurion gives a scalar mass squared.
For $F \ll \lambda S^2$ the scalar masses are \cite{dnmodels}
\begin{equation}
m^2(M) = 2 \Lambda^2~ \sum_{i=1}^3 ~ k_i
\left( \alpha_i(M) \over 4 \pi \right)^2
\label{scalarmass}
\end{equation}
where the sum is over $SU(3) \times SU(2)_L \times U(1)_Y$, with
$k_1 = (3/5) (Y/2)^2$ where the hypercharge is
normalized as $Q = T_3 + {1 \over 2} Y$,
$k_2 = 3/4$ for $SU(2)_L$ doublets and zero for singlets,
and $k_3 = 4/3$ for $SU(3)_C$ triplets and zero for singlets.
Again, the dominant loop momenta in Fig. \ref{scalarloop}
are ${\cal O}(M)$, so
(\ref{scalarmass})
amounts to a boundary condition for
the scalar masses at the messenger scale.
It is interesting to note that for $F \ll \lambda S^2$ the soft masses
(\ref{gauginomass}) and (\ref{scalarmass}) are independent
of the magnitude of the Yukawa couplings (\ref{SQQbar}).
This is because the one-loop graph of Fig. \ref{gauginoloop}
has an infrared divergence, $k^{-2}$, which is cut off by the
messenger mass $M=\lambda S$, thereby cancelling the $\lambda F$ dependence
in the numerator.
The one-loop subgraph of Fig.~\ref{scalarloop}
has a similar infrared divergence which cancels the
$\lambda$ dependence.
For finite $F/ (\lambda S^2)$ the corrections to (\ref{gauginomass})
and (\ref{scalarmass})
are small unless $M$ is very close to $\Lambda$
\cite{martin,dgp,yuritloop}.
Since the gaugino masses arise at one-loop and scalar masses
squared at two-loops, superpartner masses are generally of
the same order for particles with similar gauge charges.
If the messenger scale is well below the GUT scale,
then $\alpha_3 \gg \alpha_2 > \alpha_1$, so the squarks and gluino
receive mass predominantly from $SU(3)_C$ interactions,
the left handed sleptons and $W$-ino from $SU(2)_L$ interactions,
and the right handed sleptons and $B$-ino from $U(1)_Y$ interactions.
The gaugino and scalar masses are then related at the messenger
scale by $m_3^2 \simeq {3 \over 8} m_{\tilde{q}}^2$,
$m_2^2 \simeq {2 \over 3} m_{\lL}^2$, and
$m_1^2 = {5 \over 6} m_{\lR}^2$.
This also leads to a hierarchy in mass between electroweak and
strongly interacting states.
The gaugino masses at the messenger scale are in the ratios
$m_1 : m_2 : m_3 = \alpha_1 : \alpha_2 : \alpha_3$,
while the scalar masses squared are in the approximate ratios
$m_{\tilde{q}}^2 : m_{\lL}^2 : m_{\lR}^2 \simeq {4 \over 3} \alpha_3^2 :
{3 \over 4} \alpha_2^2 : {3 \over 5} \alpha_1^2$.
The masses of particles with different gauge charges are tightly
correlated in the minimal model.
These correlations are reflected in the constraints of electroweak
symmetry breaking on the low energy spectrum, as
discussed in section \ref{RGEanalysis}.
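These boundary conditions and mass relations can be made concrete with a short numerical sketch. The coupling values below are illustrative inputs (roughly appropriate to a messenger scale of order $100$ TeV), not numbers from the text; the hypercharge assignments follow the normalization $Q = T_3 + {1\over 2}Y$ used above.

```python
# Messenger-scale boundary conditions (gaugino and scalar masses) for
# the minimal model, with illustrative GUT-normalized couplings.
from math import pi

Lam = 100e3                              # GeV, overall scale F/S
alpha = {1: 0.017, 2: 0.033, 3: 0.07}    # illustrative alpha_i(M)

def gaugino_mass(i):
    return alpha[i] / (4 * pi) * Lam

def scalar_mass_sq(k):
    """k maps group index i -> coefficient k_i for the representation."""
    return 2 * Lam**2 * sum(
        ki * (alpha[i] / (4 * pi))**2 for i, ki in k.items())

# (Y/2)^2 with Q = T3 + Y/2: squark doublet Y=1/3, lepton doublet Y=-1,
# right-handed selectron Y=-2
m_sq_q  = scalar_mass_sq({3: 4/3, 2: 3/4, 1: (3/5) * (1/6)**2})
m_sq_lL = scalar_mass_sq({2: 3/4, 1: (3/5) * (1/2)**2})
m_sq_lR = scalar_mass_sq({1: (3/5) * 1.0})

# exact relation:       m_1^2 = (5/6) m_lR^2
ratio_exact = gaugino_mass(1)**2 / m_sq_lR
# approximate relation: m_3^2 ~ (3/8) m_q^2 (alpha_3 dominance)
ratio_approx = gaugino_mass(3)**2 / m_sq_q
```

The $m_1^2 = {5\over 6} m_{\lR}^2$ relation holds exactly (only $U(1)_Y$ contributes), while the $SU(3)_C$ relation is only approximate because the subdominant $SU(2)_L$ and $U(1)_Y$ contributions to the squark mass shift the ratio below $3/8$.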
The parameter $(\alpha / 4 \pi) \Lambda$ sets the scale for
the soft masses.
This should be of order the weak scale, implying
$\Lambda \sim {\cal O}(100\tev)$.
The messenger scale $M$ is, however, arbitrary in the minimal model,
subject to $M > \Lambda$.
In models in which
the messenger sector is embedded directly in a renormalizable
dynamical supersymmetry breaking sector \cite{SUquantum},
the messenger and
effective supersymmetry breaking scales are
of the same order, $M \sim \Lambda \sim {\cal O}(100\tev)$,
up to small hierarchies from messenger sector Yukawa couplings.
This is also true of models
with a secondary messenger sector \cite{dnmodels}.
The messenger scale can, however, be well separated from the
supersymmetry breaking scale.
This can arise in models with large ratios of dynamical scales.
Alternatively, with non-renormalizable supersymmetry breaking,
which vanishes in the flat space limit,
expectation values intermediate between the Planck and
supersymmetry breaking
scales can develop, leading to $M \gg \Lambda$.
A noteworthy feature of the minimal messenger sector
is that it is invariant under charge conjugation and parity,
up to electroweak radiative corrections.
This has the important effect of enforcing the vanishing
of the $U(1)_Y$ Fayet-Iliopoulos $D$-term
at all orders in interactions that involve gauge interactions
and messenger fields only.
This is crucial since a non-zero $U(1)_Y$ $D$-term at one-loop
would induce soft scalar masses much larger
in magnitude than the
two-loop contributions (\ref{scalarmass}),
and lead to $SU(3)_C$ and $U(1)_Q$ breaking.
This vanishing is unfortunately not an automatic feature of
models in which the messenger fields also transform
under a chiral representation of the gauge group responsible for
breaking supersymmetry.
In the minimal model a $U(1)_Y$ $D$-term is generated only
by gauge couplings to chiral standard model fields at three loops.
The leading log contribution comes from renormalization group
evolution and is discussed in section~\ref{electroweaksection}.
The dimensionful parameters within the Higgs sector
\begin{equation}
W = \mu H_u H_d
\end{equation}
and
\begin{equation}
V = m_{12}^2 H_u H_d ~+~ h.c.
\end{equation}
do not follow from the ansatz of gauge-mediated supersymmetry
breaking.
These terms require additional interactions which violate
$U(1)_{PQ}$ and $U(1)_{R-PQ}$ symmetries.
A number of models have been proposed to generate these
terms, including
additional messenger quarks and
singlets \cite{dnmodels}, singlets with an
inverted hierarchy \cite{dnmodels}, and singlets with
an approximate global symmetry \cite{dgphiggs}.
In the minimal model the mechanisms for generating the
parameters $\mu$ and $m_{12}^2$ are not specified, and
they are taken as free parameters at the messenger scale.
As discussed below, upon imposing electroweak symmetry breaking,
these parameters may be eliminated in favor of
$\tan \beta = v_u / v_d$ and $m_{Z^0}$.
\sfig{Aloop}{fig3.eps}
{One-loop visible sector supergraph which contains both
logarithmic and finite contributions to visible sector
$A$-terms. The cross on the visible sector gaugino line
represents the gaugino mass insertion shown in Fig. 1.}
Soft tri-linear $A$-terms require interactions which
violate both $U(1)_R$ and visible sector chiral
flavor symmetries.
Since the messenger sector does not violate visible sector
flavor symmetries, $A$-terms are not generated at one-loop.
However, two-loop contributions involving a visible sector
gaugino supermultiplet do give rise to
operators of the form
\begin{equation}
\int d^4 \theta~ \ln S ~ {D^2 \over \vbox{\drawbox{6.0}{0.4}}} Q H_u \bar{u}
~+~h.c.
\label{SSQHu}
\end{equation}
as shown in Fig. \ref{Aloop},
and similarly for down-type quarks and leptons.
This operator is related by an integration by parts in superspace
to a correction to
the superpotential in the presence of the background spurion.
Inserting a single auxiliary component of the spurion
(equivalent to a visible sector gaugino mass in the one-loop subgraph)
gives a tri-linear $A$-term.
The momenta in the effective one-loop graph shown in Fig. \ref{Aloop}
are equally weighted in logarithmic intervals between
the gaugino mass and messenger scale.
Over these scales the gaugino mass is ``hard.''
The $A$-terms therefore effectively
vanish at the messenger scale, and
are generated
from renormalization group evolution below
the messenger scale, and finite contributions at
the electroweak scale.
At the low scale the $A$-terms have magnitude
$A \sim (\alpha / 4 \pi) m_{\lambda}\ln(M/m_{\lambda})$.
Note that $A$ is small compared with the scale
of the other soft terms,
unless the logarithm is large.
As discussed in section \ref{sspectrum} the $A$-terms
do not have a qualitative effect on the superpartner
spectrum unless the messenger scale is large.
The general MSSM has a large number of $CP$-violating phases
beyond those of the standard model.
Since here the soft masses are flavor symmetric (up to very small
GIM suppressed corrections discussed in section \ref{UVinsensitive}),
only flavor symmetric phases are relevant.
A background charge analysis \cite{dtphases} fixes the
basis independent combination of flavor symmetric phases to be
${\rm Arg}(m_{\lambda} \mu (m_{12}^2)^*)$ and
${\rm Arg}(A^* m_{\lambda})$.
Since the $A$-terms vanish at the messenger scale only the first
of these can arise in the soft terms.
In the models of \cite{dnmodels} the auxiliary component of a
single field is the source for
all soft terms, giving a correlation among the phases such
that ${\rm Arg}(m_{\lambda} \mu (m_{12}^2)^*) = 0$ mod $\pi$.
In the minimal model, however, the mechanism for generating
$\mu$ and $m_{12}^2$ is not specified, and the phase is
arbitrary.
Below the messenger scale the particle content of the
minimal model is just that of the MSSM, along with the
gravitino discussed in section \ref{collidersection}
and appendix \ref{appgoldstino}.
At the messenger scale the boundary conditions for the
visible sector
soft terms are given by (\ref{gauginomass}) and
(\ref{scalarmass}), $\mu$, $m_{12}^2$, and $A=0$.
It is important to note that from the low energy
point of view the minimal model is just a set of
boundary conditions specified at the messenger scale.
These boundary conditions may be traded for the electroweak
scale parameters
\begin{equation}
( ~ \tan \beta~,~ \Lambda=F/S~,~{\rm Arg}~\mu~,~\ln M~)
\label{minpar}
\end{equation}
The most important of these is $\Lambda$ which sets
the overall scale for the superpartner spectrum.
Since all the soft masses are related in the minimal model,
$\Lambda$ may be traded for any of these, such
as $m_{\tilde{B}}(M)$.
It may also be traded for a physical mass, such as
$m_{\na}$ or $m_{\lL}$.
In addition, as discussed in section \ref{EWSB}
$\tan \beta$ ``determines'' $m_{12}$ and $\mu$
in the low energy theory, and can have important effects
on the superpartner spectrum.
\subsection{Variations of the Minimal Model}
\label{subvariations}
The minimal model represents a highly constrained and
very predictive theory for the soft supersymmetry
breaking terms.
It is therefore interesting to consider how
the qualitative features discussed in the
remainder of the paper change under deformations
away from the minimal model.
The most straightforward generalization is to
the other messenger sector representations which are consistent
with perturbative unification discussed at the
beginning of this section.
The expressions for gaugino and scalar masses for a
general messenger sector are given in appendix \ref{appgeneral}.
The gaugino masses grow like the quadratic index of
the messenger sector matter, while the scalar masses
grow like the square root of the quadratic index.
Models with larger messenger sector representations generally
have gauginos which are heavier, relative to the
scalars, than in the minimal model.
This can have important consequences for the standard model
superpartner spectrum.
In particular, a scalar lepton can be the lightest
standard model superpartner (as opposed to the lightest
neutralino for the minimal model).
This is the case for a range of parameters with two
messenger generations of ${\bf 5} + \overline{\bf 5}$ of
$SU(5)$, as discussed in section
\ref{multiple}.
Another generalization is to introduce multiple spurions
with general scalar and auxiliary
components, and general Yukawa couplings.
Unlike the case with a single spurion,
the scalar and fermion mass matrices in
the messenger sector are in general not aligned.
Such a situation generally arises if the messengers receive
masses from a sector not associated with supersymmetry
breaking.
This can occur in dynamical models with chiral messenger
representations which confine to, or are dual to,
a low energy theory with vector representations.
The messengers can gain a mass at the
confinement or duality scale, with
supersymmetry broken at a much lower scale \cite{itdual}.
With multiple spurions, the infrared cancellations of the
messenger Yukawa couplings, described in the previous
subsection for the minimal model, no longer hold.
This has the effect of removing the tight correlations
in the minimal model
between masses of superpartners with different gauge charges.
As an example, a model is presented in appendix \ref{appnonmin}
with two generations of ${\bf 5} + \overline{\bf 5}$ of
$SU(5)$, and two singlet fields.
One of the singlets is responsible for masses in the
messenger sector, while the other breaks supersymmetry.
Even though the model can be embedded in a GUT theory,
it yields a non-minimal spectrum.
Soft scalar masses require supersymmetry breaking, while
gaugino masses require, in addition, breaking of $U(1)_R$
symmetry.
If $U(1)_R$ symmetry is broken at a lower scale
than supersymmetry, the gauginos can be lighter
than in the minimal model.
This may be represented in the low energy
theory by a parameter $\not \! \! R$ which is the ratio of
a gaugino to scalar mass at the messenger
scale, relative to that in the minimal model.
The general definition of $\not \! \! R$ in theories with
a single spurion is given in appendix \ref{appgeneral}.
With multiple spurions $\not \! \! R < 1$ is generally obtained
since the messenger scalar and fermion mass matrices do not
necessarily align.
Since the gauginos have little influence on electroweak
symmetry breaking, as discussed in section \ref{appUR},
the main effect of $\not \! \! R < 1$ is simply to make
the gauginos lighter than in the minimal model.
The non-minimal model given in appendix \ref{appnonmin}
has, in one limit,
an approximate $U(1)_R$ symmetry and light gauginos.
Additional interactions in the messenger sector
are required in order to generate $\mu$ and $m_{12}^2$.
It is likely that these interactions also contribute
to the Higgs boson soft masses.\footnote{We thank Gian Giudice
for this important observation.}
Therefore, even though Higgs bosons and lepton doublets
have the same electroweak quantum numbers, their soft
masses at the messenger scale can be different.
In the low energy theory this may be parameterized by
the quantities
$$
\Delta_{+}^2 \equiv
m_{H_d}^2 + m_{H_u}^2 - 2 m_{l_L}^2
$$
\begin{equation}
\Delta_{-}^2 \equiv
m_{H_d}^2 - m_{H_u}^2
\label{splithiggs}
\end{equation}
where $m_{\lL}^2$ is the gauge-mediated left handed
slepton mass,
and all masses are evaluated at the messenger scale.
In the minimal model $\Delta_{\pm}^2=0$.
Since the Higgs soft masses affect electroweak symmetry
breaking, these splittings can potentially
have significant effects on the superpartner
spectrum.
However, as discussed in section \ref{addhiggs}, unless the non-minimal
contributions are very large, the general form of the
spectrum is largely unaltered.
Finally, the $U(1)_Y$ $D$-term, $D_Y$, can have a non-zero
expectation value at the messenger scale.
This leads to a shift of the soft scalar masses at the messenger
scale proportional to hypercharge, $Y$,
\begin{equation}
\delta m^2 (M) = g' Y ~ D_Y(M)
\label{Dmass}
\end{equation}
where $g' = \sqrt{3/5} g_1$ is the $U(1)_Y$ coupling.
Note that for the Higgs bosons, this yields a shift
$\Delta_-^2 = -2 g' D_Y(M)$.
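The shift (\ref{Dmass}) and the induced Higgs splitting can be illustrated with a minimal sketch. The hypercharge convention $Y(H_u) = +1$, $Y(H_d) = -1$ (as implied by $\Delta_-^2 = -2 g' D_Y$) and the numerical values of $g'$ and $D_Y$ below are assumptions for illustration only:

```python
# Hypercharge D-term shift of a soft scalar mass squared, Eq. (Dmass):
#   delta m^2(M) = g' * Y * D_Y(M)
# Convention assumed here: Y(H_u) = +1, Y(H_d) = -1.

def dterm_shift(Y, g_prime, D_Y):
    return g_prime * Y * D_Y

g_prime, D_Y = 0.36, 1.0e4   # illustrative placeholder values (D_Y in GeV^2)

# Delta_-^2 = m_Hd^2 - m_Hu^2 shift induced by the D-term:
delta_minus_sq = dterm_shift(-1, g_prime, D_Y) - dterm_shift(+1, g_prime, D_Y)
print(delta_minus_sq)   # equals -2 g' D_Y
```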
As discussed in the previous subsection, $D_Y(M)=0$
in the minimal model as the result of a parity symmetry in
the messenger sector.
In non-minimal models $D_Y(M)$ need not vanish.
In general, an unsuppressed $U(1)_Y$ $D$-term generated at
one-loop in the messenger sector destabilizes the electroweak
scale, and leads to $SU(3)_C$ and $U(1)_Q$ breaking.
However, it is possible for $D_Y(M)$ to be generated
at a level which gives rise to soft masses of the same order
as the gauge mediated masses.
For example, the one-loop contribution to $D_Y(M)$ is
suppressed if there is an approximate parity symmetry in
the messenger sector, and further suppressed if the messenger
couplings are unified at the GUT scale \cite{appparity}.
As another example, if the messenger sector transforms under
an additional $U(1)_{\tilde{Y}}$, gauge kinetic mixing between $U(1)_Y$ and
$U(1)_{\tilde{Y}}$ couples $D_Y$ and $D_{\tilde{Y}}$ \cite{kineticmix}.
Renormalization group evolution gives a one-loop
mixing contribution
$D_Y(M) \simeq {\cal O}((g' \tilde{g} / 16 \pi^2) \ln(M_c / M)
D_{\tilde{Y}})$ where $M_c$ is the GUT or compactification scale.
If the messengers in this case were embedded directly in a
renormalizable dynamical supersymmetry breaking sector,
$\tilde{g} D_{\tilde{Y}} \sim (\lambda^2 / \tilde{g}^2) F$,
where $\lambda$ is a Yukawa coupling in the dynamical sector.
For $\lambda \ll \tilde{g}$ the one-loop contribution to
$D_Y(M)$ is then suppressed.
With a non-renormalizable model $D_{\tilde{Y}}$ is suppressed by
powers of the dynamical scale over the Planck scale.
Given the above variations,
we consider the following parameters, in addition
to those of the minimal model (\ref{minpar})
\begin{equation}
(~N~,~\not \! \! R~ , ~\Delta^2_+ ~ , ~\Delta^2_-~, ~D_Y~)
\end{equation}
where $N$ is the number of generations of
${\bf 5} + \overline{\bf 5}$ in the messenger sector.
In the remainder of the paper we consider the minimal
case, $N=1$, $\not \! \! R=1$, $\Delta_{\pm}^2=0$,
and $D_Y=0$, and
discuss in detail the effects on the superpartner spectrum
for $N=2$, $\not \! \! R \neq 1$, $\Delta_-^2 \neq 0$, and
$D_Y \neq 0$ in
section \ref{varcon}.
\section{Renormalization Group Analysis}
\label{RGEanalysis}
The general features of the superpartner spectrum
and resulting phenomenology are
determined by the boundary conditions at the messenger scale.
Renormalization group evolution of the parameters
between the messenger and electroweak scales
can have a number of important effects.
Electroweak symmetry breaking results from the negative
evolution of the up-type Higgs mass squared from the large
top quark Yukawa coupling.
Imposing electroweak symmetry breaking gives
relations among the Higgs sector mass parameters.
Details of the sparticle spectrum can also be affected by
renormalization.
In particular, changes in the
splittings among the light states can have an important impact
on several collider signatures, as discussed in section
\ref{collidersection}.
In addition, general features of the spectrum, such as the
splitting between squarks and left handed sleptons, can
be logarithmically sensitive to the messenger scale.
In this section the effects of renormalization group evolution
on electroweak symmetry breaking and the superpartner spectrum in
the minimal model are presented.
Gauge couplings, gaugino masses, and third generation
Yukawa couplings are evolved at two-loops.
Scalar masses, tri-linear $A$-terms, the $\mu$ parameter,
and $m_{12}^2$ are evolved at one-loop.
For the scalar masses, the $D$-term contributions to the
$\beta$-functions
are included (these are sometimes neglected in the literature).
Unless stated otherwise, the top quark pole mass is
taken to be $m_t^{\rm pole} = 175$ GeV.
The renormalization group analysis is similar to that of
Ref. \cite{CMSSM}, being modified for the boundary conditions and
arbitrary messenger scale of the minimal model of gauge-mediated
supersymmetry breaking.
As discussed in the previous section,
the boundary conditions at the high scale are
$m_{\lambda_i}, m^2_i, |\mu|, m_{12}^2$, and
${\rm sgn}~\mu$.
The precise scale at which a
soft term is defined depends on the
messenger field masses.
In the minimal model the masses of the messenger doublets and
triplets are determined by the messenger Yukawa couplings,
$M_{2,3} = \lambda_{2,3} S$, which can in principle differ.
Even if $\lambda_2 = \lambda_3$ at the GUT scale these couplings
are split under renormalization group evolution \cite{hitoshi}.
In a full model, the supersymmetry breaking dynamics
also in part determine the values of $\lambda_2$ and $\lambda_3$
through renormalization group evolution.
Since we are interested primarily in effects which follow from the
ansatz of gauge-mediated supersymmetry breaking, and not
on details of the mechanism of supersymmetry breaking,
we will neglect any splitting between doublet and triplet masses.
This would in fact be the case, up to small gauge coupling
corrections, if the messengers are embedded
in a supersymmetry breaking sector near a strongly coupled
fixed point.
All soft terms are therefore assumed to be specified
at a single messenger scale, $M$.
The range of allowed $M$ is taken to be $\Lambda \leq M \leq M_{GUT}$.
In the minimal model, $M$ precisely equal to
$\Lambda$ is unrealistic since some
messenger sector scalars become massless at this point.
In what follows
the limit $M = \Lambda$
is not to be taken literally in the minimal model, but
should be thought of as indicative
of a realistic model with a single dynamical scale, and
no small parameters \cite{SUquantum}.
In many of the specific examples given in the following subsections
we take $\Lambda = 76, 118, 163$ TeV, with $\Lambda=M$, which
gives a $B$-ino mass at the messenger scale of
$m_{\tilde B}(M) = 115, 180, 250$ GeV,
with the other soft masses related by the boundary conditions
(\ref{gauginomass}) and (\ref{scalarmass}).
\jfig{sfig7n}{fig4.eps}{Renormalization group evolution of the
$\overline{\rm DR}$ mass parameters with MGM boundary conditions. The
messenger scale is $M=76\tev$, $m_{\tilde B}(M)=115\gev$,
$\tan\beta =3$, ${\rm sgn}(\mu) = +1$,
and $m_t^{\rm pole} = 175$ GeV.
The masses are plotted as $m \equiv {\rm sgn}(m^2)\, |m^2|^{1/2}$.
}
The renormalization of the $\overline{\rm DR}$ mass parameters
according to the one- and two-loop $\beta$ functions described
above
for the minimal model
with $M = \Lambda = 76$ TeV,
$m_{\tilde{B}}(M) = 115$ GeV, $\tan \beta =3$,
and ${\rm sgn}(\mu) = +1$,
is shown in Fig. \ref{sfig7n}.
As can be seen, renormalization has a non-negligible effect
on the mass parameters.
In the following subsections the
constraints imposed
by electroweak symmetry breaking, and the
form of the low energy spectrum resulting from renormalization group
evolution are discussed.
\subsection{Electroweak Symmetry Breaking}
\label{EWSB}
The most significant effect of renormalization group evolution
is the negative contribution to the
up-type Higgs boson mass squared from the large
top quark Yukawa coupling.
As can be seen in Fig. \ref{sfig7n} this leads to a negative
mass squared for $H_u$.
The $\beta$-function for $m_{H_u}^2$ is dominated by the
heavy stop mass
\begin{equation}
\frac{dm^2_{H_u}}{dt} \simeq
\frac{1}{16\pi^2} (6h^2_t (m^2_{\tilde{t}_L} +m^2_{\tilde{t}_R}
+ m_{H_u}^2) +\cdots )
\end{equation}
where $t=\ln(Q)$, with a small
${\cal O}(g_2^2 m_2^2 / h_t^2 m^2_{\tilde{t}})$
correction from gauge interactions.
For the parameters given in Fig. \ref{sfig7n}
the full $\beta$-function for $m_{H_u}^2$
is approximately constant above the stop
thresholds,
differing by less than 1\% between $M$ and $m_{\tilde t}$.
The evolution of $m_{H_u}^2$ is therefore approximately
linear in this region
(the non-linear feature in Fig. \ref{sfig7n} is a square-root
singularity because the quantity plotted is
${\rm sgn}(m_{H_u}^2) \sqrt{ | m_{H_u}^2 |}$).
It is worth noting that
for $\tan \beta$ not too large, and
$M$ not too much larger than $\Lambda$,
$m_{H_u}^2(m_{\tilde t})$ is then well approximated by
\begin{equation}
m_{H_u}^2(m_{\tilde t}) \simeq m_{H_u}^2(M) - {3 \over 8 \pi^2} h_t^2
( m^2_{\tilde{t}_L} +m^2_{\tilde{t}_R} ) \ln(M/ m_{\tilde t})
\label{Huapp}
\end{equation}
(although throughout we use numerical integration of the
full renormalization group equations).
For the parameters of Fig. \ref{sfig7n}
the approximation (\ref{Huapp})
differs from the full numerical integration by 2\%
(using the messenger scale values of $h_t$,
$m_{\tilde t_L}$, and $m_{\tilde t_R}$).
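As an illustrative cross-check of (\ref{Huapp}) — a sketch with round-number inputs ($h_t \simeq 1$, stop soft masses $\simeq 830$ GeV, and a messenger-scale $m_{H_u}^2$ equal to the left-slepton mass squared, since $H_u$ and $l_L$ carry the same gauge charges in the minimal model), not the full numerical integration used in the paper:

```python
import math

# Approximate low-scale up-type Higgs mass squared, Eq. (Huapp):
#   m_Hu^2(m_stop) ~= m_Hu^2(M) - 3/(8 pi^2) h_t^2 (m_tL^2 + m_tR^2) ln(M/m_stop)
# All inputs below are illustrative round numbers, not the paper's values.

def m_Hu_sq_low(m_Hu_sq_M, h_t, m_tL_sq, m_tR_sq, M, m_stop):
    return (m_Hu_sq_M
            - 3.0 / (8 * math.pi**2) * h_t**2
            * (m_tL_sq + m_tR_sq) * math.log(M / m_stop))

val = m_Hu_sq_low(270.0**2, 1.0, 830.0**2, 830.0**2, 76e3, 830.0)
print(val < 0)   # the large top Yukawa drives m_Hu^2 negative
```

With these inputs the signed square root $\mathrm{sgn}(m^2)\,|m^2|^{1/2}$ comes out near $-400$ GeV, the same ballpark as the $|\mu| \sim 400$ GeV values quoted below.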
The magnitude of the small positive gauge contribution
can be seen below the stop thresholds in Fig. \ref{sfig7n}.
The negative value of $m_{H_u}^2$ leads to electroweak symmetry
breaking in the low energy theory.
The mechanism of radiative symmetry breaking \cite{dnmodels}
is similar to that for high scale supersymmetry breaking with universal
boundary conditions.
With high scale supersymmetry breaking, $m_{H_u}^2 < 0$ develops
because of the large logarithm.
Here $m_{H_u}^2 < 0$ results not because the logarithm is large,
but because the stop masses are large \cite{dnmodels}.
Notice in Fig. \ref{sfig7n} that $m_{H_u}^2$ turns negative
in less than a decade of running.
The negative stop correction
effectively amounts to an
${\cal O}((\alpha_3 / 4 \pi)^2
(h_t/4 \pi)^2 \ln(M / m_{\tilde t}))$
three-loop contribution which is larger than the
${\cal O}((\alpha_2 / 4 \pi)^2 )$ two-loop contribution
\cite{dnmodels}.
Naturally large stop masses which lead automatically to
radiative electroweak symmetry breaking are one of the
nice features of low scale
gauge-mediated supersymmetry breaking.
Imposing electroweak symmetry breaking gives relations
among the Higgs sector mass parameters.
In the approach taken here we solve for the electroweak
scale values of $\mu$ and $m_{12}^2$
in terms of $\tan \beta$ and $m_{Z^0}$
using the minimization
conditions
\begin{eqnarray}
|\mu|^2+\frac{m_{Z^0}^2}{2} & = &
\frac{(m^2_{H_d}+\Sigma_d)-(m^2_{H_u}+\Sigma_u)\tan^2\beta}
{\tan^2\beta -1}
\label{mincona} \\
\label{minconb}
\sin 2\beta & = &
\frac{-2m^2_{12}}{(m^2_{H_u}+\Sigma_u)+(m^2_{H_d}+\Sigma_d)+2|\mu|^2}
\end{eqnarray}
where
$\Sigma_{u,d}$ represent finite one-loop corrections
from gauge interactions and top and bottom Yukawas
\cite{finitehiggs}.
These corrections are
necessary to reduce substantially the scale dependence
of the minimization conditions.
In order to minimize the stop contributions to the finite corrections,
the renormalization scale is taken to be the geometric mean
of the stop masses, $Q^2 = m_{\tilde{t}_1} m_{\tilde{t}_2}$.
The finite corrections to the one-loop effective potential
make a non-negligible contribution
to (\ref{mincona}) and (\ref{minconb}).
For example, with the parameters of Fig. \ref{sfig7n},
minimization of the tree level conditions with the renormalization
scheme given above gives $\mu = 360$ GeV,
while inclusion of the finite corrections results in $\mu = 395$ GeV.
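The tree-level version of (\ref{mincona}) can be solved directly for $|\mu|$. The following sketch sets $\Sigma_{u,d} = 0$ and uses illustrative round-number inputs ($m_{H_u}^2 \simeq -(400~{\rm GeV})^2$, $m_{H_d}^2 \simeq (270~{\rm GeV})^2$, $\tan\beta = 3$), so it only reproduces the ballpark of the values quoted above:

```python
import math

# Tree-level minimization condition (mincona), solved for |mu|:
#   |mu|^2 = (m_Hd^2 - m_Hu^2 tan^2 beta)/(tan^2 beta - 1) - m_Z^2 / 2
# One-loop corrections Sigma_{u,d} are neglected in this sketch.

def mu_from_ewsb(m_Hu_sq, m_Hd_sq, tan_beta, m_Z=91.19):
    tb2 = tan_beta**2
    mu_sq = (m_Hd_sq - m_Hu_sq * tb2) / (tb2 - 1.0) - m_Z**2 / 2.0
    return math.sqrt(mu_sq)

mu = mu_from_ewsb(-(400.0**2), 270.0**2, 3.0)
print(round(mu))   # ~430 GeV, in the ballpark of the mu ~ 400 GeV quoted above
```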
The minimization conditions (\ref{mincona}) and (\ref{minconb})
depend on
the value of the top quark mass, $m_t$, mainly through
the running of $m_{H_u}^2$, and also through the finite corrections.
For the parameters of Fig. \ref{sfig7n},
a top mass in the range $m_t = 175 \pm 15$ GeV gives a
$\mu$ parameter in the range $\mu = 395 \pm 50$ GeV.
\jfig{sfig16n}{fig5.eps}
{The relation between $m_2$ and $|\mu|$
imposed by electroweak symmetry breaking with MGM boundary conditions
for $\tan\beta =2,3,5,10,30$, and $\Lambda=M$.}
The correlation between $\mu$ and $m_{12}^2$ can be obtained from
(\ref{mincona}) and (\ref{minconb})
in terms of $\tan \beta$ and $m_{Z^0}$.
The relation between the $W$-ino mass, $m_2$,
and $\mu$ (evaluated at the renormalization scale)
imposed by electroweak symmetry breaking
in the minimal model is shown
in Fig. \ref{sfig16n} for
$\tan \beta = 2,3,5,10$, and $30$, with $\Lambda = M$ and
${\rm sgn}(\mu)=+1$.
The actual correlation is of course between the Higgs sector
mass parameters.
The parameter $m_2$ is plotted just as a representative mass
of states transforming under $SU(2)_L$, and of the overall scale
of the soft masses
(the gaugino masses directly affect electroweak
symmetry breaking only through very small higher order corrections
to renormalized Higgs sector parameters).
The $\mu$ parameter typically lies in the range
$ {3 \over 2} m_2 \lsim |\mu| \lsim 3 m_2$ or
$ m_{\lL} \lsim |\mu| \lsim 2 m_{\lL}$,
depending on the precise values of
$\tan \beta$ and $\ln M$.
The correlation between $\mu$ and the
overall scale of the soft masses arises because
the stop masses set the scale for $m_{H_u}^2$ at the electroweak
scale, and therefore the depth of the Higgs potential.
For $\tan \beta \gg 1$
the condition (\ref{mincona}) reduces to
$|\mu|^2 \simeq -m_{H_u}^2 - {1 \over 2} m_{Z^0}^2$.
In this limit, for $|\mu|^2 \gg m_{Z^0}^2$,
$|\mu| \simeq (- m_{H_u}^2)^{1/2}$, with the small difference determining
the electroweak scale.
At moderate $\tan \beta$ the corrections to this approximation
increase $\mu$ for a fixed overall scale.
At fixed $\tan \beta$, and $M$ not too far
above $\Lambda$, $m_{H_u}^2$ at the renormalization scale
is approximately a linear function of
the overall scale $\Lambda$, as can be seen from Eq. (\ref{Huapp}).
The very slight non-linearity in Fig. \ref{sfig16n} arises
from $\ln M$ dependence, and ${\cal O}(m_{Z^0}^2 / \mu^2)$
effects in the minimization conditions.
The limit $|\mu|^2, |m_{H_u}|^2 \gg m_{Z^0}^2$ of course represents
a tuning among the Higgs potential parameters
in order to obtain proper electroweak symmetry breaking.
\jfig{sfig11n}{fig6.eps}
{The ratio $m_{12}/\mu$ as a function of
$\tan\beta$ imposed by electroweak symmetry breaking for
$m_{\tilde B}(M)=115, 180, 250$ GeV and $\Lambda=M$.}
The ratio $m_{12} / \mu$ at the renormalization scale is
plotted in Fig. \ref{sfig11n}
for $m_{\tilde B}(M)=115, 180, 250$ GeV and $\Lambda=M$.
Again, the tight correlation, approximately independent of the
overall scale, arises because all soft terms are related to
a single scale, $\Lambda$.
The small splitting between the three cases shown
in Fig. \ref{sfig11n} arises from $\ln M$ dependence, and
${\cal O}(m_{Z^0}^2 / \mu^2)$
effects in the minimization conditions.
Ignoring corrections from the bottom Yukawa,
$m_{12} / \mu \rightarrow 0$
for $\tan \beta \gg 1$.
The saturation of $m_{12} / \mu$ at large $\tan \beta$ is
due to bottom Yukawa effects in the renormalization
group evolution and finite corrections to $m_{H_d}^2$.
Any theory for the origin of $\mu$ and $m_{12}^2$ with
minimal gauge mediation, and
only the MSSM degrees of freedom at low energy, would
have to reproduce (or at least be compatible with) the
relation given in Fig. \ref{sfig11n}.
Note that all the low scale Higgs sector mass parameters are quite
similar in magnitude
over essentially all the parameter space of the MGM.
\subsection{Sparticle Spectroscopy}
\label{sspectrum}
\begin{table}
\begin{center}
\begin{tabular}{cc}
\hline \hline
Particle & Mass (GeV) \\ \hline
$\tilde{u}_L, \tilde{d}_L$ & 869, 871 \\
$\tilde{u}_R, \tilde{d}_R$ & 834, 832 \\
$\tilde{t}_1, \tilde{t}_2$ & 765, 860 \\
$\tilde{g}$ & 642 \\
$A^0, H^0, H^{\pm}$ & 506, 510, 516 \\
$\chi_3^0, \chi_2^{\pm}, \chi_4^0$ & 404, 426, 429 \\
$\tilde{\nu}_L, \tilde{l}_L$ & 260, 270 \\
$ {\chi_1^{\pm}}, {\chi_2^0}$ & 174, 175 \\
$\tilde{l}_R$ & 137 \\
$h^0$ & 104 \\
${\chi_1^0}$ & 95 \\
\hline \hline
\end{tabular}
\caption{Superpartner physical spectrum for the parameters given in
Fig.~\ref{sfig7n}.}
\end{center}
\end{table}
The gross features of the superpartner spectrum are determined
by the boundary conditions at the messenger scale.
Renormalization group evolution can modify these somewhat,
and electroweak symmetry breaking imposes relations which
are reflected in the spectrum.
Mixing and $D$-term contributions also shift
some of the states slightly.
The physical spectrum resulting from the renormalized parameters
given in Fig. \ref{sfig7n} is presented in Table 1.
In the following subsections the spectroscopy of the electroweak
states,
strongly interacting states, and Higgs bosons is discussed.
We also consider the
dependence of the spectrum on the messenger scale,
and discuss quantitative relations among the superpartner masses
which test the hypothesis of gauge-mediation and can be
sensitive to the messenger scale.
\subsubsection{Electroweak States}
\label{electroweaksection}
The physical charginos, $\chi_i^{\pm}$, and neutralinos,
$\chi_i^0$, are mixtures
of the electroweak gauginos and Higgsinos.
As discussed in the previous subsection,
imposing electroweak symmetry breaking with MGM boundary
conditions implies
${3 \over 2} m_2 \lsim |\mu| \lsim 3 m_2 $
over all the allowed
parameter space.
With this inequality, in the limit
$\mu^2 - m_2^2 \gg m_{W^{\pm}}^2$ the lightest chargino is mostly
gaugino.
Likewise, in the limit $\mu^2 - m_1^2 \gg m_{Z^0}^2$, the lightest
two neutralinos are mostly gaugino.
Under renormalization group evolution both $m_1$
and $m_2$ are slightly decreased.
For the parameters of Fig. \ref{sfig7n} this amounts to a
$-15$ GeV shift for the $B$-ino and a $-10$ GeV shift for the
$W$-ino.
At the electroweak scale $m_2 \simeq 2 m_1$.
The lightest neutralino is therefore mostly $B$-ino.
For example, with the parameters given in Fig. \ref{sfig7n},
the $\na$ eigenvectors are
$N_{1 \tilde{B}} = 0.98$,
$N_{1 \tilde{W}} = - 0.09$,
$N_{1d} = 0.14$, and
$N_{1u} = - 0.07$.
Expanding in $m_{Z^0}^2/ (\mu^2-m_1^2)$,
the $\na$ mass is given by~\cite{ssconstraints}
\begin{equation}
m_{\na} \simeq m_1 - { m_{Z^0}^2 \sin^2 \theta_W \
( m_1 + \mu \sin2 \beta) \over |\mu|^2 - m_1^2 } .
\label{binoshift}
\end{equation}
Note that the shift in the physical
$\na$ mass relative to the $B$-ino mass parameter $m_1$
depends on ${\rm sgn}(\mu)$.
For the parameters in Fig. \ref{sfig7n} with ${\rm sgn}(\mu) = +1$
this amounts to a $-5$ GeV shift.
Except for very large $\tan \beta$ discussed below,
$\na$ is the lightest standard model superpartner.
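Plugging representative numbers into (\ref{binoshift}) — $m_1 \simeq 100$ GeV after running, $\mu \simeq 395$ GeV, $\tan\beta = 3$, ${\rm sgn}(\mu) = +1$, $\sin^2\theta_W \simeq 0.231$ — gives a quick numerical sketch (illustrative only, not the full one-loop diagonalization):

```python
import math

# Shift of the physical chi_1^0 mass relative to m_1, Eq. (binoshift):
#   m_chi1 ~= m_1 - m_Z^2 sin^2(theta_W) (m_1 + mu sin(2 beta)) / (|mu|^2 - m_1^2)
# Inputs below are read off the surrounding discussion, rounded.

def bino_shift(m1, mu, tan_beta, sw2=0.231, m_Z=91.19):
    sin2b = 2 * tan_beta / (1 + tan_beta**2)
    return -m_Z**2 * sw2 * (m1 + mu * sin2b) / (mu**2 - m1**2)

print(round(bino_shift(100.0, 395.0, 3.0)))   # ~ -4 GeV, close to the -5 GeV quoted
```

Note the sign of the shift tracks ${\rm sgn}(\mu)$ through the $\mu \sin 2\beta$ term, as stated in the text.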
The lightest chargino and second lightest neutralino
are mostly $W$-ino, and form an approximately degenerate
triplet of $SU(2)_L$,
$(\chi_1^+, \chi_2^0, \chi_1^-)$,
as can be seen in Table 1.
This approximate triplet is very degenerate, with splittings
arising only at
${\cal O}(m_{Z^0}^4 / \mu^4)$.
Expanding in $m_{W^{\pm}}^2/(\mu^2 - m_2^2)$,
the triplet mass is given by~\cite{ssconstraints}
\begin{equation}
m_{\nb,\ca} \simeq m_2 - { m_{W^{\pm}}^2
( m_2 + \mu \sin2 \beta) \over |\mu|^2 - m_2^2 } .
\label{winoshift}
\end{equation}
Again, the shift in the physical mass relative to the $W$-ino
mass parameter $m_2$ is anticorrelated with ${\rm sgn}(\mu)$.
For the parameters of Fig. \ref{sfig7n} this amounts
to a $-19$ GeV shift.
The heavier chargino and two heaviest neutralinos
are mostly Higgsino, and form an approximately degenerate
singlet and triplet of $SU(2)_L$,
all with mass set by the $\mu$ parameter.
This is also apparent in Table 1, where $\chi_3^0$ is
the singlet, and $(\chi_2^{+}, \chi_4^0, \chi_2^-)$ form the triplet.
The splitting between the singlet and triplet
is ${\cal O}(m_{Z^0}^2 / \mu^2)$ while the
splittings within the triplet are ${\cal O}(m_{Z^0}^4 / \mu^4)$.
All these splittings may be verified by an effective operator analysis
which shows that splittings within a triplet require
four Higgs insertions, while splitting between a singlet
and triplet requires only two.
The right handed sleptons are lighter than the other scalars
since they only couple to the messenger sector through
$U(1)_Y$ interactions.
The low scale value of $m_{\lR}$ is shifted by a number
of effects.
First, renormalization due to gaugino interactions increases
$m_{\lR}$ in proportion to the gaugino mass.
In addition, the radiatively generated $U(1)_Y$ $D$-term,
proportional to $S \equiv {1 \over 2}{\rm Tr}(Y \tilde{m}^2)$
where ${\rm Tr}$ is over all scalars,
also contributes to renormalization of scalar masses \cite{alvarez}.
With gauge-mediated boundary conditions, $S =0$ at the
messenger scale as the result of anomaly cancellation,
${\rm Tr}(Y \{T^a,T^b \})=0$, where
$T^a$ is any standard model gauge generator.
The $\beta$-function for $S$ is homogeneous and
very small in magnitude
(below the messenger scale and above all sparticle thresholds
$\beta_S = (66 / 20 \pi) \alpha_1 S$ \cite{Sref})
so $S \simeq 0$ in the absence of
scalar thresholds.
The largest contribution comes below the squark thresholds.
The ``image'' squarks make a large negative contribution to $S$ in
this range.
Although not visible with the resolution in Fig. \ref{sfig7n},
the slope of $m_{\lR}(Q)$ has a kink at the
squark thresholds from this effect.
Finally, the classical $U(1)_Y$ $D$-term also increases
the physical mass in the presence of electroweak symmetry breaking,
$\tilde{m}^2_{\lR} = {m}^2_{\lR} - \sin^2 \theta_W
\cos 2 \beta m_{Z^0}^2$, where $\cos 2 \beta < 0$.
For the parameters of Fig. \ref{sfig7n}
the gauge and $U(1)_Y$ $D$-term contributions to renormalization,
and the classical $U(1)_Y$ $D$-term, contribute a
positive shift to $\tilde{m}_{\lR}$ of
$+2$, $+3$, and $+6$ GeV respectively.
As discussed in section \ref{collidersection},
the sum of all these small shifts,
and the $m_{\na}$ shift (\ref{binoshift}), can have
an important effect on signatures at hadron colliders.
The left handed sleptons receive mass from both $SU(2)_L$ and
$U(1)_Y$ interactions.
Under renormalization $m_{\lL}$ is increased slightly
by gaugino interactions.
The most important shift is the splitting between $m_{\lL}$ and
$m_{\nL}$ arising from $SU(2)_L$ classical $D$-terms
in the presence of electroweak symmetry breaking,
$m_{\lL}^2 - m_{\nL}^2 = - m_{W^{\pm}}^2 \cos 2 \beta$.
For the parameters of Fig. \ref{sfig7n} this amounts to
a 10 GeV splitting between $\lL$ and $\nL$.
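As a numerical illustration of this $SU(2)_L$ $D$-term splitting, the sketch below applies the classical $D$-term contributions to a common left handed slepton soft mass. The soft mass and $\tan \beta$ are illustrative placeholders, and $\sin^2 \theta_W = 0.231$ is a representative input value.

```python
import math

SW2, MZ = 0.231, 91.19   # representative sin^2(theta_W) and m_Z (GeV)

def lefthanded_slepton_masses(m_soft, tan_beta):
    """Split a common left handed slepton soft mass into the charged slepton
    and sneutrino masses using the classical D-terms
    Delta(lL)  = (-1/2 + SW2) * MZ^2 * cos2b,
    Delta(nuL) = (+1/2)       * MZ^2 * cos2b,
    whose difference reproduces m_lL^2 - m_nuL^2 = -m_W^2 cos2b."""
    cos2b = math.cos(2.0 * math.atan(tan_beta))   # negative for tan_beta > 1
    m_lL = math.sqrt(m_soft**2 + (-0.5 + SW2) * MZ**2 * cos2b)
    m_nL = math.sqrt(m_soft**2 + 0.5 * MZ**2 * cos2b)
    return m_lL, m_nL

# Illustrative soft mass of 290 GeV and tan(beta) = 3:
m_lL, m_nL = lefthanded_slepton_masses(290.0, 3.0)
```

With $\cos 2\beta < 0$ the charged slepton comes out heavier than the sneutrino, as in the text.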
\jfig{sfig17n}{fig7.eps}
{The ratio $\mstauR / \meR$ as a function of $\tbeta$ for
$m_{\bino}(M) = 115, 180, 250$ GeV, and $\Lambda=M$.}
Because of the larger Yukawa coupling, the $\stau$
slepton masses
receive other contributions, beyond those
for $\tilde{e}$ and $\tilde{\mu}$ discussed above.
The $\tau$ Yukawa gives a negative contribution
to the renormalization group evolution of $m_{\stau_L}$
and $m_{\stau_R}$.
In addition the left and right handed $\stau$ are mixed
in the presence of electroweak symmetry breaking
\begin{equation}
m_{\stau}^2 = \left( \begin{array}{cc}
m_{{\stau}_L}^2 + m_{\tau}^2 + \Delta_{{\stau}_L} &
m_{\tau} ( A_{\stau} - \mu \tan \beta) \\
m_{\tau} ( A_{\stau} - \mu \tan \beta) &
m_{\tilde{\tau}_R}^2 + m_{\tau}^2 + \Delta_{\tilde{\tau}_R}
\end{array}
\right)
\label{staumatrix}
\end{equation}
where $\Delta_{\stau_L} = (-{1 \over 2} + \sin^2 \theta_W)
m_{Z^0}^2 \cos 2 \beta$ and
$\Delta_{\stau_R} = - \sin^2 \theta_W m_{Z^0}^2 \cos 2 \beta$ are
classical $D$-term contributions.
As discussed in section \ref{minimalsection},
$A_{\stau}$ is generated only by renormalization group evolution,
in proportion to $m_2$.
It is therefore small, and does not contribute significantly
to the mixing terms.
For the parameters of Fig. \ref{sfig7n} $A_{\stau} \simeq -25$ GeV,
and remains small for all $\tan \beta$.
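The effect of the mixing in (\ref{staumatrix}) can be illustrated by diagonalizing the matrix numerically. All inputs below (soft masses, $A_{\stau}$, $\mu$, $\tan \beta$) are illustrative placeholders; at large $\tan \beta$ the off-diagonal entry $m_{\tau} \mu \tan \beta$ drives the light eigenvalue well below the right handed soft mass.

```python
import math

def stau_masses(mL2, mR2, m_tau, A_tau, mu, tan_beta, sw2=0.231, mz=91.19):
    """Eigenvalues of the 2x2 stau mass-squared matrix (staumatrix) with
    classical D-terms Delta_L = (-1/2 + sw2) mz^2 cos2b and
    Delta_R = -sw2 mz^2 cos2b."""
    cos2b = math.cos(2.0 * math.atan(tan_beta))
    a = mL2 + m_tau**2 + (-0.5 + sw2) * mz**2 * cos2b
    c = mR2 + m_tau**2 - sw2 * mz**2 * cos2b
    b = m_tau * (A_tau - mu * tan_beta)           # off-diagonal mixing entry
    avg = 0.5 * (a + c)
    disc = math.sqrt((0.5 * (a - c))**2 + b**2)   # level repulsion
    return math.sqrt(avg - disc), math.sqrt(avg + disc)

# Illustrative large tan(beta) point: mixing dominated by m_tau * mu * tan(beta)
m_stau1, m_stau2 = stau_masses(290.0**2, 140.0**2, 1.777, -25.0, 450.0, 40.0)
```

Even with $A_{\stau}$ of only a few tens of GeV, the $\mu \tan \beta$ term pushes $m_{\stau_1}$ far below the right handed input mass, illustrating why mixing dominates in this regime.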
\jfig{sfig10n}{fig8.eps}
{The ratio $m_{\tilde \tau_1}/m_{\chi^0_1}$ as a function of
$\tan\beta$ for $m_{\tilde B}(M) = 115, 180, 250$ GeV.
For large $\tan\beta$ the
$\tilde \tau_1$ becomes the lightest standard model superpartner.}
For large $\tan \beta$ the $\tau$ Yukawa coupling becomes large,
and the mixing terms cause level repulsion which lowers
the $\stau_1$ mass below $\lR$.
The ratio $\mstauR / \meR$ is shown in Fig. \ref{sfig17n}
as a function of $\tan \beta$ for
$m_{\bino}(M) = 115, 180, 250$ GeV, and $\Lambda=M$.
For large $\tan \beta$ the $\stau_1$ can be significantly
lighter than $\eR$ and $\tilde{\mu}_R$.
The negative contributions to $m_{\stau_1}$
in this regime come partially from
renormalization group evolution, but mostly from mixing.
For example,
with $m_{\bino}(M)=115$ GeV, $\Lambda=M$, and $\tan \beta = 40$,
the Yukawa renormalization and mixing contributions to
$m_{\stau_1}$ amount to $-10$ GeV and $-36$ GeV shifts
with respect to $m_{\eR}$.
For these parameters $m_{\eR} = 137$ GeV and
$m_{\stau_1} = 91$ GeV.
As the messenger scale is increased the relative contribution
to the mass shift due to renormalization group evolution
increases.
The negative shift of the lightest $\stau$ can even be large enough
to make $\stau_1$ the lightest standard model superpartner.
The ratio $m_{\tilde \tau_1}/m_{\chi^0_1}$ is plotted as
a function of $\tan \beta$ in Fig. \ref{sfig10n}.
For $m_{\bino}(M)=115$ GeV and $\Lambda=M$ the $\stau_1$
becomes lighter than $\na$ at $\tan \beta = 38 \pm 4$ for
$m_t^{\rm pole} = 175 \pm 15$ GeV.
The negative shift from mixing is an
${\cal O}(m_{\tau} \mu \tan \beta / (m_{\stau_L}^2 - m_{\stau_R}^2))$
correction to the lightest eigenvalue,
and therefore becomes smaller as the overall
scale of the soft masses is increased,
as can be seen in Fig. \ref{sfig10n}.
In addition, $\tan \beta$ is bounded from above if the
$b$ quark Yukawa coupling remains perturbative up to a large scale.
We find that for
$m_{\bino}(M) \gsim 200$ GeV the $\stau_1$ is never lighter
than $\na$ in the minimal model without $h_b$ becoming
non-perturbative somewhere below the GUT scale.
The slight decrease in $m_{\stau_1} / m_{\na}$
which can be seen in Fig. \ref{sfig10n}
at small $\tan \beta$
is due partly to the increase in $\mu$
which decreases the $\na$ eigenvalue
for ${\rm sgn}(\mu)=+1$,
and partly to classical $U(1)_Y$ $D$-terms which increase
$\meR$.
Both these effects are less important for a larger overall
scale.
The negative shift of $\stau_1$
relative to the other sleptons and $\na$ at large $\tan \beta$
can have important implications
for collider signatures, as discussed in section
\ref{collidersection}.
The $\tau$ Yukawa also gives a negative shift in the mass of the
$\nu_{\stau}$ sneutrino under renormalization group evolution.
The magnitude of this
shift is, however, smaller than that for $\stau_R$ by
${\cal O}(m_{\lR}^2 / m_{\lL}^2)$.
For example,
with $m_{\bino}(M)=115$ GeV, $\Lambda=M$, and $\tan \beta = 40$,
the Yukawa renormalization contribution amounts to a $-2$ GeV shift
in $m_{\nu_{\stau}}$ with respect to $m_{\nu_{\tilde e}}$.
\subsubsection{Strongly Interacting States}
\label{strongsection}
The gluino and squarks receive mass predominantly
from $SU(3)_C$ interactions with the messenger sector.
The gluino mass increases under renormalization
in proportion to $\alpha_3$ at lowest order,
$m_3 = m_3(M) ( \alpha_3(m_3) / \alpha_3(M))$.
The physical pole mass of the gluino is related to the
renormalized $\overline{\rm DR}$ mass by finite corrections
\cite{finitegluino}.
With very heavy squarks, the general expression
for the finite corrections given in Ref.
\cite{finitegluino} reduces to
\begin{equation}
m_{\tilde{g}}^{\rm pole} \simeq m_3 \left[ 1 + {\alpha_3 \over 4 \pi}
\left( 15 + 12 I(r) \right) \right]
\label{gluinopole}
\end{equation}
where $m_3$ and $\alpha_3$ are the renormalized $\overline{\rm DR}$
parameters evaluated at the scale $m_3$.
The first term in the inner parenthesis is from QCD corrections,
and the second from squark-quark couplings, where
the loop function is
$I(r) = {1 \over 2} \ln r + {1 \over 2} (r-1)^2 \ln(1-r^{-1})
+ {1 \over 2} r -1$ for $r \geq 1$
where $r = m_{\tilde{q}}^2 / m_3^2$.
For $r \gg 1$, $I(r) \rightarrow {1 \over 2} \ln r$, $I(2)=0$,
and $I(1)=-{1 \over 2}$.
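The loop function and the pole mass formula (\ref{gluinopole}) are straightforward to evaluate; the sketch below implements both and exhibits the limiting values quoted above.

```python
import math

def I(r):
    """Loop function entering the gluino pole mass, r = m_squark^2 / m3^2 >= 1."""
    if r == 1.0:
        return -0.5   # limiting value I(1) = -1/2
    return (0.5 * math.log(r)
            + 0.5 * (r - 1.0)**2 * math.log(1.0 - 1.0 / r)
            + 0.5 * r - 1.0)

def gluino_pole(m3, alpha3, r):
    """Eq. (gluinopole): m_pole ~ m3 [1 + (alpha3 / 4 pi)(15 + 12 I(r))],
    with m3 and alpha3 the DR-bar parameters at the scale m3."""
    return m3 * (1.0 + alpha3 / (4.0 * math.pi) * (15.0 + 12.0 * I(r)))
```

Since the minimal model sits near $r \simeq 2$, the zero of $I(r)$, the squark-quark contribution is accidentally suppressed and the QCD term dominates, as stated above.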
The largest corrections to (\ref{gluinopole}) are
${\cal O}(( m_t^2 / m_3^2)\alpha_3 / 4 \pi)$.
In the minimal model $r \lsim {8 \over 3}$,
which happens to be near the zero of $I(r)$.
For example, with the parameters of Fig. \ref{sfig7n},
$I(r)\simeq 0.035$.
The corrections to the gluino pole mass are then dominated by QCD.
For the parameters of Fig. \ref{sfig7n} the finite corrections
amount to a $+70$ GeV contribution to the physical mass.
The squark masses receive a small increase under renormalization
in proportion to the gluino mass at lowest order.
Since the gluino and squark masses are related within
gauge mediation by
$m_3^2(M) \simeq {3 \over 8} m_{\tilde{q}}^2(M)$,
this can be written as a multiplicative shift
of the squark masses
by integrating the one-loop $\beta$-function
\begin{equation}
m_{\tilde{q}} \simeq m_{\tilde{q}}(M)
\left[ 1 + { \not \! \! R^2 \over 3}
\left( { \alpha_3^2(m_{\tilde{q}}) \over \alpha_3^2(M)} -1 \right)
\right]^{1/2}
\label{squarkrenorm}
\end{equation}
where $\not \! \! R$ is the gaugino mass parameter defined in section
\ref{subvariations}.
The
${\cal O}((m_i^2 / m_3^2)
(\alpha_i^2(m_{\tilde{q}}) / \alpha_i^2(M)-1))$, $i=1,2$,
renormalization group
corrections to (\ref{squarkrenorm}) are quite small since
$\alpha_2$ runs very slowly, and $\alpha_1$ is small.
For the minimal model with $\not \! \! R=1$ and $\Lambda=M$
renormalization amounts to a 7\% upward shift in the squark masses.
For a messenger scale not too far above $\Lambda$, the squark
masses are determined mainly by $\alpha_3(M)$.
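Eq. (\ref{squarkrenorm}) can be evaluated directly. In the sketch below the values of $\alpha_3$ at the squark and messenger scales are hypothetical placeholders; a ratio $\alpha_3(m_{\tilde{q}}) / \alpha_3(M) \simeq 1.2$ reproduces an upward shift of about 7\%, of the size quoted in the text.

```python
import math

def squark_mass_low(m_sq_M, R, alpha3_low, alpha3_M):
    """Low scale squark mass from Eq. (squarkrenorm):
    m = m(M) * [1 + (R^2/3) * (alpha3(m_sq)^2 / alpha3(M)^2 - 1)]^(1/2)."""
    return m_sq_M * math.sqrt(
        1.0 + (R**2 / 3.0) * ((alpha3_low / alpha3_M)**2 - 1.0))

# Hypothetical couplings chosen so alpha3(m_sq)/alpha3(M) = 1.2:
m_low = squark_mass_low(1000.0, 1.0, 0.090, 0.075)
```

Since $\alpha_3$ is larger at the squark scale than at the messenger scale, the shift is always upward, and it grows with $\not \! \! R$.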
The left and right handed squarks are split mainly
by $SU(2)_L$ interactions with the messenger sector
at ${\cal O}(m_{\lL}^2 / m_{\tilde{q}}^2)$.
The much smaller splitting between up and down
type left handed squarks is due to
classical $SU(2)_L$ $D$-terms at ${\cal O}(m_{W^{\pm}}^2 / m_{\tilde{q}}^2)$.
Finally, the small splitting
between up and down type
right handed squarks is from classical $U(1)_Y$ $D$-terms
at ${\cal O}(m_{Z^0}^2 / m_{\tilde{q}}^2)$
and $U(1)_Y$ interactions with the messenger sector
at ${\cal O}(m_{\lR}^2 / m_{\tilde{q}}^2)$.
The magnitude of these small splittings can be seen in Table 1.
The gluino is lighter than all the first and second generation
squarks for any messenger scale in the minimal model.
The stop squarks receive additional contributions because
of the large top quark Yukawa.
For a messenger scale not too much larger than $\Lambda$,
the positive renormalization group contribution from
gauge interactions is largely offset by a negative
contribution from the top Yukawa.
In addition the left and right handed stops are
mixed in the presence of electroweak symmetry
breaking
\begin{equation}
m_{\tilde t}^2 = \left( \begin{array}{cc}
m_{\tilde{t}_L}^2 + m_t^2 + \Delta_{\tilde{t}_L} &
m_t ( A_t - \mu \cot \beta) \\
m_t ( A_t - \mu \cot \beta) &
m_{\tilde{t}_R}^2 + m_t^2 + \Delta_{\tilde{t}_R}
\end{array}
\right)
\label{stopmatrix}
\end{equation}
where
$\Delta_{\tilde{t}_L}=({1 \over 2} - {2 \over 3} \sin^2 \theta_W)
\cos 2 \beta m_{Z^0}^2$ and
$ \Delta_{\tilde{t}_R}={2 \over 3} \sin^2 \theta_W \cos 2 \beta
m_{Z^0}^2$
are classical $D$-term contributions.
The radiatively generated $A$-terms for squarks are
somewhat larger
than for the electroweak scalars because of the larger gluino
mass.
\jfig{sfig12n}{fig9.eps}{Squark spectrum as a function of
$\tan\beta$ for $m_{\tilde B}(M)=115$ GeV and $\Lambda=M$.}
For the parameters of Fig. \ref{sfig7n} $A_{\tilde{t}} \simeq -250$ GeV,
and does not vary significantly over all $\tan \beta$.
At large $\tan \beta$ mixing induces only an
${\cal O}(m_t A_t / m_{\tilde{t}}^2)$
correction to the lightest eigenvalue.
Because of the large squark masses,
$A$-terms therefore do not contribute significantly to mixing.
At small $\tan \beta$ the top Yukawa and $\mu$ become large,
and $\tilde{t}_1$ can be pushed down.
The squark spectrum is shown
in Fig. \ref{sfig12n} as a function of $\tan \beta$ for
$m_{\bino}(M)=115$ GeV and $\Lambda = M$.
In contrast to the $\stau$, most of the negative shift in the
stop masses
relative to the first two generations comes from renormalization
group evolution (except for small $\tan \beta$).
In Fig. \ref{sfig12n}, for large $\tan \beta$, the renormalization
and mixing contributions to $m_{\tilde{t}_1}$ are
$-50$ GeV and $-5$ GeV respectively.
Because of the large overall squark masses, and relatively small
$\mu$ and $A$-terms, a light stop is never obtained in the MGM
parameter space.
\subsubsection{Higgs Bosons}
\label{higgssection}
The qualitative features of the Higgs boson spectrum are
determined by the pseudo-scalar mass
$m_{A^0}^2 = 2 m_{12}^2 / \sin 2 \beta$.
The pseudo-scalar mass is shown in Fig. \ref{sfig14n} as a function
of $m_{\na}$, for
$\tan\beta =2,3,5,30$, and $\Lambda=M$.
\jfig{sfig14n}{fig10.eps}{The pseudo-scalar Higgs mass, $m_{A^0}$, as
a function of the lightest neutralino mass, $m_{\chi^0_1}$, for
$\tan\beta =2,3,5,30$, and $\Lambda=M$.}
The lightest neutralino mass is plotted in Fig. \ref{sfig14n}
as representative of the overall scale of the superpartner
spectrum.
Using the minimization conditions (\ref{mincona}) and (\ref{minconb})
the pseudo-scalar mass may be written, for $\tan \beta \gg 1$,
as $m_{A^0}^2 \simeq |\mu|^2 + (m_{H_d}^2 + \Sigma_d) - {1 \over 2} m_{Z^0}^2$.
For moderate values of $\tan \beta$ this gives the inequality
$m_{A^0} \gsim |\mu|$ over all the allowed parameter space.
Since electroweak symmetry breaking implies $3 m_1 \lsim |\mu|
\lsim 6 m_1$, $m_{A^0} \gg m_{\na}$ in this range.
For small $\tan \beta $ the corrections from (\ref{mincona})
to this approximate relation make $m_{A^0}$ even larger.
For $\tan \beta \gsim 35$ the negative contribution of the bottom
Yukawa to the renormalization group evolution of $m_{H_d}^2$,
and finite corrections, allow $m_{A^0} \lsim |\mu|$.
Also note that
since $\mu$ is determined by the overall scale of the superpartner
masses, $m_{A^0}$ scales linearly with $m_{\na}$ for $|\mu|^2 \gg m_{Z^0}^2$.
This scaling persists for moderate values of $\tan \beta$,
as can be seen in Fig. \ref{sfig14n}.
The non-linear behavior at small $m_{A^0}$ is due to
${\cal O}(m_{Z^0}^2 / \mu^2)$ contributions to the mass.
Over essentially all the allowed parameter space $m_{A^0} \gg m_{Z^0}$.
In this case the Higgs decoupling limit is reached in which
$A^0$, $H^0$ and $H^{\pm}$ form an approximately
degenerate complex doublet of $SU(2)_L$,
with fractional splittings of ${\cal O}(m_{Z^0}^2 / m_{A^0}^2)$.
This limit is apparent in Table 1.
Since $m_{A^0} \gsim |\mu|$ over most of the parameter space, the
heavy Higgs bosons are heavier than the Higgsinos, except
for $\tan \beta$ very large.
\jfig{sfig13n}{fig11.eps}{The lightest Higgs mass, $m_{h^0}$, as a
function of the lightest neutralino mass, $m_{\chi^0_1}$, for
$\tan\beta =2,3,5,30$, and $\Lambda=M$.}
In the decoupling limit the light Higgs, $h^0$, remains light
with couplings approaching standard model values.
The radiative corrections to $m_{h^0}$ are sizeable~\cite{higgsrad} since
the stop squarks are so heavy with MGM boundary conditions \cite{riot}.
The physical $h^0$ mass is shown in Fig. \ref{sfig13n}
as a function of $m_{\na}$ for $\tan \beta = 2,3,5,30$,
and $\Lambda=M$.
Since $m_{\tilde t_{1,2}},m_{A^0}\gg m_{Z^0}$ and stop
mixings are small, as discussed in section \ref{strongsection},
$m_{h^0}$ is well approximated by the
leading log correction to the tree level mass in the decoupling limit
\begin{equation}
m^2_{h^0} \simeq \cos^2 2 \beta m_{Z^0}^2
+ \frac{3g^2m^4_t}{8 \pi^2m^2_W}\ln
\left( \frac{m_{\tilde t_1}m_{\tilde t_2}}{m^2_t} \right).
\label{higgsmass}
\end{equation}
For moderate values of $\tan \beta$ (\ref{higgsmass})
overestimates the full one-loop mass shown
in Fig. \ref{sfig13n} by $4-5\gev$.
The $\tan \beta$ dependence of $m_{h^0}$ in Fig. \ref{sfig13n}
comes mainly from the tree level contribution.
In general, the one-loop corrections are
largely independent of $\tan \beta$ and depend mainly on the overall
scale for the superpartners through the stop masses.
This is
apparent from the log dependence of $m_{h^0}$ on $m_{\na}$
in Fig.~\ref{sfig13n},
and in the approximation (\ref{higgsmass}).
Note that for $m_{\na} < 100$ GeV, $m_{h^0} \lsim 120$ GeV.
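The approximation (\ref{higgsmass}) is easily evaluated. The sketch below rewrites the prefactor using $g^2 / (8 m_{W^{\pm}}^2) = G_F / \sqrt{2}$; the stop masses and $\tan \beta$ used are illustrative, not fitted to the figures.

```python
import math

GF, MZ = 1.16637e-5, 91.19   # Fermi constant (GeV^-2) and m_Z (GeV)

def higgs_mass(tan_beta, m_t, m_st1, m_st2):
    """Leading-log light Higgs mass in the decoupling limit, Eq. (higgsmass),
    with the prefactor 3 g^2 m_t^4 / (8 pi^2 m_W^2) rewritten via
    g^2 / (8 m_W^2) = G_F / sqrt(2)."""
    cos2b = math.cos(2.0 * math.atan(tan_beta))
    tree = cos2b**2 * MZ**2
    loop = (3.0 * math.sqrt(2.0) * GF * m_t**4 / (2.0 * math.pi**2)
            * math.log(m_st1 * m_st2 / m_t**2))
    return math.sqrt(tree + loop)

# Illustrative point: tan(beta) = 30, m_t = 175 GeV, stops near 1 TeV
m_h = higgs_mass(30.0, 175.0, 900.0, 1000.0)
```

The logarithmic dependence on the stop masses is manifest: heavier stops raise $m_{h^0}$, but only slowly.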
\subsubsection{Messenger Scale Dependence}
Much of the spectroscopy discussed above assumed a low messenger
scale
$M \sim \Lambda \sim {\cal O}(100\tev)$.
However, in principle $M$ can be anywhere between $\Lambda$
and $M_{GUT}$.
The physical spectrum for $m_{\bino}(M) = 115$ GeV and $\tan \beta =3$
is shown in Fig. \ref{sfig15n} as a function of the messenger scale.
\jfig{sfig15n}{fig12.eps}
{The physical spectrum as a function of
the messenger scale for $m_{\tilde B}(M)=115$ GeV and $\tan\beta =3$.}
The scalar masses are sensitive to the gauge couplings at
the messenger scale.
For a fixed $B$-ino mass at the messenger scale
(proportional to $\alpha_1(M)$),
the squarks become lighter as the messenger scale is increased
because $\alpha_3 / \alpha_1$ is smaller at higher scales.
Conversely the right handed sleptons become heavier as $M$
is increased because
of the larger contribution from renormalization group
evolution.
The gauge coupling $\alpha_2$ increases more slowly than
$\alpha_1$ as the scale is increased.
For fixed $m_{\bino}(M)$, as
in Fig. \ref{sfig15n}, the left handed slepton masses therefore
become smaller as the messenger scale is increased.
The sensitive dependence of the squark masses on $\alpha_3(M)$
provides a logarithmically sensitive probe of the messenger
scale, as discussed in the next section.
For larger messenger scales the spread among the superpartner
masses becomes smaller.
This is simply because all soft masses are proportional to
gauge couplings squared, and the gauge couplings
converge at larger scales.
The boundary conditions for the scalar masses with $M = M_{GUT}$
satisfy the relations
$m_{\eR}^2 : m_{\eL}^2 : m_{\tilde{Q}_L}^2 : m_{\tilde{u}_R}^2
: m_{\tilde{d}_R}^2 =
(3/5) : (9/10) : 3 : (8/5) : (21/15)$.
These do not satisfy GUT relations because only
$SU(3)_C \times SU(2)_L \times U(1)_Y$ interactions are included.
If the full $SU(5)$ gauge interactions are included
$m_{\bar{\bf 5}}^2 : m_{\bar{\bf 10}}^2 = 2 : 3$
where $\eL, \tilde{d}_R \in \overline{\bf 5}$ and
$\eR, \tilde{Q}_L, \tilde{u}_R \in \overline{\bf 10}$.
Of course, for a messenger scale this large, gravitational
effects are also important.
For a messenger scale slightly above $\Lambda$,
$m_{H_u}^2$ is driven to more negative values by the top Yukawa
under renormalization group evolution.
Obtaining correct electroweak symmetry breaking therefore requires
larger values of $\mu$ and $m_{12}^2$.
This can be seen in Fig. \ref{sfig15n}
as an increase in $\mu$ and $m_{A^0}$ for $M \gsim \Lambda$.
For larger messenger scales the increase in the magnitude of $m_{H_u}^2$
from additional running is eventually offset by the smaller stop masses.
This can be seen in Fig. \ref{sfig15n} as a decrease in $\mu$
and $m_{A^0}$ for $M \gsim 10^7$ GeV.
The spectra as a function of the messenger scale for different
values of $\tan \beta$ are essentially identical to Fig. \ref{sfig15n}
aside from $\mu$, $m_{A^0}$, and $m_{\tilde{t}}$.
This is because $\tan \beta$ only affects directly the Higgs sector
parameters, which in turn
influence the mass of the other states
only through two-loop corrections
(except for the third generation scalars discussed in
sections \ref{electroweaksection} and \ref{strongsection}).
\subsubsection{Relations Among the Superpartner Masses}
\label{relations}
The minimal model of gauge-mediated supersymmetry breaking
represents a very constrained theory of the soft terms.
In this section we present some quantitative
relations among the superpartner masses.
These can be used to distinguish the MGM from other theories
of the soft terms and within the MGM can be logarithmically
sensitive to the messenger scale.
The gaugino masses at the messenger scale are in proportion
to the gauge couplings squared,
$m_1 : m_2 : m_3 = \alpha_1 : \alpha_2 : \alpha_3$.
Since $\alpha_i m_{\lambda_i}^{-1}$ is a renormalization group
invariant at one loop, this relation is preserved to
lowest order at the electroweak scale,
where
$m_{\lambda_i}$ are the $\overline{\rm DR}$ masses.
The MGM therefore yields, in leading log approximation, the
same ratios of gaugino masses as high scale supersymmetry breaking
with universal gaugino boundary conditions.
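The one-loop statement that $\alpha_i m_{\lambda_i}^{-1}$ is invariant can be encoded directly. In the sketch below the gauge coupling values at the messenger and low scales are hypothetical placeholders; only the ratios matter for the invariance check.

```python
def gaugino_masses_low(m_at_M, alphas_M, alphas_low):
    """One-loop running of gaugino masses: alpha_i / m_i is an RG invariant,
    so m_i(low) = m_i(M) * alpha_i(low) / alpha_i(M)."""
    return [m * a_low / a_M
            for m, a_M, a_low in zip(m_at_M, alphas_M, alphas_low)]

# MGM boundary condition m_1 : m_2 : m_3 = alpha_1 : alpha_2 : alpha_3 at M,
# normalized to a 115 GeV B-ino mass; coupling values are hypothetical.
alphas_M = [0.0102, 0.0330, 0.0750]
alphas_low = [0.0098, 0.0335, 0.0900]
m_M = [115.0 * a / alphas_M[0] for a in alphas_M]
m_low = gaugino_masses_low(m_M, alphas_M, alphas_low)
```

At the low scale the masses remain in the ratio of the low scale gauge couplings, which is the content of ``gaugino unification.''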
``Gaugino unification'' is a generic feature of any
weakly coupled gauge-mediated messenger sector
which forms a representation of any GUT group and
which has a single spurion.
The gaugino mass ratios are independent of $\not \! \! R$.
However, as discussed in section \ref{subvariations},
with multiple sources of supersymmetry breaking and/or messenger
fermion masses the gaugino masses can be sensitive to
messenger Yukawa couplings.
``Gaugino unification'' therefore does {\it not} follow
just from the ansatz
of gauge-mediation, even for messenger sectors which can
be embedded in a GUT theory.
An example of such a messenger sector is given in appendix
\ref{appnonmin}.
Of course, a messenger sector which forms an incomplete GUT
multiplet (and modifies gauge coupling unification
at one-loop under renormalization
group evolution) does not in general yield
``gaugino unification'' \cite{alon}.
With gauge-mediated supersymmetry breaking
the scalar and gaugino masses are related at the messenger scale.
For a messenger sector well below the GUT scale,
$\alpha_3 \gg \alpha_2 > \alpha_1$, so the most important
scalar-gaugino correlations
are between squarks and gluino, left handed sleptons and
$W$-ino, and right handed sleptons and $B$-ino.
\jfig{sfig4x}{fig13.ps}
{Ratios of $\overline{\rm DR}$ mass parameters
with MGM boundary conditions as a function
of the messenger scale:
$m_{\tilde{q}_R} / m_{\lR}$ (upper dashed line),
$m_{\lL} / m_{\lR}$ (lower dashed line),
$m_3 / m_{\tilde{q}_R}$ (upper solid line),
$m_2 / m_{\lL}$ (middle solid line), and
$m_1 / m_{\lR}$ (lower solid line).}
The ratios are of course proportional to $\not \! \! R$ which
determines the overall scale of the gaugino masses at
the messenger scale,
and are modified by renormalization group evolution
to the low scale.
Ratios of the $\overline{\rm DR}$ masses
$m_3 / m_{\tilde{q}_R}$,
$m_2 / m_{\lL}$, and
$m_1 / m_{\lR}$ in the minimal model
are shown in Fig. \ref{sfig4x} as a function
of the messenger scale.
As discussed in the next section, these ratios can be altered
with non-minimal messenger sectors.
In the minimal model, with $\not \! \! R=1$, a measurement
of any ratio $m_{\lambda_i}/ m $ gives a logarithmically
sensitive measurement of the messenger scale.
Because of the larger magnitude of the $U(1)_Y$
gauge $\beta$-function the ratio
$m_{\lR} / m_1$ is most sensitive to the messenger scale.
Notice also that $m_3 / m_{\tilde{q}}$ is larger
for a larger messenger scale, while
$m_{\lL} / m_2$ and
$m_{\lR} / m_1$ decrease with the messenger scale.
Because of this disparate sensitivity,
within the ansatz of minimal gauge-mediation,
both $\not \! \! R$ and $\ln M$ could be extracted from a precision
measurement of all three ratios.
For $\not \! \! R \leq 1$ the ratio of scalar mass
to associated gaugino mass is always $ \geq 1$
for any messenger scale.
Observation of a first or second generation scalar
lighter than the associated gaugino is therefore a
signal for $\not \! \! R >1$.
As discussed in section \ref{multiple},
$\not \! \! R >1$ is actually possible with larger messenger
sector representations.
In fact, as discussed in appendix \ref{appgeneral} in models
with a single spurion in the messenger sector,
$\not \! \! R$ is sensitive to the index of the messenger sector
matter.
Additional matter which transforms under the standard model
gauge group between the electroweak and messenger scales
would of course modify these relations slightly through
renormalization group evolution contributions.
Ratios of
scalar masses at the messenger scale are related by ratios
of gauge couplings squared.
These ratios are reflected in the low energy spectrum.
In particular, since $\alpha_3 \gg \alpha_1$ if the messenger
scale is well below the GUT scale, the ratio
$m_{\tilde{q}} / m_{\lR}$ is sizeable.
Ratios of the $\overline{\rm DR}$ masses
$m_{\tilde{q}_R} / m_{\lR}$ and
$m_{\lL} / m_{\lR}$ are shown in Fig. \ref{sfig4x} as a function
of the messenger scale.
For $\Lambda = M$, $m_{\tilde{q}_R} / m_{\lR} \simeq 6.3$.
Notice that $m_{\tilde{q}_R} / m_{\lR}$ is smaller for larger
messenger scales,
and is fairly sensitive to $\ln M$.
This is because
$\alpha_3$ decreases rapidly at larger scales,
while $\alpha_1$ increases.
This sensitivity allows an indirect measure of $\ln M$.
The ratio $m_{\lL} / m_{\lR}$ is also fairly sizeable
but not as sensitive to the messenger scale.
For $\Lambda = M$, $m_{\lL} / m_{\lR} \simeq 2.1$.
It is important to note that with
$SU(3)_C \times SU(2)_L \times U(1)_Y$
gauge-mediated supersymmetry
breaking, any parity and charge conjugate invariant
messenger sector
which forms a representation of any GUT group and
which has a single spurion
yields, at leading order, the same
scalar mass ratios as in the minimal model.
These mass ratios therefore represent a fairly generic
feature of minimal gauge-mediation.
The sizeable hierarchy which arises in gauge-mediated supersymmetry
breaking between
scalar masses of particles with different
gauge charges generally does not arise with universal
boundary conditions with a large overall scalar mass.
With gravity mediated supersymmetry breaking and universal
boundary conditions the largest hierarchy results
for the no-scale boundary condition $m_0=0$.
In this case the scalar masses are ``gaugino-dominated,''
being generated in proportion to the gaugino masses under
renormalization group evolution.
The scalar mass
ratios turn out to be just slightly smaller
than the maximum gauge-mediated ratios.
With no-scale boundary conditions at $M_{GUT}$,
$m_{\tilde{q}_R} / m_{\eR} \simeq 5.6$ and
$m_{\lL} / m_{\lR} \simeq 1.9$.
However, the scalars in this case
are just slightly lighter than the
associated gauginos, in contrast to the MGM with $\not \! \! R=1$,
in which they are heavier.
It is interesting to note,
however, that for $M \sim 1000$ TeV
and $N=2$ or
$\not \! \! R \simeq \sqrt{2}$, gauge mediation coincidentally
gives almost identical
mass ratios as high scale supersymmetry breaking with
the no-scale boundary condition at the GUT scale.
With gauge-mediation,
scalar masses at the messenger scale receive contributions
proportional to gauge couplings squared.
Splitting among squarks with different gauge charges
can therefore be related to
right and left handed slepton masses (cf. Eq. \ref{scalarmass}).
This can be quantified in the form of sum rules which involve
various linear combinations of
all the first generation scalar masses squared \cite{martin}.
The splitting due to $U(1)_Y$ interactions with the messenger
sector can be quantified by
${\rm Tr}(Ym^2)$, where ${\rm Tr}$ is over
first generation sleptons and squarks.
As discussed in section \ref{electroweaksection}
this quantity vanishes with gauge-mediated boundary conditions
as the result of anomaly cancelation.
It is therefore interesting to consider the low scale quantity
$$
M_Y^2 =
{1 \over 2} \left( m_{\tilde{u}_L}^2 + m_{\tilde{d}_L}^2 \right)
-2 m_{\tilde{u}_R}^2 + m_{\tilde{d}_R}^2
- {1 \over 2} \left( m_{\tilde{e}_L}^2 + m_{\tilde{\nu}_L}^2
\right)
+ m_{\tilde{e}_R}^2
$$
\begin{equation}
+ {10 \over 3} \sin^2 \theta_W \cos 2 \beta m_{Z^0}^2
\label{Ysumrule}
\end{equation}
where the sum of the $m^2$ terms is
${1 \over 2}{\rm Tr}(Ym^2)$ over the first generation, and
the ${\cal O}(m_{Z^0}^2)$ term is a correction for
classical $U(1)_Y$ $D$-terms.
The contribution of the gaugino masses to $M_Y^2$ under
renormalization group evolution cancels at one-loop.
So this quantity is independent of the gaugino spectrum.
In addition, the $\beta$-function for ${\rm Tr}(Ym^2)$ is homogeneous
\cite{alvarez}
and independent of the Yukawa couplings at one-loop, even
though the individual masses are affected.
So if
$M_Y^2=0$ at the messenger scale,
it is not generated above scalar thresholds.
It only receives very small contributions below the squark
thresholds of ${\cal O}((\alpha_1 / 4 \pi) m_{\tilde{q}}^2
\ln( m_{\tilde{q}} / m_{\tilde{l}} )) $.
The relation $M_Y^2 \simeq 0$ tests the assumption that
splittings within the squark and slepton spectrum
are related to $U(1)_Y$ quantum numbers.
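The cancelation in (\ref{Ysumrule}) can be verified on a toy spectrum in which soft masses are built purely from gauge quantum numbers, $m^2 = a_3 C_3 + a_2 C_2 + a_1 (Y/2)^2$ plus classical $D$-terms. The coefficients $a_i$ below are arbitrary illustrative values; the sum rule vanishes identically because ${\rm Tr}\, Y$ and ${\rm Tr}\, Y^3$ vanish over a generation and the ${\cal O}(m_{Z^0}^2)$ term cancels the $D$-term piece.

```python
import math

SW2, MZ = 0.231, 91.19

# (coefficient in (1/2)Tr(Y m^2), hypercharge Y, C3, C2, D-term factor);
# D-term factors follow the text's sign convention for right handed fields.
FIELDS = {
    "uL": (0.5, 1/3, 4/3, 3/4, 0.5 - (2/3) * SW2),
    "dL": (0.5, 1/3, 4/3, 3/4, -0.5 + (1/3) * SW2),
    "uR": (-2.0, -4/3, 4/3, 0.0, (2/3) * SW2),
    "dR": (1.0, 2/3, 4/3, 0.0, -(1/3) * SW2),
    "eL": (-0.5, -1.0, 0.0, 3/4, -0.5 + SW2),
    "nL": (-0.5, -1.0, 0.0, 3/4, 0.5),
    "eR": (1.0, 2.0, 0.0, 0.0, -SW2),
}

def m_y_squared(a3, a2, a1, tan_beta):
    """Evaluate M_Y^2 (Ysumrule) on the toy spectrum
    m^2 = a3*C3 + a2*C2 + a1*(Y/2)^2 + classical D-term."""
    cos2b = math.cos(2.0 * math.atan(tan_beta))
    d = MZ**2 * cos2b
    total = 0.0
    for coeff, Y, C3, C2, dfac in FIELDS.values():
        total += coeff * (a3 * C3 + a2 * C2 + a1 * (Y / 2.0)**2 + dfac * d)
    return total + (10.0 / 3.0) * SW2 * cos2b * MZ**2
```

The vanishing is independent of the $a_i$ and of $\tan \beta$, reflecting the anomaly-cancelation argument rather than any particular spectrum.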
The quantity $M_Y^2$ also vanishes in any model in which
soft scalar masses are universal within GUT multiplets.
This is because ${\rm Tr} Y=0$ over any GUT multiplet.
Within the ansatz of gauge-mediation, a violation of
$M_Y^2 \simeq 0$ can result from a number of sources.
First, the messengers might not transform under
$U(1)_Y$.
In this case the $B$-ino should also be very light.
Second, a large $U(1)_Y$ $D$-term can be generated
radiatively if the messenger sector is not parity and
charge conjugate invariant.
Finally, the squarks and/or sleptons might
transform under additional gauge interactions
which couple with the messenger sector
so that ${\rm Tr}(Ym^2)$ does not vanish over
any generation.
This implies the existence of additional
electroweak scale matter in order to cancel the
${\rm Tr}(Y \{T^a, T^b \})$ anomaly, where $T^a$ is a
generator of the extra gauge interactions.
Unfortunately, sum rules which involve near cancelation among squark
and slepton
masses squared, such as $M_Y^2 =0$, if in fact satisfied, are
often not particularly useful experimentally.
This is because the squark masses are split only at
${\cal O}(m_{\tilde{l}}^2 / m_{\tilde{q}}^2)$
by $SU(2)_L$ and $U(1)_Y$ interactions with the messenger
sector, and at ${\cal O}(m_{Z^0}^2 / m_{\tilde{q}}^2)$ from
classical $SU(2)_L$ and $U(1)_Y$ $D$-terms.
Testing such sum rules therefore requires,
in general, measurements of
squark masses at the sub-GeV level, as can be determined
from the masses given in Table 1.
It is more useful to
consider sum rules, such as the ones given below,
which isolate the dominant splitting arising from $SU(2)_L$ interactions,
and are only violated by $U(1)_Y$ interactions.
These violations
are typically smaller than the experimental resolution.
The sum rules may then be tested with somewhat less precise
determinations of squark masses.
The near degeneracy among squarks may be quantified by
the splitting between right handed squarks
\begin{equation}
\Delta_{\tilde{q}_R}^2 = m_{\tilde{u}_R}^2 - m_{\tilde{d}_R}^2 .
\label{sumruleright}
\end{equation}
Ignoring $U(1)_Y$ interactions,
this quantity is a renormalization group invariant.
It receives non-zero contributions at
${\cal O}(m_{\eR}^2/ m_{\tilde{q}}^2 )$
from $U(1)_Y$ interactions with the messenger sector and
renormalization group contributions from the $B$-ino mass,
and
${\cal O}(m_{Z^0}^2/ m_{\tilde{q}}^2 )$
from classical $U(1)_Y$ $D$-terms at the low scale.
Numerically
$\Delta_{\tilde{q}_R}^2 / ( m_{\tilde{u}_R}^2+m_{\tilde{d}_R}^2)
\simeq 0$
to better than 0.3\% with MGM boundary conditions.
The near degeneracy between right handed squarks
is a necessary condition if squarks receive
mass mainly from $SU(3)_C$ interactions.
The quantity $\Delta_{\tilde{q}_R}^2$
also vanishes to the same order with universal boundary
conditions, but need not even approximately
vanish in theories in which
the soft masses are only universal within GUT multiplets.
An experimentally more interesting measure
which quantifies the splitting between left and
right handed squarks is
\begin{equation}
M_{L-R}^2 =
m_{\tilde{u}_L}^2 + m_{\tilde{d}_L}^2 -
\left( m_{\tilde{u}_R}^2 + m_{\tilde{d}_R}^2 \right)
- \left( m_{\lL}^2 + m_{\tilde{\nu}_L}^2
\right)
\end{equation}
This quantity is also a renormalization group invariant ignoring
$U(1)_Y$ interactions.
It formally vanishes at the same order as (\ref{sumruleright}).
Numerically
$M_{L-R}^2 / ( m_{\tilde{u}_R}^2 + m_{\tilde{d}_R}^2)
\simeq 0$
to better than 1\% with
MGM boundary conditions.
Without the left handed slepton contribution,
$M_{L-R}^2 / ( m_{\tilde{u}_R}^2 + m_{\tilde{d}_R}^2)
\simeq 0$
can be violated by up to 10\%.
This relation tests the assumption that the splitting between
the left and right handed squarks is due mainly to
$SU(2)_L$ interactions within the messenger sector.
The splitting is therefore correlated with the left handed slepton
masses, which receive masses predominantly from the same source.
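On the same kind of toy gauge-mediated spectrum, $M_{L-R}^2$ cancels exactly in the $SU(3)_C$ and $SU(2)_L$ pieces and is violated only by the small $U(1)_Y$ and classical $D$-term contributions, at the percent level. All coefficients below are illustrative placeholders.

```python
import math

SW2, MZ = 0.231, 91.19

def toy_masses2(a3, a2, a1, tan_beta):
    """Toy first generation spectrum m^2 = a3*C3 + a2*C2 + a1*(Y/2)^2
    plus classical D-terms (text sign convention); a_i are illustrative."""
    d = MZ**2 * math.cos(2.0 * math.atan(tan_beta))
    return {
        "uL": a3 * 4/3 + a2 * 3/4 + a1 / 36 + (0.5 - (2/3) * SW2) * d,
        "dL": a3 * 4/3 + a2 * 3/4 + a1 / 36 + (-0.5 + (1/3) * SW2) * d,
        "uR": a3 * 4/3 + a1 * 4/9 + (2/3) * SW2 * d,
        "dR": a3 * 4/3 + a1 / 9 - (1/3) * SW2 * d,
        "eL": a2 * 3/4 + a1 / 4 + (-0.5 + SW2) * d,
        "nL": a2 * 3/4 + a1 / 4 + 0.5 * d,
    }

def m_lr_squared(m):
    """M_LR^2: left-right squark splitting corrected by left slepton masses."""
    return m["uL"] + m["dL"] - m["uR"] - m["dR"] - m["eL"] - m["nL"]

m = toy_masses2(6.0e5, 1.05e5, 2.0e4, 3.0)
ratio = m_lr_squared(m) / (m["uR"] + m["dR"])   # percent-level violation
```

Dropping the slepton terms from `m_lr_squared` would leave the uncanceled $SU(2)_L$ piece, which is why the sum rule is violated at the 10\% level without them.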
If the squarks and sleptons receive mass predominantly from
gauge interactions with the messenger sector,
the masses depend only on gauge quantum numbers, and
are independent of generation up to very small
${\cal O}(m_f^2 / M^2)$ corrections at the messenger scale, where
$m_f$ is the partner fermion mass.
However, third generation masses are modified by Yukawa
contributions under renormalization group evolution and
mixing.
Mixing effects can be eliminated by considering the quantity
${\rm Tr}(m_{LR}^2)$ where $m_{LR}^2$ is the left-right
scalar mass squared
matrix.
In addition, it is possible to choose linear combinations
of masses which are independent of Yukawa couplings
under renormalization group evolution at
one-loop,
$m_{\tilde{u}_L}^2 + m_{\tilde{u}_R}^2 - 3 m_{\tilde{d}_L}^2$,
and similarly for sleptons
\cite{ssconstraints}.
The quantities
\begin{equation}
M^2_{\tilde{t} - \tilde{q}} =
m_{\tilde{t}_1}^2 + m_{\tilde{t}_2}^2 - 2 m_t^2
- 3 m_{\tilde{b}_2}^2
- \left( m_{\tilde{u}_L}^2 + m_{\tilde{u}_R}^2
- 3 m_{\tilde{d}_L}^2 \right)
\label{thirdsquark}
\end{equation}
\begin{equation}
M^2_{\tilde{\tau} - \tilde{e}} =
m_{\tilde{\tau}_1}^2 + m_{\tilde{\tau}_2}^2
- 3 m_{\tilde{\nu}_{\tau}}^2
- \left( m_{\tilde{e}_L}^2 + m_{\tilde{e}_R}^2
- 3 m_{\tilde{\nu}_e}^2 \right)
\label{thirdslepton}
\end{equation}
only receive contributions at two loops under renormalization
and, in the case
of $M^2_{\tilde{t} - \tilde{q}}$, from $\tilde{b}$ mixing effects,
which are negligible unless $\tan \beta$ is very large.
The relations $M^2_{\tilde{t} - \tilde{q}} \simeq 0$
and $M^2_{\tilde{\tau} - \tilde{e}} \simeq 0$
test the assumption that scalars with different
gauge quantum numbers have a flavor independent mass
at the messenger scale.
They vanish in any theory of the
soft terms with flavor independent masses at the messenger scale,
but need not vanish in theories in which alignment of the
squark mass matrices with the quark masses
is responsible for the lack of supersymmetric
contributions to flavor changing neutral currents.
Within the ansatz of gauge-mediation, violations of these relations
would imply additional flavor dependent interactions with the
messenger sector.
If the quantities (\ref{thirdsquark}) and (\ref{thirdslepton})
vanish, implying the masses are generation independent
at the messenger scale, it is possible to extract the
Yukawa contribution to the renormalization group evolution.
The quantities
\begin{equation}
M_{h_t}^2 = m_{\tilde{t}_1}^2 + m_{\tilde{t}_2}^2 - 2 m_t^2
- \left( m_{\tilde{u}_L}^2 + m_{\tilde{u}_R}^2 \right)
\end{equation}
\begin{equation}
M_{h_{\tau}}^2 = m_{\tilde{\tau}_1}^2 + m_{\tilde{\tau}_2}^2
- \left( m_{\tilde{e}_L}^2 + m_{\tilde{e}_R}^2 \right)
\end{equation}
are independent of third generation mixing effects.
Under renormalization group evolution
$M_{h_t}^2$ receives an ${\cal O}((h_t / 4 \pi)^2 m_{\tilde{t}}^2
\ln(M/m_{\tilde{t}}) )$
negative contribution from the top Yukawa.
For moderate values of $\tan \beta$ this amounts to a
14\% deviation from
$M_{h_t}^2 / (m_{\tilde{t}_1}^2 + m_{\tilde{t}_2}^2)=0$
for $M = \Lambda$ and
grows to 29\% for $M= 10^5 \Lambda$.
Given an independent measure of $\tan \beta$ to fix the value
of $h_t$, this quantity gives an indirect probe of $\ln M$.
Unfortunately it requires a fairly precise measurement of the
squark and stop masses, but is complementary to the
$\ln M$ dependence of the mass ratios of scalars with different
gauge charges discussed above.
The quantity $M_{h_{\tau}}^2$ is only significant if $\tan \beta$
is very large.
If this is the case, the splitting between $\tilde{\nu}_{\tau}$
and $\tilde{\nu}_e$, $\Delta^2_{\tilde{\nu}_{\tau} - \tilde{\nu}_e} =
m^2_{\tilde{\nu}_{\tau}} - m^2_{\tilde{\nu}_{e}}$,
gives an independent check of the renormalization contribution
through the relation
\begin{equation}
M_{h_{\tau}}^2 = 3 \Delta^2_{\tilde{\nu}_{\tau} - \tilde{\nu}_e}
\end{equation}
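The factor of three can be traced schematically to the one-loop
renormalization group equations: the $\tau$ Yukawa renormalizes
the doublet masses $m^2_{\stau_L}$ and $m^2_{\tilde{\nu}_{\tau}}$
with unit coefficient and the singlet mass $m^2_{\stau_R}$ with
coefficient two,
\begin{equation}
\delta m^2_{\stau_L} \simeq \delta m^2_{\tilde{\nu}_{\tau}}
\simeq - X_{\tau}, ~~~~~
\delta m^2_{\stau_R} \simeq - 2 X_{\tau}
\end{equation}
where $X_{\tau} \propto (h_{\tau}/ 4 \pi)^2$ times a sum of soft
masses squared (only the relative coefficients matter here).
The trace $m^2_{\stau_1} + m^2_{\stau_2}$ therefore shifts by
$-3 X_{\tau}$ relative to the first generation, while
$\Delta^2_{\tilde{\nu}_{\tau} - \tilde{\nu}_e} \simeq - X_{\tau}$,
reproducing the factor of three.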
\section{Variations of the Minimal Model}
\label{varcon}
The results of the renormalization group analysis given above
are for the minimal model of gauge-mediated
supersymmetry breaking.
In this section we discuss how variations of the minimal
model affect the
form of the superpartner spectrum and the constraints
imposed by electroweak symmetry breaking.
\subsection{Approximate $U(1)_R$ Symmetry}
\label{appUR}
Soft scalar masses require supersymmetry breaking, while
gaugino masses require both supersymmetry and $U(1)_R$
breaking, as discussed in section \ref{subvariations}.
It is therefore possible that the scale for the gaugino
masses is somewhat different from that for the scalar masses,
as quantified by the parameter $\not \! \! R$.
An example of a messenger sector with an approximate $U(1)_R$
symmetry is given in appendix \ref{appnonmin}.
The gaugino masses affect the scalar masses only through
renormalization group evolution.
For $\not \! \! R < 1$ the small positive contribution to scalar masses
from gaugino masses is slightly reduced.
The scalar mass relations discussed in section
\ref{relations} are not affected by this renormalization and
so are not altered.
The main effect of $\not \! \! R < 1$ is simply to lower the overall
scale for the gauginos relative to the scalars.
This also does not affect the relation among gaugino masses.
\subsection{Multiple Messenger Generations}
\label{multiple}
The minimal model contains a single messenger generation of
${\bf 5} + \overline{\bf 5}$ of $SU(5)$.
This can be extended to any vector
representation of the standard model gauge group.
Such generalizations may be parameterized by the equivalent number of
${\bf 5} + \overline{\bf 5}$ messenger generations,
$N = C_3$, where $C_3$ is
defined in appendix \ref{appgeneral}.
For a ${\bf 10} + \overline{\bf 10}$ of $SU(5)$, $N=3$.
{}From the general expressions given in appendix \ref{appgeneral}
for gaugino and scalar masses, it is apparent that
gaugino masses grow like $N$ while scalar masses grow
like $\sqrt{N}$ \cite{signatures}.
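These scalings follow schematically from the general expressions:
with $N$ messenger generations the gauge-mediated masses take the
form
\begin{equation}
m_{\lambda_i} \sim N \, {\alpha_i \over 4 \pi} \, \Lambda, ~~~~~
m^2_{\tilde{f}} \sim N \sum_i C_i
\left( {\alpha_i \over 4 \pi} \right)^2 \Lambda^2
\end{equation}
where $C_i$ are the quadratic Casimirs of $\tilde{f}$, so that the
ratio of gaugino to scalar masses grows like $\sqrt{N}$.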
This corresponds roughly to the gaugino mass parameter
$\not \! \! R = \sqrt{N}$.
Messenger sectors with larger matter representations
therefore result
in gauginos which are heavier relative to the scalars than
in the minimal model.
\jfig{sfig4n}{fig14.eps}
{Renormalization group evolution of the
${\overline{\rm DR}}$ mass parameters with boundary conditions
of the two generation messenger sector.
The messenger scale is $M = 54$ TeV, $\Lambda=M$,
$m_{\bino}(M) = 163$ GeV, and $\tan \beta = 3$.}
The renormalization group evolution of the $\overline{\rm DR}$
parameters for $N=2$ with a messenger scale of
$M=54$ TeV, $\Lambda = M$,
$m_{\bino}(M) = 163$ GeV, and $\tan \beta =3$ is
shown in Fig. \ref{sfig4n}.
The renormalization group contribution to the scalar masses
proportional to the gaugino masses is slightly larger than
for $N=1$.
Notice that at the low scale the renormalized
right handed slepton masses are slightly smaller than the
$B$-ino mass.
The physical slepton masses, however, receive a positive
contribution from the classical $U(1)_Y$ $D$-term, while
the physical $\na$ mass receives a negative contribution from
mixing with the Higgsinos
for ${\rm sgn}(\mu)=+1$.
With $N=2$ and the messenger scale not too far above $\Lambda$,
the $\lR$ and $\na$ are therefore very close in mass.
For the parameters of Fig. \ref{sfig4n}
$m_{\na} = 138$ GeV and $m_{\lR} = 140$ GeV,
so that $\na$ remains the lightest standard model superpartner.
The $D$-term and Higgsino mixing contributions become smaller
for a larger overall scale.
For $M=60$ TeV, $\Lambda=M$, and $\tan \beta =3$, the
$\na$ and $\lR$ masses cross
at $m_{\na} = m_{\lR} \simeq 153$ GeV.
Since the $B$-ino mass decreases while the right handed slepton
masses increase under renormalization,
$\meR > m_{\na}$ for a messenger scale well above $\Lambda$.
The near degeneracy of $\lR$ and $\na$ is just a coincidence
of the numerical boundary conditions for $N=2$
and ${\rm sgn}(\mu)=+1$.
For messenger sectors with $N \geq 3$ a right handed slepton
is naturally the lightest standard model superpartner.
The heavier gauginos which result for $N \geq 2$ only
slightly modify electroweak symmetry breaking through
a larger positive renormalization group
contribution to the Higgs soft masses,
and finite corrections at the low scale.
The negative contribution to the $\stau_1$ mass
relative to $\eR$ and $\tilde{\mu}_R$ from
mixing and Yukawa contributions to renormalization
are therefore also only slightly modified.
For a given physical scalar mass at the low scale, the
ratio $m_{\stau_1} / m_{\eR}$ is very similar to the $N=1$
case.
For $N \geq 3$, and the regions of the $N=2$ parameter space
in which $m_{\lR} < m_{\na}$, the $\stau_1$
is the lightest standard model superpartner.
As discussed in section \ref{collidersection}, collider signatures for
these cases are much different than for the MGM with $N=1$
with $\na$ as the lightest standard model superpartner.
\subsection{Additional Soft Terms in the Higgs Sector}
\label{addhiggs}
The Higgs sector parameters $\mu$ and $m_{12}^2$ require
additional interactions
with the messenger sector beyond the standard model gauge
interactions.
In the minimal model the precise form of these interactions is
not specified, and $\mu$ and $m_{12}^2$ are taken
as free parameters.
The additional interactions which couple to the Higgs sector
are likely to contribute to the Higgs soft masses
$m_{H_u}^2$ and $m_{H_d}^2$, and split these from
the left handed sleptons, $m_{\lL}^2$.
Splittings of ${\cal O}(1)$ are not unreasonable since
the additional interactions must generate $\mu$ and $m_{12}^2$
of the same order.
The Higgs splitting may be parameterized by $\Delta_{\pm}^2$,
defined in Eq. (\ref{splithiggs}) of section \ref{subvariations}.
It is possible that other scalars also receive additional
contributions to soft masses.
The right handed sleptons
receive a gauge-mediated mass only from
$U(1)_Y$ coupling, and are therefore most
susceptible to a shift in mass from additional
interactions.
Right handed sleptons represent the potentially
most sensitive probe for such interactions \cite{tevatron}.
Note that additional messenger sector interactions do not modify
at lowest order
the relations among gaugino masses.
Since additional interactions {\it must} arise in the Higgs sector,
we focus in this section on
the effect of
additional contributions to the Higgs soft masses
on electroweak
symmetry breaking and the superpartner spectrum.
We also consider the possibility that $m_{12}^2$ is generated
entirely from renormalization group evolution \cite{frank}.
Additional contributions to Higgs sector masses can in principle
have large effects on electroweak symmetry breaking.
With the Higgs bosons split from the left handed sleptons
the minimization condition (\ref{mincona}) is modified
to
\begin{equation}
\label{newEWSB}
|\mu|^2+\frac{m_Z^2}{2} =
\frac{(m^2_{H_d,0}+\Sigma_d)-
(m^2_{H_u,0}+\Sigma_u)\tan^2\beta}{\tan^2\beta -1}
-\frac{\Delta^2_+}{2}+\frac{\Delta^2_-}{2}\left(
\frac{\tan^2\beta +1}{\tan^2\beta -1}\right)
\label{minsplit}
\end{equation}
where all quantities are evaluated at the minimization scale,
and
$m^2_{H_u,0}$ and $m^2_{H_d,0}$ are the gauge-mediated contributions
to the soft masses.
\jfig{sfig5n}{fig15.eps}
{The relation between the low scale $|\mu|$ parameter and
$\Delta_-\equiv {\rm sgn}(\Delta^2_-(M))(|\Delta^2_-(M)|)^{1/2}$
at the messenger scale
imposed by electroweak symmetry breaking for
$\Delta_+(M)=0$,
$m_{\tilde B}(M)=115, 180, 250$ GeV, $\tan\beta =3$,
and $\Lambda=M$.}
The relation between $\mu$ at the minimization scale and
$\Delta_-\equiv {\rm sgn}(\Delta^2_-(M) )\sqrt{|\Delta^2_-(M)|}$
at the messenger scale is shown in
Fig. \ref{sfig5n} for
$\Delta_+(M)=0$,
$m_{\tilde B}(M)=115, 180, 250$ GeV, $\tan\beta =3$,
and $\Lambda=M$.
For moderate values of $\Delta_-$, the additional Higgs splittings
contribute in quadrature
with the gauge-mediated contributions,
and only give ${\cal O}(\Delta^2_- / \mu^2)$ corrections
to the minimization condition (\ref{minsplit}).
This is the origin of the shallow plateau in Fig.
\ref{sfig5n} along which $\mu$ does not significantly vary.
The plateau extends over the range $|\Delta_-^2| \lsim |m_{H_u,0}^2|$
at the messenger scale.
For $\tan \beta \gg 1$ the minimization condition
(\ref{minsplit}) becomes
$|\mu|^2 \simeq -m_{H_u}^2 + {1 \over 2}
(\Delta_-^2 - \Delta_+^2 - m_{Z^0}^2)$.
For very large $(\Delta_-^2 - \Delta_+^2)$ this reduces to
$\sqrt{2}|\mu| \simeq (\Delta_-^2 - \Delta_+^2)^{1/2}$.
This linear correlation between $\mu$ and $\Delta_-$
for $\Delta_-$ large and $\Delta_+=0$ is apparent
in Fig. \ref{sfig5n}.
The non-linear behavior at small $\Delta_-$
arises from ${\cal O}(m_{Z^0}^2 / \mu^2)$ contributions to the
minimization condition (\ref{minsplit}).
The physical correlation between $\mu$ and $\Delta_{\pm}$ is
easily understood in terms of $m_{H_u}^2$ at the messenger scale.
For $\Delta_+=0$ and $\Delta_- > 0$, $m_{H_u}^2$
is more negative than in the minimal model,
leading to a deeper minimum
in the Higgs potential.
In fact, for the $m_{\bino}(M)=115$ GeV case shown in Fig.
\ref{sfig5n}, $m_{H_u}^2 <0$ already at the messenger scale
for $\Delta_- \gsim 260$ GeV.
Obtaining correct electroweak symmetry breaking for $\Delta_- > 0$
therefore requires
a larger value of $|\mu|$,
as can be seen in Fig. \ref{sfig5n}.
Conversely,
for $\Delta_+=0$ and $\Delta_- < 0$, $m_{H_u}^2$
is less
negative than in the minimal model, leading to a more shallow minimum
in the Higgs potential.
Obtaining correct electroweak symmetry breaking in this limit
therefore requires
a smaller value of $|\mu|$,
as can also be seen in Fig. \ref{sfig5n}.
Eventually,
for $\Delta_-$ very negative,
$m_{H_u}^2$ at the messenger scale is large enough that
the negative
renormalization group evolution from the top Yukawa is
insufficient to drive electroweak symmetry breaking.
In Fig. \ref{sfig5n} this corresponds to the region with
$|\mu|^2 < 0$, for which no electroweak symmetry breaking
solution exists.
With $\Delta_-=0$ and $\Delta_+ > 0$ both $m_{H_u}^2$ and $m_{H_d}^2$
are larger at the messenger scale than in the minimal model,
leading to a more shallow minimum in the Higgs potential.
This results in smaller values of $\mu$, and conversely
larger values of $\mu$ for $\Delta_+ < 0$.
Again, there is only a significant effect for
$|\Delta_+^2| \gsim |m_{H_u}^2|$.
The pseudo-scalar Higgs mass also depends on additional
contributions to the Higgs soft masses,
$m_{A^0}^2 = 2 |\mu|^2 + (m^2_{H_u,0} + \Sigma_u) +
(m^2_{H_d,0} + \Sigma_d) + \Delta_+^2$.
For large $\tan \beta$ the minimization condition (\ref{minsplit})
gives
$m_{A^0}^2 \simeq - (m^2_{H_u,0} + \Sigma_u) + (m^2_{H_d,0} + \Sigma_d)
+ \Delta_-^2$.
Again, for $|\Delta_-^2| \lsim |m^2_{H_u,0}|$ the pseudo-scalar
mass is only slightly affected, but can be altered significantly
for $\Delta_-$ very large in magnitude.
Notice that $m_{A^0}$ is independent of $\Delta_+$ in this limit.
This is because in the contribution
to $m_{A^0}^2$ the change in $|\mu|^2$ induced by $\Delta_+^2$
is cancelled by a
compensating change in $m_{H_d}^2$.
This approximate independence of $m_{A^0}$ on $\Delta_+$ persists
for moderate values of $\tan \beta$.
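The cancellation can be made explicit in the large $\tan \beta$
limit: substituting the minimization condition
$|\mu|^2 \simeq -(m^2_{H_u,0} + \Sigma_u)
+ {1 \over 2}(\Delta_-^2 - \Delta_+^2)$
(dropping $m_{Z^0}^2$) into the expression for $m_{A^0}^2$ gives
\begin{equation}
m_{A^0}^2 \simeq
- 2 (m^2_{H_u,0} + \Sigma_u) + (\Delta_-^2 - \Delta_+^2)
+ (m^2_{H_u,0} + \Sigma_u) + (m^2_{H_d,0} + \Sigma_d) + \Delta_+^2
\end{equation}
in which the $\Delta_+^2$ terms cancel, leaving the expression
quoted above.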
For example,
for the parameters of Fig. \ref{sfig5n} with
$m_{\bino}(M) = 115$ GeV and $\Delta_-=0$, $m_{A^0}$ only varies
between 485 GeV and 525 GeV for
$-500$ GeV $< \Delta_+ < $ $500$ GeV, while $\mu$ varies from 510 GeV to
230 GeV over the same range.
The additional contributions to the Higgs soft masses
can, if large enough, change the form of the superpartner
spectrum.
The charginos and neutralinos are affected mainly through
the value of $\mu$ implied by electroweak symmetry breaking.
For very large $|\mu|$, the approximately degenerate
singlet $\chi_3^0$ and triplet $(\chi_2^{+}, \chi_4^0, \chi_2^-)$
discussed in section \ref{electroweaksection} are mostly
Higgsino, and have mass $\mu$.
For $\mu \lsim m_2$ the charginos and neutralinos are
a general mixture of gaugino and Higgsino.
A value of $\mu$ in this range,
as evidenced by a sizeable Higgsino component of
$\chi_1^0$, $\chi_2^0$, or $\chi_1^{\pm}$, or
a light $\chi_3^0$ or $\chi_2^{\pm}$,
would be strong evidence for
deviations from the minimal model in the Higgs sector.
The heavy Higgs masses are determined by $m_{A^0}$.
Since $m_{A^0}^2$ is roughly independent of $\Delta_+^2$, while
$|\mu|$ is sensitive to $( \Delta_-^2 - \Delta_+^2)$,
the relative shift between
the Higgsinos and heavy Higgses
is sensitive to the individual
splittings of $m_{H_u}^2$ and $m_{H_d}^2$ from the left handed sleptons,
$m_{\lL}^2$.
Within the MGM,
given an independent measure of $\tan \beta$ (such as from
left handed slepton - sneutrino splitting,
$m_{\lL}^2 - m_{\tilde{\nu}_L}^2 = -m_{W^{\pm}}^2 \cos 2 \beta$)
the mass of the Higgsinos and heavy Higgses
therefore provides an indirect probe
for additional contributions to the Higgs soft masses.
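For reference, this splitting determines $\tan \beta$ through
\begin{equation}
\cos 2 \beta = { m_{\tilde{\nu}_L}^2 - m_{\lL}^2 \over
m_{W^{\pm}}^2 }, ~~~~~
\tan^2 \beta = { 1 - \cos 2 \beta \over 1 + \cos 2 \beta }
\end{equation}
so a measurement of the left handed slepton and sneutrino masses
directly fixes $\tan \beta$.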
\jfig{sfig6n}{fig16.eps}
{The ratio $m_{\tilde l_R}/m_{\chi^0_1}$ as a
function of $\Delta_-$ at the messenger scale
for $m_{\tilde B}(M)=115,180,250\gev$, $\tan\beta =3$,
${\rm sgn}(\mu)=+1$, and $\Lambda=M$.}
Non-minimal contributions to Higgs soft masses can
also affect the other scalar masses through renormalization
group evolution.
The largest effect comes from the radiative contribution to
the $U(1)_Y$ $D$-term, which is generated in
proportion to $S = {1 \over 2}{\rm Tr}(Ym^2)$.
In the minimal model the Higgs contribution to $S$
vanishes at the messenger scale because the Higgses
are degenerate and have opposite hypercharge.
For $\Delta_- > 0$ they are no longer degenerate and
give a negative contribution to $S$.
This increases the magnitude of the contribution in the minimal
model from running below the squark thresholds.
To illustrate the effect of the Higgs contribution to $S$ on
the scalar masses,
$m_{\tilde l_R}/m_{\chi^0_1}$ is shown
in Fig. \ref{sfig6n} as a
function of $\Delta_-$ at the messenger scale
for $m_{\tilde B}(M)=115,180,250\gev$, $\tan\beta =3$,
${\rm sgn}(\mu)=+1$, and $\Lambda=M$.
For $\Delta_-$ very large and
positive the radiatively generated
$U(1)_Y$ $D$-term contribution to right handed slepton masses
increases the ratio $m_{\tilde l_R}/m_{\chi^0_1}$.
For $\Delta_-$ very negative,
the rapid increase in $m_{\tilde l_R}/m_{\chi^0_1}$
occurs because $|\mu|$ is so small that $\chi_1^0$ becomes
mostly Higgsino with mass $\mu$.
All these modifications of the
form of the superpartner spectrum are significant
only if the Higgs bosons receive
additional contributions to the soft masses which are
roughly larger in magnitude than the gauge-mediated contribution.
The $\mu$ parameter is renormalized multiplicatively while
$m_{12}^2$ receives renormalization group contributions proportional
to $\mu m_{\lambda}$, where $m_{\lambda}$ is the $B$-ino
or $W$-ino mass.
As suggested in Ref. \cite{frank}, it is therefore interesting
to investigate the possibility that $m_{12}^2$ is generated
only radiatively below the messenger scale, with
the boundary condition $m_{12}^2(M)=0$.
Most models of the Higgs sector interactions actually suffer
from $m_{12}^2 \gg \mu^2$ \cite{dnmodels,dgphiggs}, but
$m_{12}^2(M)=0$ represents a potentially interesting, and
highly constrained subspace of the MGM.
\jfig{sfig8n}{fig17.eps}
{The relation between $m_{12}(M)$ and $\tan\beta$ imposed by electroweak
symmetry breaking for
$m_{\tilde B}(M) =115,180,250$ GeV, and $\Lambda=M$.}
In order to illustrate what constraints this boundary condition
implies, the relation between $m_{12}(M)$ and
$\tan \beta$ imposed by electroweak symmetry breaking is
shown in Fig. \ref{sfig8n} for
$m_{\tilde B}(M) =115,180,250$ GeV and $\Lambda=M$.
The non-linear feature at $m_{12}(M)\simeq 0$ is a square root
singularity since the $\beta$-function for $m_{12}^2$
is an implicit function of $\tan \beta$ only through the slow
dependence of $\mu$ on $\tan \beta$.
The value of $\tan \beta$ for which $m_{12}(M)=0$ is almost
entirely independent of the overall scale of the superpartners.
This is because to lowest order the minimization condition
(\ref{minconb}) fixes $m_{12}^2$ at the low scale
to be a homogeneous function of the overall superpartner scale
(up to $\ln(m_{\tilde{t}_1} m_{\tilde{t}_2} / m_t^2)$ finite
corrections)
$m_{12}^2 \simeq f(\alpha_i,\tan \beta) (\alpha / 4 \pi)^2 \Lambda^2$.
If $m_{12}$ vanishes at any scale, then the function $f$ vanishes
at that scale, thereby determining $\tan \beta$.
For $m_{12}(M)=0$ and $\Lambda=M$ we find $\tan \beta \simeq 46$.
With the boundary condition $m_{12}(M)=0$, the resulting
large value of $\tan \beta$ is natural.
This is because
$m_{12}^2(Q)$ at the minimization scale, $Q$, is small.
With the parameters given above $m_{12}(Q) \simeq -80$ GeV.
For $m_{12}(Q) \rightarrow 0$, $H_d$ does not participate
in electroweak symmetry breaking, and $\tan \beta \rightarrow \infty$.
As discussed in section \ref{electroweaksection},
at large $\tan \beta$,
$m_{\stau_1}$ receives a large negative contribution
from the $\tau$ Yukawa due to renormalization
group evolution and mixing.
For the values of $\tan \beta$ given above
we find $m_{\stau_1} \lsim m_{\na}$.
It is important to note that for such large values of
$\tan \beta$, physical quantities, such as
$m_{\stau_1} / m_{\na}$, depend sensitively on
the precise value of the
$b$ Yukawa through renormalization group and finite
contributions to the Higgs potential.
\subsection{$U(1)_Y$ $D$-term}
The $U(1)_Y$ $D$-term can be non-vanishing at the messenger scale,
as discussed in section 2.2.
This gives an additional contribution to the soft scalar masses
proportional to the $U(1)_Y$ coupling, as given in
Eq. (\ref{Dmass}).
This splits $m_{H_u}$ and $m_{H_d}$, and has the same effect on electroweak
symmetry breaking as $\Delta_-$ discussed in the previous subsection.
The right handed sleptons have the smallest gauge-mediated
contribution to soft masses, and are therefore most
susceptible to $D_Y(M)$.
The biggest effect
on the scalar spectrum
is therefore a modification of the splitting
between left and right handed sleptons.
This splitting can have an important impact on the relative
rates of
$p \bar{p} \rightarrow l^+ l^- \gamma \gamma + \not \! \! E_{T}$ and
$p \bar{p} \rightarrow l^{\pm} \gamma \gamma + \not \! \! E_{T}$ at
hadron colliders \cite{tevatron} as compared with the minimal
model discussed in section \ref{misssig}.
\section{Phenomenological Consequences}
Since the parameter space of the MGM is so constrained
it is interesting to investigate what phenomenological
consequences follow.
In the first subsection below
we discuss virtual effects, with
emphasis on the constraints within the MGM from $b \rightarrow s \gamma$.
In the second subsection we discuss the collider signatures
associated with the gauge-mediated supersymmetry breaking.
These can differ significantly from the standard MSSM with
$R$-parity conservation and
high scale supersymmetry breaking.
This is because, first, the lightest
standard model superpartner can decay within the detector
to its partner plus the Goldstino,
and second, the lightest standard model superpartner
can be either $\na$ or $\lR^{\pm}$.
\subsection{Virtual Effects}
Supersymmetric theories can be probed indirectly by virtual
effects on low energy, high precision, processes \cite{dpf}.
Among these are precision electroweak measurements,
electric dipole moments, and flavor changing neutral currents.
In the minimal model of gauge-mediation, supersymmetric
corrections to electroweak observables are unobservably
small since the charginos, left handed sleptons,
and squarks are too heavy.
Likewise, the effect on
$R_b = \Gamma(Z^0 \rightarrow b \bar{b})/ \Gamma(Z^0 \rightarrow {\rm had})$
is tiny since the Higgsinos and both stops are heavy.
Electric dipole moments can arise from the single $CP$-violating
phase in the soft terms, discussed in section \ref{minimalsection}.
The dominant contributions to the dipole moments of atoms
with paired or unpaired electrons, and the neutron,
come from one-loop chargino processes, just as
with high scale supersymmetry breaking.
The bounds on the phase are therefore comparable to those
in the standard MSSM,
${\rm Arg}(m_{\lambda} \mu (m_{12}^2)^*) \lsim 10^{-2}$
\cite{edm,edma}.
It is important to note that in some schemes for
generating the Higgs sector parameters $\mu$ and $m_{12}^2$,
the soft terms are $CP$ conserving \cite{dnmodels},
in which case electric dipole moments are unobservably small.
This is also true for the boundary condition $m_{12}^2(M)=0$
since
$(m_{\lambda} \mu (m_{12}^2)^*)$ vanishes in this case.
Contributions to flavor changing neutral currents can
come from two sources in supersymmetric theories.
The first is from flavor violation in the squark or
slepton sectors.
As discussed in section \ref{UVinsensitive} this source
for flavor violation is naturally small with gauge-mediated
supersymmetry breaking.
The second source is from second order electroweak virtual
processes which are sensitive to flavor violation in the
quark Yukawa couplings.
At present the most sensitive probe for contributions of this
type beyond those of the standard model is $b \rightarrow s \gamma$.
In a supersymmetric theory one-loop
$\chi^{\pm}-\tilde{t}$ and $H^{\pm}-t$ contributions
can compete with the standard model
$W^{\pm}-t$ one-loop effect.
The standard model effect is dominated by the transition
magnetic dipole operator which arises from the electromagnetic penguin,
and the tree level charged current operator, which contributes
under renormalization group evolution.
The dominant supersymmetric contributions
are through the transition dipole operator.
It is therefore convenient to parameterize the
supersymmetric contributions as
\begin{equation}
R_7 \equiv { C^{\rm MSSM}_7(m_{W^{\pm}}) \over C^{\rm SM}_7(m_{W^{\pm}}) } -1
\end{equation}
where $C_7(m_{W^{\pm}})$ is the coefficient of the dipole
operator at a renormalization scale $m_{W^{\pm}}$, and
$C^{\rm MSSM}_7(m_{W^{\pm}})$ contains the full MSSM contribution
(including the $W^{\pm}-t$ loop).
In the limit of
decoupling the supersymmetric states and heavy Higgs bosons
$R_7 =0$.
\jfig{sfig9n}{fig18.ps}
{The parameter
$R_7\equiv C^{\rm{MSSM}}_7(m_{W^{\pm}})/C^{\rm{SM}}_7(m_{W^{\pm}})-1$ as a function
of the lightest neutralino mass, $m_{\chi^0_1}$, for
$\tan\beta =2,3,20$, and $\Lambda=M$.
The solid lines are for $\mu >0$ and the dashed lines for $\mu <0$.
}
The parameter $R_7$ is shown in Fig. \ref{sfig9n} as a function
of the lightest neutralino mass, $m_{\chi^0_1}$, for
both signs of $\mu$,
$\tan\beta =2,3,20$, and $\Lambda=M$ \cite{bsgammaref}.
The $\chi_1^0$ mass is plotted in Fig. \ref{sfig9n}
as representative of the overall scale of the superpartner
masses.
The dominant contribution comes from the
$H^{\pm}-t$ loop which adds constructively to the
standard model $W^{\pm}-t$ loop.
The $\chi^{\pm}-\tilde{t}$ loop gives a
destructive contribution which is smaller in magnitude
because the stops are so heavy.
The ${\rm sgn}~\mu$ dependence of $R_7$ results from this small
destructive contribution mainly because the Higgsino
component of the lightest chargino is larger (smaller) for
${\rm sgn}~\mu = +(-)$ (cf. Eq. \ref{winoshift}).
The $\chi^{\pm}-\tilde{t}$ loop amounts to roughly a
$-$15(5)\% contribution compared with the $H^{\pm}-t$ loop for
${\rm sgn}~\mu=+(-)$.
The non-standard model contribution to $R_7$ decreases for small
$\tan \beta$ since $m_{H^{\pm}} \simeq m_{A^0}$ increases
in this region.
In order to relate $R_7$ to ${\rm Br}( b \rightarrow s \gamma)$
the dipole and tree level charged current operators must
be evolved down to the scale $m_b$.
Using the results of Ref. \cite{buras}, which include
the leading QCD contributions to the anomalous dimension
matrix, we find
\begin{equation}
\frac{{\rm Br}^{\rm{MSSM}}(b\rightarrow s\gamma)}{{\rm Br}^{\rm{SM}}
(b\rightarrow s\gamma)} \simeq |1+0.45 ~R_7(m_{W^{\pm}})|^2.
\end{equation}
for $m_t^{\rm pole} = 175$ GeV.
For this top mass
${\rm Br}^{\rm SM}(b \rightarrow s \gamma) \simeq (3.25 \pm 0.5)\times 10^{-4}$
where the uncertainties are estimates of the theoretical
uncertainty coming mainly from $\alpha_s(m_b)$ and
renormalization scale
dependence \cite{greub}.
Using the ``lower'' theoretical value and the 95\% CL
experimental
upper limit of ${\rm Br}(b \rightarrow s \gamma) < 4.2 \times 10^{-4}$
from the CLEO measurement \cite{CLEO}, we find
$R_7 < 0.5$.\footnote{
This is somewhat more conservative than the bound
of $R_7 < 0.2$ suggested in Ref. \cite{cho}.}
This bound assumes that the non-standard model effects
arise predominantly in the dipole operator, and are constructive
with the standard model contribution.
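As a numerical illustration of this bound, $R_7 = 0.5$ corresponds
to
\begin{equation}
{ {\rm Br}^{\rm MSSM}(b \rightarrow s \gamma) \over
{\rm Br}^{\rm SM}(b \rightarrow s \gamma) }
\simeq |1 + 0.45 \times 0.5|^2 \simeq 1.5
\end{equation}
so that with the ``lower'' theoretical value
${\rm Br}^{\rm SM} \simeq 2.75 \times 10^{-4}$ the predicted
branching ratio is $\simeq 4.1 \times 10^{-4}$, just at the CLEO
limit of $4.2 \times 10^{-4}$.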
In the MGM
for $\mu > 0$,
$\tan \beta =3$, and $\Lambda=M$, this bound corresponds to
$m_{\na} \gsim 45$ GeV, or a charged Higgs mass of
$m_{H^{\pm}} \gsim 300$ GeV.
The present experimental limit does not severely constrain
the parameter space of the MGM.
This follows from the fact that the charged Higgs is very
heavy over most of the allowed parameter space.
Except for very large $\tan \beta$
$m_{H^{\pm}} \gsim |\mu|$, and imposing electroweak symmetry
breaking implies
$ 3 m_{\na} \lsim |\mu| \lsim 6 m_{\na}$,
as discussed
in sections \ref{EWSB} and \ref{higgssection}.
For example, with the parameters of Table 1
$m_{H^{\pm}} \simeq 5.4 m_{\na}$.
Note that since the stops are never light in the minimal
model there is no region of the parameter space for which the
$\ca - \tilde{t}$ loop can cancel the $H^{\pm} -t$ loop.
Precise measurements of ${\rm Br}(b \rightarrow s \gamma)$ at future
$B$-factories, and improved calculations of the anomalous
dimension matrix and finite contributions at the scale $m_b$, will
improve the uncertainty in $R_7$ to $\pm 0.1$ \cite{postb}.
Within the MGM, even for $\tan \beta =2$ and $\mu > 0$,
a measurement of ${\rm Br}(b \rightarrow s \gamma)$ consistent with the standard
model would give a bound on
the charged Higgs mass of $m_{H^{\pm}} \gsim 1200 $ GeV,
or equivalently an indirect bound on the
$\ca$ mass of $m_{\ca} \gsim 350$ GeV.
Such an indirect bound on the chargino mass
is more stringent than the direct bound that
could be obtained at the main injector
upgrade at the Tevatron \cite{tevatron}, and significantly
better than the direct bound that will be available at LEP II.
\subsection{Collider Signatures}
\label{collidersection}
Direct searches for superpartner production at high energy
colliders represent the best probe for supersymmetry.
Most searches assume that $R$-parity is conserved and that
the lightest standard model
superpartner is a stable neutralino.
Pair production of supersymmetric states then takes place
through gauge or gaugino interactions, with cascade decays
to pairs of neutralinos.
The neutralinos escape the detector leading to the classic
signature of missing energy.
With gauge-mediated supersymmetry breaking the collider
signatures can be much different in some circumstances.
First, for a messenger scale well below the Planck scale,
the gravitino is naturally the lightest supersymmetric particle.
If the supersymmetry breaking scale is below a
few thousand TeV,
the lightest standard model superpartner can decay to its partner
plus the Goldstino component of the gravitino
inside the detector \cite{sbtalk,signatures}.
The Goldstino, and associated decay rates, are discussed in
appendix \ref{appgoldstino}.
Second, as discussed in sections \ref{electroweaksection}
and \ref{multiple} it is possible that the lightest standard
model superpartner is a slepton \cite{sbtalk,signatures}.
If the supersymmetry breaking scale is larger than a few thousand
TeV, the signature for supersymmetry is then a pair of heavy
charged particles plowing through the detector,
rather than missing energy.
The form of the superpartner spectrum has an important impact
on what discovery modes are available at a collider.
With gauge-mediation, all the strongly interacting states,
including the stops, are generally too heavy to be relevant to discovery
in the near future.
In addition, the constraints of electroweak symmetry breaking
imply that the heavy Higgs bosons and mostly Higgsino
singlet $\chi_3^0$ and triplet $(\chi_2^+, \chi_4^0, \chi_2^-)$
are also too heavy.
The mostly $B$-ino $\chi_1^0$, mostly $W$-ino triplet
$(\chi_1^+, \chi_2^0, \chi_1^-)$, right handed sleptons
$\lR^{\pm}$, and lightest Higgs boson, $h^0$, are the accessible
light states.
In this section we discuss the collider signatures of
gauge-mediated supersymmetry breaking associated with the
electroweak supersymmetric states.
In the next two subsections the signatures associated
with either a neutralino or slepton as the lightest standard
model superpartner are presented.
\subsubsection{Missing Energy Signatures}
\label{misssig}
The minimal model has a conserved
$R$-parity by assumption.
At moderate $\tan \beta$, $\chi_1^0$ is the lightest
standard model superpartner.
If decay to the Goldstino takes place well outside the detector
the classic signature of missing energy results.
However,
the form of the low lying spectrum largely dictates
the modes which can be observed.
The lightest charged states are the right handed sleptons,
$\lR^{\pm}$.
At an $e^+e^-$ collider the most relevant mode
is then $e^+ e^- \rightarrow \lR^+ \lR^-$
with $\lR^{\pm} \rightarrow l^{\pm} \na$.
For small $\tan \beta$ all the sleptons are essentially
degenerate, so the rates
to each lepton flavor should be nearly identical.
For large $\tan \beta$ the $\stau_1$ can become
measurably lighter than $\tilde{e}_R$ and
$\tilde{\mu}_R$ (cf. Fig. \ref{sfig17n}).
If sleptons receive masses at the messenger scale
only from standard model gauge interactions, the only
source of splitting of $\stau_1$ from $\eR$ and $\tilde{\mu}_R$
is the $\tau$ Yukawa coupling, entering through renormalization
group evolution and left--right mixing.
As discussed in section \ref{electroweaksection} the largest
effect is from $\stau_L - \stau_R$
mixing proportional to $\tan \beta$.
A precision measurement of $m_{\stau_1}$ therefore
provides an indirect probe of
whether
$\tan \beta$ is large or not.
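For orientation, recall the standard MSSM form of the $\stau$
mass-squared matrix (quoted here only for illustration; it is not
derived in this section, and in gauge-mediated models the $A_\tau$
contribution to the mixing is small):
\begin{equation}
{\cal M}^2_{\stau} = \left( \begin{array}{cc}
m^2_{\stau_L} + m_{\tau}^2 + \Delta_L &
m_{\tau} \left( A_{\tau} - \mu \tan \beta \right) \\
m_{\tau} \left( A_{\tau} - \mu \tan \beta \right) &
m^2_{\stau_R} + m_{\tau}^2 + \Delta_R
\end{array} \right) ~ ,
\end{equation}
where $\Delta_{L,R}$ denote the electroweak $D$-term contributions.
The off-diagonal entry grows linearly with $\tan \beta$, which is
why a precision measurement of $m_{\stau_1}$ probes whether
$\tan \beta$ is large.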
\jfig{sfig2x}{fig19.ps}
{Production cross sections (fb) for $p \bar{p}$ initial state to the
final states $\chi_1^{\pm} \chi_2^0$ (upper solid line),
$\chi_1^+ \chi_1^-$ (lower solid line),
$\lR^+ \lR^-$ (dot-dashed line),
$\nL \lL^{\pm}$ (upper dashed line), and
$\lL^+ \lL^-$ (lower dashed line).
Lepton flavors are not summed.
The center of mass energy is 2 TeV, ${\rm sgn}(\mu)=+1$,
and $\Lambda=M$.}
At a hadron collider both the mass and gauge quantum numbers determine
the production rate for supersymmetric states.
The production cross sections for electroweak states in
$p \bar{p}$ collisions at $\sqrt{s} = 2$ TeV
(appropriate for the main injector upgrade at the Tevatron)
are shown in Fig. \ref{sfig2x} as a function of
$m_{\na}$ for MGM boundary conditions with $\Lambda=M$
and ${\rm sgn}(\mu)=+1$.
The largest cross section is for pairs of the mostly $W$-ino
$SU(2)_L$ triplet
$(\chi_1^+, \chi_2^0, \chi_1^-)$ through off-shell $W^{\pm *}$ and
$Z^{0*}$.
Pair production of $\lR^+ \lR^-$ is relatively
suppressed even though $m_{\lR} < m_{\chi_1^{\pm}}$ because
scalar production suffers a $\beta^3$ suppression near threshold,
and the right handed sleptons couple only through $U(1)_Y$
interactions via off-shell $\gamma^*$ and $Z^{0*}$.
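As a numerical aside (not from the paper), the $\beta^3$ versus
$\beta$ threshold behavior can be made concrete. The mass and
energy below are hypothetical, and the overall couplings are
omitted; only the kinematic threshold factors are compared:

```python
import math

def beta(s, m):
    """Velocity of a pair-produced particle of mass m at partonic energy sqrt(s)."""
    return math.sqrt(max(0.0, 1.0 - 4.0 * m * m / s))

def scalar_factor(s, m):
    # Scalar pair production opens as beta^3 near threshold (p-wave).
    return beta(s, m) ** 3

def fermion_factor(s, m):
    # Fermion (gaugino) pair production through gauge boson exchange
    # turns on like beta near threshold.
    return beta(s, m)

# Hypothetical common mass of 100 GeV, slightly above threshold:
m = 100.0
s = (2.2 * m) ** 2  # sqrt(s) = 220 GeV

# Near threshold the scalar rate is suppressed by an extra beta^2:
print(scalar_factor(s, m) / fermion_factor(s, m))
```

Near threshold the scalar rate is suppressed by $\beta^2$ relative
to the fermion rate, consistent with the relative suppression of
$\lR^+ \lR^-$ production noted above.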
However,
as the overall scale of the superpartner masses is increased
$\lR^+ \lR^-$ production becomes relatively more important
as can be seen in Fig. \ref{sfig2x}.
This is because production of the more massive $\ca \na$ and
$\chi_1^+ \chi_1^-$ is reduced by the rapidly falling
parton distribution functions.
Pair production of $\lL^+ \lL^-$, $\lL^{\pm} \nL$, and
$\nL \nL$ through off-shell $\gamma^*$, $Z^{0*}$, and
$W^{\pm*}$ is suppressed relative to $\lR^+ \lR^-$
by the larger left handed slepton
masses.
The renormalization group and classical $U(1)_Y$ $D$-term
contributions which slightly increase $m_{\lR}$,
and the renormalization group contribution which decreases
$m_{\na}$, have an impact on the
relative importance of $\lR^+ \lR^-$ production.
These effects, along with the radiatively generated $U(1)_Y$ $D$-term,
``improve'' the kinematics of the leptons arising
from $\lR^{\pm} \rightarrow l^{\pm} \na$
since $m_{\lR} - m_{\na}$ is increased \cite{frank}.
However, the overall rate is simultaneously reduced
to a fairly insignificant level \cite{tevatron}.
For example, with ${\rm sgn}(\mu)=+1$
an overall scale which would give an average of one
$\tilde{l}_R^+ \tilde{l}_R^-$ event
in 100 pb$^{-1}$ of integrated luminosity, would result
in over 80
chargino events.
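The event counting here is simple arithmetic,
$N = \sigma \times \int {\cal L} \, dt$. In the sketch below the
cross sections are hypothetical placeholders chosen only to match
the quoted 1-to-80 ratio, not values read from Fig. \ref{sfig2x}:

```python
def expected_events(sigma_fb, lumi_pb):
    """N = sigma * integrated luminosity (1 pb^-1 = 0.001 fb^-1)."""
    return sigma_fb * lumi_pb / 1000.0

lumi = 100.0            # pb^-1 of integrated luminosity
sigma_slepton = 10.0    # fb, hypothetical: one slepton-pair event expected
sigma_chargino = 800.0  # fb, hypothetical: the quoted factor ~80 larger

print(expected_events(sigma_slepton, lumi))   # 1.0
print(expected_events(sigma_chargino, lumi))  # 80.0
```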
As discussed in section \ref{electroweaksection}, the shift
in the triplet $(\chi_1^+, \chi_2^0, \chi_1^-)$ mass
from mixing with the Higgsinos is anti-correlated with
${\rm sgn}(\mu)$.
For ${\rm sgn}(\mu)=-1$ the splitting between the right handed
sleptons and the triplet is larger, thereby slightly reducing
chargino production.
For example, with ${\rm sgn}(\mu)=-1$, a
single $\tilde{l}_R^+ \tilde{l}_R^-$ event
in 100 pb$^{-1}$ of integrated luminosity, would result
in 30 chargino events.
The relative rate of the $\lR^+ \lR^-$ initial state
is increased in the minimal model for $\not \! \! R > 1$.
However, as discussed in Ref. \cite{tevatron},
obtaining a rate comparable to $\chi_1^{\pm} \chi_2^0$
results in ``poor'' kinematics, in that the leptons
arising from $\lR^{\pm} \rightarrow l^{\pm} \na$ are fairly
soft since
$m_{\lR} - m_{\na}$ is reduced.
Note that for $\not \! \! R < 1$ chargino production becomes even more
important than $\lR^+ \lR^-$ production.
In the minimal model pair production of
$\ca \nb$ and $\chi_1^+ \chi_1^-$ are the most important modes at a
hadron collider.
The cascade decays of $\ca$ and $\nb$
are largely fixed by the form of the
superpartner spectrum and couplings.
If open, $\ca$ decays predominantly through its
Higgsino components to the Higgsino components of $\na$ by
$\ca \rightarrow \na W^{\pm}$.
Likewise, $\nb$ can also decay by $\nb \rightarrow \na Z^0$.
However, if open, $\nb \rightarrow h^0 \na$ is suppressed by
only a single Higgsino component in
either $\nb$ or $\na$, and represents the dominant
decay mode for $m_{\nb} \gsim m_{h^0} + m_{\na}$.
The decay $\nb \rightarrow \lR^{\pm} l^{\mp}$ is suppressed by
the very small $B$-ino component of $\nb$, and is only important
if the other two-body modes given above are closed.
If the two body decay modes for $\ca$ are closed, it decays
through three-body final states predominantly
through off-shell $W^{\pm*}$.
Over much of the parameter space the minimal model therefore
gives rise to the signatures
$p \bar{p} \rightarrow W^{\pm} Z^0 + \not \! \! E_{T}$,
$W^{\pm} h^0 + \not \! \! E_{T}$, and
$W^+ W^- + \not \! \! E_{T}$.
If decay to the Goldstino takes place well outside the
detector, the minimal model yields
the ``standard'' chargino signatures at a hadron collider
\cite{charginoref}.
If the intrinsic supersymmetry breaking scale is below a few
thousand TeV, the lightest standard model superpartner can
decay to its partner plus the Goldstino within the
detector \cite{sbtalk,signatures}.
For the case of $\na$ as the lightest standard model superpartner,
this degrades somewhat the missing energy, but leads to
additional visible energy.
The neutralino $\na$ decays by
$\na \rightarrow \gamma + G$ and if kinematically accessible
$\na \rightarrow (Z^0, h^0, H^0, A^0) + G$.
In the minimal model $m_{A^0}, m_{H^0} > m_{\na}$ so the only two
body final states potentially open are
$\na \rightarrow (\gamma, Z^0, h^0) + G$.
However, as discussed in section
\ref{electroweaksection}, with MGM boundary conditions,
electroweak symmetry breaking implies
that $\na$ is mostly $B$-ino,
and therefore decays predominantly to the
gauge boson final states.
The decay $\na \rightarrow h^0 + G$
takes place only through the small Higgsino components.
In appendix \ref{appgoldstino}
the decay rate to the $h^0$ final state is
shown to be suppressed by
${\cal O}(m_{Z^0}^2 m_{\na}^2 / \mu^4)$
compared with the
gauge boson final states, and is therefore
insignificant in the minimal model.
Observation of the decay $\na \rightarrow h^0 + G$ would
imply non-negligible Higgsino components in $\na$,
and be a clear signal for deviations from the minimal
model in the Higgs sector.
\jfig{sfig1x}{fig20.ps}
{The branching ratios for $\na \rightarrow \gamma + G$ (solid line)
and $\na \rightarrow Z^0 + G$ (dashed line) as a function of
$m_{\na}$ for $\Lambda=M$.}
For example, as discussed in section \ref{addhiggs},
$\Delta_-$ large and negative leads to a mostly
Higgsino $\na$, which decays predominantly by
$\na \rightarrow h^0 + G$.
The branching ratios in the minimal model
for $\na \rightarrow \gamma +G$ and
$\na \rightarrow Z^0 + G$ are shown in Fig.
\ref{sfig1x} as a function of $m_{\na}$ for $\Lambda=M$.
In the minimal model, with $\na$ decaying within the detector,
the signatures are the same as those given above, but
with an additional $\gamma \gamma$,
$\gamma Z^0$, or $Z^0 Z^0$ pair.
At an $e^+ e^-$ collider $e^+ e^- \rightarrow \na \na \rightarrow
\gamma \gamma + \not \! \! E$ becomes the discovery mode
\cite{sbtalk, signatures, stump}.
At a hadron collider the reduction in $\not \! \! E_{T}$
from the secondary decay is
more than compensated by the additional
very distinctive visible energy.
The presence of hard photons
significantly reduces the background compared
with standard supersymmetric signals
\cite{sbtalk, signatures, tevatron, akkm, SUSY96}.
In addition, decay of $\na \rightarrow \gamma + G$
over a macroscopic distance leads to displaced
photon tracks, and of
$\na \rightarrow Z^0 + G$ to displaced charged particle tracks.
Measurement of the displaced vertex distribution
gives a measure of the supersymmetry breaking scale.
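For a rough sense of scale one can use the decay-length estimate
commonly quoted in the literature for a mostly $B$-ino neutralino,
$c\tau \simeq 130 \, (100~{\rm GeV}/m_{\na})^5
(\sqrt{F}/100~{\rm TeV})^4~\mu{\rm m}$. Both the formula and the
numerical prefactor are assumptions taken from the standard
literature, not from this paper:

```python
def ctau_microns(m_gev, sqrtF_tev):
    """Approximate decay length c*tau (in microns) for neutralino -> photon + Goldstino.

    Standard literature estimate for a mostly B-ino neutralino; the
    130-micron prefactor is approximate and assumed here.
    """
    return 130.0 * (100.0 / m_gev) ** 5 * (sqrtF_tev / 100.0) ** 4

# A 100 GeV neutralino with sqrt(F) = 100 TeV decays after ~130 microns,
# a measurable displaced vertex; at sqrt(F) = 1000 TeV the decay length
# grows to the meter scale, approaching the size of a detector.
print(ctau_microns(100.0, 100.0))
print(ctau_microns(100.0, 1000.0))
```

The steep $(\sqrt{F})^4$ dependence is what makes the displaced
vertex distribution a sensitive probe of the supersymmetry
breaking scale.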
\jfig{sfig3x}{fig21.ps}
{The ratio $\sigma( p \bar{p} \rightarrow \stau_1^+ \stau_1^-) /
\sigma( p \bar{p} \rightarrow \eR^+ \eR^-)$ as a function of
$\tan \beta$ for $m_{\bino}(M)=115$ GeV and $\Lambda=M$.
The center of mass energy is 2 TeV.}
In the minimal model, for large $\tan \beta$ the
$\stau_1$ can become significantly lighter than
$\eR$ and $\tilde{\mu}_R$.
This enhances the $\stau_1^+ \stau_1^-$ production cross
section at a hadron collider.
The ratio $\sigma( p \bar{p} \rightarrow \stau_1^+ \stau_1^-) /
\sigma( p \bar{p} \rightarrow \eR^+ \eR^-)$ for $\sqrt{s}=2$ TeV
is shown in Fig. \ref{sfig3x} as a function of
$\tan \beta$ for $m_{\bino}(M)=115$ GeV and $\Lambda=M$.
Measurement of this ratio gives a measure of the
$\tilde{\tau}_1$ mass.
Within the minimal model this allows an indirect probe
of $\tan \beta$.
\subsubsection{Heavy Charged Particle Signatures}
In the minimal model
the $\stau_1$ becomes lighter than $\na$
for $\tan \beta$ large enough,
as discussed in section \ref{electroweaksection}.
The $\stau_1$ is then the lightest standard model superpartner.
This is not a cosmological problem since the $\stau_1$
can decay to the Goldstino component of the gravitino,
$\stau_1 \rightarrow \tau + G$,
on a cosmologically short time scale.
However, if the supersymmetry breaking
scale is larger than a few thousand TeV,
at a collider this decay
takes place well outside the detector.
The signature for supersymmetry in this case is
heavy charged particles passing through the detector, rather than
missing energy.
At an $e^+e^-$ collider the most relevant mode is then
$e^+e^- \rightarrow \stau_1^+ \stau_1^-$.
At a hadron collider the signatures are very different
since $\na$ decays by $\na \rightarrow \stau_1^{\pm} \tau^{\mp}$.
Over much of the parameter space
the dominant chargino production then gives rise to
the signatures
$p \bar{p} \rightarrow W^{\pm} Z^0 \tau^+ \tau^- \stau_1^+ \stau_1^-$,
$W^{\pm} h^0 \tau^+ \tau^- \stau_1^+ \stau_1^-$, and
$W^+ W^- \tau^+ \tau^- \stau_1^+ \stau_1^-$.
The additional cascade decays
$\nb \rightarrow \stau_1^{\pm} \tau^{\mp}$ and
$\chi_1^{\pm} \rightarrow \stau_1^{\pm} \nu_{\tau}$
are also available through the $\stau_L$ component of
$\stau_1$.
Chargino production can therefore also give the signatures
$p \bar{p} \rightarrow W^{\pm} \tau^{\pm} \tau^{\pm}
\stau_1^{\mp} \stau_1^{\mp}$,
$Z^0 \tau^{\pm} \stau_1^{\pm} \stau_1^{\mp} + \not \! \! E_{T}$,
$h^0 \tau^{\pm} \stau_1^{\pm} \stau_1^{\mp} + \not \! \! E_{T}$,
$\tau^{\pm} \stau_1^{\mp} \stau_1^{\mp} + \not \! \! E_{T}$,
$W^{\pm} \tau^{\pm} \stau_1^{\mp} \stau_1^{\pm} + \not \! \! E_{T}$,
and
$\stau_1^+ \stau_1^- + \not \! \! E_{T}$.
Finally, direct pair production gives the
signature
$p \bar{p} \rightarrow \stau_1^+ \stau_1^-$,
while $\tilde{l}_{R}^+ \tilde{l}_{R}^-$
production gives
$p \bar{p} \rightarrow l^+ l^{-} \tau^+ \tau^- \stau_1^+ \stau_1^-$
for $l = e, \mu$.
If the supersymmetry breaking scale is well below a few
thousand TeV, the $\stau_1$ decays within the detector
to a Goldstino by $\stau_1 \rightarrow \tau + G$.
The signature of heavy charged particle production is
then lost, but missing energy results since the Goldstinos
escape the detector.
All the $\stau_1^{\pm}$ in the signatures given above are then
replaced by $\tau^{\pm} + \not \! \! E_{T}$.
The signature of heavy charged particles can also result
with multiple generations in the messenger sector.
As discussed in section \ref{multiple}, messenger
sectors with larger matter representations result
in gauginos which are heavier relative to the scalars
than in the minimal model.
For $N \geq 3$, and over much of the parameter
space of the $N=2$ model, a right handed slepton
is the lightest standard model superpartner.
Because of the larger Yukawa coupling, the $\stau_1$
is always lighter than $\tilde{e}_R$ and $\tilde{\mu}_R$.
However, for small to moderate $\tan \beta$
$m_{\tilde{\mu}_R} - m_{\stau_1} < m_{\tau} + m_{\mu}$,
and the decay
$\tilde{\mu}_R^{\pm} \rightarrow \stau_1^+ \tau^- \mu^{\pm}$
through the $B$-ino component of off-shell $\chi_1^{0*}$
is kinematically blocked, and
likewise for $\tilde{e}_R$ \cite{SUSY96}.
In addition, the second order electroweak decay
$\tilde{\mu}_R^{+} \rightarrow \stau_1^{+} {\nu}_{\tau} \bar{\nu}_{\mu}$
is highly suppressed and not relevant for decay within the
detector.
In this case all three sleptons $\tilde{e}_R$,
$\tilde{\mu}_R$, and $\stau_1$, are effectively
stable on the scale of the detector
for a supersymmetry breaking scale larger than a few thousand TeV.
At an $e^+e^-$ collider the most relevant signature
becomes $e^+e^- \rightarrow \lR^+ \lR^-$ with the sleptons
leaving a greater than minimum ionizing track in the detector.
At a hadron collider $\chi_1^{\pm} \chi_2^0$ and
$\chi_1^+ \chi_1^-$ production gives the signatures
$p \bar{p} \rightarrow W^{\pm} Z^0 l^+ l^{\prime -} \tilde{l}_R^-
\tilde{l}_R^{\prime +}$,
$W^{\pm} h^0 l^+ l^{\prime -} \tilde{l}_R^-
\tilde{l}_R^{\prime +}$, and
$W^+ W^- l^+ l^{\prime -} \tilde{l}_R^- \tilde{l}_R^{\prime +}$,
while
direct slepton pair production gives
$p \bar{p} \rightarrow \tilde{l}_R^+ \tilde{l}_R^-$.
If $\tan \beta$ is large then
$m_{\tilde{\mu}_R} - m_{\stau_1} > m_{\tau} + m_{\mu}$,
so that the decay
$\tilde{\mu}_R^{+} \rightarrow \stau_1^{\pm} \tau^{\mp} \mu^{+}$
can take place within the detector,
and likewise for $\eR$.
All the cascades then end with $\stau_1^{\pm}$.
The additional $\tau^{\pm} l^+$, $\tau^{\pm} l^-$ which result
from $\tilde{l}_R^{\pm}$ decay are very soft unless the
splitting $m_{\lR} - m_{\stau_1}$ is sizeable.
If the supersymmetry breaking scale is below a few thousand TeV,
the sleptons can decay to the Goldstino by
$\tilde{l}_R \rightarrow l + G$ within the detector.
A missing energy signature then results from the escaping
Goldstinos, and all the $\tilde{l}_R^{\pm}$ in the above signatures
are replaced by $l^{\pm} + \not \! \! E_{T}$.
If the decay $\lR \rightarrow l + G$ takes place over a macroscopic
distance the spectacular signature of a greater than
minimum ionizing track with a kink to a minimum
ionizing track results \cite{signatures,SUSY96}.
Again, measurement of the decay length distribution would
give a measure of the supersymmetry breaking scale.
All these interesting heavy charged particle signatures should
not be overlooked in the search for supersymmetry at
future colliders.
\section{Conclusions}
Gauge-mediated supersymmetry breaking has many consequences
for the superpartner mass spectrum, and phenomenological
signatures.
In a large class of gauge-mediated models
(including all the single spurion models given in this paper)
the general features include:
\begin{itemize}
\item The natural absence of flavor changing neutral currents.
\item A large hierarchy among scalars with different gauge
charges, {$m_{\tilde q_R}/m_{\tilde{l}_R}\lsim 6.3$}, and
{$m_{\tilde l_L}/m_{\tilde{l}_R}\lsim 2.1$}, with the inequalities
saturated for a messenger scale of order the supersymmetry
breaking scale.
\item Mass splittings between scalars with different gauge
quantum numbers are related by various sum rules.
\item ``Gaugino unification'' mass relations.
\item{Precise degeneracy among the first two generation scalars,
and sum rules
for the third generation that test the flavor symmetry of
masses at the messenger scale.}
\item Radiative electroweak symmetry breaking induced by
heavy stops, even for a low messenger scale.
\item Small $A$-terms.
\item The lightest standard model superpartner is either
$\na$ or $\lR^{\pm}$.
\item
The possibility of
the lightest standard model superpartner
decaying within the detector to its partner plus the Goldstino.
\end{itemize}
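The ``gaugino unification'' relation in the list above can be
illustrated numerically: with gauge mediation each gaugino mass is
proportional to its gauge coupling, $m_i \sim (\alpha_i / 4\pi)
\Lambda$, so the mass ratios track the couplings. In the sketch
below the weak-scale coupling values and the overall scale
$\Lambda$ are rough numbers assumed only for illustration:

```python
import math

# Rough weak-scale coupling values (assumed, not from this paper):
alpha_em = 1.0 / 128.0
sin2w = 0.231

alpha1 = (5.0 / 3.0) * alpha_em / (1.0 - sin2w)  # GUT-normalized U(1)_Y
alpha2 = alpha_em / sin2w                         # SU(2)_L
alpha3 = 0.118                                    # SU(3)_C

Lambda = 100e3  # GeV, hypothetical overall scale

# One-loop gauge-mediated gaugino masses, m_i ~ (alpha_i / 4 pi) * Lambda:
m1, m2, m3 = (a / (4.0 * math.pi) * Lambda for a in (alpha1, alpha2, alpha3))

print(round(m2 / m1, 2), round(m3 / m1, 2))  # roughly 2 and 7
```

The same ratios underlie the scalar mass hierarchy quoted in the
list, since scalar masses are also set by the gauge couplings.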
The mass relations and sum rules hold in a very large class of
gauge-mediated models and represent fairly generic features.
The possibility that the lightest standard model superpartner is
a charged slepton leads to the dramatic signature of
heavy charged particles leaving a greater than minimum ionizing
track in the detector.
This signature should not be overlooked in searches for supersymmetry
at future colliders.
The possibility that the lightest standard model
superpartner decays within the detector, either
$\na \rightarrow (\gamma, Z^0, h^0) + G$ or $\lR \rightarrow l + G$,
leads to very distinctive signatures, and provides
the possibility of indirectly measuring the supersymmetry
breaking scale.
The minimal model of gauge-mediated supersymmetry breaking
is highly constrained, and gives the additional general
features:
\begin{itemize}
\item Gauginos are lighter than the associated scalars,
$m_3 < m_{\tilde{q}}$,
$m_2 < m_{\lL}$, and
$m_1 < m_{\lR}$.
\item The Higgsinos are heavier than the electroweak gauginos,
$3 m_1 \lsim |\mu| \lsim 6 m_1$.
\item Absence of a light stop.
\item The mass of the lightest Higgs boson receives large
radiative corrections from the heavy stops,
{$80\gev\lsim m_{h^0}\lsim 140\gev$.}
\item Unless $\tan \beta$ is very large,
the lightest standard model superpartner is the
mostly $B$-ino $\na$, which decays predominantly by
$\na \rightarrow \gamma +G$.
\item At a hadron collider the largest supersymmetric production
cross section is for $\chi_1^{\pm} \chi_2^0$ and
$\chi_1^+ \chi_1^-$.
\item{Discernible deviation in $Br(b\rightarrow s\gamma)$ from the standard model
with data from future $B$-factories.}
\end{itemize}
If superpartners are detected at a high energy collider, one of the most
important tasks will be to match the low energy spectrum with a more
fundamental theory.
Patterns and relations among the superpartner masses
can in general give information about the messenger sector
responsible for transmitting supersymmetry breaking.
As discussed in this paper, gauge-mediated supersymmetry
breaking leads to many distinctive patterns in the superpartner
spectrum.
Any spectroscopy can of course
be trivially mimicked by postulating
non-universal boundary conditions at any messenger scale.
However, gauge-mediation in its minimal form represents
a {\it simple} ansatz which is highly predictive.
In addition, if decay of the lightest standard model superpartner
takes place within the detector, implying a low supersymmetry breaking
scale, the usual gauge interactions
are likely to play some role in the messenger sector.
The overall scale for the superpartner masses is of course a free
parameter.
However, the Higgs sector mass parameters set the scale
for electroweak symmetry breaking.
Since all the superpartner masses are related to a single
overall scale with gauge-mediated supersymmetry breaking,
it is reasonable that the states transforming under
$SU(2)_L$ have mass of order the electroweak scale.
From the low energy point of view,
masses much larger than this scale would appear
to imply that electroweak symmetry breaking is tuned, and
that the electroweak scale is unnaturally small.
Quantitative measures of tuning are of course subjective.
However,
when the overall scale is large compared to $m_{Z^0}$,
tuning among the Higgs sector parameters arises in
the minimization condition (\ref{mincona}) as a
near cancelation between $(\tan^2 \beta-1)|\mu|^2$ and
$m_{H_u}^2 - \tan^2 \beta m_{H_d}^2$, resulting in
$m_{Z^0}^2 \ll |\mu|^2$.
In this regime the near cancelation enforces constraints
among some of the Higgs sector parameters in order to obtain
proper electroweak symmetry breaking.
As the overall superpartner scale is increased these
tuned constraints are reflected by ratios
in the physical spectrum which become independent of the
electroweak scale.
This tuning is visually apparent in Fig. \ref{sfig14n}
as the linear dependence of $m_{A^0}$ on $m_{\na}$
at large overall scales.
The ``natural'' regime in which the Higgs sector parameters
are all the same order as the electroweak and superpartner scale
can be seen in Fig. \ref{sfig14n}
as the non-linear dependence of $m_{A^0}$ on $m_{\na}$.
In Fig. \ref{sfig16n} this ``natural''
non-linear regime with light superpartners
is in the far lower
left corner, and hardly discernible in the linearly scaled plot.
Although this is no more subjective than any other measure of tuning,
it bodes well for the prospects
of indirectly detecting the effects of superpartners and Higgs
bosons in precision measurements, and
of directly producing superpartners at future colliders.
\medskip
\noindent
{\it Acknowledgements:} We would like to thank M. Carena, M. Dine,
G. Giudice, H. Haber, S. Martin,
M. Peskin, D. Pierce, A. Pomarol,
and C. Wagner for constructive comments.
We would also like to thank
the Aspen Center for Physics and CERN, where this work was partially
completed.
\def\figcap{\section*{Figure Captions\markboth
{FIGURECAPTIONS}{FIGURECAPTIONS}}\list
{Figure \arabic{enumi}:\hfill}{\settowidth\labelwidth{Figure
999:}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endfigcap\endlist \relax
\def\tablecap{\section*{Table Captions\markboth
{TABLECAPTIONS}{TABLECAPTIONS}}\list
{Table \arabic{enumi}:\hfill}{\settowidth\labelwidth{Table
999:}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endtablecap\endlist \relax
\def\reflist{\section*{References\markboth
{REFLIST}{REFLIST}}\list
{[\arabic{enumi}]\hfill}{\settowidth\labelwidth{[999]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep\usecounter{enumi}}}
\let\endreflist\endlist \relax
\def\quote{\list{}{\rightmargin\leftmargin}\item[]}
\let\endquote=\endlist
\makeatletter
\newcounter{pubctr}
\def\publist{\@ifnextchar[{\@publist}{\@@publist}}
\def\@publist[#1]{\list
{[\arabic{pubctr}]\hfill}{\settowidth\labelwidth{[999]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\@nmbrlisttrue\def\@listctr{pubctr}
\setcounter{pubctr}{#1}\addtocounter{pubctr}{-1}}}
\def\@@publist{\list
{[\arabic{pubctr}]\hfill}{\settowidth\labelwidth{[999]}
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\@nmbrlisttrue\def\@listctr{pubctr}}}
\let\endpublist\endlist \relax
\makeatother
\newskip\humongous \humongous=0pt plus 1000pt minus 1000pt
\def\mathsurround=0pt{\mathsurround=0pt}
\def\eqalign#1{\,\vcenter{\openup1\jot \mathsurround=0pt
\ialign{\strut \hfil$\displaystyle{##}$&$
\displaystyle{{}##}$\hfil\crcr#1\crcr}}\,}
\newif\ifdtup
\def\panorama{\global\dtuptrue \openup1\jot \mathsurround=0pt
\everycr{\noalign{\ifdtup \global\dtupfalse
\vskip-\lineskiplimit \vskip\normallineskiplimit
\else \penalty\interdisplaylinepenalty \fi}}}
\def\eqalignno#1{\panorama \tabskip=\humongous
\halign to\displaywidth{\hfil$\displaystyle{##}$
\tabskip=0pt&$\displaystyle{{}##}$\hfil
\tabskip=\humongous&\llap{$##$}\tabskip=0pt
\crcr#1\crcr}}
\relax
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\bar{\partial}{\bar{\partial}}
\def\bar{J}{\bar{J}}
\def\partial{\partial}
\def f_{,i} { f_{,i} }
\def F_{,i} { F_{,i} }
\def f_{,u} { f_{,u} }
\def f_{,v} { f_{,v} }
\def F_{,u} { F_{,u} }
\def F_{,v} { F_{,v} }
\def A_{,u} { A_{,u} }
\def A_{,v} { A_{,v} }
\def g_{,u} { g_{,u} }
\def g_{,v} { g_{,v} }
\def\kappa{\kappa}
\def\rho{\rho}
\def\alpha{\alpha}
\def {\bar A} {\Alpha}
\def\beta{\beta}
\def\Beta{\Beta}
\def\gamma{\gamma}
\def\Gamma{\Gamma}
\def\delta{\delta}
\def\Delta{\Delta}
\def\epsilon{\epsilon}
\def\Epsilon{\Epsilon}
\def\p{\pi}
\def\Pi{\Pi}
\def\chi{\chi}
\def\Chi{\Chi}
\def\theta{\theta}
\def\Theta{\Theta}
\def\mu{\mu}
\def\nu{\nu}
\def\omega{\omega}
\def\Omega{\Omega}
\def\lambda{\lambda}
\def\Lambda{\Lambda}
\def\s{\sigma}
\def\Sigma{\Sigma}
\def\varphi{\varphi}
\def{\cal M}{{\cal M}}
\def\tilde V{\tilde V}
\def{\cal V}{{\cal V}}
\def\tilde{\cal V}{\tilde{\cal V}}
\def{\cal L}{{\cal L}}
\def{\cal R}{{\cal R}}
\def{\cal A}{{\cal A}}
\defSchwarzschild {Schwarzschild}
\defReissner-Nordstr\"om {Reissner-Nordstr\"om}
\defChristoffel {Christoffel}
\defMinkowski {Minkowski}
\def\bigskip{\bigskip}
\def\noindent{\noindent}
\def\hfill\break{\hfill\break}
\def\qquad{\qquad}
\def\bigl{\bigl}
\def\bigr{\bigr}
\def\overline\del{\overline\partial}
\def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}}
\def$SL(2,\IR)_{-k'}\otimes SU(2)_k/(\IR \otimes \tilde \IR)${$SL(2,\relax{\rm I\kern-.18em R})_{-k'}\otimes SU(2)_k/(\relax{\rm I\kern-.18em R} \otimes \tilde \relax{\rm I\kern-.18em R})$}
\def Nucl. Phys. { Nucl. Phys. }
\def Phys. Lett. { Phys. Lett. }
\def Mod. Phys. Lett. { Mod. Phys. Lett. }
\def Phys. Rev. Lett. { Phys. Rev. Lett. }
\def Phys. Rev. { Phys. Rev. }
\def Ann. Phys. { Ann. Phys. }
\def Commun. Math. Phys. { Commun. Math. Phys. }
\def Int. J. Mod. Phys. { Int. J. Mod. Phys. }
\def\partial_+{\partial_+}
\def\partial_-{\partial_-}
\def\partial_{\pm}{\partial_{\pm}}
\def\partial_{\mp}{\partial_{\mp}}
\def\partial_{\tau}{\partial_{\tau}}
\def \bar \del {\bar \partial}
\def {\bar h} { {\bar h} }
\def \bphi { {\bar \phi} }
\def {\bar z} { {\bar z} }
\def {\bar A} { {\bar A} }
\def {\tilde {A }} { {\tilde {A }}}
\def {\tilde {\A }} { {\tilde { {\bar A} }}}
\def {\bar J} {{\bar J} }
\def {\tilde {J }} { {\tilde {J }}}
\def {1\over 2} {{1\over 2}}
\def {1\over 3} {{1\over 3}}
\def \over {\over}
\def\int_{\Sigma} d^2 z{\int_{\Sigma} d^2 z}
\def{\rm diag}{{\rm diag}}
\def{\rm const.}{{\rm const.}}
\def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}}
\def$SL(2,\IR)\otimes SO(1,1)^{d-2}/SO(1,1)${$SL(2,\relax{\rm I\kern-.18em R})\otimes SO(1,1)^{d-2}/SO(1,1)$}
\def$SL(2,\IR)_{-k'}\otimes SU(2)_k/(\IR \otimes \tilde \IR)${$SL(2,\relax{\rm I\kern-.18em R})_{-k'}\otimes SU(2)_k/(\relax{\rm I\kern-.18em R} \otimes \tilde \relax{\rm I\kern-.18em R})$}
\def$SO(d-1,2)_{-k}/ SO(d-1,1)_{-k}${$SO(d-1,2)_{-k}/ SO(d-1,1)_{-k}$}
\def$SO(d-1,2)/ SO(d-1,1)${$SO(d-1,2)/ SO(d-1,1)$}
\def\ghc{ G^c_h }
\begin{document}
\renewcommand{\theequation}{\arabic{equation}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\eeq}[1]{\label{#1}\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\eer}[1]{\label{#1}\end{eqnarray}}
\newcommand{\eqn}[1]{(\ref{#1})}
\begin{titlepage}
\begin{center}
\hfill THU--96/33\\
\hfill September 1996\\
\hfill hep--th/9609165\\
\vskip .8in
{\large \bf COSET MODELS AND DIFFERENTIAL GEOMETRY
\footnote{Contribution to the proceedings of the
{\em Conference on Gauge Theories,
Applied Supersymmetry and Quantum Gravity}, Imperial College, London,
5-10 July 1996 and the e--proceedings
of Summer 96 Theory Institute, {\em Topics in Non-Abelian Duality},
Argonne, IL, 27 June - 12 July 1996. }}
\vskip 0.6in
{\bf Konstadinos Sfetsos
\footnote{e--mail address: sfetsos@fys.ruu.nl}}\\
\vskip .1in
{\em Institute for Theoretical Physics, Utrecht University\\
Princetonplein 5, TA 3508, The Netherlands}\\
\vskip .2in
\end{center}
\vskip .6in
\begin{center} {\bf ABSTRACT } \end{center}
\begin{quotation}\noindent
\noindent
String propagation on a
curved background defines an embedding problem of surfaces
in differential geometry.
Using this, we show that in a wide class of backgrounds the
classical dynamics of the physical degrees of freedom of the string
involves 2--dim
$\s$--models corresponding to coset conformal field theories.
\vskip .2in
\noindent
\end{quotation}
\end{titlepage}
\def1.2{1.2}
\baselineskip 16 pt
\noindent
Coset models have been used in string theory
for the construction of classical vacua,
either as internal theories in string compactification or as
exact conformal field theories representing curved spacetimes.
Our primary aim in this note, based on \cite{basfe}, is to reveal their
usefulness in a different context by
demonstrating that certain
classical aspects of constraint systems are governed by 2--dim
$\s$--models corresponding to some specific
coset conformal field theories.
In particular, we will examine string propagation
on arbitrary curved backgrounds with Lorentzian signature which
defines an embedding problem in differential geometry,
as it was first shown for 4--dim Minkowski space by Lund and
Regge \cite{LuRe}.
Choosing, whenever possible, the temporal gauge one may solve the Virasoro
constraints and hence be left with $D-2$
coupled non--linear differential equations governing the dynamics of
the physical degrees of freedom of the string.
By exploring their integrability properties, and considering
as our Lorentzian
background $D$--dim Minkowski space or the product form
$R\otimes K_{D-1}$, where $K_{D-1}$ is any WZW model
for a semi--simple compact group,
we will establish
connection with the coset model conformal field theories
$SO(D-1)/SO(D-2)$.
This universal behavior, irrespective of the particular WZW model
$K_{D-1}$, is rather remarkable,
and sheds
new light on the differential geometry of embedded surfaces using
concepts and field variables which so far have been natural
only in conformal field theory.
Let us consider classical propagation of closed strings on a
$D$--dim background that is
the direct product of the real line $R$ (contributing a minus
in the signature matrix)
and a general manifold (with Euclidean signature) $K_{D-1}$.
We will denote $\s^\pm= {1\over 2}(\tau\pm \s)$, where
$\tau$ and $\s$ are the natural time and spatial variables
on the world--sheet $\Sigma$.
Then,
the 2--dim $\s$--model action is given by
\begin{equation}
S= {1\over 2} \int_\Sigma (G_{\mu\nu} + B_{\mu\nu}) \partial_+ y^\mu \partial_- y^\nu
- \partial_+ y^0 \partial_- y^0 ~ , ~~~~~~~~ \mu,\nu =1,\dots , D-1~ ,
\label{smoac}
\end{equation}
where $G$, $B$ are the non--trivial metric
and antisymmetric tensor fields and
are independent of $y^0$.
The conformal gauge, we have implicitly chosen in writing
down (\ref{smoac}),
allows us to further set $y^0=\tau$ (temporal gauge).
Then we are left with the $D-1$ equations
of motion corresponding to the $y^\mu$'s,
as well as with the Virasoro constraints
\begin{equation}
G_{\mu\nu} \partial_\pm y^\mu \partial_\pm y^\nu = 1 ~ ,
\label{cooss}
\end{equation}
which can be used to further reduce
the degrees of freedom by one, thus leaving only the
$D-2$ physical ones.
We also define an angular variable $\theta$ via the relation
\begin{equation}
G_{\mu\nu} \partial_+ y^\mu \partial_- y^\nu = \cos \theta ~ .
\label{angu}
\end{equation}
In the temporal gauge we may restrict our analysis
entirely to $K_{D-1}$ and to
the projection of the string world--sheet $\Sigma$ on the
$y^0=\tau$ hyperplane. The resulting 2--dim surface
$S$ has Euclidean signature, with metric given by
the restriction to $S$ of the metric $G_{\mu\nu}$ on $K_{D-1}$.
Using (\ref{cooss}), (\ref{angu}) we find that the
corresponding line element reads
\begin{equation}
ds^2 = d{\s^+}^2 + d{\s^-}^2 + 2 \cos\theta d\s^+ d\s^- ~ .
\label{dsS2}
\end{equation}
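For completeness, (\ref{dsS2}) follows in one line: writing $dy^\mu = \partial_+ y^\mu\, d\s^+ + \partial_- y^\mu\, d\s^-$ for displacements along $S$ and using the constraints (\ref{cooss}) together with the definition (\ref{angu}),
\begin{equation}
G_{\mu\nu}\, dy^\mu dy^\nu
= G_{\mu\nu} \partial_+ y^\mu \partial_+ y^\nu\, d{\s^+}^2
+ 2\, G_{\mu\nu} \partial_+ y^\mu \partial_- y^\nu\, d\s^+ d\s^-
+ G_{\mu\nu} \partial_- y^\mu \partial_- y^\nu\, d{\s^-}^2 ~ ,
\end{equation}
whose three coefficients are $1$, $2\cos\theta$ and $1$, respectively.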
In general, determining the classical
evolution of the string is equivalent to
the problem of determining the 2--dim surface
that it forms as it moves.
Phrased in purely geometrical terms this is equivalent,
in our case, to
the embedding problem of the 2--dim surface $S$
with metric (\ref{dsS2}) into the $(D-1)$--dim space
$K_{D-1}$. The solution requires finding a complete set
of $D-1$ vectors tangent and normal to the surface $S$ as functions
of $\s^+$ and $\s^-$.
In our case the two natural tangent vectors are
$\{\partial_+ y^\mu, \partial_- y^\mu\}$,
whereas the remaining $D-3$ normal ones will be denoted by
$\{\xi^\mu_\s, \s=3,4,\dots, D-1\}$.
These vectors obey first order partial
differential equations \cite{Eisenhart} that depend, as expected,
on the detailed structure of
$K_{D-1}$. Since we are only interested
in some universal aspects we will restrict ourselves solely to the
corresponding \underline{compatibility} equations.
In general, these involve the
Riemann curvatures for the metrics of the two spaces
$S$ and $K_{D-1}$, as well as the second
fundamental form with components $\Omega^\s_{\pm\pm}$,
$\Omega^\s_{+-}=\Omega^\s_{-+}$ and the third
fundamental form ($\equiv$ torsion) with components
$\mu^{\s\tau}_\pm =-\mu^{\tau\s}_\pm$ \cite{Eisenhart}. It turns out
that the $D-1$ classical equations of motion for \eqn{smoac}
(in the gauge $y^0 = \tau$) and the two
constraints (\ref{cooss}) completely determine the components
of the second fundamental form $\Omega^\s_{+-}$ \cite{basfe}.
In what follows we will also use instead of $\mu_\pm^{\s\tau}$
a modified, by a term that
involves $H_{\mu\nu\rho}=\partial_{[\mu}B_{\nu\rho]}$,
torsion $M_\pm^{\s\tau}$ \cite{basfe}.
Then the compatibility equations
for the remaining components $\Omega^\s_{\pm\pm}$ and
$M_\pm^{\s\tau}$ are \cite{basfe}:
\begin{eqnarray}
&& \Omega^\tau_{++} \Omega^\tau_{--} + \sin\theta \partial_+ \partial_- \theta
= - R^+_{\mu\nu\alpha\beta}
\partial_+ y^\mu \partial_+ y^\alpha \partial_- y^\nu \partial_- y^\beta ~ ,
\label{gc1} \\
&& \partial_{\mp} \Omega^\s_{\pm\pm} - M_\mp^{\tau\s} \Omega^\tau_{\pm\pm}
-{1\over \sin\theta} \partial_\pm\theta \Omega^\s_{\mp\mp}
= R^\mp_{\mu\nu\alpha\beta}
\partial_\pm y^\mu \partial_\pm y^\alpha \partial_\mp y^\beta \xi^\nu_\s ~ ,
\label{gc2} \\
&& \partial_+ M_-^{\s\tau} - \partial_- M_+^{\s\tau}
- M_-^{\rho[\s} M_+^{\tau]\rho}
+ {\cos\theta \over \sin^2\theta} \Omega^{[\s}_{++} \Omega^{\tau]}_{--}
= R^-_{\mu [\beta \alpha]\nu}
\partial_+ y^\mu \partial_- y^\nu \xi^\alpha_\s \xi^\beta_\tau ~ ,
\label{gc3}
\end{eqnarray}
where the curvature tensors and the covariant derivatives $D^\pm_\mu$
are defined using the generalized
connections that include the string torsion
$H_{\mu\nu\rho}$.\footnote{We have written \eqn{gc3}
in a slightly different form compared to the same equation in \cite{basfe}
using the identity $D^-_\mu H_{\nu\alpha\beta} = R^-_{\mu[\nu\alpha\beta]}$.}
Equations (\ref{gc1})--\eqn{gc3}
are generalizations of the
Gauss--Codazzi and Ricci equations for a surface
immersed in Euclidean space.
For $D\geq 5$ there are
${1\over 2} (D-3)(D-4)$ more unknown functions ($\theta$, $\Omega^\s_{\pm\pm}$
and $M_\pm^{\s\tau}$) than equations in \eqn{gc1}--\eqn{gc3}.
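Explicitly, the mismatch between unknowns and equations can be counted as follows (the index $\s$ runs over $D-3$ values and the antisymmetric pair $[\s\tau]$ over ${1\over 2}(D-3)(D-4)$ values, each doubled by the $\pm$ label where it appears):
\begin{equation}
\underbrace{1}_{\theta} + \underbrace{2(D-3)}_{\Omega^\s_{\pm\pm}}
+ \underbrace{(D-3)(D-4)}_{M_\pm^{\s\tau}}
- \underbrace{1}_{\eqn{gc1}} - \underbrace{2(D-3)}_{\eqn{gc2}}
- \underbrace{{1\over 2}(D-3)(D-4)}_{\eqn{gc3}}
= {1\over 2}(D-3)(D-4) ~ .
\end{equation}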
However, there is an underlying gauge invariance \cite{basfe}
which accounts for the extra (gauge) degrees of freedom
and can be used to eliminate them (gauge fix).
Making further progress with
the embedding system of equations (\ref{gc1})--(\ref{gc3})
as it stands seems a difficult task. This is
due to the presence of terms
depending explicitly on $\partial_\pm y^\mu$ and $\xi^\mu_\s$,
which can only be determined by solving the
actual string evolution equations.
Moreover, a Lagrangian from which
(\ref{gc1})--(\ref{gc3}) can be derived as equations of
motion is also lacking. Having such a description is advantageous in
determining the operator content of the theory and for quantization.
Rather remarkably, all of these problems can be simultaneously
solved by considering for $K_{D-1}$ either flat space with zero torsion or
any WZW model based on a
semi--simple compact group $G$, with $\dim(G)=D-1$.
This is due to the identity
\begin{equation}
R^\pm_{\mu\nu\alpha\beta} = 0 ~ ,
\label{rdho}
\end{equation}
which is valid not only for flat space with zero torsion but also
for all WZW models \cite{zachos}.
Then we completely get rid of the bothersome terms on the right-hand
side of (\ref{gc1})--(\ref{gc3}).\footnote{Actually, the same
result is obtained by demanding the weaker condition
$R^-_{\mu\nu\alpha\beta}=R^-_{\mu\alpha\nu\beta}$, but we are not aware of any examples
where these weaker conditions hold.}
It is convenient to
extend the range of definition of
$\Omega^\s_{++}$ and $M_\pm^{\s\tau}$ by appending new components
defined as: $\Omega^2_{++}= \partial_+ \theta$,
$M_+^{\s 2}= \cot \theta \Omega^\s_{++}$ and
$M_-^{\s2} = - \Omega^\s_{--}/\sin\theta$.
Then equations (\ref{gc1})--(\ref{gc3}) can be recast into the
suggestive form
\begin{eqnarray}
&& \partial_- \Omega^i_{++} + M_-^{ij} \Omega^j_{++} = 0 ~ ,
\label{new1} \\
&& \partial_+ M_-^{ij} - \partial_- M_+^{ij} + [M_+,M_-]^{ij} = 0 ~ ,
\label{new2}
\end{eqnarray}
where the new index $i=(2,\s)$.
Equation (\ref{new2}) is a
zero curvature condition for the matrices $M_\pm$ and it is locally
solved by $M_\pm = \Lambda^{-1} \partial_\pm \Lambda$,
where $\Lambda \in SO(D-2)$. Then (\ref{new1}) can be cast into
equations for $Y^i=\Lambda^{i2} \sin \theta$ \cite{basfe}
\begin{equation}
\partial_- \left( {\partial_+ Y^i \over \sqrt{1-\vec Y^2}} \right) = 0~ ,
~~~~~ i = 2,3,\dots ,D-1 ~ .
\label{fiin}
\end{equation}
These equations were derived previously in \cite{barba}, in
describing
the dynamics of a free string propagating in $D$--dimensional
{\it flat} space--time. It is remarkable that they remain
unchanged even if the flat $(D-1)$--dim space--like part is replaced
by a curved background corresponding to a general WZW model.
Nevertheless, it should be emphasized that
the actual evolution equations of the normal and tangent
vectors to the surface are certainly different from those
of the flat space free string and can be found in \cite{basfe}.
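Incidentally, the local solvability of the zero-curvature condition (\ref{new2}) by $M_\pm = \Lambda^{-1}\partial_\pm\Lambda$ is easy to check symbolically. A minimal sketch with sympy, for the illustrative case of a $2\times 2$ rotation $\Lambda\in SO(2)$ with an arbitrary angle function (this toy choice, not part of the construction above, is purely for demonstration):

```python
# Symbolic check that M_pm = Lambda^{-1} d_pm Lambda solves the
# zero-curvature condition (new2): d+ M- - d- M+ + [M+, M-] = 0.
# Illustrative case: Lambda an SO(2) rotation with arbitrary angle a(s+, s-).
import sympy as sp

s_p, s_m = sp.symbols('sigma_p sigma_m')
a = sp.Function('a')(s_p, s_m)
Lam = sp.Matrix([[sp.cos(a), sp.sin(a)],
                 [-sp.sin(a), sp.cos(a)]])

# Lambda is orthogonal, so its inverse is its transpose.
M_p = sp.simplify(Lam.T * Lam.diff(s_p))
M_m = sp.simplify(Lam.T * Lam.diff(s_m))

curvature = sp.simplify(M_m.diff(s_p) - M_p.diff(s_m) + M_p * M_m - M_m * M_p)
print(curvature)  # the zero 2x2 matrix
```

The same cancellation $\partial_+(\Lambda^{-1}\partial_-\Lambda) - \partial_-(\Lambda^{-1}\partial_+\Lambda) = -[M_+,M_-]$ holds for any matrix group, which is why (\ref{new2}) is solved locally in full generality.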
As we have already mentioned, it would be advantageous if
(\ref{fiin}) (or an equivalent system) could be derived
as classical equations of motion for a 2--dim action of the
form
\begin{equation}
S = {1\over 2\pi \alpha'} \int (g_{ij} + b_{ij})
\partial_+ x^i \partial_- x^j ~ , ~~~~~~~~ i,j = 1,2,\dots, D-2 ~ .
\label{dynsm}
\end{equation}
The above action has a $(D-2)$--dim target space and models only
the non--trivial dynamics of the physical degrees
of freedom of the
string, which itself
propagates on the background corresponding to \eqn{smoac}, with
its $D$--dim target space.
The construction of such an action involves
a non--local change
of variables and is based on
the observation \cite{basfe} that (\ref{fiin})
imply chiral conservation laws, which
are the same as the
equations obeyed by the classical
parafermions for the coset model $SO(D-1)/SO(D-2)$ \cite{BSthree}.
We recall that the classical $\s$--model action
corresponding to a coset $G/H$ is derived from the associated
gauged WZW model and the result is given by
\begin{equation}
S= I_0(g) + {1\over \pi \alpha'} \int
{\rm Tr}(t^a g^{-1} \partial_+ g) M^{-1}_{ab} {\rm Tr}
(t^a \partial_- g g^{-1}) ~ , ~~~~
M^{ab} \equiv {\rm Tr}(t^a g t^b g^{-1}- t^a t^b) ~ ,
\label{dualsmo}
\end{equation}
where $I_0(g)$ is the WZW action for a group element $g\in G$ and
$\{t^A\}$ are representation matrices of the Lie algebra for
$G$ with indices split as $A=(a,\alpha)$, where $a\in H$
and $\alpha\in G/H$.
We have also assumed that a unitary gauge has been chosen
by fixing $\dim(H)$
variables among the total number of $\dim(G)$ parameters
of the group element $g$. Hence, there are
$\dim(G/H)$ remaining variables, which will be denoted by $x^i$.
The natural objects generating infinite dimensional symmetries
in the background \eqn{dualsmo} are the classical parafermions
(we restrict to one chiral sector only) defined in general as \cite{BCR}
\begin{equation}
\Psi_+^\alpha = {i \over \pi \alpha'} {\rm Tr} (t^\alpha f^{-1} \partial_+ f ) ~ ,
~~~~~~~~~ f\equiv h_+^{-1} g h_+ \in G ~ ,
\label{paraf}
\end{equation}
and obeying on shell $\partial_- \Psi_+^\alpha = 0 $.
The group element $h_+\in H$ is given as a path-ordered exponential
using the on-shell value of the gauge field $A_+$
\begin{equation}
h_+^{-1} = {\rm P} e^{- \int^{\s^+} A_+}~ , ~~~~~~~~
A_+^a = M^{-1}_{ba} {\rm Tr} (t^b g^{-1}\partial_+ g) ~ .
\label{hphm}
\end{equation}
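As a side remark, the path-ordered exponential in (\ref{hphm}) can be evaluated numerically as an ordered product over short path segments. A minimal numpy sketch (the gauge field `A_of_s` and the integration grid are illustrative placeholders, not derived from the model; the sanity check uses a constant matrix, for which path ordering is trivial):

```python
# Numerical sketch of a path-ordered exponential P exp(-∫ A ds):
# ordered product of first-order factors (1 - A ds), with later steps
# acting from the left.  A_of_s is a placeholder for the gauge field.
import numpy as np

def path_ordered_exp(A_of_s, s_grid):
    n = A_of_s(s_grid[0]).shape[0]
    h = np.eye(n)
    for s0, s1 in zip(s_grid[:-1], s_grid[1:]):
        ds = s1 - s0
        h = (np.eye(n) - A_of_s(0.5 * (s0 + s1)) * ds) @ h
    return h

# Sanity check on a constant antisymmetric A, where P exp(-∫A) = exp(-A):
J = np.array([[0.0, 1.0], [-1.0, 0.0]])          # so(2) generator
approx = path_ordered_exp(lambda s: J, np.linspace(0.0, 1.0, 20001))
exact = np.array([[np.cos(1.0), -np.sin(1.0)],
                  [np.sin(1.0),  np.cos(1.0)]])
```

For a genuinely path-dependent $A_+$ the ordering of the factors matters, which is exactly what the symbol ${\rm P}$ encodes.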
Next we specialize to the $SO(D-1)/SO(D-2)$ gauged WZW models.
In this case the index $a=(ij)$ and the index $\alpha=(0i)$ with
$i=1,2,\dots , D-2$. Then the
parafermions \eqn{paraf} assume the
form (we drop $+$ as a subscript) \cite{BSthree,basfe}
\begin{eqnarray}
&& \Psi^i = {i\over \pi \alpha'}
{\partial_+ Y^i\over \sqrt{1-\vec Y^2}} =
{i \over \pi \alpha'} {1\over \sqrt{1-\vec X^2}} (D_+X)^j h_+^{ji} ~ ,
\nonumber \\
&& (D_+X)^j = \partial_+ X^j - A_+^{jk} X^k ~ , ~~~~~~
Y^i = X^j (h_+)^{ji}~ .
\label{equff}
\end{eqnarray}
Thus, the equation $\partial_- \Psi^i = 0$ is
precisely (\ref{fiin}), whereas \eqn{dualsmo}
provides the action \eqn{dynsm} for our embedding problem.
The relation between the $X^i$'s and the $Y^i$'s in \eqn{equff}
provides
the necessary non--local change of variables that transforms
(\ref{fiin}) into a Lagrangian system of equations.
It is highly non--intuitive from the viewpoint of
differential geometry, and only
the correspondence with parafermions makes it natural.
It remains to conveniently parametrize the group element
$g\in SO(D-1)$. In the right coset decomposition with respect to
the subgroup $SO(D-2)$ we may write \cite{BSthree}
\begin{equation}
g = \left( \begin{array} {cc}
1 & 0 \\
& \\
0 & h \\
\end{array}
\right) \cdot
\left( \begin{array} {cc}
b & X^j \\
& \\
- X^i & \delta_{ij} - {1\over b+1} X^i X^j \\
\end{array}\right) ~ ,
\label{H}
\end{equation}
where $h\in SO(D-2)$ and $b \equiv \sqrt{1-\vec X^2}$.
The range of the parameters in the vector $\vec X$ is restricted
by $\vec X^2\leq 1$.
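One can verify directly that the second factor in (\ref{H}) is an orthogonal matrix. A quick symbolic check for the smallest non-trivial case $D=4$, i.e. a $3\times 3$ matrix with $\vec X=(x_1,x_2)$ (illustrative only; the general-$D$ computation is identical):

```python
# Direct check that the coset factor in the parametrization of g
# is an orthogonal matrix, for the smallest non-trivial case D = 4
# (a 3x3 matrix, with X = (x1, x2) and b = sqrt(1 - X^2)).
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
b = sp.sqrt(1 - x1**2 - x2**2)
X = sp.Matrix([x1, x2])                       # column vector X^i

lower = sp.eye(2) - (X * X.T) / (b + 1)       # delta_ij - X^i X^j / (b + 1)
g = sp.Matrix(sp.BlockMatrix([[sp.Matrix([[b]]), X.T],
                              [-X, lower]]))

print(sp.simplify(g.T * g))  # should reduce to the 3x3 identity
```

The cancellations rely only on $b^2 + \vec X^2 = 1$, so orthogonality holds throughout the allowed range $\vec X^2 \leq 1$.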
A proper gauge fixing is to choose
the group element $h$ in the Cartan torus of $SO(D-2)$
and then use the remaining gauge symmetry to gauge fix
some of the components of the vector $\vec X$.
If \underline{$D=2 N + 3= {\rm odd}$} then we may
cast the orthogonal matrix $h\in SO(2N+1)$ and
the row vector $\vec X$ into the form \cite{basfe}
\begin{eqnarray}
&& h={\rm diagonal}\left(h_1,h_2,\dots,h_N,1\right)~ ,~~~~~
h_i = \pmatrix{
\cos 2\phi_i & \sin 2\phi_i \cr
-\sin 2\phi_i & \cos 2\phi_i \cr} \nonumber \\
&& \vec X =\left(0,X_2,0,X_4,\dots,0,X_{2N},X_{2N+1}\right) ~ .
\label{hdixn}
\end{eqnarray}
On the other hand if \underline{$D=2 N + 2= {\rm even}$}
then $h\in SO(2N)$ can be gauge fixed in a
form similar to the one in \eqn{hdixn} with the 1 removed.
Similarly, the vector $\vec X$ has no
$X_{2N+1}$ component.
In both cases the total number of
independent variables is $D-2$, as it should be.
\underline{\em Examples:}
As a first example we consider the Abelian coset $SO(3)/SO(2)$ \cite{BCR}.
In terms of our original problem it arises after solving the
Virasoro constraints for
strings propagating on 4--dim Minkowski space or on the
direct product of the real line $R$ and the WZW model for $SU(2)$.
Using $X_2= \sin 2\theta$ one finds that
\begin{equation}
A_+ = \pmatrix{ 0 & 1\cr -1 & 0 }
(1- \cot^2\theta) \partial_+ \phi ~ ,
\label{gsu2}
\end{equation}
and that the background corresponding to \eqn{dynsm}
has metric \cite{BCR}
\begin{equation}
ds^2 = d\theta^2 + \cot^2\theta d\phi^2 ~ .
\label{S1}
\end{equation}
Using (\ref{equff}), the corresponding Abelian parafermions
$\Psi_\pm = \Psi_2 \pm i\Psi_1$ assume the familiar form
\begin{equation}
\Psi_\pm = (\partial_+ \theta \pm i \cot\theta \partial_+ \phi)
e^{\mp i \phi \pm i \int \cot^2\theta \partial_+ \phi } ~ ,
\label{pasu2}
\end{equation}
up to an overall normalization. An alternative way of seeing the
emergence of the coset
$SO(3)/SO(2)$ is from the original system of
embedding equations \eqn{gc1}--\eqn{gc3} for $D=4$ and zero
curvatures. They just reduce to the classical equations of
motion for the 2--dim $\s$--model corresponding to the metric
\eqn{S1} \cite{LuRe}, as it was observed in \cite{Baso3}.
Our second example is the simplest
non--Abelian coset $SO(4)/SO(3)$ \cite{BSthree}.
In our context it
arises in string
propagation on 5--dim Minkowski space or on the direct
product of the real line $R$ and the WZW model based on
$SU(2)\otimes U(1)$.
Parametrizing $X_2 = \sin 2\theta \cos \omega$
and $X_3 = \sin 2\theta \sin \omega$ one finds that
the $3 \times 3$ antisymmetric matrix for the $SO(3)$
gauge field $A_+$ has independent components given by
\begin{eqnarray}
A^{12}_+ & = & -\left( {\cos 2\theta \over \sin^2\theta \cos^2\omega }
+ \tan^2\omega {\cos^2\theta -\cos^2\phi \cos 2\theta\over
\cos^2\theta \sin^2 \phi} \right)
\partial_+\phi - \cot\phi \tan\omega \tan^2 \theta
\partial_+\omega ~ ,\nonumber \\
A^{13}_+ & = & \tan\omega
{\cos^2\theta -\cos^2\phi \cos 2\theta\over
\cos^2\theta \sin^2 \phi} \partial_+\phi
+ \cot\phi \tan^2 \theta \partial_+\omega ~ ,
\label{expap} \\
A^{23}_+ & = & \cot\phi \tan \omega {\cos 2\theta\over \cos^2\theta}
\partial_+ \phi - \tan^2 \theta \partial_+\omega ~ .
\nonumber
\end{eqnarray}
Then, the
background metric for the action \eqn{dynsm} governing the
dynamics of the 3 physical string degrees of freedom
is \cite{BSthree}
\begin{equation}
ds^2 = d\theta^2 + \tan^2\theta (d\omega + \tan\omega \cot \phi d\phi)^2
+ {\cot^2\theta \over \cos^2\omega} d\phi^2 ~ ,
\label{ds3}
\end{equation}
and the antisymmetric tensor is zero.
The parafermions of the $SO(4)/SO(3)$ coset are non--Abelian and are
given by (\ref{equff}) with some explicit expressions
for the covariant derivatives \cite{basfe}.
In addition to the two examples above, there also
exist explicit results for the coset $SO(5)/SO(4)$
\cite{BShet}.
This would correspond in our context to string
propagation on a 6--dim Minkowski space or
on the background
$R$ times the $SU(2)\otimes U(1)^2$
WZW model.
An obvious extension one could make is to
consider the same embedding problem but with Lorentzian instead of
Euclidean backgrounds representing the ``spatial'' part $K_{D-1}$.
This would necessarily involve $\s$--models for cosets based on
non--compact groups. The case for $D=4$
has been considered in \cite{vega}.
It is interesting to consider supersymmetric
extensions of the present work in connection also with \cite{susyre}.
In addition, formulating
classical propagation of $p$--branes
on curved backgrounds as a geometrical problem of embedding surfaces
(for work in this direction see \cite{kar}) and
finding the $p+1$--dim $\s$--model action (analog of \eqn{dynsm} for
strings ($p=1$)) that governs the
dynamics of the physical degrees of freedom of the $p$--brane
is an open interesting problem.
The techniques we have presented in this note can also be used to
find the Lagrangian description of the symmetric space
sine--Gordon models \cite{Pohlalloi} which
have been described as perturbations of coset conformal field
theories \cite{bapa}.
Hence, the corresponding parafermion variables will play
the key role in such a construction.
Finally, an interesting issue is the quantization of constrained
systems.
Quantization in string theory usually proceeds by quantizing
the unconstrained degrees of freedom and then imposing the
Virasoro constraints
as quantum conditions on the physical states.
However, in the
present framework the physical degrees of freedom should be quantized
directly using the quantization of the associated parafermions.
Quantization of the $SO(3)/SO(2)$ parafermions has been
done in the seminal work of \cite{zafa},
whereas for higher dimensional cosets there
is already some work in the literature \cite{BABA}.
A related problem is also finding a consistent quantum theory for
vortices.
This appears to have been the initial motivation of Lund and Regge
(see \cite{LuRe}).
\bigskip\bs
\centerline{\bf Acknowledgments }
\noindent
I would like to thank the organizers of the conferences at Imperial College
and at Argonne National Laboratory for their warm hospitality and for financial support.
This work was also carried out with the financial support
of the European Union Research Program
``Training and Mobility of Researchers'', under contract ERBFMBICT950362.
This work was also supported by the European Commission TMR program
ERBFMRX-CT96-0045.
\newpage
\section{Introduction}
A quasi-two-dimensional (quasi-2D) weakly disordered conductor (i.e. a system
of weakly coupled planes) exhibits a metal--insulator transition (MIT) for a
critical value of the single-particle interplane hopping $t_\perp $. The existence of
this quantum phase transition results from the localization of the
electronic states in a 2D system by arbitrary weak disorder, while a 3D
system remains metallic below a critical value of the disorder.
Although the existence of this MIT is well established, its properties are
far from being understood. In particular, the critical value $t_\perp
^{(c)}$ of the interplane coupling remains controversial. The
self-consistent diagrammatic theory of the Anderson localization
\cite{Vollhardt92}
predicts an exponentially small critical coupling \cite{Prigodin84}
$t_\perp ^{(c)}\sim \tau ^{-1}e^{-\alpha k_Fl}$ ($\alpha \sim 1$) in
agreement with simple scaling arguments \cite{ND92} and other analytical
arguments.\cite{Li89} (Here $\tau $ is the elastic scattering time, $k_F$
the 2D Fermi wave vector and $l$ the intraplane mean free path.
$k_Fl\gg 1$ for weak
disorder.) A recent numerical analysis predicts a completely
different result, $t_\perp ^{(c)}\sim 1/\sqrt{\tau }$, which is supported by
analytical arguments. \cite{Zambetaki96} It has also been claimed, on the
basis of diagrammatic perturbative calculations, that the MIT depends on
the propagating direction, in contradiction with the scaling theory of
localization. \cite{Abrikosov94} (Note that for a system of weakly coupled
chains, it is well established, from both numerical and analytical
approaches, that $t_\perp ^{(c)}\sim 1/\tau $. \cite{Dorokhov83})
The aim of this paper is to reconsider the MIT in quasi-2D conductors on the
basis of a non--linear $\sigma $ model (NL$\sigma $M).\cite{Belitz94} In the next
section, we use a renormalization group (RG) approach to show how the system
crosses over from a 2D behavior at small length scales to a 3D behavior at
larger length scales. The crossover length $L_x$ is explicitly
calculated. We show that the 3D regime is described by an effective
NL$\sigma $M with renormalized coupling constants.
We obtain the critical value of
the interplane coupling and the anisotropy of the correlation (localization)
lengths in the metallic (insulating) phase. Next, we show that a parallel
magnetic field tends to decouple the planes and thus can induce a MIT. The
results of the RG approach are recovered in section \ref{sec:AF} by means of
an auxiliary field method. We emphasize that the latter is a general
approach to study phase transitions in weakly coupled systems.
\section{Renormalization Group approach }
We consider spinless electrons propagating in a quasi-2D
system with the dispersion law
\begin{equation}
\epsilon_{\bf k}={\bf k}_\parallel ^2/2m -2t_\perp \cos (k_\perp d) \,,
\end{equation}
where ${\bf k}_\parallel $ and $k_\perp $ are the longitudinal
(i.e. parallel to the planes) and transverse components of ${\bf k}$,
respectively.
$m$ is the effective mass in the planes, $d$ the interplane spacing and
$t_\perp $ the transfer integral in the transverse ($z$) direction.
The effect of disorder is taken into account by adding a random potential
with zero mean and gaussian probability distribution:
\begin{equation}
\langle V_l({\bf r}) V_{l'}({\bf r}') \rangle =(2\pi N_2(0)\tau )^{-1}
\delta _{l,l'}\delta ({\bf r}-{\bf r}') \,,
\end{equation}
where $N_2(0)=m/2\pi $ is the 2D density of states at the Fermi level and $\tau
$ the elastic scattering time. ${\bf r}$ is the coordinate in the plane and the
integers $l,l'$ label the different planes. We note $k_F$ the 2D Fermi wave
vector and $v_F=k_F/m$ the 2D Fermi velocity.
\subsection{A simple argument}
We first determine the critical value $t_\perp ^{(c)}$ by means of a
simple argument whose validity will be confirmed in the next sections.
Consider
an electron in a given plane. If the coupling $t_\perp $ is sufficiently
weak, the electron will first diffuse in the plane and then hop to the
neighboring plane after a time $\tau _x$. The corresponding diffusion length
$L_x$ is determined by the equations
\begin{eqnarray}
L_x^2&=&D(L_x) \tau _x \,, \nonumber \\
d^2&=&D_\perp \tau _x=2t_\perp ^2d^2\tau \tau _x \,.
\label{harg}
\end{eqnarray}
The length dependence of the coefficient $D(L)$ results from quantum
corrections to the semiclassical 2D diffusion coefficient $D=v_F^2\tau /2$.
We have assumed
that the transverse diffusion is correctly described by the semiclassical
diffusion coefficient $D_\perp $. As shown in the next sections, this is
a consequence of the vanishing scaling dimension of the field in the
NL$\sigma $M approach. Eqs.\ (\ref{harg}) give $\tau _x\sim 1/t_\perp ^2\tau
$ and $L_x^2\sim D(L_x)/t_\perp ^2\tau $. The critical value $t_\perp ^{(c)}$
is obtained from the condition $L_x\sim \xi _{2D}$ or $g(L_x)\sim
1$, where $\xi _{2D}$ is the 2D localization length and $g(L)$ the 2D
(dimensionless) conductance. These two conditions are equivalent since
$g(L)=N_2(0)D(L)\sim
1$ for $L\sim \xi _{2D}$ (see Eq.\ (\ref{g2D}) below). We thus obtain
\begin{equation}
t_\perp ^{(c)} \sim \frac{1}{\sqrt{\xi _{2D}^2N_2(0)\tau }}
\sim \frac{l}{\xi _{2D}\sqrt{k_Fl}}\frac{1}{\tau } \,,
\label{tcrit}
\end{equation}
where $l=v_F\tau $ is the elastic mean free path. In the weak disorder limit
($k_Fl\gg 1$),
$\xi _{2D}\sim le^{\alpha k_Fl}$ ($\alpha \sim 1$) so that
$t_\perp ^{(c)}$ is exponentially small with respect to $1/\tau $. Apart
from the factor $1/\sqrt{k_Fl}$, Eq.\ (\ref{tcrit}) agrees with the result
of the self consistent diagrammatic theory of Anderson
localization\cite{Prigodin84} and with estimates based on the weak
localization correction to the Drude-Boltzmann conductivity. \cite{WL}
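The argument above is easy to explore numerically. The sketch below uses illustrative parameters only (units where $v_F=\tau=l=1$, $N_2(0)=1$, and $\xi_{2D}$ set by hand) and solves the crossover condition $L_x^2 = D(L_x)/(2 t_\perp^2 \tau)$ with $D(L)=g(L)/N_2(0)$ and the conductance interpolation of Eq.\ (\ref{g2D}):

```python
# Numerical sketch of the crossover argument: solve
# L_x^2 = D(L_x)/(2 t_perp^2 tau) with D(L) = g(L)/N_2(0).
# Illustrative units: v_F = tau = l = 1, N_2(0) = 1; xi_2D set by hand.
import math

XI = 100.0          # 2D localization length xi_2D, in units of l
TAU, N2 = 1.0, 1.0

def g(L):
    """2D dimensionless conductance interpolation, Eq. (g2D)."""
    return (math.log(1.0 + (XI / L) ** 2) * (1.0 + L / XI)
            * math.exp(-L / XI) / (2.0 * math.pi ** 2))

def crossover_length(t_perp, lo=1e-4, hi=1e4):
    """Bisect f(L) = L^2 - g(L)/(2 N2 t_perp^2 tau); f increases with L."""
    f = lambda L: L * L - g(L) / (2.0 * N2 * t_perp ** 2 * TAU)
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return math.sqrt(lo * hi)

# Metallic side (t_perp large): L_x << xi_2D; insulating side: L_x >> xi_2D.
print(crossover_length(1.0), crossover_length(1e-5))
```

For large $t_\perp$ the solution satisfies $L_x \ll \xi_{2D}$ (so $g(L_x) \gtrsim 1$), while for small $t_\perp$ one finds $L_x \gg \xi_{2D}$, in line with the criterion $g(L_x)\sim 1$ at the transition.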
\subsection{NL$\sigma $M for weakly coupled planes}
The procedure to derive the NL$\sigma $M describing electrons in a random
potential is well established \cite{Belitz94} and we only quote the final
result in the quasi-2D case (see however section \ref{subsec:MF}). Averaging
over disorder by introducing $N$ replicas of the system and using an
imaginary time formalism, the effective action of the model is $S_\Lambda
=S_{2D}+S_\perp $ with
\begin{eqnarray}
S_{2D} \lbrack Q \rbrack &=& {\pi \over 8}N_2(0)D \sum _l \int d^2r {\rm
Tr} \lbrack \bbox{ \nabla }_\parallel
Q_l({\bf r})\rbrack ^2 - {\pi \over
2}N_2(0) \sum _l \int d^2r
{\rm Tr}\lbrack \Omega Q_l({\bf r})\rbrack \,, \nonumber \\
S_\perp \lbrack Q \rbrack &=& - {\pi \over 4}N_2(0)t_\perp ^2\tau \sum
_{\langle l,l' \rangle } \int d^2r
{\rm Tr} \lbrack Q_l({\bf r}) Q_{l'}({\bf r})\rbrack \,,
\label{action1}
\end{eqnarray}
where $Q_l({\bf r})$ is a matrix field with elements
$_{ij}Q_{lnm}^{\alpha \beta }({\bf r})$. $\alpha ,\beta =1...N$ are
replica indices, $n,m$ refer to fermionic Matsubara frequencies $\omega
_n,\omega _m$, and $i,j=1,2$ describe the particle-hole ($i=j$) and
particle-particle ($i\neq j$) channels.
$\Omega $ is the matrix $_{ij}\Omega _{lnm}^{\alpha \beta }=
\delta _{i,j}\delta _{n,m}\delta _{\alpha ,\beta }\omega _n$.
In (\ref{action1}), Tr denotes the trace over all discrete
indices. The field $Q$ satisfies the constraints $Q_l^2({\bf r})=\underline
1$ (with $\underline 1$ the unit matrix), ${\rm
Tr}\, Q_l({\bf r})=0$ and $Q^+=C^TQ^TC=Q$ where $C$ is the charge conjugation
operator. \cite{Belitz94} $\Lambda \sim 1/l$ is an ultraviolet cut-off for
the fluctuations of the field $Q$ in the planes. The Josephson-like
interplane coupling $S_\perp $ is obtained by retaining the
lowest-order contribution in $t_\perp $ ($\langle l,l' \rangle $ denotes
nearest neighbors).
If $t_\perp ^2\tau \gg D\Lambda ^2\sim 1/\tau $, the fluctuations in the
transverse direction are weak. In this case, ${\rm Tr}\lbrack
Q_lQ_{l'}\rbrack $ can be approximated by $-(d^2/2){\rm Tr}\lbrack \nabla
_zQ\rbrack ^2$ and we obtain a NL$\sigma
$M with anisotropic diffusion coefficients $D$ and $D_\perp $. For $t_\perp
\tau \gg 1$, an electron in a given plane hops to the neighboring plane
before being scattered. There is therefore no 2D diffusive motion in that
limit so that the quasi-2D aspect does not play an essential role. The
anisotropy can be simply eliminated by an appropriate length rescaling in the
longitudinal and transverse directions. \cite{Wolfle84,possible}
We consider in the following only the limit $t_\perp \tau \ll 1$ where we
expect a 2D/3D dimensional crossover at a characteristic length $L_x$.
\subsection{RG approach}
We analyze the action (\ref{action1}) within a RG approach following Ref.\
\onlinecite{Affleck}. The condition $t_\perp \tau \ll 1$ ensures that the
transverse coupling is weak ($D\Lambda ^2\gg t_\perp ^2\tau $). The initial
stage of the renormalization will therefore be essentially 2D, apart from
small corrections due to the interplane coupling. If we neglect the latter,
we then obtain the renormalized action
\begin{eqnarray}
S_{\Lambda '} \lbrack Q \rbrack &=& {\pi \over 8}N_2(0)D(\Lambda ')
\sum _l \int d^2r {\rm Tr} \lbrack \bbox{
\nabla }_\parallel Q_l({\bf r})\rbrack ^2 - {\pi \over 2}N_2(0) \sum _l
\int d^2r
{\rm Tr}\lbrack \Omega Q_l({\bf r})\rbrack \nonumber \\
&& - {\pi \over 4}N_2(0)t_\perp ^2\tau \sum _{\langle l,l' \rangle } \int d^2r
{\rm Tr} \lbrack Q_l({\bf r}) Q_{l'}({\bf r})\rbrack \,,
\label{action2}
\end{eqnarray}
where $\Lambda '<\Lambda $ is the reduced cut-off after renormalization and
$D(\Lambda ')$ the renormalized value of the 2D longitudinal diffusion
coefficient.
Since the scaling dimension of the field $Q$ vanishes (i.e. there is no
rescaling of the field), \cite{Belitz94} there is no renormalization of the
interplane coupling. \cite{Affleck} The 2D/3D dimensional crossover occurs
when the transverse and longitudinal couplings become of the same order:
\begin{equation}
{1 \over 2}D_x\Lambda _x^2 \sim t_\perp ^2\tau \,,
\label{critXover}
\end{equation}
where $D_x=D(\Lambda _x)$ is the longitudinal diffusion coefficient at the
crossover. In the 3D regime ($\Lambda '\leq \Lambda _x$), it is appropriate to
take the continuum limit in the transverse direction.\cite{Affleck} Using
\begin{equation}
{\rm Tr}\lbrack Q_lQ_{l'}\rbrack = -{1 \over 2} {\rm Tr}\lbrack Q_l-Q_{l'}
\rbrack ^2 +{\rm const}
\to -\frac{d^2}{2} {\rm Tr}\lbrack \nabla _zQ_l\rbrack ^2
\end{equation}
for $l$ and $l'$ nearest neighbors, and $d\sum _l\to \int dz $, we obtain
\begin{equation}
S_{\Lambda _x} \lbrack Q \rbrack = {\pi \over 8}N_3(0)D_x
\int d^3r \Bigl \lbrack {\rm Tr} \lbrack \bbox{
\nabla }_\parallel Q({\bf r})\rbrack ^2 +(d\Lambda _x)^2 {\rm Tr} \lbrack
\nabla _z Q({\bf r})\rbrack ^2 \Bigr \rbrack
- {\pi \over 2}N_3(0) \int d^3r
{\rm Tr}\lbrack \Omega Q({\bf r})\rbrack \,,
\label{action3}
\end{equation}
where $N_3(0)=N_2(0)/d$ is the 3D density of states at the Fermi level. The
cut-offs are $\Lambda _x$ and $1/d$ in the longitudinal and transverse
directions, respectively. Note that ${\bf r}$ is now a 3D coordinate.
The 3D regime is thus described by an anisotropic NL$\sigma $M.
However, the anisotropy is the same for the
diffusion coefficients and the cut-offs and can therefore easily
be suppressed by an appropriate rescaling of the lengths: ${\bf
r}'_\parallel = {\bf r}_\parallel /s_1$, $z'=z/s_2$, with
\begin{eqnarray}
s_1^{-2}D_x&=&s_2^{-2}D_x(\Lambda _xd)^2 \,, \nonumber \\
s_1^2s_2&=&1 \,.
\label{rescale}
\end{eqnarray}
The last equation ensures that $\int d^3{\bf r}'=\int d^3{\bf r}$ and leaves
the last term of (\ref{action3}) invariant. From (\ref{rescale}), we obtain
$s_1=(1/\Lambda _xd)^{1/3}$, $s_2=(\Lambda _xd)^{2/3}$, and the new
effective action
\begin{equation}
S_{\Lambda _x} \lbrack Q \rbrack = {\pi \over 8}N_3(0)\bar D
\int d^3r {\rm Tr} \lbrack \bbox{ \nabla } Q({\bf r})\rbrack ^2
- {\pi \over 2}N_3(0) \int d^3r
{\rm Tr}\lbrack \Omega Q({\bf r})\rbrack \,,
\label{action4}
\end{equation}
where $\bar D=D_x(\Lambda _xd)^{2/3}$. The cut-off is now isotropic:
$s_1\Lambda _x\sim s_2/d \sim \bar \Lambda =(\Lambda _x^2/d)^{1/3}$. The
dimensionless coupling constant of the NL$\sigma $M (\ref{action4}) is
\begin{equation}
\lambda =\frac{4}{\pi N_3(0)\bar D} \bar \Lambda = \frac{4}{\pi N_2(0)D_x}
=\frac{4}{\pi g_x} \,.
\end{equation}
The MIT occurs for $\lambda =\lambda _c=O(1)$, i.e. when the
2D conductance $g_x=g_c=O(1)$. Using $g_x=g_c\sim 1$
for $\Lambda _x\sim \xi _{2D}^{-1}$, we recover the result (\ref{tcrit}) for
the critical value of the interplane coupling.
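As a consistency check, the rescaling factors quoted after (\ref{rescale}) indeed satisfy both conditions; a one-line sympy verification (writing $a$ for $\Lambda_x d$):

```python
# Verify that s1 = a**(-1/3), s2 = a**(2/3) solve the rescaling conditions
# s1**(-2) = s2**(-2) * a**2   (equal anisotropy of couplings and cut-offs)
# s1**2 * s2 = 1               (unit Jacobian),   with a = Lambda_x * d.
import sympy as sp

a = sp.symbols('a', positive=True)
s1 = a ** sp.Rational(-1, 3)
s2 = a ** sp.Rational(2, 3)

print(sp.simplify(s1**(-2) - s2**(-2) * a**2), sp.simplify(s1**2 * s2 - 1))
# prints: 0 0
```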
Further information about the metallic and insulating phases can be obtained
from the $L$-dependence of the 2D dimensionless conductance. The
self-consistent diagrammatic theory of Anderson localization
gives\cite{Vollhardt92}
\begin{equation}
g(L)={1 \over {2\pi ^2}} \ln \left ( 1+\frac{\xi _{2D}^2}{L^2} \right )
\left ( 1+\frac{L}{\xi_{2D}} \right ) e^{-L/\xi _{2D}} \,,
\label{g2D}
\end{equation}
where $L$ should be identified with $\Lambda ^{-1}$.
Eq.\ (\ref{g2D}) gives $g(L)\simeq (1/\pi ^2)\ln (\xi _{2D}/L)$ for
$L\ll \xi _{2D}$ (in agreement with perturbative RG
calculation\cite{Belitz94}) and $g(L)\simeq
(1/2\pi ^2)(\xi _{2D}/L)e^{-L/\xi _{2D}}$ for $L\gg \xi _{2D}$. The
crossover length $L_x$ obtained from (\ref{critXover}) and (\ref{g2D}) is
shown in Fig.\ \ref{FigLX}. Since the crossover
length is not precisely defined, we have multiplied $L_x$ by a constant in
order to have $L_x\simeq v_F/t_\perp $ deep in the metallic phase ($t_\perp \gg
t_\perp ^{(c)}$) (this will allow a detailed comparison with the results of
Sec.\ \ref{sec:AF}).
In the metallic phase, $t_\perp \geq t_\perp ^{(c)}$, we have
\begin{eqnarray}
g_x &\simeq & 2N_2(0)t_\perp ^2 \tau \xi _{2D}^2 e^{-2\pi ^2g_x} \gtrsim 1\,,
\nonumber \\
L_x &\simeq & \xi _{2D} e^{-\pi ^2g_x} \lesssim \xi _{2D} \,.
\end{eqnarray}
Since the renormalization in the 3D regime does not change the anisotropy,
we obtain from (\ref{action3}) the anisotropy of the correlation lengths
(see Fig.\ \ref{FigAnis}):
\begin{equation}
\frac{L_\perp }{L_\parallel }= \frac{\sigma _\parallel }{\sigma _\perp }
=\frac{1}{(\Lambda _xd)^2} \,.
\end{equation}
Deep in the metallic phase ($t_\perp \gg t_\perp ^{(c)}$), we have $L_\perp
/L_\parallel \sim (t_\parallel /t_\perp )^2$ (where $t_\parallel \sim
v_Fk_F\sim v_F/d$ is the transfer integral in the planes) while close to the
transition ($t_\perp \gtrsim t_\perp ^{(c)}$) we have $L_\perp /L_\parallel
\sim (\xi _{2D}/d)^2\sim (t_\parallel /t_\perp )^2/(k_Fl)$. Thus, as we move
towards the transition, the anisotropy deviates from the result $L_\perp
/L_\parallel \sim (t_\parallel /t_\perp )^2$ predicted by the 3D anisotropic
NL$\sigma $M. \cite{Wolfle84}
In the insulating phase, $t_\perp \leq t_\perp ^{(c)}$, we have
\begin{eqnarray}
g_x &=& \frac{1}{2\pi ^2}\xi _{2D} \sqrt{\frac{2N_2(0)t_\perp ^2\tau }{g_x}}
e^{ -\sqrt{\frac{g_x}{2N_2(0)t_\perp ^2\tau }} \frac{1}{\xi _{2D}}} \lesssim
1 \,, \nonumber \\
L_x &\simeq & \sqrt{\frac{g_x}{2N_2(0)t_\perp ^2\tau }} \gtrsim \xi _{2D}
\,.
\end{eqnarray}
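The insulating-phase relation can be solved in the same way. The sketch below, with an illustrative small value of $B=2N_2(0)t_\perp^2\tau$ (in units where $\xi_{2D}=1$), uses $L_x(g)=\sqrt{g/B}$ and finds the fixed point of $g=(1/2\pi^2)(\xi_{2D}/L_x)e^{-L_x/\xi_{2D}}$:

```python
import math

def solve_gx_insulating(B, xi, lo=1e-12, hi=10.0, iters=200):
    # Root of f(g) = g - (1/2 pi^2) (xi/L(g)) exp(-L(g)/xi),
    # with L(g) = sqrt(g/B); the right-hand side decreases with g,
    # so f is monotonic and bisection applies.
    def f(g):
        L = math.sqrt(g / B)
        return g - (xi / L) * math.exp(-L / xi) / (2.0 * math.pi**2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

xi = 1.0    # lengths in units of xi_2D
B = 0.01    # illustrative 2 N_2(0) t_perp^2 tau (insulating side)
g_x = solve_gx_insulating(B, xi)
L_x = math.sqrt(g_x / B)

print(g_x)  # < 1
print(L_x)  # > xi_2D, as stated in the text
```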
The anisotropy of the localization lengths is given by (see Fig.\
\ref{FigAnis})
\begin{equation}
\frac{\xi _\perp }{\xi _\parallel }=\frac{s_2}{s_1}=\Lambda _xd \,.
\end{equation}
Close to the transition, $\Lambda _x\sim \xi _{2D}^{-1}$, so that $\xi
_\perp /\xi _\parallel \sim \sqrt{k_Fl}(t_\perp /t_\parallel )$. Again this
differs from the result $\xi _\perp /\xi _\parallel \sim t_\perp
/t_\parallel $ predicted by a 3D anisotropic NL$\sigma $M.\cite{Wolfle84}
\subsection{Effect of a parallel magnetic field}
\label{subsec:MF}
We consider in this section the effect of an external magnetic field ${\bf
H}=(0,H,0)$ parallel to the planes. In second quantized form, the interplane
hopping Hamiltonian is
\begin{eqnarray}
{\cal H}_\perp &=& -t_\perp \sum _{\langle l,l'\rangle } \int d^2{\bf r}\,
e^{ie\int _{({\bf r},ld)}^{({\bf r},l'd)} {\bf A}({\bf s}) \cdot d{\bf s} }
\psi ^\dagger ({\bf r},l) \psi ({\bf r},l') \nonumber \\
&=& -t_\perp \sum _{\langle l,l'\rangle ,{\bf k}_\parallel } \psi ^\dagger
({\bf k}_\parallel +(l-l'){\bf G},l') \psi ({\bf k}_\parallel ,l)
\label{Hperp}
\end{eqnarray}
in the gauge ${\bf A}=(0,0,-Hx)$, where ${\bf G}=(G,0,0)$ with $G=-eHd$.
In the second line of (\ref{Hperp}), we have
used a mixed representation by taking the Fourier transform with respect to
the intraplane coordinate ${\bf r}$. The effective action $S\lbrack Q\rbrack
$ of the NL$\sigma $M can be simply obtained by calculating the
particle-hole bubble $\Pi ({\bf q},\omega _\nu )$ in the semiclassical
(diffusive) approximation. To lowest order in $t_\perp $, we have $\Pi =\Pi
^{(0)}+\Pi ^{(1)}+\Pi
^{(2)}$ where $\Pi ^{(0)}\simeq 2\pi N_2(0)\tau (1-\vert \omega _\nu \vert
\tau -D\tau q_\parallel ^2)$ (for $\vert \omega _\nu \vert \tau ,D\tau
q_\parallel ^2\ll 1$) is the 2D result. Here $\omega _\nu $ is a bosonic
Matsubara frequency. $\Pi ^{(1)}$ and $\Pi ^{(2)}$ are
given by (see Fig.\ \ref{FigDia})
\begin{eqnarray}
\Pi ^{(1)} &=& \frac{2t_\perp ^2}{L^2} \sum _{{\bf k}_\parallel ,\delta
=\pm 1} G^2({\bf k}_\parallel ,\omega _n)
G({\bf k}_\parallel -\delta {\bf G},\omega _n)
G({\bf k}_\parallel ,\omega _{n+\nu }) \nonumber \\
&=& -8\pi N_2(0)\frac{t_\perp ^2\tau ^3}{(1+\omega _c^2\tau
^2)^{1/2}} \,, \nonumber \\
\Pi ^{(2)} &=& \frac{t_\perp ^2}{L^2} \sum _{{\bf k}_\parallel ,\delta
=\pm 1} e^{iq_\perp \delta d}
G({\bf k}_\parallel ,\omega _n)
G({\bf k}_\parallel -\delta {\bf G},\omega _n)
G({\bf k}_\parallel ,\omega _{n+\nu })
G({\bf k}_\parallel -\delta {\bf G},\omega _{n+\nu }) \nonumber \\
&=& 8\pi N_2(0)\frac{t_\perp ^2\tau ^3}{(1+\omega _c^2\tau
^2)^{1/2}}\cos (q_\perp d) \,,
\end{eqnarray}
for $\omega _n\omega _{n+\nu }<0$ and $q_\parallel ,\omega _\nu \to
0$. $L^2$ is the area of the planes and
\begin{equation}
G({\bf k}_\parallel ,\omega _n)=(i\omega
_n+(i/2\tau ){\rm sgn}(\omega _n)-{\bf k}_\parallel ^2/2m+\mu )^{-1}
\end{equation}
is the 2D one-particle Green's function ($\mu $ is the Fermi energy).
We have introduced the characteristic
magnetic energy $\omega _c=v_FG$. The diffusion modes are determined by
\begin{equation}
1-(2\pi N_2(0)\tau )^{-1} \Pi ({\bf q},\omega _\nu )= \vert \omega _\nu
\vert \tau +D\tau q_\parallel ^2 + \frac{8t_\perp ^2\tau ^2}{(1+\omega _c^2\tau
^2)^{1/2}} \sin ^2(q_\perp d/2) \,,
\end{equation}
which yields the following interplane coupling in the NL$\sigma $M:
\begin{equation}
S_\perp \lbrack Q\rbrack = -\frac{\pi }{4} N_2(0) \frac{t_\perp ^2\tau }
{(1+\omega _c^2\tau ^2)^{1/2}} \sum _{\langle l,l'\rangle } \int d^2r
\sum _{inm\alpha \beta } {_{ii}Q}_{lnm}^{\alpha \beta }({\bf r})
_{ii}Q_{l'mn}^{\beta \alpha }({\bf r}) \,.
\label{SperpH}
\end{equation}
Notice that we have retained only the diagonal part of the field $Q$ since the
magnetic field suppresses the interplane diffusion modes in the
particle-particle channel. This is strictly correct only above
a characteristic field corresponding to a ``complete'' breakdown of time
reversal symmetry (this point is further discussed below). Eq.\
(\ref{SperpH}) shows that the magnetic field not only breaks down time
reversal symmetry but also reduces the amplitude of the interplane
hopping. This quantum effect (important only when $\omega _c\tau \gg
1$) can be understood from the consideration of the semiclassical electronic
orbits.\cite{ND94} Notice that such an effect cannot be described in the
semiclassical phase integral (or eikonal) approximation for the magnetic
field. The action $S_{2D}$ of the independent planes is
not modified by the magnetic field since the latter is parallel to the
planes.
We can now apply the same RG procedure as in the preceding section. The
initial (2D) stage of the renormalization is not modified by the parallel
magnetic field, and the 2D/3D dimensional crossover is determined by
\begin{equation}
\frac{1}{2}D_x\Lambda _x^2\sim \frac{t_\perp ^2\tau }{(1+\omega _c^2\tau
^2)^{1/2}} \,.
\label{LxH}
\end{equation}
The crossover length $L_x(t_\perp ,\omega _c)$ therefore satisfies the
scaling law
\begin{eqnarray}
L_x (t_\perp ,\omega _c)&=& L_x \left ( \frac{t_\perp }{(1+\omega _c^2\tau
^2)^{1/4}},0 \right ) \nonumber \\
&\equiv & L_x \left ( \frac{t_\perp }{(1+\omega _c^2\tau
^2)^{1/4}} \right ) \,,
\end{eqnarray}
where $L_x(t_\perp )$ is the zero field crossover length obtained in the
preceding section.
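This scaling law can be checked numerically by solving the crossover condition (\ref{LxH}) with the length-dependent diffusion coefficient $D(L)=g(L)/N_2(0)$ built from Eq.\ (\ref{g2D}). The sketch below (illustrative parameter values) verifies that turning on the field is equivalent to the rescaling $t_\perp\to t_\perp/(1+\omega_c^2\tau^2)^{1/4}$:

```python
import math

def g2D(L, xi):
    # Eq. (g2D) for the 2D conductance
    return math.log(1.0 + xi**2 / L**2) * (1.0 + L / xi) \
           * math.exp(-L / xi) / (2.0 * math.pi**2)

def L_x(t_perp, wc_tau, xi=100.0, N2tau=1.0, iters=200):
    # Solve g(L)/(2 L^2) = N_2(0) tau t_perp^2 / sqrt(1 + (wc tau)^2);
    # the left side decreases monotonically with L, so bisect on a log scale.
    rhs = N2tau * t_perp**2 / math.sqrt(1.0 + wc_tau**2)
    lo, hi = 1e-6, 1e6
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if g2D(mid, xi) / (2.0 * mid**2) > rhs:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

t, x = 0.05, 3.0                  # illustrative t_perp and omega_c tau
t_eff = t / (1.0 + x**2) ** 0.25
print(L_x(t, x))                  # in-field crossover length
print(L_x(t_eff, 0.0))            # zero-field length at the rescaled coupling
```

The two printed lengths coincide because only the combination $t_\perp^2/(1+\omega_c^2\tau^2)^{1/2}$ enters the crossover condition.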
In the 3D regime, the diffusion modes are 3D. The smallest volume
corresponding to a
diffusive motion is of order $L_x^2d$. This corresponds to a magnetic flux
$HL_xd$ for a parallel field. Time reversal symmetry is ``completely''
broken when this magnetic flux is at least of the order of the flux quantum,
i.e. when $L_x \gtrsim 1/G$. It is easy to verify that this defines a
characteristic field $H_0$ such that $\omega _c\tau \ll 1$. Above $H_0$, the
diffusion modes are completely suppressed in the particle-particle channel.
The effective action is then given by (\ref{action3})
where only the diagonal part ($i=j$) of the field $Q$ should be
considered. $D_x$ and $\Lambda _x$ depend on $H$ according to
(\ref{LxH}).
The critical value of the interplane coupling for $H>H_0$
is obtained from $g_x=g_c'$
where $g_c'$ is the critical value of the dimensionless conductance in the
unitary case (no time-reversal symmetry). Thus, we have
\begin{equation}
t_\perp ^{(c)}(H)= {t_\perp ^{(c)}}' (1+\omega _c^2\tau ^2)^{1/4} \,,
\end{equation}
where ${t_\perp ^{(c)}}'\sim \lbrack g_c'/(\tau N_2(0)\xi _{2D}^2)\rbrack
^{1/2}< t_\perp ^{(c)}$ is the critical
coupling for $\omega _c=0$ in the unitary case. Close to the MIT, we have
$L_x\sim \xi _{2D}$ (since $g_c'\sim 1$) so that
$H_0$ is defined by $\omega _c\sim v_F/\xi _{2D}$, which corresponds to an
exponentially small value of the field: $\omega _c\sim \tau ^{-1}e^{-\alpha
k_Fl}$. The phase diagram in the $(t_\perp -\omega _c)$ plane is
shown in Fig.\ \ref{FigPhase}. For $H\lesssim H_0$, the curve is not
quantitatively correct and should reach $t_\perp ^{(c)}$ at $H=0$. We thus
find that a weak
magnetic field favors the metallic phase ($t_\perp ^{(c)}(H)< t_\perp
^{(c)}$ for $0<H\ll H_0$) while a strong magnetic field favors the
insulating phase ($t_\perp ^{(c)}(H)> t_\perp ^{(c)}$ for $H\gg H_0$). This
agrees with the results of Ref.\ \onlinecite{ND92} obtained from scaling
arguments based on the weak localization correction to the Drude-Boltzmann
conductivity. \cite{notaND92}
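This nonmonotonic trend can be illustrated with the ratio $t_\perp^{(c)}(H)/t_\perp^{(c)}(0)=\sqrt{g_c'/g_c}\,(1+\omega_c^2\tau^2)^{1/4}$, valid for $H>H_0$. In the sketch below the value of $g_c'/g_c$ is an assumption chosen only for illustration, and the weak-field end ($H\lesssim H_0$) is schematic, as noted above.

```python
import math

def tc_ratio(wc_tau, gc_unitary_over_gc=0.5):
    # t_perp^c(H) / t_perp^c(0) for H > H_0:
    # sqrt(g_c'/g_c) < 1 reflects the unitary-class critical conductance;
    # (1 + wc^2 tau^2)^(1/4) reflects the field-induced hopping suppression.
    return math.sqrt(gc_unitary_over_gc) * (1.0 + wc_tau**2) ** 0.25

print(tc_ratio(0.1))   # < 1: weak field favors the metallic phase
print(tc_ratio(10.0))  # > 1: strong field favors the insulating phase
```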
In the metallic phase, the anisotropy of the correlation
lengths is given by
\begin{equation}
\frac{L_\perp }{L_\parallel } = L_x^2 \left ( \frac{t_\perp }{(1+\omega
_c^2\tau ^2)^{1/4}} \right ) \frac{1}{d^2} \,.
\end{equation}
In the insulating phase, the anisotropy of the localization lengths is given
by
\begin{equation}
\frac{\xi _\perp }{\xi _\parallel } = \frac{d}{ L_x \left ( \frac{t_\perp
}{(1+\omega _c^2\tau ^2)^{1/4}} \right )} \,.
\end{equation}
\section{Auxiliary field method}
\label{sec:AF}
The aim of this section is to recover the results of the RG approach by
means of a completely different method. In order to study how the
``correlations'' are able to propagate in the transverse direction, we will
study an effective field theory which is generated by a Hubbard-Stratonovich
transformation of the interplane coupling $S_\perp \lbrack Q
\rbrack $. This method bears some obvious similarities with standard
approaches in critical phenomena.\cite{Ising} It is particularly useful when
the low dimensional problem (here the 2D Anderson localization) can be
solved (at least approximately) by one or another method. This kind of
approach has already been used for weakly coupled 1D Ginzburg-Landau models
\cite{McKenzie} or weakly coupled Luttinger liquids. \cite{Boies95} While
these studies were done in the high temperature (or disordered) phase, we
start in this paper from the ``ordered'' (i.e. metallic) phase where the
low-energy modes are Goldstone modes.
Introducing an auxiliary matrix field $\zeta $ to decouple $S_\perp
$, we rewrite the partition function as
\begin{equation}
Z= \int {\cal D}\zeta \,
e^{ -\sum _{l,l'} \int d^2r {\rm Tr}\lbrack \zeta _l({\bf r}) J_{\perp
l,l'}^{-1}
\zeta _{l'}({\bf r})\rbrack } \int {\cal D}Q \,e^{-S_{2D}\lbrack Q\rbrack
+2 \sum _l \int d^2r {\rm Tr} \lbrack \zeta _l({\bf
r})Q_l({\bf r})\rbrack } \,.
\end{equation}
The field $\zeta $ should have the same structure as the field $Q$ and
therefore satisfies the condition $\zeta ^+=C^T\zeta ^TC=\zeta $. $J_{\perp
l,l'}^{-1}$ is the inverse of the matrix $J_{\perp l,l'}=J_\perp (\delta
_{l,l'+1}+\delta _{l,l'-1})$ where $J_\perp =(\pi /4)N_2(0)t_\perp ^2\tau $
(the matrix $J_\perp $ is diagonal in the indices $\alpha ,n,i$).
We first determine the value of the auxiliary field in the saddle point
approximation. Assuming a
solution of the form $_{ij}(\zeta ^{\rm SP})_{lnm}^{\alpha \beta }({\bf
r})=\zeta _0\delta _{i,j}\delta _{\alpha ,\beta }\delta _{n,m}$, we obtain the
saddle point equation
\begin{equation}
\zeta _0=J_\perp (q_\perp =0)\left \langle _{ii}Q_{lnn}^{\alpha \alpha
}({\bf r}) \right \rangle _{\rm SP} \,,
\label{SPeq}
\end{equation}
where $J_\perp (q_\perp )=2J_\perp \cos (q_\perp d)$ is the Fourier transform
of $J_{\perp l,l'}$. The average $\langle \cdot \cdot \cdot \rangle _{\rm
SP}$ should be taken with the saddle point action
\begin{equation}
S_{\rm SP}\lbrack Q\rbrack = S_{2D}\lbrack Q\rbrack-2\sum _l\int d^2r {\rm
Tr} \lbrack \zeta ^{\rm SP}Q_l({\bf r})\rbrack \,.
\end{equation}
Note that the saddle point value $\zeta ^{\rm SP}$ acts as a finite external
frequency. The mean value of the field Q is related to the density of states
and is a non-singular quantity. We have\cite{Belitz94} $\langle
_{ii}Q_{lnn}^{\alpha \alpha }
({\bf r}) \rangle _{\rm SP}={\rm sgn}(\omega _n)$ which yields
\begin{equation}
\zeta _0=J_\perp (q_\perp =0){\rm sgn}(\omega _n)=2J_\perp {\rm sgn}
(\omega _n) \,.
\label{zeta0}
\end{equation}
This defines a characteristic frequency for the 2D/3D dimensional crossover
\begin{equation}
\omega _x=\frac{8}{\pi N_2(0)} \vert \zeta _0\vert =4t_\perp ^2\tau \,.
\end{equation}
We now consider the fluctuations around the saddle point solution. As in the
standard localization problem, the Sp(2$N$) symmetry of the Lagrangian is
spontaneously broken by the ``frequency'' $\Omega $ to ${\rm Sp}(N)\times
{\rm Sp}(N)$. \cite{Belitz94} The term ${\rm
Tr} \lbrack \Omega Q\rbrack $ breaks the symmetry of $S_{2D}\lbrack Q\rbrack
$. Via the coupling ${\rm Tr}\lbrack \zeta Q\rbrack $ between the fields
$\zeta $ and $Q$, it also breaks the symmetry of the effective action
$S\lbrack \zeta \rbrack $ of the field $\zeta $. Our aim is now to
obtain the effective action of the (diffusive) Goldstone modes associated
with this spontaneous symmetry breaking. Following
Ref.\ \onlinecite{Belitz94}, we shift the field according to $\zeta \to \zeta +
\zeta ^{\rm SP}-\pi N_2(0)\Omega /4$ and expand the action to lowest order
in $\Omega $ and $\zeta $. The partition function becomes
\begin{eqnarray}
Z &=& \int {\cal D} \zeta \,e^{ -\sum _{l,l'}\int d^2r \bigl \lbrack
{\rm Tr}\lbrack \zeta _l({\bf r}) J_{\perp l,l'}^{-1} \zeta _{l'}({\bf r})
\rbrack
+2 {\rm Tr}\lbrack \zeta ^{\rm SP} J_{\perp l,l'}^{-1} \zeta _{l'}({\bf r})\rbrack
-(\pi /2)N_2(0){\rm Tr}\lbrack \Omega J_{\perp l,l'}^{-1} \zeta _{l'}({\bf
r})\rbrack \bigr \rbrack }
\nonumber \\ && \times \int {\cal D}Q\,
e^{-\tilde S_{2D}\lbrack Q\rbrack +2\sum _l \int d^2r
{\rm Tr}\lbrack \zeta _l({\bf r}) Q _l({\bf r})\rbrack } \,,
\end{eqnarray}
where we have introduced the 2D action
\begin{eqnarray}
\tilde S_{2D}\lbrack Q\rbrack &=&S_{2D}\lbrack Q\rbrack +\frac{\pi }{2}N_2(0)
\sum _l\int d^2r {\rm Tr}\lbrack \Omega Q_l({\bf r})\rbrack
-2\sum _l \int d^2r {\rm Tr}\lbrack \zeta ^{\rm SP} Q_l({\bf r})\rbrack
\nonumber \\ &=&
\frac{\pi }{8}N_2(0)D\sum _l \int d^2r {\rm Tr}\lbrack
\bbox{ \nabla }_\parallel Q_l({\bf r})\rbrack ^2
-2\sum _l \int d^2r {\rm Tr}\lbrack \zeta ^{\rm SP} Q_l({\bf r})\rbrack \,.
\end{eqnarray}
$\tilde S_{2D}$ is the action of the decoupled planes ($t_\perp =0$) at the
finite frequency $\omega _x$. To proceed further, we note that
\begin{equation}
\int {\cal D}Q\,
e^{-\tilde S_{2D}\lbrack Q\rbrack +2\sum _l \int d^2r
{\rm Tr}\lbrack \zeta _l({\bf r}) Q _l({\bf r})\rbrack } = \tilde Z_{2D}
e^{W\lbrack \zeta \rbrack } \,,
\end{equation}
where $W\lbrack \zeta \rbrack $ is the generating functional of connected
Green's functions calculated with the action $\tilde S_{2D}$.\cite{Negele}
$\tilde Z_{2D}$ is the partition function corresponding to the action
$\tilde S_{2D}$. We have
\begin{eqnarray}
W\lbrack \zeta \rbrack &=& 2\sum _l\int d^2r {\rm Tr}\lbrack \zeta _l({\bf r})
\langle Q_l({\bf r})\rangle _{\tilde S_{2D}}\rbrack \nonumber \\ &&
+2\sum _l \int d^2r_1d^2r_2
\sum _{ijnm\alpha \beta } {_{ij}\zeta }_{lnm}^{\alpha \beta }({\bf r}_1)
_{ij}\tilde R_{lnm}^{\alpha \beta }({\bf r}_1,{\bf r}_2)
_{ji}\zeta _{lmn}^{\beta \alpha }({\bf r}_2) +...
\end{eqnarray}
where\cite{nota1}
\begin{equation}
_{ij}\tilde R_{lnm}^{\alpha \beta }({\bf r}_1,{\bf r}_2) = \langle
_{ji}Q _{lmn}^{\beta \alpha }({\bf r}_1)
_{ij}Q _{lnm}^{\alpha \beta }({\bf r}_2) \rangle _{\tilde S_{2D}} \,.
\end{equation}
Using the saddle point equation (\ref{SPeq}),
we obtain to quadratic order in the field $\zeta $ and lowest order in
$\Omega $ the effective action
\begin{eqnarray}
S\lbrack \zeta \rbrack &=& \sum _{l,l'} \int d^2r_1d^2r_2\,
\sum _{ijnm\alpha \beta }
{_{ij}\zeta }_{lnm}^{\alpha \beta }({\bf r}_1)
\Bigl \lbrack J_{\perp l,l'}^{-1} \delta ({\bf r}_1-{\bf r}_2)
-2 {_{ij}\tilde R}_{lnm}^{\alpha \beta }({\bf r}_1,{\bf r}_2) \delta _{l,l'}
\Bigr \rbrack \,
_{ji}\zeta _{lmn}^{\beta \alpha }({\bf r}_2) \nonumber \\ &&
-\frac{\pi }{2}N_2(0)\sum _{l,l'} \int d^2r {\rm Tr}\lbrack
\Omega J_{\perp l,l'}^{-1} \zeta _l({\bf r})\rbrack \,.
\end{eqnarray}
For $\omega _n\omega _m<0$, $ {_{ij}\tilde R}_{lnm}^{\alpha \beta }({\bf
r}_1,{\bf r}_2)$ is the propagator of the Goldstone modes of the action
$\tilde S_{2D}\lbrack Q\rbrack $. Its Fourier transform is given by
\begin{equation}
{_{ij}\tilde R}_{lnm}^{\alpha \beta }({\bf q}_\parallel )\equiv
\tilde R({\bf q}_\parallel )= \frac{4}{\pi N_2(0)(D_xq_\parallel ^2+\omega
_x)} \,,
\end{equation}
where $D_x$ is the exact 2D diffusion coefficient at the finite frequency
$\omega _x$.
Notice that the finite frequency $\omega _x$ gives a mass to the Goldstone
modes. The preceding equation defines the crossover length $L_x=(D_x/\omega
_x)^{1/2}$. Since $\tilde R({\bf q}_\parallel =0)=1/2J_\perp (q_\perp =0)$,
the fluctuations of $\zeta $ around its saddle point value are massless for
$\omega _n\omega _m<0$. On the other hand, it is clear that the fluctuations
are massive for $\omega _n\omega _m>0$. Having identified the Goldstone
modes resulting from the spontaneous symmetry breaking, we now follow the
conventional NL$\sigma $M approach. \cite{Belitz94} We suppress the massive
fluctuations by imposing on the field $\zeta $ the constraints $\zeta ^2={\zeta
^{\rm SP}}^2$ and ${\rm Tr}\,\zeta =0$. Rescaling the field $\zeta $ in
order to have $\zeta ^2=\underline 1$ and introducing the Fourier
transformed fields $\zeta ({\bf q})$, we obtain
\begin{equation}
S\lbrack \zeta \rbrack = J_\perp ^2 \sum _{\bf q} (J_\perp ^{-1}(q_\perp )
-2\tilde R({\bf q}_\parallel )) {\rm Tr}\lbrack \zeta ({\bf q})
\zeta (-{\bf q}) \rbrack -\frac{\pi }{2}N_2(0) {\rm Tr}\lbrack \Omega \zeta
({\bf q}=0)\rbrack \,.
\end{equation}
In the 3D regime, $q_\parallel \lesssim 1/L_x$ and $q_\perp \lesssim 1/d$,
we can expand $J_\perp ^{-1}(q_\perp )-2\tilde R({\bf q}_\parallel )$ in
lowest order in $q_\parallel $ and $q_\perp $ to obtain
\begin{equation}
S\lbrack \zeta \rbrack = \frac{\pi }{8}N_2(0) \sum _{\bf q}
(D_xq_\parallel ^2 +2t_\perp ^2d^2\tau q_\perp ^2) {\rm Tr}\lbrack \zeta
({\bf q}) \zeta (-{\bf q}) \rbrack -\frac{\pi }{2}N_2(0) {\rm Tr}\lbrack
\Omega \zeta ({\bf q}=0)\rbrack \,.
\end{equation}
Going back to real space and taking the continuum limit in the $z$ direction
(which introduces a factor $1/d$), we finally arrive at
\begin{equation}
S\lbrack \zeta \rbrack = \frac{\pi }{8}N_3(0) \int d^3r \Bigl \lbrack D_x
{\rm Tr}\lbrack \bbox{ \nabla }_\parallel \zeta \rbrack ^2 +2t_\perp ^2d^2\tau
{\rm Tr}\lbrack \nabla _z \zeta \rbrack ^2 \Bigr \rbrack -\frac{\pi
}{2}N_3(0) \int d^3r {\rm Tr}\lbrack \Omega \zeta \rbrack \,.
\label{action5}
\end{equation}
The cut-offs are $\Lambda _x=L_x^{-1}$ in the longitudinal directions and
$1/d$ in the transverse direction.
Eq.\ (\ref{action5}) is similar to the action (\ref{action3}) we have
obtained in the RG approach. The only difference is that the crossover
length is not defined in the same way. In the RG approach, $L_x\sim
D(L_x)/\omega _x$ is defined via the length dependent 2D diffusion
coefficient while the auxiliary field method involves the frequency
dependent 2D diffusion coefficient. We approximate the latter by
\cite{Vollhardt92}
\begin{equation}
D(\omega _\nu )= \frac{D}{1+\frac{l^2}{\xi _{2D}^2\vert \omega _\nu
\vert \tau }} \,,
\end{equation}
where $\omega _\nu $ is a bosonic Matsubara frequency. This yields
\begin{equation}
\Lambda _x^2=L_x^{-2}=\frac{\omega _x}{D}\left ( 1+\frac{l^2}{\xi
_{2D}^2\omega _x\tau } \right ) \,.
\end{equation}
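The two regimes contained in this expression can be read off numerically. A small sketch, assuming the 2D Drude relation $D=l^2/2\tau$ (with $l=v_F\tau$) and illustrative values, shows $L_x\to\sqrt{D/\omega_x}$ at large $\omega_x$ and $L_x\sim\xi_{2D}$ (here $\xi_{2D}/\sqrt 2$) as $\omega_x\to 0$:

```python
import math

# Crossover length L_x = Lambda_x^{-1} from the frequency-dependent
# diffusion coefficient of the self-consistent theory.
l, tau, xi = 1.0, 1.0, 100.0     # illustrative mean free path, tau, xi_2D
D = l**2 / (2.0 * tau)           # assumes the 2D Drude relation D = l^2/(2 tau)

def L_x(omega_x):
    lam2 = (omega_x / D) * (1.0 + l**2 / (xi**2 * omega_x * tau))
    return 1.0 / math.sqrt(lam2)

# Metallic side (omega_x large): L_x -> sqrt(D/omega_x)
print(L_x(1.0), math.sqrt(D / 1.0))
# Insulating side (omega_x -> 0): L_x -> xi_2D sqrt(D tau)/l = xi_2D/sqrt(2)
print(L_x(1e-12), xi / math.sqrt(2.0))
```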
The critical interplane coupling $t_\perp ^{(c)}$ is determined by
$N_2(0)D_x\equiv N_2(0)D(\omega _x)\sim 1$ which leads again to
(\ref{tcrit}). The crossover length $L_x$ is shown in Fig.\ \ref{FigLX}. For
$t_\perp \gtrsim t_\perp ^{(c)}$, there is an agreement with the RG
approach. The reason is that in the weak coupling limit ($g\gtrsim 1$), $D(L)$
and $D(\omega _\nu )$ approximately coincide. \cite{Vollhardt92} Deep in the
insulating phase, this is not the case and the two approaches give different
results: $L_x\gg \xi _{2D}$ in the RG approach while $L_x\sim \xi _{2D}$ in
the auxiliary field method. This disagreement should not be surprising
since neither one of the two methods is exact. In the RG approach, the
dimensional crossover is treated very crudely since all the
effects due to $t_\perp $ are neglected in the first (2D) stage of the
renormalization procedure. However, this method should give qualitatively
correct results. In particular, we expect the result $L_x\gg \xi _{2D}$ for
$t_\perp \ll t_\perp ^{(c)}$ to be correct. In the auxiliary field method,
the effective action (\ref{action5}) was obtained by completely neglecting
the massive modes (more precisely sending their mass to infinity).
In principle, the latter should be integrated out, which
would lead to a renormalization of the diffusion coefficients appearing in
(\ref{action5}). The comparison with the RG result suggests that this
renormalization is important in the insulating phase.
The generalization of the preceding results in order to include a parallel
magnetic field is straightforward. The interplane coupling is now given by
(\ref{SperpH}). The auxiliary field $\zeta $ has therefore only a diagonal
part ($i=j$). The saddle point equation yields
\begin{equation}
_{ii}(\zeta ^{\rm SP})^{\alpha \beta }_{lnm}({\bf r})=\delta _{\alpha ,\beta
}\delta _{n,m}{\rm sgn}(\omega _n) \frac{\pi }{2} N_2(0)
\frac{t_\perp ^2\tau }{(1+\omega _c^2\tau ^2)^{1/2}} \,.
\end{equation}
This defines the crossover frequency
\begin{equation}
\omega _x=\frac{4t_\perp ^2\tau }{(1+\omega _c^2\tau ^2)^{1/2}} \,.
\end{equation}
In the 3D regime, the massless fluctuations around the saddle point value
are described by the action
\begin{equation}
S\lbrack \zeta \rbrack = \frac{\pi }{8}N_3(0) \int d^3r \Bigl \lbrack D_x
{\rm Tr}\lbrack \bbox{ \nabla }_\parallel \zeta \rbrack ^2 +
\frac{2t_\perp ^2d^2\tau }{(1+\omega _c^2\tau ^2)^{1/2}}
{\rm Tr}\lbrack \nabla _z \zeta \rbrack ^2 \Bigr \rbrack -\frac{\pi
}{2}N_3(0) \int d^3r {\rm Tr}\lbrack \Omega \zeta \rbrack \,,
\label{action6}
\end{equation}
with the usual constraints on the field $\zeta $. Here
$D_x$ is the diffusion coefficient calculated at the magnetic field
dependent frequency $\omega _x$. Again we recover the result of the RG
approach, the only difference coming from the definitions of $L_x$ and
$\Lambda _x$.
\section{Conclusion}
Using two different methods, we have studied the Anderson MIT in quasi-2D
systems. We have found that the critical value of the single particle
interplane coupling is given by (\ref{tcrit}). Apart from the factor
$1/\sqrt{k_Fl}$, this result agrees with the diagrammatic self-consistent
theory of Anderson localization and with estimates based on the weak
localization correction to the Drude-Boltzmann conductivity.
Nevertheless, it differs from recent numerical calculations according to
which $t_\perp ^{(c)}\sim 1/\sqrt{\tau }$. \cite{Zambetaki96} In the weak
disorder limit ($k_Fl\gg 1$), the latter result seems to us to be in
contradiction with the scaling theory of Anderson localization, since
the 2D localization length $\xi _{2D}$ is exponentially large with
respect to the mean free path $l$. Because of the latter property, we indeed
expect an exponentially small value of the critical coupling $t_\perp
^{(c)}$ with respect to the elastic scattering rate $1/\tau
$. \cite{Affleck} For a very large 2D localization length $\xi _{2D}$,
it seems unlikely that the dimensional crossover and the MIT can be studied
with numerical calculations on finite systems. The numerical calculations of
Ref.\ \onlinecite{Zambetaki96} are done in a strong disorder regime
($k_Fl\sim 1$). In this regime, the exponential dependence of
$t_\perp ^{(c)}$ on $k_Fl$ may be easily overlooked.
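This exponential sensitivity can be made concrete with a rough estimate. The sketch below assumes the standard weak-localization form $\xi_{2D}\sim l\,e^{\pi k_Fl/2}$ (an illustrative choice; the argument only requires $\xi_{2D}$ to be exponentially large in $k_Fl$) together with the proportionality $t_\perp^{(c)}\tau\propto l/(\sqrt{k_Fl}\,\xi_{2D})$ implied by the critical-coupling formula:

```python
import math

def xi_2D(kFl, l=1.0):
    # Illustrative weak-localization estimate: xi_2D ~ l exp(pi kFl / 2)
    return l * math.exp(math.pi * kFl / 2.0)

def tc_over_invtau(kFl):
    # t_perp^c tau ~ l / (sqrt(kFl) xi_2D), up to O(1) factors
    return 1.0 / (math.sqrt(kFl) * xi_2D(kFl))

# At kFl ~ 1 the exponential factor is O(1) and easily missed...
print(tc_over_invtau(1.0))
# ...while at kFl = 10 the critical coupling is exponentially suppressed.
print(tc_over_invtau(10.0) / tc_over_invtau(1.0))
```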
We have also studied the anisotropy of the correlation (localization) lengths
in the metallic (insulating) phase and shown that it differs from the result
predicted by a 3D anisotropic NL$\sigma $M. The phase diagram in the presence of
a magnetic field was also derived: our approach formalizes and extends
previous results obtained for weakly coupled chains. \cite{ND92}
\section{#1}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{\newcommand}{\newcommand}
\newcommand{\beq}{\begin{equation}} \newcommand{\eeq}{\end{equation}}
\newcommand{\beqa}{\begin{eqnarray}} \newcommand{\eeqa}{\end{eqnarray}}
\newcommand{\lsim}{\begin{array}{c}\,\sim\vspace{-21pt}\\< \end{array}}
\newcommand{\gsim}{\begin{array}{c}\sim\vspace{-21pt}\\> \end{array}}
\newcommand{\scR}{{\cal R}}
\newcommand{\scL}{{\cal L}}
\newcommand{\al}{\alpha}
\newcommand{\ald}{\dot{\alpha}}
\newcommand{\be}{\beta}
\newcommand{\bed}{\dot{\beta}}
\newcommand{\lam}{\lambda}
\newcommand{\nud}{\dot{\nu}}
\newcommand{\lamd}{\dot{\lam}}
\newcommand{\drawsquare}[2]{\hbox{%
\rule{#2pt}{#1pt}\hskip-#2pt%
\rule{#1pt}{#2pt}\hskip-#1pt%
\rule[#1pt]{#1pt}{#2pt}}\rule[#1pt]{#2pt}{#2pt}\hskip-#2pt%
\rule{#2pt}{#1pt}}%
\newcommand{\Yfund}{\raisebox{-.5pt}{\drawsquare{6.5}{0.4}}}%
\newcommand{\Ysymm}{\raisebox{-.5pt}{\drawsquare{6.5}{0.4}}\hskip-0.4pt%
\raisebox{-.5pt}{\drawsquare{6.5}{0.4}}}%
\newcommand{\Yasymm}{\raisebox{-3.5pt}{\drawsquare{6.5}{0.4}}\hskip-6.9pt%
\raisebox{3pt}{\drawsquare{6.5}{0.4}}}%
\newcommand{\Ap}[2]{A^\prime_{#1#2}}
\newcommand{\Q}[2]{Q_{#1#2}}
\newcommand{\R}[2]{R_{#1#2}}
\newcommand{\Y}[2]{Y_{#1#2}}
\newcommand{\V}[2]{V_{#1#2}}
\newcommand{\q}[2]{q_{#1#2}}
\newcommand{\G}[2]{G_{#1#2}}
\newcommand{\W}[2]{W_{#1#2}}
\newcommand{\D}[2]{D_{#1#2}}
\newcommand{\A}[2]{A_{#1#2}}
\newcommand{\p}[2]{p_{#1#2}}
\newcommand{\vv}[2]{v_{#1}^{#2}}
\newcommand{\rr}[2]{r_{#1}^{#2}}
\newcommand{\lij}[2]{l_{#1}^{#2}}
\newcommand{\spa}{SU(2)_1}
\newcommand{\spb}{SU(2)_2}
\newcommand{\spc}{SP(2n-4)}
\newcommand{\spd}{SP(4n+2m-10)}
\newcommand{\Lsc}[2]{\scL_{#1#2}}
\newcommand{\Lrm}[2]{L_{#1#2}}
\newcommand{\Rsc}[2]{\scR_{#1#2}}
\begin{document}
\begin{titlepage}
\vspace{2cm}
{\hbox to\hsize{hep-ph/9609529 \hfill EFI-96-35}}
{\hbox to\hsize{September 1996 \hfill Fermilab-Pub-96/338-T}}
{\hbox to\hsize{
\hfill revised version }}
\bigskip
\begin{center}
\vspace{2cm}
\bigskip
\bigskip
\bigskip
{\Large \bf New Models of Gauge and Gravity
Mediated Supersymmetry Breaking}
\bigskip
\bigskip
{\bf Erich Poppitz}$^{\bf a}$ and {\bf Sandip P. Trivedi}$^{\bf b}$ \\
\bigskip
\bigskip
$^{\bf a}${\small \it Enrico Fermi Institute\\
University of Chicago\\
5640 S. Ellis Avenue\\
Chicago, IL 60637, USA\\
{\tt epoppitz@yukawa.uchicago.edu}\\}
\smallskip
\bigskip
$^{\bf b}${ \small \it Fermi National Accelerator Laboratory\\
P.O.Box 500\\
Batavia, IL 60510, USA\\
{\tt trivedi@fnal.gov}\\ }
\vspace{1.3cm}
\begin{abstract}
We show that supersymmetry breaking in a class of theories with
$SU(N) \times SU(N-2)$
gauge symmetry can be studied in a calculable sigma model. We use the
sigma model to show that the supersymmetry breaking vacuum in these theories
leaves
a large subgroup of flavor symmetries intact, and to calculate
the masses of the low-lying states. By embedding the Standard Model
gauge groups in the unbroken flavor symmetry group
we construct a class of models in which supersymmetry breaking is
communicated by both gravitational and gauge
interactions. One distinguishing feature of these models is that the
messenger fields, responsible for the gauge mediated communication of
supersymmetry breaking, are an integral part of the supersymmetry breaking
sector.
We also show how, by lowering the scale that suppresses the nonrenormalizable
operators, a
class of purely gauge mediated
models with a combined supersymmetry breaking-cum-messenger sector
can be built. We briefly
discuss the phenomenological features of the models we construct.
\end{abstract}
\end{center}
\end{titlepage}
\renewcommand{\thepage}{\arabic{page}}
\setcounter{page}{1}
\baselineskip=18pt
\section{Introduction. }
In order to be relevant to nature, supersymmetry must be spontaneously
broken. An attractive idea in this regard is that the breaking occurs
nonperturbatively \cite{witten}
in a strongly coupled sector of the theory and is then
communicated to
the Standard Model fields by some ``messenger" interaction. One possibility
is that the role of the messenger is
played by gravity---giving rise to the so-called hidden sector models (for
a review, see~\cite{nilles}).
Another possibility \cite{oldpapers}, \cite{nilles},
which has received considerable attention recently \cite{dnns}-\cite{details}
is that the supersymmetry
breaking is communicated by gauge interactions---the gauge mediated models.
The past few years have seen some remarkable progress in the understanding
of non-perturbative supersymmetric gauge theories
\cite{seibergexact}, \cite{seiberg}. This progress
has made a more thorough investigation of supersymmetry breaking possible
\cite{susybreaking}.
We begin this paper by extending the study of supersymmetry breaking in
a class of theories with $SU(N) \times SU(N-2)$ gauge symmetry. These theories
were first considered in ref.~\cite{we}. We use some elegant
observations by Y. Shirman \cite{shirman} to show that
the low-energy dynamics of these theories can be studied in terms of a
calculable low-energy sigma model. We use the sigma model to show that the
supersymmetry breaking vacuum in these theories preserves a large group of
flavor symmetries, and to calculate the spectrum of
low-energy excitations.
We then turn to model building. The models we construct have two sectors:
a supersymmetry breaking sector---consisting of
an $SU(N) \times SU(N-2)$ theory
mentioned above---and the usual Standard Model sector.
The basic idea is to embed the Standard Model gauge groups in the unbroken
flavor symmetries of the supersymmetry breaking sector. As a result, in these
models the breaking of supersymmetry can be communicated directly by the
Standard Model gauge groups.
This is to be contrasted
with models of gauge mediated supersymmetry breaking
constructed elsewhere \cite{dnns},
in which a fairly elaborate messenger sector is needed to accomplish the
feed-down of supersymmetry breaking.
In the models under consideration here,
the scale of supersymmetry breaking turns out to be high, of order
the intermediate scale, i.e., $10^{10}$ GeV.
As a result, the gravity mediated effects are comparable
to the gauge mediated ones.
The resulting phenomenology in these ``hybrid'' models is different
from both the
gravity and gauge mediated cases.
Scalars acquire both universal soft masses
due to gravity and non-universal masses due to gauge interactions, while
gauginos receive masses only due to gauge interactions.
Since the scale of supersymmetry breaking is large, the gravitino has an
electroweak scale mass. Finally, there is new physics in this theory,
at about 10 TeV, at which scale all light degrees of freedom of the
supersymmetry breaking sector, including those carrying Standard Model
quantum numbers, can be probed.
The biggest drawback of these models is the following. Since the scale of
supersymmetry breaking is so high, one cannot,
at least in the absence of any information regarding the higher dimensional
operators in the K\" ahler potential, rule out the presence of flavor
changing neutral currents. In this respect these models are no better than the
usual hidden sector models.
The high scale of supersymmetry breaking arises as follows.
Within the context of the $SU(N) \times SU(N-2)$ models,
in order to embed the Standard Model gauge groups
in the unbroken flavor symmetries, one is lead to consider large values of
$N$, namely $N \ge 11$. In these theories supersymmetry breaking occurs only
in the presence of non-renormalizable operators in the superpotential
and the dimension of these operators grows as $N$ grows.
On suppressing the effects of these operators by the
Planck scale, one is led to a large supersymmetry breaking scale.
If one lowers the scale that suppresses the non-renormalizable operators,
the supersymmetry breaking scale is lowered as well. We use the resulting
theories to construct purely gauge mediated models with a combined
supersymmetry breaking and messenger sector.
The lower scale suppressing the non-renormalizable
operators could arise due to new nonperturbative
dynamics. It could also arise if the Standard Model gauge groups are dual
to an underlying microscopic theory. We will not explicitly discuss how
this lower scale arises here.
A brief study of the phenomenology of the purely gauge mediated models
we construct
reveals some features which should be more generally true in models of this
type. We hope to return to a detailed phenomenological study of these models
in the future.
A few more comments are worth making with respect to the models considered
here. First, from the perspective of a hidden sector theory, the hybrid models
are examples of theories without any fundamental gauge singlets in which
gauginos obtain adequately big soft masses.\footnote{For an
example of a hidden sector theory, in which supersymmetry breaking
involves a global supersymmetric theory and a singlet, and yields
reasonable gaugino masses, see ref.~\cite{nelson}.}
Second, one concern about constructing models in which the
supersymmetry breaking sector carries Standard Model charges is
that this typically leads to a loss of asymptotic freedom for the
Standard Model gauge groups and the existence of Landau poles at fairly low energies.
One interesting idea on how to deal with this problem involves dualizing \cite{seiberg}
the theory and regarding the resulting dual theory---which is usually better behaved in
the ultraviolet---as the
underlying microscopic theory. In the ``hybrid'' models discussed here, one finds
that the Landau poles are pushed beyond an energy scale of
order $10^{16}$ GeV.
This is a sufficiently high energy scale that even without appealing to duality their
presence might not be a big concern. For example, new GUT scale physics
(or conceivably even string theory related physics) could enter at this scale.
The non-renormalizable operators,
mentioned above, which are responsible for the large scale of supersymmetry
breaking in the ``hybrid models'' are also responsible for pushing up the
scale at which Landau poles appear;
to this extent their presence is an attractive feature which one might want to retain.
Finally, we would like to comment on the low-energy effective theory used to study
the breaking of supersymmetry in the $SU(N) \times SU(N-2)$ theories.
This effective theory arises as follows. First, at very high energies,
the $SU(N-2)$ group is broken, giving rise to an effective
theory consisting of some moduli fields and a pure $SU(N)$ theory coupled to a
dilaton. The $SU(N)$ theory then confines at an intermediate energy scale giving
rise to a low-energy theory involving just the dilaton and the moduli.
Gaugino condensation in the $SU(N)$ theory gives rise to a term in the
superpotential of this low-energy theory and as a result, the superpotential
has a runaway behavior characteristic of a theory containing a dilaton.
However, one finds that this runaway behavior is stabilized due to a
non-trivial K\" ahler potential involving the dilaton.
It has been suggested that a similar phenomenon might be responsible for
stabilizing the runaway behavior of the dilaton in string theory \cite{sbandd}.
In the globally supersymmetric models considered here the stabilization
occurs due to a calculable non-trivial K\" ahler potential in
the effective theory
linking the dilaton with the other moduli.
\section{The Supersymmetry Breaking Sector.}
\subsection{The ${\bf SU(N)\times SU(N-2)}$ Models.}
In this section we will briefly review the models, introduced in
\cite{we}, that will play the role of a supersymmetry breaking sector.
They have an $SU(N) \times SU(N-2)$ gauge group,
with {\it odd} $N$,
and matter content consisting of a single field,
$Q_{\alpha {\dot{\alpha}}}$, that transforms as $(\Yfund , \Yfund )$ under the
gauge groups, $N-2$ fields, $\bar{L}^\alpha_I$, transforming as
$(\overline{\Yfund}, {\bf 1})$, and $N$ fields, $\bar{R}^{\ald}_A$,
that transform as $({\bf 1}, \overline{\Yfund})$. Here, as in the
subsequent discussion, we denote the gauge indices of $SU(N)$
and $SU(N-2)$ by $\alpha$ and $\ald$, respectively, while
$I = 1\ldots N-2$ and $A = 1 \ldots N$ are flavor indices. We note that
these theories are chiral---no mass terms can be added for any of the matter
fields.
We begin by considering the classical moduli
space. It is described by the gauge invariant mesons and baryons:
\beqa
\label{defbaryon}
Y_{IA} &=& \bar{L}_I \cdot Q \cdot \bar{R}_A~, \nonumber \\
b^{A B} &=& {1\over (N-2)!}~
\varepsilon^{A B A_1 \cdots A_{N-2} }~
\varepsilon_{\dot{\alpha}_1 \cdots \dot{\alpha}_{N-2}}
~ \bar{R}_{A_1}^{\dot{\alpha}_1} \cdots \bar{R}_{A_{N-2}}^{\dot{\alpha}_{N-2}} ~,
\eeqa
and $\bar{\cal{B}} = Q^{N - 2} \cdot \bar{L}^{N - 2}$. These invariants are not independent
but subject to classical constraints \cite{we}.
We will consider the theory with
the tree-level superpotential
\beq
\label{wtreebaryon}
W_{tree} = \lambda^{IA} ~ Y_{IA} + {1 \over M^{N-5}} ~ \alpha_{AB} ~ b^{AB} ~.
\eeq
The superpotential $W_{tree}$
lifts all classical flat directions, provided that
$\lambda^{IA}$ has maximal rank, $N-2$,
the matrix $\alpha_{AB}$ also has maximal
rank ($N-1$),
and its cokernel
contains the cokernel of $\lambda^{IA}$ (rank $\lambda = N - 2$).
With this choice of couplings, $W_{tree}$ also
preserves a nonanomalous,
flavor dependent $R$ symmetry. To see this, choose
for example $\alpha_{AN} = 0, \lambda^{I N} = \lambda^{I (N-1)} = 0$
(to lift the classical flat
directions). Then
one sees that
the field $\bar{R}_N$ appears in each of the baryonic terms of the superpotential
(\ref{wtreebaryon}), while it does not appear in any of the Yukawa terms.
Assigning different $R$ charges to the four types of fields, $\bar{R}_N$,
$\bar{R}_{A < N}$, $Q$, and $\bar{L}_I$, one has to satisfy four conditions:
two conditions ensuring that the superpotential (\ref{wtreebaryon}) has
$R$ charge 2, and
two conditions that the gauge anomalies of this $R$ symmetry vanish.
It is easy to see that
there is a unique solution to these four conditions.
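Explicitly, denoting the $R$ charges of $\bar{R}_N$, $\bar{R}_{A < N}$, $Q$, and
$\bar{L}_I$ by $r_N$, $\bar{r}$, $r_Q$, and $r_L$ (our labels), these conditions
read, schematically, with $T(\Yfund) = 1/2$ and the gaugino contributing $T({\rm adj})$:
\beqa
r_L + r_Q + \bar{r} &=& 2 ~, \nonumber \\
r_N + (N-3)~\bar{r} &=& 2 ~, \nonumber \\
N + {N-2 \over 2}~(r_Q - 1) + {N-2 \over 2}~(r_L - 1) &=& 0 ~, \nonumber \\
(N-2) + {N \over 2}~(r_Q - 1) + {N-1 \over 2}~(\bar{r} - 1) + {1 \over 2}~(r_N - 1) &=& 0 ~.
\eeqa
The first two conditions give $R$ charge 2 to the Yukawa and baryonic terms
(each baryonic term with nonvanishing coupling contains $\bar{R}_N$ and $N-3$ of
the $\bar{R}_{A < N}$), while the last two are the $SU(N)$ and $SU(N-2)$ anomaly
cancellation conditions; generically these four linear equations determine the
four charges uniquely.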
The couplings in the superpotential will be chosen to preserve
a maximal global symmetry\footnote{This choice of couplings, which preserves
the maximal global symmetries, has been made for simplicity.
For the discussion of model building that follows, it is enough to preserve an
$SU(3) \times SU(2) \times U(1)$ symmetry. Doing so introduces extra parameters
in the superpotential eq.~(\ref{wtreebaryon}) but does not alter the
subsequent discussion in any significant way.}.
We will take the nonvanishing components of the Yukawa matrix to be
$\lambda^{IA} = \delta^{IA} \lambda$, for
$A = 1,...,N-2$.
The antisymmetric matrix $\alpha_{AB}$ will have the following nonvanishing
elements:
$\alpha_{AB} = a J_{AB}$, for $A,B < N-2$, and $\alpha_{AB} = J_{AB}$,
for $A,B = N-2, N-1$.
This choice of couplings preserves an $SP(N-3)$ global nonanomalous
symmetry.\footnote{In our notation $SP(2k)$ is the
rank $k$ unitary symplectic group with $2k$ dimensional
fundamental representation. $J_{AB}$ is the $SP(2k)$ invariant tensor; we take
$J_{12} = 1$ and $J^{AB} J_{BC} = - \delta^A_C$.}
The dynamics of these models was discussed in \cite{we}, where it was shown that
when the superpotential (\ref{wtreebaryon}) is added, the ground state dynamically
breaks supersymmetry.
In the next section we will study supersymmetry breaking in these
theories in more detail.
\subsection{The Low-Energy Nonlinear Sigma Model.}
\subsubsection{The Essential Ideas.}
We show in this section that for a region of parameter space the
breaking of supersymmetry in the $SU(N) \times SU(N-2)$ theories can be
conveniently studied in a low-energy effective theory. We identify the
degrees of
freedom, which appear in this supersymmetric nonlinear sigma model,
and show that both the
superpotential and the K\" ahler potential in the sigma model can be reliably
calculated in the region of moduli space where the vacuum is expected to occur.
This is interesting since the underlying theory that gives rise to the sigma
model is not weakly coupled.
In the following section, we then
explicitly construct and minimize the potential responsible for supersymmetry
breaking,
thereby deducing the unbroken flavor symmetries and
the spectrum of the low-energy excitations.
It is convenient to begin by considering the limit
$M \rightarrow \infty$. In this limit, the baryonic flat directions,
described by the gauge invariant fields $b^{AB}$, are not lifted and
the model has runaway directions along which the energy goes to zero
asymptotically \cite{shirman}.
As was mentioned above, we take $\lambda^{IA}$ of eq.~(\ref{wtreebaryon})
to be $\lambda^{IA} = \delta^{IA} \lambda$, for $A = 1,...,N-2$.
The runaway directions are specified
by the condition that $b^{N N-1} \rightarrow \infty$. The other baryons
$b^{AB}$ can in addition be non-zero along these directions. We will see that
once one is sufficiently far along these directions the low-energy dynamics
can be described by a calculable effective theory.
Let us first consider the simplest runaway direction,
$b^{N N-1} \rightarrow \infty$, with all the other $b^{AB} =0$.
Along this direction the $\bar{R}$ fields
have vacuum expectation values given by
$\bar{R}_A^{\dot{\alpha}} = v \delta_A^{\dot{\alpha}}$ with $v \rightarrow \infty$.
Since the
$SU(N-2) $ symmetry is completely broken at a scale $v$, its
gauge bosons get heavy and can be integrated out.
In the process, several components of the $\bar{R}_A$ fields get
heavy or eaten and can be removed from the low-energy theory
as well. In
addition, on account of the Yukawa coupling in (\ref{wtreebaryon}) all $N-2$
flavors of $SU(N)$ quarks become massive, with mass $\lambda v$, and can be
integrated out. Thus one is left with an intermediate scale effective theory
containing the light components of the $\bar{R}_A$ fields and the
pure $SU(N)$ gauge theory.
There is one slightly novel feature about the $SU(N)$
group in this effective theory: its strong coupling scale $\Lambda_{1L}$ is field
dependent. On integrating out the $Q$ and $L$ fields one finds that
\beq
\label{sllow}
\Lambda_{1L}^{3N} = \Lambda_1^{2N+2} ~ \lambda^{N-2} b^{N ~ N-1},
\eeq
with $ \Lambda_1$ being the scale of the ultraviolet
$SU(N)$ theory. Thus the field $b^{N ~ N-1}$ acts as a dilaton for the
$SU(N)$ group in the low-energy theory. Going further down in energy one
finds that the $SU(N)$ group confines at a scale $\Lambda_{1L}$, leaving the
dilaton, $b^{N N-1}$, and the other light components of $\bar{R}_A$
as the excitations in the final low-energy theory.
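The origin of eq.~(\ref{sllow}) can be sketched via one-loop matching
(order-one and scheme-dependent factors are dropped here). Above the scale
$\lambda v$ the $SU(N)$ theory has $N-2$ flavors, so its one-loop beta function
coefficient is $b = 3N - (N-2) = 2N + 2$; below $\lambda v$ the theory is pure
$SU(N)$ with $b = 3N$. Matching the gauge coupling at $\mu = \lambda v$ gives
\beq
\Lambda_{1L}^{3N} = \Lambda_1^{2N+2} ~ (\lambda v)^{N-2} ~,
\eeq
which coincides with eq.~(\ref{sllow}) upon using $b^{N ~ N-1} \sim v^{N-2}$
along the flat direction. Gaugino condensation in the confining theory then
generates a superpotential of order $\Lambda_{1L}^3$.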
Gaugino condensation in the $SU(N)$ theory gives rise to a superpotential \cite{seibergexact}
of the form:
\beq
\label{sabaryonw}
W = \lambda^{N-2 \over N} ~\Lambda_1^{2 N + 2 \over N}
~\left( b^{N-1 ~N} \right)^{1\over N}
\eeq
in this low-energy theory.
So far we have considered the simplest runaway direction,
$b^{N N-1} \rightarrow
\infty$, with all the other $b^{AB} =0$. There are other runaway directions,
along which some of the other baryons go to infinity as well, at a rate
comparable or faster than $b^{N N-1}$.
In these cases the underlying dynamics giving rise to the effective theory
can sometimes differ from that described above.
However, one can show that the effective theory,
consisting of the light
components of $\bar{R}_A$, with the non-perturbative superpotential
(\ref{sabaryonw}), describes the low-energy dynamics along these
directions as well.
It is not surprising that the exact superpotential can be calculated in this
effective theory.
What is more remarkable is that, as has been
argued in \cite{shirman}, the corrections to the classical K\" ahler potential are
small along these runaway directions and thus the K\" ahler potential can be
calculated in the effective theory as well.
Thus, as promised above, the effective theory is completely calculable.
Let us briefly summarize Shirman's argument here.
Since the $SU(N-2)$
theory is broken at a high scale,
the corrections to the K\" ahler potential one is worried
about must involve the effects of the strongly coupled $SU(N)$
group\footnote{As mentioned above along some of the
runaway directions the underlying
dynamics is somewhat different. Correspondingly the strongly coupled effects
do not always involve the full $SU(N)$ group. However, an analogous argument
shows that the corrections to the classical K\" ahler potential are small along these
directions as well.}
with a strong coupling scale $\Lambda_{1L}$, eq.~(\ref{sllow}). These
corrections are
of the form $\bar{R}^\dagger \bar{R} f(t)$, with $t = \Lambda_{1L}^\dagger
\Lambda_{1L}/(\bar{R}^\dagger \bar{R}) \sim
(\Lambda_{1}^\dagger ~ \Lambda_{1})^{2 N+2 \over 3N}/(\bar{R}^\dagger
\bar{R})^{1 - (N - 2)/(3 N)}$. We are interested in the behavior of $f(t)$ when $\bar{R}
\rightarrow \infty$, i.e., $t \rightarrow 0$. Now, it is easy to see that this limit
can also be obtained when $\Lambda_1 \rightarrow 0$. In this case it is
clear that the strong coupling effects due to the $SU(N)$ group must go to zero and
thus the corrections to the K\" ahler potential for $\bar{R}$ must be small.
Hereafter, we will take the K\" ahler potential to be classical. The discussion above
shows that this is a good approximation as long as $\Lambda_{1L} \ll v$,
where $v$ denotes the vacuum expectation value of the $\bar{R}$ fields.
Let us now briefly summarize what has been learned about the theory when
$M \rightarrow \infty$. We found that the theory had runaway directions.
The low-energy dynamics along these directions can be described by an effective
theory consisting of the light components of the fields $\bar{R}_A$. Finally,
both the superpotential and the K\" ahler potential in this effective theory can be
calculated.
Armed with this knowledge of the $M \rightarrow \infty$ limit we ask what
happens when we consider $M$ to be large but not infinite. It was shown in \cite{we}
that once the last term in (\ref{wtreebaryon}) is turned on, the theory
does not have any runaway directions and breaks supersymmetry.
However, and this is
the crucial argument, for a large enough value of $M$ the
resulting vacuum must lie along the runaway directions discussed above
(since the runaway behavior is ultimately stopped by
the $1/M^{N-5}$ terms in (\ref{wtreebaryon})), and
therefore the breaking of supersymmetry can be analyzed in terms of the low-energy
theory discussed above.
\subsubsection{The Explicit Construction.}
We now turn to explicitly constructing the low-energy effective theory.
The light degrees of freedom of the $\bar{R}$ fields can be described
either in terms of the appropriate components of $\bar{R}_A$
or the gauge invariant baryons $b^{AB}$.
The use of the baryons is more convenient \cite{ads},
since it automatically takes care of integrating out the
heavy $SU(N-2)$ vector fields and their superpartners at tree level
(see also \cite{BPR}, \cite{RP}), and provides an explicitly
gauge invariant description of the low-energy physics.
The K\" ahler potential for the light fields is
$K = \bar{R}^\dagger e^{V} \bar{R} \big\vert_{V = V(\bar{R}^\dagger, \bar{R})}$,
where the heavy vector superfield $V$ is integrated out by solving its classical equation
of motion.
In terms of the baryons, this K\" ahler potential can
be calculated, as in \cite{ads}:
\beq
\label{Kbaryon1}
K = c_N ~ \left( ~b^\dagger_{A B} ~b^{A B} ~\right)^{1 \over N-2} ~,
\eeq
where $c_N = (N-2) ~ 2^{-{1 \over N - 2}}$.
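As a quick check of this normalization (ours): along the flat direction
$\bar{R}_A^{\dot{\alpha}} = v \delta_A^{\dot{\alpha}}$ the only nonvanishing
baryons are $b^{N~N-1} = - b^{N-1~N}$, with $|b^{N~N-1}| = |v|^{N-2}$, so that
$b^\dagger_{AB} ~ b^{AB} = 2~|v|^{2(N-2)}$ and
\beq
K = c_N ~ \left( 2~ |v|^{2(N-2)} \right)^{1 \over N-2} = (N-2)~|v|^2 ~,
\eeq
which agrees with the classical K\" ahler potential
$\bar{R}^\dagger \bar{R} = (N-2)~|v|^2$ of the $N-2$ nonvanishing components.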
The baryons $b^{AB}$ are not independent,
but obey the constraints:
\beq
\label{baryonconstraints}
b^{A~B}~ b^{ N-1 ~ N} = b^{N-1 ~A} ~b^{N ~B} - b^{N -1~B} ~b^{N ~A} ~,
\eeq
which follow from their definition (\ref{defbaryon}) and Bose symmetry.
We can now use these constraints to solve for the redundant baryons
in terms of an appropriately chosen set thereby
obtaining the required K\" ahler potential.
Counting the number of eaten degrees
of freedom and comparing with the analysis in terms of the fields $\bar{R}_A$
along the $D$-flat directions, it is easy to
see
that $b^{N-1~ N}, ~b^{N-1~ A}$, and $b^{N~ B}$, with
$ A,B =1,...,N-2$, are good coordinates\footnote{For example,
along the flat direction $\bar{R}_A^{\dot{\alpha}} = v \delta_A^{\dot{\alpha}}$,
discussed in Section 2.2.1, the components of $\bar{R}_A$ that remain light are
$\bar{R}_{N}^{\dot{\alpha}}$, $\bar{R}_{N-1}^{\dot{\alpha}}$, and $v$ (which
describes fluctuations
corresponding to motion along the flat direction). Using the definitions of the
baryons (\ref{defbaryon}) one can see that fluctuations of $b^{N-1~ A}$, with
$A < N-1$,
around these expectation values correspond to the field $\bar{R}_{N}^{\dot{\alpha}}$,
while fluctuations of $b^{N~ A}$ ($A < N-1$) and $b^{N-1~ N}$
correspond to $\bar{R}_{N-1}^{\dot{\alpha}}$ and $v$, respectively.}
(in a vacuum where $b^{N ~ N-1} \ne 0$)
and we consequently use them as the independent fields.
For notational convenience, we introduce the fields $S$ and
$P^{\alpha A}$ (hereafter $A,B = 1,...,N-2$; $\alpha = 1,2$) via the definitions:
$S = b^{N-1~N} $, $P^{1 A} = b^{N-1 A}$ and $P^{2 A} = b^{N A} $.
The K\" ahler potential (\ref{Kbaryon1}) and superpotential of the
effective theory, after using the constraint (\ref{baryonconstraints})
to solve for the redundant degrees of freedom, become:
\beq
\label{effectivelagrangiankah}
K = (N - 2)~ \left( S^\dagger~ S + P_{\alpha A}^\dagger ~P^{\alpha A}
+ {P^{\alpha A} ~P_{\alpha}^B~ P^\dagger_{\beta A} ~P^{\dagger \beta}_B
\over 2~ S^\dagger ~S} \right)^{1\over N-2} ,
\eeq
and
\beq
\label{elsup}
W = \lambda^{N-2 \over N} ~\Lambda_1^{2 N + 2 \over N} ~S^{1\over N}
- {2 \over M^{N-5}}~ P^{1~N-2} +
{2~a \over M^{N-5}} \sum\limits_{A, B = 1}^{N-3} ~
{ J_{A B}~ P^{1 A} ~P^{2 B}\over S}~,
\eeq
respectively.
The superpotential above was obtained by adding the last term of
(\ref{wtreebaryon})---with the matrix $\alpha_{AB}$ chosen to
preserve $SP(N-3)$, as described in Section 2.1---to the nonperturbatively
generated superpotential, eq.~(\ref{sabaryonw}).
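Explicitly, in a vacuum with $S = b^{N-1~N} \ne 0$, the constraints
(\ref{baryonconstraints}) fix the redundant baryons in terms of the independent
fields as
\beq
b^{AB} = {P^{1 A} ~ P^{2 B} - P^{1 B} ~ P^{2 A} \over S} ~, \qquad A, B = 1, \ldots, N-2 ~;
\eeq
substituting this into eq.~(\ref{Kbaryon1}) produces the third term of the
K\" ahler potential (\ref{effectivelagrangiankah}), up to the
$\varepsilon_{\alpha \beta}$ conventions used to lower the $SU(2)$ index $\alpha$.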
We will see, in the following sections, that the sigma model has
a stable supersymmetry breaking vacuum. As discussed above, the field
$S$ is a dilaton for the $SU(N)$ gauge group. The first term in the
superpotential (\ref{elsup}) could have led to runaway behavior. This
runaway behavior is, however, stopped by the K\" ahler potential
(\ref{effectivelagrangiankah}), which links the dilaton to the other moduli.
\subsection{Mass Scales and Spectrum.}
\subsubsection{Mass Scales.}
With the sigma model in hand we can now
write down the potential---it is given in terms of the K\" ahler potential
and the superpotential as $V = W_i K^{-1~i j^*} W^{*}_{j^*}$ \cite{WB}.
The explicit minimization of the potential in our case needs to be done
numerically but several features about the resulting ground state can be deduced
in a straightforward way.
Notice first, that the superpotential has two scales $\Lambda_1$ and
$M$. These will determine the various scales which appear in this problem. The
scale of the vacuum expectation values $v$ can be obtained by balancing
the first two terms in the superpotential (\ref{elsup}) and is given by
\beq
\label{vevscale}
v \equiv M ~\left[ {\lambda^{N-2 \over 2N+2}
~\Lambda_1 \over M } \right]^{2N+2 \over (N-1) ~ (N-2)}~.
\eeq
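The estimate (\ref{vevscale}) can be obtained by setting
$S \sim P^{1~N-2} \sim v^{N-2}$ and balancing the nonperturbative term against
the $1/M^{N-5}$ terms (order-one factors dropped):
\beq
\lambda^{N-2 \over N} ~ \Lambda_1^{2N+2 \over N} ~ v^{N-2 \over N} \sim {v^{N-2} \over M^{N-5}}
\qquad \Longrightarrow \qquad
v^{(N-1)(N-2) \over N} \sim \lambda^{N-2 \over N} ~ \Lambda_1^{2N+2 \over N} ~ M^{N-5} ~,
\eeq
and solving for $v$ reproduces eq.~(\ref{vevscale}).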
In order for our approximations to be justified $v$ needs to be large enough.
Quantitatively, we need $\Lambda_{1L}/v \ll 1$, where $\Lambda_{1L}$ is the
strong coupling scale of the intermediate scale $SU(N)$ theory.
Since the first term in the superpotential (\ref{elsup})
is of order $\Lambda_{1L}^3$,
we need the condition
\beq
\label{approx}
{\Lambda_{1L} \over v} \sim \left({v \over M}\right)^{{N-5 \over 3}} \ll 1
\eeq
to be valid. Eq.~(\ref{approx})
can be met, for $N>5$, if $v \ll M$.\footnote{For $N = 5$, this
condition can be met by making a dimensionless Yukawa coupling small.}
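To see this, note that at the minimum the first term of the superpotential, of
order $\Lambda_{1L}^3$, is comparable to the $1/M^{N-5}$ terms, of order
$v^{N-2}/M^{N-5}$, so that
\beq
{\Lambda_{1L} \over v} \sim {1 \over v} \left( {v^{N-2} \over M^{N-5}} \right)^{1 \over 3}
= \left( {v \over M} \right)^{N-5 \over 3} ~.
\eeq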
Hereafter it will be
convenient to use $v$ and $M$ as the two independent energy scales.
The scale of the typical $F$ components that give rise to
supersymmetry breaking is $\sim W/v$, i.e.
of order $F$ where
\beq
\label{ftermscale}
F \equiv M^2 ~ \left( {v \over M} \right)^{N - 3}~,
\eeq
while the masses of the fields in the sigma model are $ \sim W/v^2$,
i.e. of order $m$, where
\beq
\label{orderofmasses}
m \equiv M ~ \left( {v \over M} \right)^{N - 4} ~.
\eeq
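Both estimates follow from the value of the superpotential at the minimum,
$W \sim v^{N-2}/M^{N-5}$, together with the fact that the canonically normalized
sigma model fields are of order $v$:
\beq
F \sim {W \over v} \sim {v^{N-3} \over M^{N-5}} = M^2 \left( {v \over M} \right)^{N-3} ~, \qquad
m \sim {W \over v^2} \sim {v^{N-4} \over M^{N-5}} = M \left( {v \over M} \right)^{N-4} ~,
\eeq
so that, in particular, $m \sim F/v$.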
Note that (for $N>5$)
the scale of supersymmetry breaking, $F^{1/2}$, eq.~(\ref{ftermscale}), is much
higher than the scale of the masses, eq.~(\ref{orderofmasses}), if $M \gg v$.
We turn now to the global symmetries. As is clear
from eq.~(\ref{elsup}), the superpotential has an
$SP(N-3)$ symmetry under which
$P^{1A}$ and $P^{2A}$ transform as fundamentals.
First we note that although there might exist vacua
that break the $SP(N-3)$ global symmetry, an $SP(N-7)$
global symmetry is always preserved, since the light spectrum only has
two fundamentals of the $SP(N-3)$ global symmetry.
Second, intuitively it is
clear that when the parameter $a$ that appears in the third term of the
superpotential (\ref{elsup}) is large, the ground state of this theory should
preserve the
global $SP(N-3)$ symmetry.
In the limit of large $a$, the fields that transform under the $SP(N-3)$ symmetry
can be integrated out if the field $S$ has an expectation value.
The resulting theory of the light
fields (the fields $S$ and $P^{\alpha N-2}$)
is expected to have a stable vacuum at nonvanishing value
of $S$ since the potential is singular for both zero and infinite field values.
In fact, the numerical minimization of the potential
shows that an $SP(N-3)$ symmetric
stable vacuum exists for a wide range of values of $a$ (not necessarily $\gg 1$).
\subsubsection{Mass Spectrum.}
\begin{table}
{Table~1: Vacuum expectation values $x,z$, eq.~(\ref{vevs}),
vacuum energy $\varepsilon$, and
mass matrix
parameters $\alpha, \beta, \gamma, \delta$,
eqs.~(\ref{diracmass}) and (\ref{scalarmass}), in the $SU(N)\times SU(N-2)$
models, for $5 \le N \le 27$, $N$-odd.}
\begin{center}
\vspace{0.2cm}
\label{tab1}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline\hline
$N$ & $x$ & $z$ & $\alpha$& $\beta$ & $\gamma$& $\delta$ & $\varepsilon$ \\
\hline
& & & & & &\\
$5$ & .0299 & .0429 & .873 & -.437 & .748 & .250 & .201 \\
$7$ & .0298 & .0357 & .197 & -.0493 & .317 & .0947 & .0774 \\
$9$ & .0283 & .0317 & .0893 & -.0149 & .209 & .0496 & .0443 \\
$11$& .0262 &.0284 &.0520 &-.00650 & .159 &.0308 & .0293 \\
$13$& .0241 &.0256 &.0343 & -.00343 & .129 &.0212 &.0211 \\
$15$& .0222 & .0233 &.0245 &-.00204 &.109 &.0154 & .0159 \\
$17$& .0205 &.0214 & .0183 &-.00131 & .0946 & .0118 &.0125 \\
$19$& .0190 &.0197 &.0143 &-.000892 &.0835 &.00928 & .0100 \\
$21$& .0177 &.0183 &.0114 & -.000635& .0748 & .00751 &.00828 \\
$23$& .0165 &.0170 &.00936 & -.000468& .0677 &.00619 & .00694 \\
$25$& .0155 &.0159 & .00780 &-.000355 & .0619 &.00520 & .00590 \\
$27$& .0146 &.0150 &.00661 &-.000275 & .0570 & .00442 &.00508 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
With this background in mind we turn to the
numerical minimization.
We will study the vacuum that preserves the maximal global symmetry
and will in particular be interested in the masses of the
$SP(N-3)$ fundamentals $P^{\alpha A}, A < N-2$, since they will play the role of
messenger fields in the subsequent discussion of model building.
The numerical investigation shows that an extremum exists
where the only nonvanishing vacuum expectation values are those of the fields $S$
and $P^{1 N-2}$. In particular
the field $P^{2 N-2}$ does not acquire an expectation value.\footnote{There may
exist other extrema of the potential, where the field $P^{2 N-2} \ne 0$ as well.
We have not studied these in any detail.}
The expectation values of the
fields $S$ and $P^{1 N-2}$ are:
\beqa
\label{vevs}
S &=& x ~ v^{N-2} ~, \nonumber \\
P^{1 N-2} &=& z ~ v^{N-2} ~.
\eeqa
All components of the $S$ and $P^{\alpha N-2}$
supermultiplets have mass
of order $m$, except the $R$
axion---which becomes
massive due to higher dimensional operators \cite{BPR}, necessary
e.g. to cancel the cosmological constant---and the goldstino, which is a
linear combination of the $S$ and $P^{1 N-2}$ fermions.
The fermionic components of the
$SP(N-3)$ fundamentals $P^{\alpha A}$, $A = 1,...,N-3$ have a Dirac mass term,
which can be directly read off eq.~(\ref{elsup}) (the K\" ahler
connection \cite{WB} does not contribute to the masses of
the $SP(N-3)$ multiplets in the vacuum
(\ref{vevs}))\footnote{In eqs.~(\ref{diracmass}) and (\ref{scalarmass})
all kinetic terms have been brought to canonical form.}
\beq
\label{diracmass}
\gamma~ a~ m~\sum\limits_{A, B = 1}^{N-3}~ P^{1 A}~P^{2 B}~J_{AB},
\eeq
while the quadratic terms in their scalar
components are:
\beq
\label{scalarmass}
m^2 ~\sum\limits_{A, B, C, D = 1}^{N-3}~
( P^{1 A} ~ P^{\dagger}_{2 B} )~
\left( ~\begin{array}{cc}
( \alpha + \gamma^2 ~a^2 )~\delta_A^C & \delta ~a~J_{AD}\\
\delta ~a~J^{BC} & (\beta + \gamma^2~a^2) ~\delta^B_D
\end{array} ~ \right) ~
\left( \begin{array}{c}
P^{\dagger}_{1 C} \\ P^{2 D} \end{array} \right) ~.
\eeq
The numerical values of the vacuum expectation values $x, z$
(\ref{vevs}) and the mass matrix parameters $\alpha, \beta, \gamma$,
and $\delta$, as well as the vacuum energy $\varepsilon$
(defined by $V = M_{SUSY}^4 = \varepsilon~F^2~$) are given in Table 1 for a range of
values of $N$.
A few comments are now in order:
First, it is useful to consider the
messenger fields' spectrum, (\ref{diracmass}) and
(\ref{scalarmass}), in the $a \gg 1$ limit.
The fermion mass squared and the diagonal components of the scalar mass
matrix become equal in this limit. Furthermore,
the fermion mass squared is equal to the average of the squared masses
of the scalar mass eigenstates, and the splitting in the supermultiplet (proportional
to $\sqrt{a}$) is much smaller than the supersymmetric mass (proportional to $a$).
The spectrum of the messenger fields in this limit is very similar to that
obtained in the models of ref.~\cite{dnns}, where gauge singlet fields are responsible
for generating both the supersymmetric and supersymmetry breaking masses.
This is because in the $a \gg 1$ limit, the masses
of the $SP(N-3)$ fundamentals mainly arise due
to the last term in the superpotential in eq.~(\ref{elsup}),
which has the form of the singlet---messenger fields
coupling in the models
of ref.~\cite{dnns}.
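As an illustrative numerical cross-check of these statements (ours, not part of
the original analysis; we assume the reduction of eqs.~(\ref{diracmass}) and
(\ref{scalarmass}) to a $2 \times 2$ block per messenger pair, and use the
Table~1 values for $N = 11$):

```python
import math

# Table 1 parameters for N = 11 (masses measured in units of m):
alpha, beta, gamma, delta = 0.0520, -0.00650, 0.159, 0.0308

def messenger_spectrum(a):
    """Fermion mass^2 and the eigenvalues of the 2x2 scalar mass^2 block
    [[alpha + (gamma*a)^2, delta*a], [delta*a, beta + (gamma*a)^2]]."""
    susy = (gamma * a) ** 2                    # supersymmetric (Dirac) mass^2
    d1, d2, off = alpha + susy, beta + susy, delta * a
    disc = math.sqrt((0.5 * (d1 - d2)) ** 2 + off ** 2)
    return susy, 0.5 * (d1 + d2) - disc, 0.5 * (d1 + d2) + disc

mf2, ms2_lo, ms2_hi = messenger_spectrum(100.0)
# (i) the fermion mass^2 approaches the average scalar mass^2 eigenvalue:
ratio = mf2 / (0.5 * (ms2_lo + ms2_hi))
# (ii) the splitting within the supermultiplet is small compared to the
#      supersymmetric mass (it scales like sqrt(a) versus a):
rel_split = (math.sqrt(ms2_hi) - math.sqrt(ms2_lo)) / math.sqrt(mf2)
print(ratio, rel_split)
```

For $a = 100$ the first printed number is within a fraction of a percent of
unity, and the relative splitting is at the percent level, consistent with the
$a \gg 1$ behavior described above.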
Second, it is very likely---at least in the $a \gg 1$ limit---that the
vacuum we have explored here is in fact the global minimum of the theory.
This is to be contrasted with the models of ref.~\cite{dnns},
which contain a more elaborate messenger sector.
In these models, the required vacuum---with an $F$ term expectation value
for the
singlet, which couples to the messenger
quarks---is only local. Usually there is a deeper minimum, in which the
singlet $F$ term expectation value
vanishes, while the messenger quarks have expectation values,
breaking the Standard Model gauge group at an unacceptably high scale
(avoiding this problem requires an even more complicated messenger
sector, as shown in ref.~\cite{bogdanlisa}).
In addition to the fields in the sigma model, when discussing the
communication of supersymmetry breaking to the Standard Model sector,
we will need some information
on the spectrum of heavy fields in the $SU(N)\times SU(N-2)$ theory. The
vacuum expectation values for the fields $S$ and $P^{1 N-2}$ (\ref{vevs})
correspond to the expectation values of $\bar{R}_{1 \ldots N-2}$ and
$\bar{R}_{N}$ of order $v$.
Correspondingly, due to the first term
in (\ref{wtreebaryon})
the fields $Q$ and $\bar{L}^I$ get (supersymmetric) masses of order $\lambda v$.
Since the
$F$ components of the $\bar{R}$ fields also have
expectation values, the fields $Q$ and $\bar{L}^I$ also obtain a supersymmetry
breaking mass squared splitting of order $\lambda F$.
For the discussion in the following
section it is relevant to note that the ratio of the
supersymmetry breaking mass squared splitting to the supersymmetric mass of the
heavy fields $Q$ and $\bar{L}^I$ is of order $F/v$---the
same as the corresponding ratio for the light fields in the sigma model.
The components of the $\bar{R}$ fields which get eaten by the $SU(N-2)$ gauge bosons
and their heavy superpartners (with mass of order $g_2 v$) also obtain supersymmetry
breaking mass splittings.
The leading effect is that
the scalar components in the heavy vector supermultiplets obtain
supersymmetry breaking contributions to their masses of order $m \simeq F/v$. These
contributions arise because of a shift of the expectation values of the heavy fields
in response to the F-type vacuum expectation values of the light fields (a similar
effect of the heavy tadpole is discussed in ref.~\cite{BPR}; see also \cite{RP}).
Having understood the supersymmetry breaking vacuum and the spectrum
in some detail we now turn to using these theories for model building.
\section{Communicating Supersymmetry Breaking.}
\subsection{Basic Ideas.}
The basic idea is to construct a model containing two sectors: the usual
Standard Model sector, consisting of the supersymmetric Standard Model
and a supersymmetry breaking sector consisting of an $SU(N) \times SU(N-2)$
theory studied above. We saw above that the latter theories have an $SP(N-3)$
global symmetry which is left unbroken in the supersymmetry
breaking vacuum. A
subgroup of $SP(N-3)$ can be identified with the Standard Model
gauge symmetries.
The minimal $SP(2k)$ group in which one can
embed $SU(3)\times SU(2) \times U(1)$ is $SP(8)$---this corresponds to taking
$N=11$. Alternatively, we can consider an embedding consistent with Grand
Unification. For this purpose one can embed $SU(5)$ in $SP(10)$---using the
$SU(13)\times SU(11)$ models.
The soft parameters---the Standard Model gaugino masses and
soft scalar masses---receive contributions from several different energy
scales. As discussed in the previous section, all heavy fields in the
$SU(N)\times SU(N-2)$ theory that transform under the Standard Model
gauge group obtain supersymmetry breaking mass splittings. The $Q$ and $\bar{L}$ heavy
fields transform as fundamentals under the Standard Model gauge group, whereas
the eaten components of the fields $\bar{R}$ (and their superpartners) transform as
two fundamentals, a symmetric tensor (adjoint), and an antisymmetric tensor
representation of $SP(N-3)$.
In this section we will present a brief
discussion of the generation of the soft parameters. As in \cite{dnns}
gaugino masses arise at one loop, while soft scalar masses arise at two
loops. The corresponding calculations are somewhat more involved than the
ones from \cite{dnns}, \cite{martin}; more details
will be presented in a subsequent paper \cite{future}.
We first consider the effects of the heavy $Q$ and $\bar{L}$ fields.
The contribution of these fields is analogous to that
of the messenger fields in the models of \cite{dnns}. Consequently their
contribution to the gaugino masses is:
\beq
\label{msoftH}
\delta_H m_{gaugino} \sim N_f~ {g^2 \over 16 \pi^2}~{F\over v} \sim
N_f~ {g^2 \over 16 \pi^2}~ M ~ \left( {v \over M} \right)^{N-4} ~,
\eeq
while their contribution to the soft scalar masses is:
\beq
\label{softscalarH}
\delta_H m_a^2 \sim N_f ~ {g^4 \over 128 \pi^4} ~
C_a ~ S_Q ~ \left( F \over v \right)^2 ~.
\eeq
In the equations above $g$ denotes the appropriate Standard Model gauge
coupling, $C_a$ is the quadratic Casimir
($(N^2 - 1)/2 N$ for an $SU(N)$ fundamental; for $U(1)_Y$
the corresponding coefficient is $3 Y^2/5$, with
$Y$ the messenger hypercharge), $S_Q$ is the Dynkin index of the
messenger representation (1/2 for the fundamental of $SU(N)$).
Finally, in eqs.~(\ref{msoftH}) and (\ref{softscalarH})
$N_f$ denotes the number of messenger flavors for
the appropriate Standard Model groups---in particular it is important to note
that $N_f$ is proportional to $N$ so that these contributions increase in
magnitude as the size of the $SU(N) \times SU(N-2)$ group increases.
Next we consider the contributions of the light fields (described by the
sigma model). These effects will be described in more detail
elsewhere \cite{future}; here we restrict ourselves to providing some rough
order of magnitude estimates. Their contribution to the gaugino masses
is of order:
\beq
\label{msoftL}
\delta_L m_{gaugino} \sim {g^2 \over 16 \pi^2} ~ m \sim
{g^2 \over 16 \pi^2}~ M ~ \left( {v \over M} \right)^{N-4} ~,
\eeq
where $m$ denotes the typical mass scale in the sigma model,
eq.~(\ref{orderofmasses}).
Since
the supertrace
of the light messenger mass matrix squared is nonvanishing (as can be inferred
from eqs.~(\ref{diracmass}), (\ref{scalarmass}), and Table 1), their
contribution to the soft scalar masses turns out to be logarithmically
divergent \cite{future}. The divergent piece is:
\beq
\label{softscalarL}
\delta_L m_a^2 = - ~ {g_a^4 \over 128 \pi^4} ~ C_a ~ S_Q ~
{\rm Str} M^2_{mess} ~{\rm Log} {\Lambda^2\over m_f^2} ,
\eeq
where $\Lambda$
is the ultraviolet
cutoff and $m_f $ is the Dirac mass of the messenger fermion,
$m_f \sim F/v\sim m$.
This logarithm is cutoff by the contributions of the
heavy eaten components of
the fields\footnote{As far as the soft Standard Model parameters are
concerned, this is the main effect of the supersymmetry breaking mass
splittings in the eaten components of the $\bar{R}$ fields. } $\bar{R}$,
therefore the scale $\Lambda$ in eq.~(\ref{softscalarL}) should be replaced by
their mass, $\sim g_2 v$ \footnote{In the full theory, the ${\rm Str} M^2$,
appropriately weighted by the messengers' Dynkin indices vanishes. One can
see this by noting that a nonvanishing supertrace would imply the existence
of a counterterm for the Standard Model soft scalar masses. This counterterm
would have to be nonpolynomial in the fields (for example, of the form
$\Phi^\dagger \Phi {\rm Log} \bar{R}^\dagger \bar{R}$)
and thus can not appear.}. Note
also that in (\ref{softscalarL}) ${\rm Str} M^2_{mess} \sim m^2$, and that
there is no large flavor factor $N_f$, since there are only two fundamentals of
$SP(N-3)$ light messengers. In addition to the logarithmically divergent
contribution (\ref{softscalarL}), there are finite contributions analogous to
those of the heavy fields (\ref{softscalarH}), which are proportional to
$\Delta m_{mess}^2/m_{mess} \sim F/v \sim m$ (the exact formula will be
given in \cite{future}).\footnote{For
completeness, we note that with the general messenger
scalar mass matrix (\ref{scalarmass}), one loop contributions to
the hypercharge D-term are generated. These can be avoided if
the messengers fall in complete $SU(5)$ representations, or,
alternatively, the parameter $a$ is sufficiently large (for $a$ not sufficiently
large, however, there are two loop
contributions to the $U(1)_Y$ D term,
even in the complete $SU(5)$ representations case).}
We can now use the above estimates for the Standard Model
soft masses, eqs.~(\ref{msoftH}), (\ref{softscalarH}), (\ref{msoftL}),
and (\ref{softscalarL}), to obtain
an estimate of the scales in the $SU(N)\times SU(N-2)$ theory.
In section 2.3.1, we found that the scale of the messenger
masses (\ref{orderofmasses})
is given by $m = M (v/M)^{(N-4)} = F/v$, while the scale of supersymmetry breaking
(\ref{ftermscale}) is $\sqrt{F} = M (v/M)^{(N-3)/2}$.
Demanding, e.g. that $m_{gaugino} \sim (10^2 - 10^3)$ GeV, we obtain
\beq
\label{voverM}
{v \over M} \sim \left({ (10^4 - 10^5)~{\rm GeV} \over M} \right)^{1\over N-4}~.
\eeq
The scale of supersymmetry breaking (\ref{ftermscale}) then becomes
\beq
\label{susybreakingscale}
\sqrt{F} \sim M ~\left({ (10^4 - 10^5) ~{\rm GeV} \over M}
\right)^{N-3 \over 2(N-4)}~.
\eeq
\subsection{Hybrid Models.}
Since $M$ suppresses the non-renormalizable operators in eq. (\ref{wtreebaryon})
one natural value it can take is $M_{Planck}$. We consider this case in some
detail here. Setting $M = M_{Planck} \simeq 2 \cdot 10^{18}$ GeV in the formula above gives
$\sqrt{F} \sim 10^{18} (10^{-14}-10^{-13})^{ N-3 \over 2(N-4)}$ GeV.
As discussed above, the smallest value of $N$ for which the Standard Model groups can be
embedded in the flavor group is $N=11$. This corresponds to
$\sqrt{F} \sim 10^{10}$ GeV, i.e. the supersymmetry
breaking scale is of order the intermediate scale.
It also follows from eq.~(\ref{susybreakingscale}) that on increasing $N$,
the scale of supersymmetry breaking increases very slowly.
For example, with $N=13$---the smallest value
consistent with Grand Unification---$\sqrt{F} \sim 10^{10} - 10^{11}$ GeV,
still of order the intermediate scale.
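These order-of-magnitude statements can be reproduced numerically. The sketch below just evaluates eqs.~(\ref{voverM}) and (\ref{susybreakingscale}) with $M = 2\cdot 10^{18}$ GeV and the window $m \sim 10^4$--$10^5$ GeV taken from the text:

```python
# Evaluates v/M and sqrt(F) from eqs. (voverM) and (susybreakingscale);
# M = 2e18 GeV and the window m ~ 1e4-1e5 GeV are taken from the text.
M = 2e18  # GeV, Planck scale
for N in (11, 13):
    for m in (1e4, 1e5):  # GeV, the scale M (v/M)^(N-4)
        v_over_M = (m / M) ** (1.0 / (N - 4))
        sqrtF = M * v_over_M ** ((N - 3) / 2.0)
        print(f"N={N}, m={m:.0e}: v={M * v_over_M:.1e} GeV, "
              f"sqrt(F)={sqrtF:.1e} GeV")
```

Both $N=11$ and $N=13$ indeed land at an intermediate-scale $\sqrt{F}$, and $\sqrt{F}$ grows only slowly with $N$.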
One consequence of the supersymmetry breaking scale being of order the
intermediate scale is that
the squark and slepton masses due to supergravity, of order $F/M_{Planck}$,
will be comparable to the
masses induced by the gauge interactions.
These models can therefore be
thought of as ``hybrid models" in which scalar masses arise
due to both supergravity and gauge interactions,
while gaugino masses arise solely from the gauge interactions.
It is also illustrative to work out the other energy scales in the supersymmetry
breaking sector. For concreteness we focus on the $N=11$ theory.
{}From eq.~(\ref{vevscale})
we find that $v \sim 10^{16} $ GeV while from eq.~(\ref{approx}), it follows that
$\Lambda_{1L} \sim 10^{12}$ GeV. Notice in particular that $\Lambda_{1L}
\ll v$ so that the requirement in eq.~(\ref{approx}) is met and the approximations
leading to the sigma model are valid. The
underlying physics giving rise to supersymmetry breaking in this model can
be described as follows. One starts with a $SU(11) \times SU(9)$ theory at very high
energies. At $v \sim 10^{16} $ GeV, the $SU(9)$ symmetry is broken giving rise to
a theory consisting of some moduli and a pure $SU(11)$ group coupled to a
dilaton. The $SU(11)$ group confines at $\Lambda_{1L} \sim 10^{12}$ GeV,
giving rise to a sigma model consisting of the moduli and the dilaton.
Finally, supersymmetry
breaks at $10^{10}$ GeV giving rise to masses for
messenger quarks of order $10$ TeV.
It is worth noting that this large hierarchy of scales is generated dynamically.
We also note that this hybrid model does
not exhibit Landau poles (below scales, higher than $v \sim 10^{16}$ GeV)
of the Standard Model gauge groups: between
the messenger scale and the scale $v$,
in addition to the usual quark, lepton
and Higgs supermultiplets only two vectorlike $SU(3)$ flavors and two
$SU(2)$ fundamentals contribute to the running of the gauge couplings. Above
the scale $10^{16}$ GeV, new physics is expected to take over, as discussed in the
Introduction.
The high scale of supersymmetry breaking in these models poses a problem and
constitutes their most serious drawback.
It implies that one cannot generically rule out the presence of large flavor
changing neutral current effects. Such effects could arise
due to higher dimensional operators in the K\" ahler potential.
For these models to be viable,
physics at the Planck scale would have to prevent such operators from
appearing. In this respect these models are no better than the usual hidden
sector models.
It is worth emphasizing the key features of the $SU(N) \times SU(N-2)$
theories that are ultimately responsible for the high scale of
supersymmetry breaking. The requirement that the flavor group is big enough
forces one to large
values of $N$ in these theories\footnote{For smaller values
of N, $N\le 7$, the scale of supersymmetry breaking
$\sqrt{F} \le 10^9$ GeV, and the problem of flavor changing effects may
be alleviated. However, in this case,
we can not embed the whole Standard Model gauge group in
the unbroken $SP(N-3 \le 4)$ global symmetry (in particular, the gluinos
would have to be massless in this framework).}. Furthermore, supersymmetry
breaking occurs only in the presence of nonrenormalizable
operators whose dimension grows with $N$. Suppressing these
operators by the Planck scale leads to the high scale of supersymmetry
breaking.
\subsection{Purely Gauge Mediated Models. }
One would like to find other theories in which the requirement for
a big enough flavor symmetry can be met without leading to such a
high supersymmetry breaking scale. We discuss two possibilities
in this context.
\subsubsection{Lowering the scale $M$.}
One possible way in which the supersymmetry breaking scale can
be lowered is by making $M < M_{Planck}$.
The $SU(N) \times SU(N-2)$ theory of Section 2.1 itself would in this case be
an effective
theory, which would arise from some underlying dynamics at scale $M$. However,
to suppress the flavor changing neutral currents one would have to
forbid $D$ terms of the form
\beq
\label{kahlerterms}
{\bar{R}^\dagger~ \bar{R}~ \Phi^\dagger ~\Phi\over M^2}~,
\eeq
where $\Phi$ denote Standard Model fields, in the effective theory.
Such terms, if present in a flavor
non-universal form, would be problematic (at least for $N$ sufficiently
large to accommodate the whole Standard Model gauge group).
It is possible that they might be absent in a theory where the last two
terms in eq.~(\ref{elsup}) arose
due to non-perturbative dynamics that only couples to the $\bar{R}$
fields but not to the Standard Model.
Once the supersymmetry breaking scale is lowered these theories
can be used to construct purely gauge mediated models of supersymmetry
breaking.
The feed-down of supersymmetry breaking to the Standard Model in these
models
proceeds as described in Section 3.1. Both gaugino and scalar soft masses
receive contributions
from the heavy, eqs.~(\ref{msoftH}), (\ref{softscalarH}),
and light, eqs.~(\ref{msoftL}), (\ref{softscalarL}), messengers.
As follows from eq.~(\ref{softscalarL}) and the sigma model spectrum of
Table 1 (note that ${\rm Str} M^2_{mess} \sim \alpha + \beta$),
the logarithmically enhanced contribution of
the light fields to the scalar masses is in fact negative. Consequently,
obtaining positive soft scalar mass squares
poses a significant constraint on the models.
These masses can be positive if the additional finite contributions of
the heavy and light messengers overcome the negative
logarithmically enhanced contribution
of the light messengers.
This can happen in two ways.
First the logarithmic contribution can be reduced in magnitude by lowering
the scale $g_2 v$, which cuts off the logarithm, and bringing it sufficiently
close to the scale $m$. For example, with $N=11$,
using Table 1, one can conclude that positive mass squares are obtained with a
scale $v$ two orders of magnitude larger than the scale $m \sim 10^{4} - 10^5$ GeV.
Note that lowering the
scale $g_2 v$
amounts to lowering the scale $M$, eq.~(\ref{wtreebaryon}), at which new
physics must enter.
Second, we note that
the positive finite contributions,
eq.~(\ref{softscalarH}),
of the heavy fields $Q$ and $\bar{L}$,
are enhanced by a factor of $N_f \sim N$. In addition, as is clear from
the numerical results of Table 1, with
increasing $N$ the ratio of the supertrace (proportional to
$\alpha + \beta$) to the finite contribution (proportional to $(\delta/\gamma)^2$)
decreases. Consequently, models with
$N$ sufficiently large
will yield positive mass squares, without requiring the scale $M$ to be too close
to the scale of the light messengers\footnote{Similar observations have been made
recently in ref.~\cite{berkeley}. We thank J. March-Russell for discussions in this
regard.}.
Having the scale $M$ be as large as the
GUT scale pushes the Landau poles up, which is an attractive feature of the
models that one might want to retain. We leave a detailed analysis of this issue
for future work \cite{future}. We only mention here
the phenomenologically
interesting possibility that the two competing effects, (\ref{softscalarH}),
and (\ref{softscalarL}) might yield squarks that
are lighter than the gauginos.
We conclude this section by raising the possibility that
the scale $M$ could be less than $ M_{Planck}$ if the
Standard Model gauge
groups are dual to some underlying theory.
In order to illustrate this, we return to our starting point,
the $SU(N) \times SU(N-2)$ theory, with, as discussed above,
the Standard Model groups embedded in the $SP(N-3)$ global symmetry.
As a result of the additional degrees of freedom the
Standard Model groups are severely non-asymptotically free,
once all the underlying
degrees of freedom in the $SU(N) \times SU(N-2)$ theory come into play. Consequently,
it is appealing to dualize the theory and to regard the dual, which is
better behaved in the ultraviolet, as the underlying microscopic theory.
We see below that this could also lead to lowering the scale $M$,
eq.~(\ref{wtreebaryon}), in the electric theory.
For purposes of illustration we work with the $N=11$ case and consider
dualizing the Standard Model $SU(3)$ and $SU(2)$ groups.
In the process we need to
re-express the baryonic operators in eq. (\ref{wtreebaryon}) in terms of
gauge invariants
of the two groups and then use the duality transformation of
SQCD, \cite{seiberg}, to map
these operators to the dual theory. Doing so shows that
the baryonic operators can be expressed as a product involving some
fields neutral under the Standard Model groups and mesons of the
$SU(3)$ and $SU(2)$
groups. But the mesons map to fields which are singlets
in the dual theory.
Consequently, the resulting terms in the
superpotential of the dual theory have smaller canonical dimensions
and are therefore suppressed by fewer powers of $M_{Pl}$.
For example, the operator $b^{N-1 N-2}$ can be written as a product involving
the field $\bar{R}_N$, three mesons of $SU(3)$, and a meson of $SU(2)$;
as a result in the dual it has dimension $5$ and is suppressed by
two powers of $M_{Pl}$. The deficit in terms of dimensions is made up
by the scales $\mu_3$ and $\mu_2$ which enter the scale matching relations
for the $SU(3)$ and $SU(2)$ theories, respectively \cite{seiberg},
leading to a relation:
\beq
\label{scale matching}
M = M_{Pl}~ \left ({\mu_3^3 ~\mu_2 \over M_{Pl}^4 }\right )^{1 \over 6}.
\eeq
For $\mu_3$ and $\mu_2$ much less than $M_{Pl}$ we see that $M$ is
much lower than $M_{Pl}$.
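To illustrate how the matching relation drives $M$ below the Planck scale, here is a hedged numeric example; the values of $\mu_3$ and $\mu_2$ are pure assumptions chosen only for illustration:

```python
# Scale-matching relation M = M_Pl * (mu3^3 * mu2 / M_Pl^4)^(1/6) from the text.
# The values of mu3 and mu2 below are hypothetical, not derived.
M_Pl = 2e18        # GeV
mu3 = mu2 = 1e16   # GeV (assumed)
M = M_Pl * (mu3**3 * mu2 / M_Pl**4) ** (1.0 / 6.0)
print(f"M = {M:.1e} GeV")  # well below M_Pl once mu << M_Pl
```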
While the above discussion is suggestive, several concerns need to
be met before it can be made more concrete. First,
as was mentioned above, one needs to argue that terms of the form
(\ref{kahlerterms}) are suppressed
adequately. We cannot, at present, conclusively settle this matter, since the
map for non-chiral operators under duality is not known. However, since
the scales involved in the duality transformation are much smaller than
the Planck scale, it is quite plausible that if an operator of the form
eq. (\ref{kahlerterms}), suppressed by the Planck scale, is present in the
dual theory it will be mapped to an operator in the electric theory
that is adequately suppressed.
Second, in the example discussed above, the Standard Model $U(1)_Y$
group continues to be
non-asymptotically free. This can be avoided by
considering theories in which the Standard Model groups are embedded in a
GUT group. The simplest such
example is the $N=13$ theory with a GUT group $SU(5)$. The $SU(5)$ group
has matter in the fundamental, antisymmetric and adjoint
representations. Unfortunately, no compelling dual for this theory
is known at present.\footnote{This theory
can be dualized by following the methods of \cite{Pouliotwo}, \cite{Kutasov}
and
unbinding each antisymmetric tensor by introducing an extra $SU(2)$ group.
However, the resulting dual is quite complicated and contrived.}
Finally, the above attempt at lowering
$M$ relied on taking the parameter(s) $\mu$ to be smaller than
$M_{Pl}$. This might be unnatural in the dual theory.
For example, in the dual theory considered here,
the Yukawa coupling $\lambda^{IA} Y_{IA}$, eq. (\ref{wtreebaryon}), turns
into a mass term with a $\mu$-dependent coefficient. Naturalness in
this case suggests that $\mu$ is of order $M_{Pl}$.
A detailed discussion of these issues is left for the future, hopefully, within
the context of more compelling models and their duals.
\subsubsection{Other Sigma Models.}
We saw in our discussion of the hybrid models above that a large hierarchy
of scales separates the microscopic
theory from the sigma model. In view of this, one can ask if at least a sigma
model can be constructed as an effective theory that yields a low enough
supersymmetry breaking scale, while the nonrenormalizable operators are
still suppressed by the Planck scale. The answer, it is easy to see, is yes.
For example, we can take the dimensions of the fields in the effective
lagrangian (\ref{effectivelagrangiankah}), (\ref{elsup}) to be equal to, say,
$D$---being thus different
from their dimension, $N-2$, dictated by
the underlying $SU(N)\times SU(N-2)$ theory---and change
correspondingly the power of the $1/M$-factors, the powers in the K\" ahler
potential and the power of $S$ in the
nonperturbative term in the superpotential, eq.~(\ref{elsup}).
We should emphasize that we are not aware of any underlying microscopic
theory which gives rise to such a sigma model. However, they do
provide an adequate description of supersymmetry breaking.
An analysis similar to the one above shows that these sigma models
break supersymmetry, while leaving an $SP(N-3)$ flavor subgroup
intact. The mass spectrum of low lying excitations in these theories
is also qualitatively of the form in eq.~(\ref{scalarmass}) and
eq.~(\ref{diracmass}).
Following then the same arguments that lead to eq.~(\ref{susybreakingscale})
for the supersymmetry breaking scale, we find that the exponent
in eq.~(\ref{susybreakingscale}) changes to $(D-1)/(2(D-2))$ instead.
Consequently, for $D = 4 $ or $5$ (even with $M = M_{Planck}$),
the scale of supersymmetry breaking is sufficiently low for supergravity
effects to be unimportant.
It is illustrative to compare the energy scales obtained in such a model
with those obtained in the ``hybrid" models above. We consider the $D=4$
case for concreteness. The supersymmetry breaking scale in this case is
of order $10^7$ GeV, well below the intermediate scale,
while the scale of the vacuum
expectation values is $\sim 10^{11}$ GeV. Therefore
the sigma model breaks down at an energy scale well above the scale of
supersymmetry breaking.\footnote{The scale at which the effective theory
breaks down could be smaller than the perturbative estimate
coming from the sigma model, $\sim 4 \pi v$, would indicate. For example,
if we had retained the corrections to the K\" ahler potential of
order $\Lambda_L/v$, discussed in Section 2.2.1, we would have found that
the model breaks down at a scale $\Lambda_L$, which is
lower than $v$, but still higher than $M_{SUSY}$.}
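For concreteness, the $D=4$ numbers quoted above follow from replacing the exponents $N-4 \to D-2$ and $N-3 \to D-1$ in the earlier scale relations (a sketch; the messenger scale $m \sim 10^4$ GeV is an assumed input):

```python
# With field dimension D, m = M (v/M)^(D-2) and sqrt(F) = M (v/M)^((D-1)/2),
# so sqrt(F) = M (m/M)^((D-1)/(2(D-2))).  Inputs follow the text's estimates.
M = 2e18   # GeV, Planck scale
m = 1e4    # GeV, messenger scale (assumed)
D = 4
v = M * (m / M) ** (1.0 / (D - 2))
sqrtF = M * (m / M) ** ((D - 1) / (2.0 * (D - 2)))
print(f"v = {v:.1e} GeV, sqrt(F) = {sqrtF:.1e} GeV")
```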
Once the supersymmetry breaking scale is sufficiently lowered one
can use these sigma models to construct purely gauge mediated models of
supersymmetry breaking. We note, however, that we can not
compute the Standard Model soft masses from the effective theory
alone---we saw in
Section 3.1 that the contribution of the heavy states not included in the
sigma model can be as
important as the ones from the light fields.
\section{Phenomenological Implications.}
In this section, we discuss
the phenomenological implications
of the ``hybrid" models of dynamical supersymmetry breaking, introduced
above. Towards the end we will briefly comment on some expected features
of purely gauge mediated models with a combined
supersymmetry breaking and messenger sector.
In our discussion of hybrid models we will, where necessary,
focus on the $SU(11)\times SU(9)$ model,
in which the $SU(3)\times SU(2) \times U(1)$ groups are embedded
in the $SP(8)$ global symmetry group.
We begin with two observations. First, since the supersymmetry breaking
scale is high in these models, the gravitino has a weak scale mass and
is not (for non-astrophysical purposes at any rate) the LSP.
Second, since the supersymmetry breaking sector is coupled quite directly
to the Standard Model sector, the masses of the (light) fields in the
supersymmetry breaking
sector are of order $10$ TeV. Consequently, at this scale one can
probe all the fields that play an essential role in the breaking of supersymmetry.
We now turn to the scalar soft masses. As noted in the previous section,
scalars in these models receive contributions due to both gauge and
gravitational effects. Gravitational effects give rise
to universal soft masses of
order $F/M_{Planck} \sim 10^2 - 10^3$ GeV at the Planck scale\footnote{
We are assuming as usual here that the K\" ahler metric is flat.}.
In addition, as described in Section 3.1, Standard Model gauge interactions
induce non-universal contributions (\ref{softscalarL}), (\ref{softscalarH}).
Since the soft masses receive contribution at various energy scales,
the renormalization group running in the hybrid models is quite different from the running
in supergravity hidden sector models and from that in gauge mediated, low-energy
supersymmetry breaking models.
We leave the detailed study of the renormalization group effects for future work.
Getting a big enough $\mu $ term in these models is a problem.
Since the model is ``hybrid", one could attempt to use $1/M_{Planck}^2$--suppressed
couplings, such as
$\int d^4 \theta H_1 H_2 \bar{R}^{\dagger} \bar{R}$
or $\int d^2 \theta H_1 H_2 (W^\alpha W_\alpha)_{SU(N)}$,
to generate the desired $\mu$ and $B \mu$ terms. However, it is easy to see that
while $B \mu \sim F^2/M_{Planck}^2$ is generally of the right order of
magnitude,
the resulting $\mu$-parameter is
$\mu \sim (v/M_{Planck}) \sqrt{B \mu} \sim 10^{-2} \sqrt{B \mu}$ and is therefore too small.
A similar conclusion results from considering, e.g. the $F$ term $b H_1 H_2/M^{N-3}$,
with $b$ being an $SP(N-3)$-singlet baryon,
which can be used to generate a reasonable $B \mu$ term and a negligible $\mu$ term
(to see this, we use
$\langle b/M^{N-3} \rangle \sim M (v/M)^{N-2} + \theta^2 M^2 (v/M)^{2(N-3)}$, with
$M \sim 10^{18}$ GeV, $v \sim 10^{16}$ GeV, and $N=11$).
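The mismatch between $\mu$ and $B\mu$ can be made explicit with the $N=11$ hybrid scales (all inputs are the order-of-magnitude values quoted in the text):

```python
# mu-problem estimate: B*mu ~ F^2/M_Pl^2 while mu ~ (v/M_Pl) * sqrt(B*mu).
M_Pl = 2e18      # GeV
v = 1e16         # GeV
F = (1e10)**2    # GeV^2, i.e. sqrt(F) ~ 1e10 GeV
sqrt_Bmu = F / M_Pl           # of order the electroweak scale
mu = (v / M_Pl) * sqrt_Bmu    # suppressed by v/M_Pl ~ 1e-2
print(f"sqrt(B mu) = {sqrt_Bmu:.0f} GeV, mu = {mu:.2f} GeV")
```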
To avoid this small-$\mu$ problem,
one could use the approach of ref.~\cite{dnns} and introduce a special sector
of the
theory, constrained by some discrete symmetry,
which will be responsible for generating the $\mu$ term. For example, this
could be achieved by requiring an appropriate $SP(N-3)$-singlet baryon,
$b/M^{N-3}$, to play the role of the singlet field $S$ of ref.~\cite{dnns}
(see Section 4 of last paper in \cite{dnns})
and the introduction of an additional singlet $T$ with appropriate
couplings in the superpotential.
{}From the point of view of low-energy phenomenology,
this approach implies that when analyzing the
low-energy predictions of the model, $\mu$ and $B \mu$ should be treated as
free parameters.
A few more comments are in order.
First, electroweak symmetry breaking will occur radiatively in these models,
with the large top Yukawa driving the mass square of one Higgs field negative.
Second, these models do not suffer from a supersymmetric CP problem.
This can be seen immediately in the sigma model superpotential eq.~(\ref{elsup}),
where all phases can be rotated away.\footnote{It can also be seen in the
underlying $SU(N) \times SU(N-2)$ theory where all phases except for the $\theta$
angle of $SU(N-2)$ can be rotated away. Since the $SU(N-2)$ group is broken at
a very high scale, its instantons are highly suppressed.}
Finally, we note that the hybrid models are likely to inherit some of the
cosmological problems of hidden sector models. For example, the $R$ axion, whose
mass in this model can be seen to be of order the electroweak scale \cite{BPR},
is very weakly interacting, $f_{axion} \sim v \sim 10^{16}$ GeV, and may suffer
the usual Polonyi problem. This problem could be solved, for example, by
invoking weak scale inflation.
We end with a few comments about the phenomenological implications of
purely gauge mediated models with a
supersymmetry breaking-cum-messenger sector.
As was mentioned in Section 3,
such models can be constructed by lowering the scale $M$.
A few key features
emerge from considering such purely gauge mediated
models, which are likely to be generally true
in models of this kind.
First, as we have seen above,
the scale of supersymmetry
breaking which governs the mass and interaction strength of the gravitino,
is a parameter which can take values ranging from $10$ TeV to $10^{10}$ GeV
and can therefore be very different from the value of the
messenger field masses. It should therefore
be treated as an independent parameter in considering the phenomenology of these
models.
Second, one consequence of having a combined
supersymmetry breaking and messenger sector is that several
degrees of freedom responsible for the communication and
the breaking of supersymmetry can be probed at
an energy of about $10$ TeV.
Finally, the form of the mass matrix of the messenger fields can be different from
that in the models of ref.~\cite{dnns}, as is clear from eqs.~(\ref{scalarmass}) and
(\ref{diracmass}). In particular, the sum rule relating the fermion and boson
masses is not respected in general. We expect this to be a general feature of such
models. As discussed in Section 3.1, the nonvanishing supertrace for the light
messenger fields gives a logarithmically enhanced contribution to the soft
scalar masses. In the models discussed here, the supertrace is positive and
the corresponding contribution to the soft scalar masses squared is negative.
This poses a constraint on model building. The negative contribution can be
controlled by lowering the scale $M$, or considering models with large $N$.
This could lead to scalar soft masses that are lighter than
the gaugino masses\footnote{
We acknowledge discussions with G. Anderson on this point.}.
A detailed analysis of the spectrum
and the resulting phenomenology
is left for the future \cite{future}.
\section{Summary.}
In conclusion we summarize the main results of this paper and indicate
some possible areas for future study:
\begin{itemize}
\item{
We began this paper by studying a class of supersymmetry
breaking theories with an $SU(N) \times SU(N-2)$ gauge group.
We showed how the breaking of supersymmetry
in these theories can be studied in a {\it calculable}
low-energy sigma model. The sigma model was used to show that a large
subgroup of the global symmetries is left unbroken in these theories,
and to calculate the low-energy mass spectrum after supersymmetry breaking. }
\item{
We then turned to using these theories for model building. The models
we constructed had two sectors: a supersymmetry breaking sector, consisting of the
above mentioned $SU(N) \times SU(N-2)$ theories, and the
supersymmetric Standard Model. The essential idea
was to identify a subgroup of the global symmetries
of the supersymmetry breaking sector with the Standard Model gauge group.
In order to embed the full Standard Model gauge group in this way, we were
led to consider large values of $N$, i.e. $N \ge 11$, and as a consequence
of this large value of $N$, the supersymmetry
breaking scale was driven up to be of order the
intermediate scale, i.e. $10^{10}$ GeV. Hence, these models
are of a ``hybrid" kind---supersymmetry
breaking is communicated to the Standard Model both
gravitationally and radiatively through the Standard Model gauge groups
in them.}
\item{We briefly discussed the phenomenology of these models.
The main consequence of the messenger fields being an integral
part of the supersymmetry
breaking sector
is that several degrees of freedom responsible for both communicating and
breaking supersymmetry can be probed at an energy of order $10 $ TeV.
In the hybrid models gauginos acquire mass due to gauge mediated effects, while scalars
acquire mass due to both gauge and gravitational effects. We leave a more
detailed investigation of the resulting mass spectrum, including the effects
of renormalization group running for further study.}
\item{
It is worth mentioning that in these models there is a large hierarchy of
scales that is generated dynamically. For example, even though
the scale of supersymmetry breaking is high, of order $10^{10}$ GeV,
the masses of the messenger fields---the lightest
fields in the supersymmetry
breaking sector that carry Standard Model
charges---are of order $10$ TeV. Furthermore,
the sigma model used for studying the low-energy dynamics
breaks down at a scale $10^{12}$ GeV---well above the scale of
supersymmetry breaking. }
\item{Purely gauge mediated models can be constructed by lowering the
scale $M$ that suppresses the nonrenormalizable term in the superpotential.
These purely gauge mediated models reveal the following features that
should be generally true in models with supersymmetry
breaking-cum-messenger sector that have an effective low-energy
weakly coupled description.
First, the supersymmetry
breaking scale can in general
be quite different from the scale of the messenger field masses---it can range from
$10$ TeV to $10^{10}$ GeV,
while the messenger field masses are of order $10$ TeV.
Second, as in the hybrid models, several degrees of freedom that
are responsible for communicating and breaking supersymmetry can be probed
at an energy scale of order $10$ TeV.
Third,
the Standard Model
soft masses receive contributions at various energy scales.
Because of a tradeoff between positive and negative contributions,
the soft scalar masses can be lighter than
the corresponding gaugino masses.
A detailed investigation of the
phenomenology of such models, incorporating these
features, needs to be carried out. We leave such an investigation for the
future.}
\item{Finally, we hope to return to the construction of purely gauge mediated
models of supersymmetry breaking with a
combined supersymmetry breaking and messenger
sector. One would like to construct a consistent microscopic theory
which could give rise to an adequate supersymmetry breaking sector.
A minimal model of this kind would serve to further guide
phenomenology. It would also prompt an investigation of more theoretical
questions---like those associated with the loss of asymptotic freedom for the
Standard Model gauge groups.}
\end{itemize}
We would like to acknowledge discussions with G. Anderson, J. Lykken,
J. March-Russell, S. Martin,
and especially Y. Shadmi. Recently, we became aware of work by
N. Arkani-Hamed, J. March-Russell, and H. Murayama along similar
lines \cite{berkeley},
and thank them for sharing some of their results before publication.
E.P. acknowledges support by a Robert R. McCormick
Fellowship and by DOE contract DF-FGP2-90ER40560.
S.T. acknowledges
the support of DOE contract DE-AC02-76CH0300.
\newcommand{\ib}[3]{ {\em ibid. }{\bf #1} (19#2) #3}
\newcommand{\np}[3]{ {\em Nucl.\ Phys. }{\bf #1} (19#2) #3}
\newcommand{\pl}[3]{ {\em Phys.\ Lett. }{\bf #1} (19#2) #3}
\newcommand{\pr}[3]{ {\em Phys.\ Rev. }{\bf #1} (19#2) #3}
\newcommand{\prep}[3]{ {\em Phys.\ Rep. }{\bf #1} (19#2) #3}
\newcommand{\prl}[3]{ {\em Phys.\ Rev.\ Lett. }{\bf #1} (19#2) #3}
\newcommand{\ptp}[3]{ {\em Progr.\ Theor.\ Phys. }{\bf #1} (19#2) #3}
\section{Introduction}
Two-dimensional quantum gravity (2DQG) is interesting from the point of view
of string theory in non-critical or critical dimensions and the statistical
mechanics of random surfaces.
As is well known, the matrix model and the continuum theory, {\it i.e.} the
Liouville field theory, exhibit the same critical exponents and Green's
functions.
However, the mutual relation between dynamical triangulation (DT) and the
Liouville theory has not been so evident, because DT surfaces are known to be
fractal, with a typical fractal dimension considered to be four.
On the other hand, as we will show in this paper, the complex structure is
well defined even for DT surfaces, and we can check a precise equivalence
of DT and the Liouville theory.
It is known that 2DQG can be interpreted as a special case of the
critical string with the conformal mode having a linear dilaton background.
The critical string is defined to have two local symmetries,
{\it i.e.} the local scale (Weyl) invariance and the reparametrization ($Diffeo$)
invariance.
After imposing these two symmetries on the world-sheet, no degree of freedom
is left for the metric $g_{\mu \nu}$ except for a set of parameters $\tau$,
which specify the moduli space ${\cal M}$ of the complex structure:
$
{\cal M} = \{ g_{\mu \nu} \} / Diffeo \otimes Weyl = \{ \tau \}.
$
Therefore, if we find a way to impose the local scale invariance on DT, we
come a step closer to numerical simulations of the critical
string, although this is not easy at the present stage.
However, considering the complex structure is very useful for obtaining
clear signals in various measurements because the complex structure can be
extracted independently to the rather complicated fluctuations of the
conformal mode.
In our previous work \cite{Cmplx_Struc}, we have established how to define
and measure the complex structure and the conformal mode on the DT surfaces
with $S^{2}$ topology.
To be concrete we focus on the case of the torus $T^{2}$ in this study, but
the generalization would be straightforward.
\section{Determination of the moduli on the torus}
In the continuous formulation the period $\tau$ is obtained by the following
procedure.
First we introduce harmonic $1$-form $j_{\mu}dx^{\mu}$ where $j_{\mu}$
satisfies the divergence and rotation free conditions
$\partial_{\mu}j^{\mu} = 0$ and $\partial_{\mu}j_{\nu} -
\partial_{\nu}j_{\mu} = 0$ respectively, with
$j^{\mu} = \sqrt{g} g^{\mu \nu} j_{\nu}$.
Since there are two linearly independent solutions, we can impose
two conditions such as
\begin{equation}
\oint_{\alpha} j_{\mu} dx^{\mu} = 0, \;
\oint_{\beta} j_{\mu} dx^{\mu} = \frac{1}{r},
\label{eq:pot_cond}
\end{equation}
where $\alpha$ and $\beta$ represent two independent paths on the torus
which intersect each other only once and $r$ denotes the resistivity of
the surface.
Under these conditions the period $\tau$ is given by
\begin{equation}
\tau \equiv \frac{\oint_{\alpha} j_{\mu} dx^{\mu} + i\oint_{\alpha}
\tilde{j}_{\mu} dx^{\mu}}{\oint_{\beta} j_{\mu} dx^{\mu} + i\oint_{\beta}
\tilde{j}_{\mu} dx^{\mu}}
= \frac{i r \oint_{\alpha} \tilde{j}_{\mu}dx^{\mu}}
{1 + i r \oint_{\beta} \tilde{j}_{\mu} dx^{\mu}},
\end{equation}
where $\tilde{j}_{\mu}$ is the dual of $j_{\mu}$ defined by
$\tilde{j}_{\mu} = \epsilon_{\mu \nu} \sqrt{g} g^{\nu \lambda} j_{\lambda}$.
This procedure can be easily translated to the case of triangulated
surfaces by identifying $j_{\mu}$ with the current on the resistor network
of the dual graph.
Then $r \oint_{\alpha} j_{\mu} dx^{\mu}$ and
$\oint_{\alpha} \tilde{j}_{\mu} dx^{\mu}$ correspond to the potential drop
along $\alpha$-cycle and the total current flowing across $\alpha$-cycle
respectively.
We can easily impose the two conditions of eq.(\ref{eq:pot_cond}) by
inserting electric batteries in the dual links crossing the $\alpha$-cycle
and applying constant voltages (1 V), see Fig.\ref{fig:Net_current}.
\begin{figure}
\vspace*{0cm}
\centerline{\psfig{file=Net_current.eps,height=5.3cm,width=5.3cm}}
\vspace{-1.0cm}
\caption
{
Configuration of batteries inserted across the $\alpha$-cycle.
Open circles denote the vertices $v_{left}$ and closed
circles the vertices $v_{right}$, respectively.
}
\label{fig:Net_current}
\vspace{-0.6cm}
\end{figure}
Writing the electric potential at the vertex $v$ as $V(v)$ and assuming
that each bond has resistance $1 \Omega$, the current conservation reads
$
V(v) = \frac{1}{3} \{ \sum_{1,2,3} V(v_{i}) + \delta_{\alpha\mbox{-cycle}} \},
$
where $\delta_{\alpha\mbox{-cycle}}$ represents the voltages of the
batteries placed along the $\alpha$-cycle and $v_{1,2,3}$ are the three
neighboring vertices of $v$.
We solve this set of equations iteratively by the successive
over-relaxation method, and estimate the total currents
flowing across the $\alpha$- and $\beta$-cycles as
$
\oint_{\alpha} \tilde{j}_{\mu} dx^{\mu} = \sum_{{\tiny \alpha-\mbox{cycle}}}
\{V(v_{right}) - V(v_{left}) + 1\}, \;\; \mbox{and} \;\;
\oint_{\beta} \tilde{j}_{\mu} dx^{\mu} = \sum_{{\tiny \beta-\mbox{cycle}}}
\{V(v_{right}) - V(v_{left})\}
$
respectively, where $V(v_{left})$ and $V(v_{right})$ denote the potentials
of vertices placed at either side of the $\alpha$- or $\beta$-cycle.
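The successive over-relaxation scheme described above can be sketched in a few lines. As a simplified stand-in for the 3-coordinated dual graph of a triangulation we use a 4-coordinated square grid on the torus (so the prefactor $1/3$ in the update becomes $1/4$), insert 1 V batteries on the bonds crossing the $\alpha$-cycle, and assume $r=1$; for this flat square torus the expected period is $\tau = i$. The function name and its parameters are illustrative only, not taken from the simulation code of this work.

```python
import numpy as np

def torus_period(L=16, r=1.0, relax=1.5, sweeps=3000):
    """SOR solution of the battery-driven resistor network on an L x L
    periodic grid (a 4-coordinated stand-in for the dual graph, so the
    update uses 1/4 instead of 1/3); 1 V batteries sit on the bonds
    wrapping around in the x-direction (the alpha-cycle).  Returns tau."""
    V = np.zeros((L, L))
    for _ in range(sweeps):
        for x in range(L):
            for y in range(L):
                # battery emf entering vertex v: +1 V at x=0, -1 V at x=L-1
                delta = 1.0 if x == 0 else (-1.0 if x == L - 1 else 0.0)
                new = 0.25 * (V[(x - 1) % L, y] + V[(x + 1) % L, y]
                              + V[x, (y - 1) % L] + V[x, (y + 1) % L] + delta)
                V[x, y] += relax * (new - V[x, y])
    # total currents flowing across the alpha- and beta-cycles
    I_alpha = sum(V[L - 1, y] + 1.0 - V[0, y] for y in range(L))
    I_beta = sum(V[x, L - 1] - V[x, 0] for x in range(L))
    return 1j * r * I_alpha / (1.0 + 1j * r * I_beta)
```

Since the potentials are defined only up to an overall constant, the currents (potential differences) are unaffected by the constant mode of the relaxation.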
In the case of $T^{2}$ topology the resistivity is not easily determined,
because of the lack of the $SL(2,C)$-invariance which we exploited in
the $S^{2}$ topology case\cite{Cmplx_Struc}.
We therefore borrow the resistivities obtained in the case of the $S^{2}$
topology in order to determine the moduli of the torus.
We now present the theoretical predictions for the distribution function of
the moduli.
The genus-one partition function with $c$ scalar fields is given in
\cite{Torus_Part} by
\begin{equation}
Z \simeq \int_{\cal F} \frac{d^{2}\tau}{(\tau_{2})^{2}} \; \{C(\tau)\}^{c-1},
\label{eq:Z_torus}
\end{equation}
\begin{equation}
C(\tau) = (\frac{1}{2} \; \tau_{2})^{-\frac{1}{2}} e^{\frac{\pi}{6} \;
\tau_{2}} |\prod_{n=1}^{\infty} (1- e^{2\pi i \tau n}) |^{-2},
\end{equation}
where $\tau = \tau_{1} + i \tau_{2}$ is the moduli parameter, and
${\cal F}$ denotes the fundamental region.
From the integrand of eq.(\ref{eq:Z_torus}) we find that the density
distribution function of $\tau$ in the fundamental region is given, up to
an overall numerical factor, by
\begin{equation}
\tau_{2}^{-\frac{3+c}{2}} \; e^{-\frac{\pi}{6} \; \tau_{2} \; (1-c)}
|\prod_{n=1}^{\infty} (1- e^{2\pi i \tau n}) |^{2(1-c)}.
\label{eq:moduli_integrand}
\end{equation}
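A quick numerical check of this density (truncating the infinite product at a finite order, which is an approximation) confirms that it decays at large $\tau_{2}$ for $c \leq 1$ but eventually grows for $c = 2$:

```python
import numpy as np

def moduli_density(tau1, tau2, c, nmax=400):
    """Integrand of the tau-density, with the infinite product truncated
    at nmax factors (a numerical approximation)."""
    q = np.exp(2j * np.pi * (tau1 + 1j * tau2))
    prod = np.prod([abs(1.0 - q**n)**2 for n in range(1, nmax + 1)])
    return (tau2**(-(3.0 + c) / 2.0)
            * np.exp(-np.pi / 6.0 * tau2 * (1.0 - c))
            * prod**(1.0 - c))
```

For $c=0$ the density falls off monotonically in $\tau_{2}$, while for $c=2$ the exponential factor eventually overwhelms the power, so the distribution is not normalizable, in line with the instability discussed below.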
The divergence of the distribution of $\tau_{2}$ for $c>1$\footnote
{
In the case of the sphere, for example, the anomalous dimensions and the
string susceptibility turn complex for $c>1$.
}
indicates the instability of the vacuum due to the tachyon of the bosonic
string theory.
It may provide clear evidence for branched polymers.
\section{Numerical results and discussions}
Fig.\ref{fig:Moduli_Dist_16K} shows the distribution of the period $\tau$
for a surface with $16$K triangles in the case of the pure gravity
with about $1.5 \times 10^{4}$ independent configurations.
\begin{figure}
\vspace*{0cm}
\centerline{\psfig{file=Moduli_Dist_16K.ps,height=5.5cm,width=6.7cm}}
\vspace{-1.3cm}
\caption
{
Plot of the moduli ($\tau$) on the complex plane for a total number of
triangles of $16$K.
Each dot denotes the $\tau$ of one configuration, mapped into the
fundamental region.
}
\label{fig:Moduli_Dist_16K}
\vspace{-0.7cm}
\end{figure}
Roughly speaking, larger values of $\tau_{2}$ in the fundamental region
represent tori deformed like a long thin tube, while smaller values of
$\tau_{2}$ represent almost regular tori.
In order to compare numerical results with the predictions of the
Liouville theory eq.(\ref{eq:moduli_integrand}), we consider the distribution
functions integrated over $\tau_{1}$.
\begin{figure}
\centerline{\psfig{file=Moduli_Comp_Pure.ps,height=5.5cm,width=6cm}}
\vspace{-1.3cm}
\caption
{
Density-distributions of $\tau_{2}$ in the case of pure-gravity.
}
\label{fig:Hist_Comp_Pure}
\vspace{-0.3cm}
\end{figure}
\begin{figure}
\vspace*{-0.3cm}
\centerline{\psfig{file=Moduli_Comp_1S.ps,height=5.5cm,width=6cm}}
\vspace{-1.3cm}
\caption
{
Density-distributions of $\tau_{2}$ in the case of $c=1$.
}
\label{fig:Moduli_Comp_1S}
\end{figure}
\begin{figure}
\vspace*{0cm}
\centerline{\psfig{file=Moduli_Comp_2S.ps,height=5.5cm,width=6cm}}
\vspace{-1.3cm}
\caption
{
Density-distributions of $\tau_{2}$ in the case of $c=2$.
}
\label{fig:Moduli_Comp_2S}
\vspace{-0.4cm}
\end{figure}
Fig.\ref{fig:Hist_Comp_Pure} shows the density distributions of $\tau_{2}$
in the case of the pure-gravity with $2$K, $4$K, $8$K and $16$K triangles.
Fig.\ref{fig:Moduli_Comp_1S} shows the distributions of $\tau_{2}$ in
the case of surfaces coupled with a scalar field($c=1$) with $4$K, $8$K and
$16$K triangles.
It is clear that the numerical results agree fairly well with the
predictions of the Liouville theory for sufficiently large number of
triangles.
Fig.\ref{fig:Moduli_Comp_2S} shows the distributions of $\tau_{2}$ in the
case of two scalar fields($c=2$) with $4$K, $8$K and $16$K triangles.
In this case, we cannot detect the divergence of the distribution of
$\tau_{2}$.
It would be hard to obtain a large value of $\tau_{2}$ for a relatively
small number of triangles, because many triangles are needed to deform
the torus into a long narrow shape.
We conclude that the DT surfaces have the
same complex structure as the Liouville theory in the thermodynamic limit
for $c \leq 1$.
\vspace{-0.3cm}
\begin{center}
-Acknowledgements-
\end{center}
\vspace{-0.1cm}
\noindent
We are grateful to J.~Kamoshita for useful discussions and
comments. One of the authors (N.T.) was supported by a Research Fellowship
of the Japan Society for the Promotion of Science for Young Scientists.
\section{Introduction}
Subluminous B stars (sdB) form the extreme blue end (\teff\,$>$~20000 K and
\logg\,$>$~5) of the Horizontal Branch
(Heber et al., \cite{hehu84}) and are therefore termed Extreme Horizontal
Branch (EHB)
stars. Such objects are now believed to be the dominant source of UV
radiation causing the UV upturn phenomenon in elliptical galaxies and galaxy
bulges (Dorman et al., \cite{dooc95}).
While there are more than a thousand sdBs known in the field of our
galaxy (Kilkenny et al., \cite{kihe88}), bona-fide sdB stars have been
shown to exist in only one globular cluster (NGC 6752, Heber et al.,
\cite{heku86}, Moehler et al., \cite{mohe96}).
Several claims that sdB stars had been found in other globular
clusters as well could not be confirmed by spectroscopic analyses.
Moehler et al. (\cite{mohe95}, \cite{mohe96}, de Boer et al. \cite{dbsc95},
hereafter Paper I,III, and II respectively)
found that all ``classical''
BHB stars (i.e. stars with \teff\ $<$ 20000~K and \logg\ $<$ 5)
in several clusters
exhibited masses that are too low compared to standard evolutionary theory,
whereas the masses of the sdB stars in NGC~6752 were in good agreement with
the canonical mass of 0.5 \Msolar.
It is therefore of great importance
to find and analyse sdB stars in other globular clusters.
In this letter we present follow-up spectroscopy and spectral
analyses of faint blue
stars (19\hbox{\hbox{$^{\rm m}$}} $<$ V $<$ 20\hbox{\hbox{$^{\rm m}$}} and \magpt{$-$0}{28} $<$ (B$-$V) $<$
\magpt{$-$0}{12})
in the globular cluster M\,15, which were discovered recently by
Durrell \& Harris (\cite{duha93}).
\section{Observations and Reduction}
Two of the four candidates (F2-1 and F2-3)
could not be observed reliably from the ground due to nearby
red neighbours (see Table 1).
The remaining candidate stars were observed with
the focal reducer of the 3.5m telescope at the Calar Alto observatory using
grism \#3 (134~\AA/mm) and a slit width of 1\bsec5,
resulting in medium resolution spectra
covering the wavelength range 3250 - 6350~\AA. We also obtained
low resolution spectrophotometric data, using a slit width of 5\arcsec\ and
binning the spectra by a factor of 2 along the dispersion axis.
\begin{table}
\begin{tabular}{|l|rr|rr|}
\hline
Number & V & B$-$V & $\alpha_{2000}$ & $\delta_{2000}$ \\
\hline
F1-1 & \magpt{19}{468} & \magpt{-0}{160} & \RA{21}{30}{12}{3} &
\DEC{+12}{03}{45}{6}\\
F2-1 & \magpt{19}{101} & \magpt{-0}{126} & \RA{21}{29}{39}{6} &
\DEC{+12}{07}{26}{3}\\
& \magpt{18}{555} & \magpt{+1}{531} & \RA{21}{29}{39}{6} &
\DEC{+12}{07}{26}{8}\\
F2-2 & \magpt{19}{983} & \magpt{-0}{231} & \RA{21}{29}{34}{7} &
\DEC{+12}{09}{19}{1}\\
F2-3 & \magpt{19}{956} & \magpt{-0}{276} & \RA{21}{29}{46}{6} &
\DEC{+12}{06}{53}{7}\\
& \magpt{18}{027} & \magpt{+0}{702} & \RA{21}{29}{46}{6} &
\DEC{+12}{06}{54}{3}\\
\hline
\end{tabular}
\caption{Positions and magnitudes of the sdB candidates and their close
neighbours}
\end{table}
Since there is no built-in wavelength calibration lamp we observed wavelength
calibration spectra only at the beginning of the night. The observation
and calibration of bias, dark current and flat-field was performed as
described in paper I and III. As the focal reducer produces rather strong
distortions we extracted that part of each observation that contained the
desired spectrum and some sky and straightened it, using a program written
by Dr. O. Stahl (priv. comm.).
We applied the same procedure to the wavelength
calibration frames. Thereby we could perform a good two-dimensional wavelength
calibration (using the MIDAS Long context)
and sky-subtraction (as described in paper I).
Correction of atmospheric and interstellar extinction as well as flux
calibration was again performed as described in paper I
using the flux tables of Massey (\cite{mast88}) for BD+28$^\circ$4211.
\section{Analyses}
\subsection{F1-1}
The spectra show broad Balmer lines and a weak He~I line at 4471~\AA\
typical for sdB stars.
The low resolution data unfortunately showed some excess flux towards
the red, probably caused by a red star (V = \magpt{19}{99}, B$-$V =
\magpt{+0}{55}) 3\arcsec\ away.
Fitting only the Balmer jump
and the B and V photometry of Durrell \& Harris (1993) we get an effective
temperature of 24000~K (cf. paper III).
We used the helium- and metal-poor model atmospheres of Heber
(cf. Paper III) to analyse the medium resolution spectrum
of F1-1. As the resolution of the spectrum varies with wavelength we convolved
the model atmospheres with different Gaussian profiles for the three Balmer
lines. The appropriate FWHM were determined from the calibration lines close
to the position of the Balmer lines. We used FWHM of 8.7 \AA\ for H$_\delta$,
7.1 \AA\ for H$_\gamma$, and 9.8 \AA\ for H$_\beta$. We fitted each line
separately and derived the mean surface gravity
by calculating the weighted mean of the individual results. The weights
were derived from the fit residuals of the individual lines. We thus get
a log g value of 5.2$\pm$0.14 for a fixed effective temperature of 24000~K.
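The weighted-mean step can be sketched as follows, using the per-line values quoted in Fig.~1; the residual-based weights shown are placeholders (set equal here), since the individual fit residuals are not listed, and the external error is taken as the scatter of the three lines as described below.

```python
import numpy as np

logg = np.array([5.0, 5.3, 5.3])   # H_delta, H_gamma, H_beta (cf. Fig. 1)
resid = np.array([1.0, 1.0, 1.0])  # placeholder fit residuals (not published)
w = 1.0 / resid**2                 # inverse-square residual weights
logg_mean = np.sum(w * logg) / np.sum(w)  # weighted mean of the three lines
ext_err = np.std(logg)                    # line-to-line scatter
```

With equal weights this reproduces the quoted $\logg = 5.2$ with a scatter of about 0.14 dex.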
The fits to the Balmer lines are shown in Fig. 1. Note in passing
that higher temperature and gravity results if we ignore the Balmer jump
and derive \Teff\ and \logg\ simultaneously from the three Balmer line
profiles alone:
The smallest overall residual is achieved for
\Teff\ = 29700~K and \logg\ = 5.9. These values, however,
are inconsistent with the low resolution data.
We already noted similar inconsistencies
between Balmer jump and Balmer line profiles in paper I, which are probably
caused by insufficient S/N in the Balmer line profiles.
The \ion{He}{I} 4471~\AA\ line is consistent with a helium abundance of
about 0.1 solar.
Using the routines of R. Saffer (Saffer et al., \cite{saff94})
and a fixed mean FWHM of 8~\AA\ we get internal errors
of 600 K and 0.13 dex, respectively. We take the standard deviation of \logg\
as external errors and assume an external error of \teff\ of $\pm$ 2000~K
(cf. Paper~III).
Using the same method as in Papers I and III we get a logarithmic mass of
$-$0.423$\pm$0.20 dex, corresponding to (0.38$^{+0.22}_{-0.14}$)~\Msolar.
For a detailed discussion of the errors entering into the mass determination
see Paper~III.
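The asymmetric error bars on the mass follow directly from exponentiating the symmetric error of the logarithmic mass:

```python
logM, dlogM = -0.423, 0.20      # logarithmic mass and its error (dex)
M = 10**logM                    # ~0.38 solar masses
upper = 10**(logM + dlogM) - M  # ~ +0.22 solar masses
lower = M - 10**(logM - dlogM)  # ~ -0.14 solar masses
```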
\begin{figure}
\vspace{6cm}
\special{psfile=m15f11lfit.ps hscale=40 vscale=40 hoffset=0 voffset=-45
angle=0}
\caption[]{The best fitting models for \teff\ = 24000~K
compared to the Balmer lines of F1-1 (\logg\ = 5.0 (H$_\delta$),
5.3 (H$_\gamma$, H$_\beta$)).
The tickmarks mark steps of 10\% in intensity.}
\end{figure}
\subsection{F2-2}
Unlike F1-1 and other sdBs the spectrum of F2-2 is not dominated by Balmer
lines but displays a prominent \ion{He}{I} absorption line spectrum in
addition to the Balmer lines (see Fig.2). The Balmer lines are much weaker
and narrower than in F1-1.
In Fig. 2 we compare the spectrum of F2-2 (smoothed by a box filter
of 5 \AA\ width) to that of a
He-sdB in the field (HS\,0315+3355, Heber et al. \cite{hedr96}), a spectral
type
that describes rare helium-rich variants of the sdB stars
(Moehler et al., \cite{mori90}, see also below).
The helium line spectra of both stars are
very similar, while the Balmer lines in HS\,0315+3355 are considerably weaker
than in F2-2 (see Fig. 2), indicating an even lower hydrogen content of the
former.
\begin{figure}
\vspace{6.0cm}
\special{psfile=m15f22comp.ps hscale=37 vscale=37 hoffset=-30 voffset=195
angle=270}
\caption[]{The medium resolved spectrum of F2-2 and the identified absorption
lines. To allow a better identification of the lines the spectrum has been
smoothed with a box filter of 5\AA\ width.
In comparison (thick line)
we show the spectrum of a helium-rich sdB star (HS\,0315+3355) in the field
that has been convolved to the same resolution. The tickmarks mark steps of
10\%
in intensity.}
\end{figure}
The admittedly
rather low S/N unfortunately allows only a coarse analysis. Since an
analysis of individual spectral lines is impossible
we chose to fit
the entire useful portion of the spectrum (4000\AA\ -- 5000\AA)
simultaneously. It turned out
that F2-2 is somewhat hotter than F1-1 and we therefore used the updated
grid of NLTE model atmospheres calculated by S. Dreizler (see Dreizler et
al. \cite{drei90}).
The model atmospheres include detailed H and He model atoms and the
blanketing effects of their spectral lines but no metals. Using Saffer's
fitting program \teff , \logg , and helium abundance were determined
simultaneously
from a small subgrid (\teff\ = 35, 40, 45kK; \logg\ = 5.5, 6.0, 6.5;
He/(H+He)~=~0.5, 0.75, 0.91, 0.99, by number).
Since the \ion{He}{I} 4144\AA\ line is not included in the models, this
line is excluded from the fit.
\begin{figure}
\vspace{5.7cm}
\special{psfile=m15f22fit.ps hscale=37 vscale=37 hoffset=-35 voffset=195
angle=270}
\caption[]{The spectrum of F2-2 and the best fitting model
(thick line, \teff\,=~36000~K, \logg\,=~5.9,
N$_{He}$/(N$_{H}$+N$_{He}$)~=~0.87).
To allow a better identification of the lines the observed spectrum has been
smoothed with a box filter of 5\AA\ width. The tickmarks mark steps of 10\%
in intensity.}
\end{figure}
An effective temperature of 36000~K, \logg\ of 5.9, and
a helium abundance of N$_{He}$/(N$_{H}$+N$_{He}$)~=~0.87
(by number) resulted. The fit to the
spectrum is displayed in Fig. 3.
Since the noise level is rather high it is
difficult to estimate the error ranges of the atmospheric parameters.
As a test we changed the fitted
spectral range as well as the placement of the continuum, which resulted
in quite small changes of \teff\ ($\approx$ 2000K) and \logg\ ($\approx$
0.2~dex).
The helium abundance N$_{He}$/(N$_{H}$+N$_{He}$) ranged from 0.5 to 0.9 by
number, indicating that helium is overabundant by at least a factor 5 with
respect to the sun. As conservative error
estimates we adopted $\pm$ 4000~K and 0.5 dex for the errors in \teff\ and
\logg .
\subsection{Radial velocities}
For F1-1 we derive a mean heliocentric velocity of $-$142~km/sec
from the Balmer lines with an estimated error of about $\pm$40~km/sec
(due to the mediocre
resolution, the limited accuracy of the wavelength calibration,
and the shallow lines). This value is consistent within the
error limits with the cluster velocity
of $-$107.09~km/sec (Peterson et al., \cite{pese89}).
For F2-2 we could not derive reliable radial
velocities from single lines due to the low S/N of the spectrum.
Instead we cross correlated the normalized
spectrum with that of the field He-sdB HS\,0315+3355
(convolved to the same resolution).
This procedure resulted in a heliocentric velocity of $\approx -$70~km/sec,
which - due to the large errors - does not contradict a cluster membership
for F2-2.
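A minimal sketch of such a cross-correlation velocity measurement is given below, assuming a uniform grid in $\ln\lambda$ (so that one pixel corresponds to a fixed velocity step) and working only to pixel precision; this is an illustration, not the actual reduction procedure used here.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s
# assumed uniform grid in ln(lambda) over the observed range
loglam = np.linspace(np.log(4000.0), np.log(5000.0), 2048)
dv = C_KMS * (loglam[1] - loglam[0])  # km/s per pixel

def rv_from_xcorr(spec, template):
    """Velocity shift of spec relative to template from the peak of the
    cross-correlation function (pixel precision only)."""
    s = spec - spec.mean()
    t = template - template.mean()
    cc = np.correlate(s, t, mode="full")
    shift = np.argmax(cc) - (len(t) - 1)  # lag of the correlation peak
    return shift * dv
```

A shift of the spectrum by a few pixels relative to the template moves the correlation peak by the same lag, which converts directly into a velocity.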
\section{Discussion}
\begin{figure}[t]
\vspace{6.5cm}
\special{psfile=m15sdB_tg.ps hscale=33 vscale=33 hoffset=-7 voffset=190
angle=270}
\caption[]{The physical parameters of the sdBs in M15 compared to
the results of Papers I and III and to theoretical expectations.
The solid lines are the Zero Age HB and the Terminal Age (core
helium exhaustion) HB for [Fe/H]~=~$-$2.26 of Dorman
et al. (\cite{dorm93}). The short dashed line gives the position of the
helium main sequence (Paczynski 1971). The long dashed-short dashed lines
give post-EHB evolutionary tracks by Dorman et al. (\cite{dorm93}), labeled
with
the total mass of the EHB star.}
\end{figure}
The spectroscopic analyses of two faint blue stars in the globular cluster
M\,15 show that both stars are bona-fide subdwarf B stars. In Fig. 4 and
5 we compare their positions in the (\Teff, log\,g)- and the
(\Teff, M$_{\rm V}$)-diagrams to those of EHB stars in NGC\,6752 (from
paper III) as well as to the predictions from
Horizontal Branch stellar models (from Dorman et al. \cite{dorm93}).
As can be seen from this comparison,
their evolutionary status is well described by models for the Extreme
Horizontal Branch (EHB). Hence M\,15 is only the second globular cluster
(after NGC\,6752) for which the existence of EHB stars has been proven
spectroscopically. While the helium abundance of F1-1 is typical
for sdB stars (i.e. subsolar),
F2-2 surprisingly turned out to be a helium rich star.
This is the first time ever that a helium rich sdB star
has been reported in a globular cluster. In the field
of the Milky Way only 5\% of the sdB stars are helium-rich. Jeffery et
al. (\cite{jehe96}) list 48 such stars, while the latest (unpublished)
version of the catalog of hot subdwarfs (Kilkenny et al., \cite{kihe88})
lists more than 1,000 hydrogen-rich sdBs.
The helium-rich sdB has an absolute visual brightness of about
\magpt{4}{7}, which places it at the very faint blue end of the EHB as seen
in the
colour-magnitude diagram of NGC 6752. F2-2 may even be hotter than
any EHB star in NGC\,6752. From its proximity to the helium main sequence in
Fig.\,4 and 5 it might be tempting to regard F2-2 as a naked helium core, i.e.
as an Extreme Horizontal Branch star which lost (almost) all of its
hydrogen-rich envelope.
Why didn't we find any helium-rich sdBs in NGC\,6752?
All the EHB stars that have
been analysed in NGC 6752 (including the three faintest ones seen in Buonanno
et al., \cite{buca86}) are helium-poor sdB stars (Paper III). This could
either mean
that there are no helium-rich subdwarfs in NGC~6752 or that they are just
below the detection limit of Buonanno et al. (\cite{buca86}). One should
certainly keep an eye on newer and deeper CMDs of globular clusters to
see whether other He-sdB candidates show up.
\begin{figure}
\vspace{6.cm}
\special{psfile=m15sdB_tv.ps hscale=33 vscale=33 hoffset=-12 voffset=180
angle=270}
\caption[]{The absolute V magnitudes and effective temperatures
as given above compared to theoretical tracks by
Dorman et al. (\cite{dorm93}, details see Fig.~4).
Also shown are the data for the stars analysed in papers I and III.}
\end{figure}
\acknowledgements
We thank Dr. S. Dreizler for making available his NLTE model grid
to us and the staff of the Calar Alto Observatory for their support
during the observations. Thanks go also to Dr. R. Saffer for valuable
discussions. SM acknowledges support from the DFG (Mo 602/5-1),
by the Alexander von Humboldt-Foundation, and
by the director of STScI, Dr. R. Williams, through a DDRF
grant.
\section{Introduction}\label{sec:intro}
\vspace*{-0.5pt}
\noindent
The chiral Potts model is a generalization of the $N$-state Potts model
allowing for handedness of the pair interactions. While known under
several other names, it has received much interest in the past two
decades. It may have appeared first in a short paper by Wu and
Wang\cite{WuWang} in the context of duality transformations.
However, active studies of chiral Potts models did not start until a
few years after that, when \"Ostlund\cite{Os} and Huse\cite{Hu}
introduced the more special chiral clock model as a model for
commensurate--incommensurate phase transitions. Much has been published
since and in this talk we can only highlight some of the developments
and discuss a few of those related to the integrable manifold in more
detail.
\subsection{Domain wall theory of incommensurate states in adsorbed
layers}
\noindent
Following the original papers of \"Ostlund\cite{Os} and Huse\cite{Hu}
there was immediately much interest in their model, because it can be
used to describe wetting phenomena in commensurate phases, transitions
to incommensurate states and it provides through the domain wall theory
a model for adsorbed monolayers.\cite{HSF}$^-$\cite{AP-O}
\subsection{Chiral NN interactions to describe further neighbor
interactions}
\noindent
One may object that next-nearest and further-neighbor interactions are
to be seen as the physical cause of incommensurate states rather than
chiral interactions.\break However, one can show that by a block-spin
transformation such longer-range models can be mapped to a
nearest-neighbor interaction model, albeit in general with chiral
interactions\cite{Barber}$^-$\cite{AP-F}. From the viewpoint
of integrability, for example, the nearest-neighbor picture is
preferred.
\subsection{New solutions of quantum Lax pairs, or star--triangle
equations}
\noindent
The integrable chiral Potts model\cite{AMPTY}$^-$\cite{AP-tani}
provides new solutions of the Yang--Baxter equation, which could help
understanding chiral field directions and correlations in lattice and
conformal field theories. In fact, the belief that to each conformal
field theory corresponds a lattice model was the hidden motivation
behind the original discovery.\cite{AMPTY}
\subsection{Higher-genus spectral parameters (Fermat curves)}
\noindent
The integrable chiral Potts models are different from all other
solvable models based on the Yang--Baxter equations. The spectral
parameters (or rapidities) lie on higher-genus
curves.\cite{AMPTY}$^-$\cite{AP-tani}
\subsection{Level crossings in quantum chains (1D vs 2D)}
\noindent
The physics of incommensurate states in one-dimensional quantum chiral
Potts chains is driven by level crossings,\cite{AMP} which are
forbidden in the classical case by\break the Perron--Frobenius theorem.
The two-dimensional chiral Potts model has its\break integrable
submanifold within the commensurate phase, which ends at the
Fateev--Zamolodchikov multicritical point.\cite{FZ}
\subsection{Exact solutions for several physical quantities}
\noindent
All other Yang--Baxter solvable models discovered so far have a
uniformization that is based on elementary or elliptic functions,
with meromorphic dependences on differences (and sums) of spectral
parameters. This is instrumental in the\break evaluation of their
physical quantities. For the integrable chiral Potts model\break
several quantities have been obtained using new approaches without
using an\break explicit
uniformization.\cite{AMP,AMPT}$^-$\cite{oRB,AP-O,AP-tani}
\subsection{Multi-component versions}
\noindent
Multicomponent versions of the chiral Potts model\cite{AP-multi} may
be of interest in many fields of study, such as the structure of lipid
bilayers for which the Pearce--Scott model has been
introduced.\cite{MPS}
\subsection{Large N-limit}
\noindent
The integrable chiral Potts model allows three large-$N$ limits that
may be useful in connection with $W_{\infty}$ algebras.\cite{AP-inf}
\subsection{Cyclic representations of quantum groups at roots of 1}
\noindent
The chiral Potts model can be viewed as the first application of the
theory of cyclic representations\cite{dCK} of affine quantum groups.
\textheight=7.8truein
\setcounter{footnote}{0}
\renewcommand{\fnsymbol{footnote}}{\alph{footnote}}
\subsection{Generalizing free fermions (Onsager algebra)}
\noindent
The operators in the superintegrable subcase of the chiral Potts model
obey\break Onsager's loop group algebra, making the model integrable
for the same two reasons as the $N=2$ Ising or free-fermion model.
Howes, Kadanoff, and den Nijs\cite{HKdN} first noted special features of
series expansions for a special case of the $N=3$ quantum chain. This
was generalized by von Gehlen and Rittenberg\cite{vGR} to arbitrary $N$
using the Dolan--Grady criterion, which was later shown to be
equivalent to Onsager's loop algebra relations.\cite{PD,Dav}
\subsection{Solutions of tetrahedron equations}
\noindent
Bazhanov and Baxter\cite{BB}$^-$\cite{SMS} have shown that the
sl($n$) generalization\cite{DJMM}$^-$\cite{KM} of the integrable
chiral Potts model can be viewed as an $n$-layered $N$-state
generalization of the three-dimensional Zamolodchikov model.
\subsection{Cyclic hypergeometric functions}
\noindent
Related is a new theory of basic hypergeometric functions at roots
of unity, discussed in some detail at the end of this talk.
\subsection{New models with few parameters}
\noindent
In the integrable submanifold with positive Boltzmann weights several
ratios of the parameters are nearly constant, suggesting the study of
new two-parameter $N$-state models.\cite{AP-O,AP-C}
\section{The Chiral Potts Model}
\noindent
The most general chiral Potts model is defined on a graph or lattice,
see Fig.~\ref{fig2}, where the interaction energies
\begin{equation}
{\cal E}(n)={\cal E}(n+N)=\sum_{j=1}^{N-1}E_j\,\omega^{jn},
\qquad \omega\equiv e^{2\pi i/N},
\label{en-cp}
\end{equation}
depend on the differences $n=a-b$ mod$\,N$ of the spin variables
$a$ and $b$ on two neighboring sites. We can write
\begin{figure}[htbp]
\hbox{\hspace*{.17in}\vbox{\vspace*{.15in}%
\psfig{figure=fig1.eps,height=1.1in}%
\vspace*{.35in}}\hspace*{.35in}%
\psfig{figure=fig2.eps,height=1.7in}}
\vspace*{13pt}
\fcaption{Interaction energies and Boltzmann weights of horizontal and
vertical couplings in the chiral Potts model. The subtraction of the
state variables $a,b,c=1,2,\ldots,N$ at the lattice sites is done
modulo $N$. The dashed lines with open arrows denote oriented lines on
the medial lattice where the rapidity variables $p,q,\ldots,$ of the
integrable submanifold live.\hfill
\label{fig2}}
\end{figure}
\begin{equation}
\betaE{E^{\vphantom{\displaystyle\ast}}_j}=
\betaE{E^{\displaystyle\ast}_{N-j}}=-K_j\,\omega^{\Delta_j},
\quad j=1,\ldots,\lfloor{\textstyle{1\over2}} N\rfloor,
\label{en-cp2}
\end{equation}
where $K_j$ and $\Delta_j$ constitute $N-1$ independent variables.
Then, for $N$ odd we have a sum of ``clock model" terms
\begin{equation}
-\,\betaE{{\cal E}(n)}=
\sum_{j=1}^{{\scriptstyle\frac12}(N-1)}2K_j
\cos\Big[{2\pi\over N}(jn+\Delta_j)\Big],
\label{en-odd}
\end{equation}
whereas, for $N$ even we have an additional Ising term, {\it i.e.}
\begin{equation}
-\,\betaE{{\cal E}(n)}=\sum_{j=1}^{{\scriptstyle\frac12} N-1}2K_j
\cos\Big[{2\pi\over N}(jn+\Delta_j)\Big]+K_{{\scriptstyle\frac12} N}(-1)^n.
\label{en-even}
\end{equation}
The Boltzmann weight corresponding to the edge is given by
\begin{equation}
W(n)=e^{-{\cal E}(n)/k_{\rm B}T}_{\vphantom{X}}.
\label{en-bw}
\end{equation}
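The conjugation condition in eq.~(\ref{en-cp2}) guarantees that the interaction energies are real and reproduce the clock-model form; a quick numerical check for $N=3$ (with arbitrary sample values of $K_{1}$ and $\Delta_{1}$, which are illustrative only):

```python
import numpy as np

N = 3
K1, D1 = 0.7, 0.2                  # sample coupling and chirality parameter
w = np.exp(2j * np.pi / N)         # omega = exp(2 pi i / N)
# beta E_j = -K_j omega^{Delta_j}, with E*_{N-j} the complex conjugate
betaE = {1: -K1 * w**D1, 2: np.conj(-K1 * w**D1)}
for n in range(N):
    fourier = -sum(betaE[j] * w**(j * n) for j in (1, 2))  # -beta E(n)
    clock = 2 * K1 * np.cos(2 * np.pi * (n + D1) / N)      # clock form
    assert abs(fourier - clock) < 1e-12
```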
\section{The Integrable Chiral Potts Model}
\noindent
In the integrable chiral Potts model, we have besides ``spins"
$a,b,\ldots$ defined mod$\,N$, ``rapidity lines" $p,q,\ldots$
all pointing in one halfplane. The weights satisfy the star--triangle
equation, see also Fig.~\ref{fig3},
\begin{figure}[htbp]
\hbox{\hspace*{.15in}\psfig{figure=fig3.eps,height=1.9in}}
\vspace*{13pt}
\fcaption{Star--Triangle Equation.
\label{fig3}}
\end{figure}
\begin{equation}
\sum^{N}_{d=1}\lbar{W}_{qr}(b-d)W_{pr}(a-d)\lbar{W}_{pq}(d-c)
=R_{pqr}W_{pq}(a-b)\lbar{W}_{pr}(b-c)W_{qr}(a-c).
\label{STE}
\end{equation}
In full generality there are six sets of weights to be found and a
constant $R$. But the solution is in terms of two functions depending
on spin differences and pairs of rapidity variables:
\begin{eqnarray}
W_{pq}(n)&=&W_{pq}(0)\prod^{n}_{j=1}\biggl({\mu_p\over\mu_q}\cdot
{y_q-x_p\omega^j\over y_p-x_q\omega^j}\biggr),\nonumber\\
\lbar{W}_{pq}(n)&=&\lbar{W}_{pq}(0)\prod^{n}_{j=1}\biggl(\mu_p\mu_q\cdot
{\omega x_p-x_q\omega^j\over y_q-y_p\omega^j}\biggr),
\label{weights}
\end{eqnarray}
with $R$ depending on three rapidity variables.
Periodicity modulo $N$ gives for all rapidity pairs $p$ and $q$
\begin{equation}
\biggl({\mu_p\over\mu_q}\biggr)^N={y_p^N-x_q^N\over y_q^N-x_p^N},
\qquad(\mu_p\mu_q)^N={y_q^N-y_p^N\over x_p^N-x_q^N}.
\label{periodic}
\end{equation}
Hence, we can define (rescale) $k,k'$ such that
\begin{equation}
\mu_p^N={k'\over1-k\,x_p^N}={1-k\,y_p^N\over k'},\quad
x_p^N+y_p^N=k(1+x_p^Ny_p^N),\quad k^2+k'^2=1,
\label{curve}
\end{equation}
and similarly with $p$ replaced by $q$. The rapidities live on
a curve of genus $g>1$ that is of Fermat type.
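In terms of the $N$-th powers $X = x^{N}$, $Y = y^{N}$, $M = \mu^{N}$, the curve relations in eq.~(\ref{curve}) imply the periodicity conditions eq.~(\ref{periodic}) identically; a small numerical check, with arbitrary sample values of $k$ and $X$:

```python
k, kp = 0.6, 0.8                         # satisfy k^2 + k'^2 = 1

def curve_point(X):
    """Given X = x^N, return (X, Y, M) = (x^N, y^N, mu^N) on the curve."""
    Y = (k - X) / (1.0 - k * X)          # from x^N + y^N = k(1 + x^N y^N)
    M = kp / (1.0 - k * X)               # mu^N = k'/(1 - k x^N)
    assert abs(M - (1.0 - k * Y) / kp) < 1e-12  # both forms of mu^N agree
    return X, Y, M

Xp, Yp, Mp = curve_point(0.3)
Xq, Yq, Mq = curve_point(-0.2)
# the two periodicity conditions
assert abs(Mp / Mq - (Yp - Xq) / (Yq - Xp)) < 1e-12
assert abs(Mp * Mq - (Yq - Yp) / (Xp - Xq)) < 1e-12
```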
\section{Physical Cases}
\noindent
There are two conditions for physical cases:
\renewcommand{\labelenumi}{\Roman{enumi}.}
\begin{enumerate}
\item{Planar Model with Real Positive Boltzmann Weights,}
\item{Hermitian Quantum Spin Chain.}
\end{enumerate}
Usually, as in the six-vertex model in electric fields where the
quantum chain would have either imaginary or real Dzyaloshinsky--Moriya
interactions, one cannot require both simultaneously. Only for the
nonchiral (reflection-positive) subcase of the Fateev--Zamolodchikov
model\cite{FZ} are both physical conditions simultaneously fulfilled
for $N>2$.
We note that the Hermitian quantum spin chain submanifold contains
the\break superintegrable case, where both the star--triangle (or
Yang--Baxter) equation and the Onsager algebra are satisfied.
\section{Generalization of Free Fermion Model}
\noindent
Combining four chiral Potts model weights in a square, as in
Fig.~\ref{fig4},
\begin{figure}[htbp]
\hbox{\hspace*{0.92in}\psfig{figure=fig4.eps,height=1.5in}}
\vspace*{13pt}
\fcaption{Vertex model satisfying the Yang--Baxter Equation.
\label{fig4}}
\end{figure}
we obtain the $\mathsf{R}$-matrix of the $N$-state
generalization\cite{BPA} of the checkerboard Ising model
\begin{equation}
R_{\alpha\beta|\lambda\mu}=
\lbar{W}_{p_1q_1}(\alpha-\lambda)\lbar{W}_{p_2q_2}(\mu-\beta)
W_{p_2q_1}(\alpha-\mu)W_{p_1q_2}(\lambda-\beta).
\label{ff-a}
\end{equation}
Applying a Fourier transform gauge transformation we obtain\cite{BPA}
from this
\begin{equation}
{\hat R}_{\alpha\beta|\lambda\mu}=
{s_\beta\,t_\lambda\over s_\alpha\,t_\mu}\,{1\over N^2}\,
\sum^{N}_{\alpha'=1}\sum^{N}_{\beta'=1}
\sum^{N}_{\lambda'=1}\sum^{N}_{\mu'=1}
\omega^{-\alpha\alpha'+\beta\beta'+\lambda\lambda'-\mu\mu'}
R_{\alpha'\beta'|\lambda'\mu'},
\label{ff-b}
\end{equation}
which is the $\mathsf{R}$-matrix of an $N$-state generalization of the
free-fermion eight-vertex model, recovered for $N=2$. Indeed,
${\hat R}_{\alpha\beta|\lambda\mu}$ is nonzero only if
$\lambda+\beta=\alpha+\mu$ mod$\,N$, which generalizes the eight-vertex
condition.
In eq.~(\ref{ff-b}) $s_{\alpha}$ and $t_{\lambda}$ are free parameters,
corresponding to gauge freedom, which may be edge dependent.
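The generalized eight-vertex condition stated above follows solely from the fact that (\ref{ff-a}) depends only on spin differences, so it can be illustrated with arbitrary periodic stand-ins for the chiral Potts weights. The following Python sketch (the random weights and all names are ours; the gauge factors $s$, $t$ are set to $1$) checks that $\hat{R}$ vanishes on the forbidden entries:

```python
import numpy as np

N = 5
rng = np.random.default_rng(0)
# four arbitrary periodic spin-difference functions standing in for
# the weights Wbar_{p1q1}, Wbar_{p2q2}, W_{p2q1}, W_{p1q2}
Wb1, Wb2, W1, W2 = [rng.normal(size=N) + 1j * rng.normal(size=N)
                    for _ in range(4)]
om = np.exp(2j * np.pi / N)
n = np.arange(N)

# eq. (ff-a): R_{alpha beta | lambda mu}, indices [alpha, beta, lam, mu]
a, b, l, m = np.ix_(n, n, n, n)
R = (Wb1[(a - l) % N] * Wb2[(m - b) % N]
     * W1[(a - m) % N] * W2[(l - b) % N])

# eq. (ff-b) with the gauge factors s, t set to 1
Pm = om ** (-np.outer(n, n))     # omega^{-x x'}
Pp = om ** (+np.outer(n, n))     # omega^{+x x'}
Rhat = np.einsum('ia,jb,kc,ld,abcd->ijkl', Pm, Pp, Pp, Pm, R) / N**2

# generalized eight-vertex condition:
# Rhat is nonzero only if lambda + beta = alpha + mu (mod N)
forbidden = (l + b - a - m) % N != 0
assert np.abs(Rhat[forbidden]).max() < 1e-8 * np.abs(Rhat).max()
```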
\section{Transfer Matrices}
\noindent
\begin{figure}[htbp]
\hbox{\hspace*{0.39in}\psfig{figure=fig5.eps,height=1.3in}}
\vspace*{13pt}
\fcaption{Diagonal-to-diagonal transfer matrices
$\smathsf{T}_q$ and $\hat{\smathsf{T}}_r$.
\label{fig5}}
\end{figure}
If the weight functions $W$ and $\lbar{W}$ are solutions of the star--triangle
equation (\ref{STE}) and parametrized as in (\ref{weights}) with
variables on rapidity lines lying on the algebraic curve (\ref{curve}), all
diagonal transfer matrices commute. This is depicted in Fig.~\ref{fig5},
where the vertical rapidity variables have been chosen alternatingly;
more generally they could all be chosen independently.
\begin{figure}[htbp]
\hbox{\hspace*{0.17in}\psfig{figure=fig6.eps,height=2.2in}%
\hspace*{.6in}%
\psfig{figure=fig7.eps,height=2.2in}}
\vspace*{13pt}
\fcaption{Various $\smathsf{R}$-matrices related to the chiral Potts
model. Vertex weight $\smathsf{S}$ and IRF weight $\smathsf{U}$ are
built from two $W$'s and two $\lbar{W}$'s and have two horizontal and two
vertical rapidity lines. By the Wu--Kadanoff--Wegner map, IRF weight
$\smathsf{U}$ is also a vertex weight. The two zigzag lines in
$\smathsf{V}$ and $\tilde{\smathsf{V}}$ represent Fourier and inverse
Fourier transform, respectively.
\label{fig7}}
\end{figure}
We note that there are several ways to introduce
$\mathsf{R}$-matrices, as is indicated in Fig.~\ref{fig7}. First, we
have the original weights $\mathsf{W}$ and $\lbar{\mathsf{W}}$. Next,
we have the vertex weight (square) $\mathsf{S}$ and the IRF weight
(star) $\mathsf{U}$, both consisting of two $W$ and two $\lbar{W}$ weights.
Finally, we have the three-spin interactions $\mathsf{V}$ and
$\tilde{\mathsf{V}}$, its transposed or rotated version, which play a
special role in the theory.\cite{AMPTY,AP-tani} Note that $\mathsf{S}$
and $\mathsf{U}$ can be constructed from a $\mathsf{V}$ and a
$\tilde{\mathsf{V}}$, for example as
$\mathsf{U}=\mathsf{V}\cdot\tilde{\mathsf{V}}$.
\section{The Construction of Bazhanov and Stroganov}
\noindent
Bazhanov and Stroganov\cite{BS} have shown that the
$\mathsf{R}$-matrix $\mathsf{S}$ of a square in the checkerboard chiral
Potts model,\cite{BPA} see Figs.~\ref{fig4} and \ref{fig7}, is the
intertwiner of two cyclic representations. Their procedure is sketched
in Fig.~\ref{fig8}.
They start from the six-vertex model $\mathsf{R}$-matrix at an $N$th
root of 1, with $N$ odd. The corresponding Yang--Baxter equation is
well-known with spin ${\textstyle{1\over2}}$ highest-weight representations on all
legs. They then look for an $\mathsf{R}$-matrix $\mathsf{L}$
intertwining a highest-weight and a cyclic representation. This is
solved from a new Yang--Baxter equation that is quadratic in
$\mathsf{L}$.\footnote{Korepanov had earlier obtained the first
part of this construction, but his work has not yet been
published.\cite{Korv}}\ \ \ Next, the Yang--Baxter equation with one
highest-weight and two cyclic rapidity lines is a linear equation for
the intertwiner
$\mathsf{S}$ of two cyclic representations. This intertwiner
$\mathsf{S}$ satisfies a Yang--Baxter equation with only cyclic
representations on the legs. The original result\cite{BPA} for
$\mathsf{S}$ is obtained in a suitable gauge with a proper choice of
parameters.
The above illustrates the group theoretical significance of the chiral
Potts model. It gives a standard example of the affine quantum group
U$_q\widehat{\rm sl(2)}$ at root of unity in the minimal cyclic
representation.\cite{dCK} Intertwiners of certain bigger irreducible
cyclic representations are given by chiral Potts model partition
functions, as was found by Tarasov.\cite{Tara}
\begin{figure}[htbp]
\hbox{\hspace*{.3in}\psfig{figure=fig8.eps,height=3.5in}}
\vspace*{13pt}
\fcaption{The construction of Bazhanov and Stroganov. Single lines
correspond to spin ${\scriptstyle\frac12}$ highest-weight representations; double
lines correspond to minimal cyclic representations.
\label{fig8}}
\end{figure}
Smaller representations ({\it i.e.}\ semicyclic\cite{GRS}$^-$\cite{Schn}
and highest-weight) follow by the reduction of products of cyclic
representations in the construction of Baxter {\it et al}.\cite{BBP}
Elsewhere we shall present more details on how this relates several
root-of-unity models. Relating $p'$ in Fig.~\ref{fig5} by a special
automorphism with $p$, the product of transfer matrices splits
as\cite{BBP}
\begin{equation}
{\mathsf{T}}_q\,\hat{\mathsf{T}}_r=
B^{(j)}_{pp'q}\,{\mathsf{X}}^{-k}\,\tau^{(j)}(t_q)+
B^{(N-j)}_{p'pq}{\mathsf{X}}^l\,\tau^{(N-j)}(t_r).
\label{TT-split}
\end{equation}
Here the transfer matrix $\tau^{(j)}$ is made up of
$\mathsf{L}$-operators intertwining a cyclic and a spin
$s=\textstyle{j-1\over2}$ representation, the $B$'s are scalars,
$t_q\equiv x_qy_q$, and powers of the spin shift operator $\mathsf{X}$
come in depending on the automorphism.
Applying the same procedure with $r=q'$ in the other direction, one obtains
several fusion relations for the $\tau^{(j)}$ transfer
matrices\cite{BBP}
\begin{eqnarray}
\tau^{(j)}(t_q)\,\tau^{(2)}(\omega^{j-1}t_q)&=&
z(\omega^{j-1}t_q)\,\tau^{(j-1)}(t_q)+\tau^{(j+1)}(t_q),\nonumber\\
\tau^{(j)}(\omega t_q)\,\tau^{(2)}(t_q)&=&
z(\omega t_q)\,\tau^{(j-1)}(\omega^2t_q)+\tau^{(j+1)}(t_q),
\label{tau-split}
\end{eqnarray}
where $z(t)$ is a known scalar function.
\section{Selected Exact Results}
\noindent
There are several exact results for the free energy of the integrable
chiral Potts model and we quote here a recent result of Baxter in the
scaling regime,\cite{B-scal}
\begin{eqnarray}
f-f_{\rm FZ}&\equiv&f-f_{\rm c}=
-{(N-1)k^2\over2N\pi}(u_q-u_p)\cos(u_p+u_q)\nonumber\\
&+&{k^2\over4\pi^2}\sin(u_q-u_p)
\sum_{j=1}^{<N/2}{\tan(\pi j/N)\over j}
B\bigg(1+{j\over N},{1\over2}\bigg)^2
\bigg({k\over2}\bigg)^{4j/N}\nonumber\\
&&+{\rm O}(k^4\log k),
\quad\hbox{if}\quad k^2\sim T_{\rm c}-T\to0,\quad\alpha=1-{2\over N}.
\label{scaling}
\end{eqnarray}
For the order parameters we have a general conjecture\cite{AMPT} in
the ordered state,
\begin{equation}
\langle\sigma_0^n\rangle = {(1-{k'}^2)}^{\beta_n},\quad
\beta_n={n(N-n)\over 2N^2},\quad
(1\le n\le N-1,\quad \sigma_0^N=1),
\label{eq:orderpar}
\end{equation}
which still remains to be proved.
Using Baxter's results\cite{Bax} we have a very
explicit formula\cite{AP-O} for the interfacial tensions,
\begin{equation}
{\epsilon_r\over k_{\rm B}T}=
{8\over\pi}\,\int_0^{\eta}dy\,{\sin(\pi r/N)\over
1+2y\cos(\pi r/ N)+y^2}\,
\hbox{artanh}\sqrt{\eta^N-y^N\over 1-\eta^N y^N},
\label{intf-T}
\end{equation}
in the fully symmetric, $W\equiv\lbar{W}$, integrable chiral Potts model,
{\it i.e.}\ diagonal chiral fields. Here
$\eta\equiv[(1-k')/(1+k')]^{1/N}$ is a temperature-like variable and
$r$ is the difference of the spin variables across the interface,
($r=1,\cdots,N-1$).
In the low-temperature region we have $k'\to0$ and
we can expand (\ref{intf-T}) as\cite{AP-O}
\begin{equation}
{\epsilon_r\over k_{\rm B}T}=
-{2r\over N}\,\log{k'\over 2}-
\log\biggl({1\over\cos^2 r\bar\lambda}
\prod_{j=1}^{[r/2]}\,{\cos^4 (r-2j+1)\bar\lambda
\over\cos^4 (r-2j)\bar\lambda}\biggr)
+{\rm O}(k'),
\label{intf-low}
\end{equation}
where $\bar\lambda\equiv \pi/(2N)$ and the constant term comes
from a dilogarithm integral. For the critical region,
$\eta\approx (k/2)^{2/N}\sim (T_{\rm c}-T)^{1/N}\to 0$,
(\ref{intf-T}) reduces to\cite{AP-O}
\begin{eqnarray}
{\epsilon_r\over k_{\rm B}T} & = &
{8\sin({\pi r/N})B({1/N},1/2)\over\pi(N+2)}\,\eta^{1+N/2} \nonumber \\
& - &{8\sin({2\pi r/N})B({2/N},1/2)\over\pi(N+4)}\,\eta^{2+N/2}+
{\rm O}(\eta^{3+N/2}) \nonumber \\
& \approx & \eta^{N\mu} D_r(\eta)
=\eta^{N\mu} D_r(\Delta/\eta^{N\phi}),
\label{intf-c}
\end{eqnarray}
where we have assumed the existence of scaling function $D_r$ depending
on $\eta$ and $\Delta\sim (T_{\rm c}-T)^{1/2}$, the chiral field
strength on the integrable line. From this, we have the critical
exponents
\begin{equation}
\mu={1\over 2}+{1\over N}=\nu, \quad \phi={1\over 2}-{1\over N}.
\label{intf-ex}
\end{equation}
We note that the above dilogarithm identity involves dilogarithms
at $2N$-th roots of unity and was first discovered numerically.
Apart from direct proofs for small $N=2,3,4$, only an indirect
proof$\,$\cite{AP-O} by the Bethe Ansatz exists.
\section{Basic Hypergeometric Series at Root of Unity}
\noindent
The basic hypergeometric series is
defined\cite{Bailey,Slater} as
\begin{equation}
\hypg{\alpha_1,\cdots,\alpha_{p+1}}
{\phantom{\alpha_1}\beta_1,\cdots,\beta_p}{z}=
\sum_{l=0}^{\infty}{{(\alpha_1;q)_l\cdots(\alpha_{p+1};q)_l}
\over{(\beta_1;q)_l\cdots(\beta_{p};q)_l(q;q)_l}}\,z^{l},
\label{eq:hypg}\end{equation}
where
\begin{equation}
(x;q)_l\equiv\cases{1,\quad l=0,\cr
(1-x)(1-xq)\cdots(1-xq^{l-1}),\quad l>0,\cr
1/[(1-xq^{-1})(1-xq^{-2})\cdots(1-xq^{l})],\quad l<0.}
\label{eq:qp}\end{equation}
Setting $\alpha_{p+1}=q^{1-N}$ and $q\to\omega\equiv e^{2\pi i/N}$,
we get
\begin{equation}
\hypg{\omega,\alpha_1,\cdots,\alpha_{p}}
{\phantom{\omega,}\beta_1,\cdots,\beta_p}{z}=
\sum_{l=0}^{N-1}
{{(\alpha_1;\omega)_l\cdots(\alpha_{p};\omega)_l}\over
{(\beta_1;\omega)_l\cdots(\beta_{p};\omega)_l}}\,z^{l}.
\label{eq:hypc}
\end{equation}
We note
\begin{equation}
(x;\omega)_{l+N}=(1-x^N)(x;\omega)_{l},\qquad\mbox{and}\quad
(\omega;\omega)_l=0,\quad l\ge N.
\label{eq:wp}
\end{equation}
So if we also require
\begin{equation}
z^N=\prod_{j=1}^p\gamma_j^N,\qquad
{\gamma_j}^N={1-\beta_j^N\over1-\alpha_j^N},
\label{eq:defg}\end{equation}
we obtain a ``cyclic basic hypergeometric function'' with summand
periodic mod $N$.
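As a quick numerical illustration (ours, not part of the original text), one can verify both (\ref{eq:wp}) and the mod-$N$ periodicity of the summand when (\ref{eq:defg}) holds; the helper \texttt{poch} below implements $(x;\omega)_l$ for $l\ge0$ only, and one fixed branch of each $\gamma_j$ is chosen:

```python
import numpy as np

N = 7
om = np.exp(2j * np.pi / N)

def poch(x, l):
    """(x; omega)_l of eq. (eq:qp), for l >= 0."""
    return np.prod([1 - x * om**j for j in range(l)]) if l > 0 else 1.0

rng = np.random.default_rng(1)
x = rng.normal() + 1j * rng.normal()

# eq. (eq:wp): (x;w)_{l+N} = (1 - x^N)(x;w)_l and (w;w)_{l>=N} = 0
for l in range(4):
    assert np.isclose(poch(x, l + N), (1 - x**N) * poch(x, l))
assert abs(poch(om, N)) < 1e-12

# with z^N = prod gamma_j^N, gamma_j^N = (1-beta_j^N)/(1-alpha_j^N),
# the summand of the cyclic series is periodic mod N
p = 2
alpha = rng.normal(size=p) + 1j * rng.normal(size=p)
beta = rng.normal(size=p) + 1j * rng.normal(size=p)
z = np.prod([((1 - b**N) / (1 - a**N)) ** (1 / N)
             for a, b in zip(alpha, beta)])    # one branch of (eq:defg)

def summand(l):
    r = z**l
    for a, b in zip(alpha, beta):
        r *= poch(a, l) / poch(b, l)
    return r

for l in range(3):
    assert np.isclose(summand(l + N), summand(l))
```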
For us the Saalsch\"utz case, defined by
\begin{equation}
z=q={\beta_1\cdots\beta_p\over\alpha_1\cdots\alpha_{p+1}}
\qquad\mbox{or}\quad
\omega^2\alpha_1\alpha_2\cdots\alpha_{p}=\beta_1\beta_2\cdots\beta_p,
\quad z=\omega,
\label{eq:saalc}
\end{equation}
is important, but details on this will be given elsewhere.
The theory of cyclic hypergeometric series is intimately related with
the theory of the integrable chiral Potts model and several identities
appear hidden in the literature. We note that our notations here, which
match up nicely with the classical definitions of basic hypergeometric
functions,\cite{Bailey,Slater} differ from those of Bazhanov\break {\it
et al}.,\cite{BB}$^-$\cite{SMS} who use an upside-down version of
the $q$-Pochhammer symbol $(x;q)_l$. Hence, our definition of the
cyclic hypergeometric series differs from that of Sergeev {\it et
al}.,\cite{SMS} who also use homogeneous rather than more compact affine
variables. So comparing our results with theirs is a little
cumbersome.
\pagebreak
\subsection{Integrable chiral Potts model weights}
\noindent
The weights of the integrable chiral Potts model can be written in
product form
\begin{equation}
{W(n)\over W(0)}=\gamma^{n}\,{(\alpha;\omega)_n\over(\beta;\omega)_n},
\qquad\gamma^N={1-\beta^N\over1-\alpha^N}.
\label{eq:w}
\end{equation}
This is periodic with period $N$ as follows from (\ref{eq:wp}).
The dual weights are given by Fourier transform,\cite{WuWang} {\it i.e.}
\begin{equation}
W^{({\rm f})}(k)=\sum_{n=0}^{N-1}\omega^{nk}\,W(n)=
\hypp{1}{\alpha}{\beta}{\gamma\,\omega^k}\,W(0).
\label{eq:wf}
\end{equation}
Using the recursion formula
\begin{equation}
W(n)\,(1-\beta\,\omega^{n-1})=W(n-1)\,\gamma\,(1-\alpha\,\omega^{n-1})
\label{eq:ww}\end{equation}
and its Fourier transform, we find
\begin{equation}
{W^{({\rm f})}(k)\over W^{({\rm f})}(0)}=
{\displaystyle{\hypp{1}{\alpha}{\beta}{\gamma\,\omega^k}}\over
\displaystyle{\hypp{1}{\alpha}{\beta}{\gamma}}}
=\left({\omega\over\beta}\right)^k
{(\gamma;\omega)_k\over(\omega\alpha\gamma/\beta;\omega)_k}.
\label{eq:rec1}\end{equation}
This relation is equivalent to the one originally found in
1987.\cite{BPA,AP-tani} It shows that dual weights also satisfy
(\ref{eq:w}).
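Relation (\ref{eq:rec1}) can also be confirmed numerically; in the following Python sketch (the helper names and the branch choice for $\gamma$ are ours) the cyclic series is summed directly:

```python
import numpy as np

N = 5
om = np.exp(2j * np.pi / N)

def poch(x, l):
    return np.prod([1 - x * om**j for j in range(l)]) if l > 0 else 1.0

def Phi1(a, b, g):
    """Cyclic series of (eq:hypc): sum_{l=0}^{N-1} (a;w)_l/(b;w)_l g^l."""
    return sum(poch(a, l) / poch(b, l) * g**l for l in range(N))

rng = np.random.default_rng(2)
alpha = rng.normal() + 1j * rng.normal()
beta = rng.normal() + 1j * rng.normal()
gam = ((1 - beta**N) / (1 - alpha**N)) ** (1 / N)  # a branch of (eq:defg)

# eq. (eq:rec1): ratio of Fourier-transformed weights
for k in range(1, N):
    lhs = Phi1(alpha, beta, gam * om**k) / Phi1(alpha, beta, gam)
    rhs = ((om / beta)**k * poch(gam, k)
           / poch(om * alpha * gam / beta, k))
    assert np.isclose(lhs, rhs)
```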
With one more Fourier transform we get
\begin{equation}
{N\over\displaystyle{\hypp{1}{\alpha}{\beta}{\gamma}}}=
{\hypp{1}{\phantom{q/b}\gamma}{\omega\alpha\gamma/\beta}
{{\omega\over\beta}}}
={\hypp{1}{\beta/\alpha\gamma}{\omega/\gamma}{\alpha}}.
\label{eq:wff}
\end{equation}
Also, we can show that
\begin{equation}
{\displaystyle{\hypp{1}{\alpha\,\omega^m}{\beta\,\omega^n}
{\gamma\,\omega^k}}\over
\displaystyle{\hypp{1}{\alpha}{\beta}{\gamma}}}=
{(\omega/\beta)^{k}(\beta;\omega)_n(\gamma;\omega)_k
(\omega\alpha/\beta;\omega)_{m-n}
\over{(\gamma\,\omega^k)}^{n}(\alpha;\omega)_m
(\omega\alpha\gamma/\beta;\omega)_{m-n+k}}.
\label{eq:rec4}\end{equation}
This equation has been proved by Kashaev {\it et al}.\cite{KMS} in
other notation and is valid for all values of the arguments,
provided condition (\ref{eq:w}) holds.
\subsection{Baxter's summation formula}
\noindent
{}From Baxter's work\cite{B-399} we can infer the identity
\begin{equation}\begin{array}{ll}
\displaystyle{\hypp{1}{\alpha}{\beta}{\gamma}}&=
\displaystyle{\Phi_0\,\sqrt{N}\,(\omega/\beta)^{{1\over2}(N-1)}}\cr
&\displaystyle{\times\,
\prod_{j=1}^{N-1}\left[{(1-\omega^{j+1}\alpha/\beta)(1-\omega^j\gamma)
\over(1-\omega^j\alpha)(1-\omega^{j+1}/\beta)
(1-\omega^{j+1}\alpha\gamma/\beta)}\right]^{j/N}\!\!,}\cr
\end{array}\label{eq:hyp1}
\end{equation}
valid up to an $N$-th root of unity, while
\begin{equation}
\Phi_0\equiv e^{i\pi(N-1)(N-2)/12N},\qquad
\gamma^N={1-\beta^N\over1-\alpha^N}.
\label{eq:phi0}
\end{equation}
Introducing a function
\begin{equation}
p(\alpha)=\prod_{j=1}^{N-1}(1-\omega^{j}\alpha)^{j/N},\qquad p(0)=1,
\label{eq:defp}\end{equation}
we can rewrite the identity as
\begin{equation}
\hypp{1}{\alpha}{\beta}{\gamma}=\omega^d N^{1\over2}\Phi_0
\left({\omega\over\beta}\right)^{{1\over2}(N-1)}
{p(\omega \alpha/\beta)p(\gamma)
\over p(\alpha)p(\omega/\beta)
p(\omega\alpha\gamma/\beta)},
\label{eq:hyp1d}
\end{equation}
with $\omega^d$ determined by the choice of the branches. The LHS of
(\ref{eq:hyp1d}) is single valued in $\alpha$, $\beta$, and $\gamma$,
whereas the RHS has branch cuts. It is possible to give a precise
prescription for $d$, assuming that $p(\alpha)$ has branch cuts along
$[\omega^j,\infty)$ for $j=1,\ldots,N-1$, following straight lines
through the origin; but these details will be presented elsewhere.
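Since (\ref{eq:hyp1d}) holds up to the phase $\omega^d$, the magnitudes of both sides must agree exactly when principal branches are used for all fractional powers; this is cheap to check numerically (the sketch and branch conventions are ours):

```python
import numpy as np

N = 5
om = np.exp(2j * np.pi / N)

def poch(x, l):
    return np.prod([1 - x * om**j for j in range(l)]) if l > 0 else 1.0

def Phi1(a, b, g):
    return sum(poch(a, l) / poch(b, l) * g**l for l in range(N))

def p(x):
    """p(alpha) of (eq:defp), with principal-branch powers."""
    return np.prod([(1 - om**j * x) ** (j / N) for j in range(1, N)])

rng = np.random.default_rng(3)
al = rng.normal() + 1j * rng.normal()
be = rng.normal() + 1j * rng.normal()
ga = ((1 - be**N) / (1 - al**N)) ** (1 / N)   # a branch of (eq:defg)

lhs = Phi1(al, be, ga)
rhs = (np.sqrt(N) * (om / be) ** ((N - 1) / 2)
       * p(om * al / be) * p(ga)
       / (p(al) * p(om / be) * p(om * al * ga / be)))
# (eq:hyp1d) holds up to the phase omega^d Phi_0 (|Phi_0| = 1),
# so the absolute values must agree exactly
assert np.isclose(abs(lhs), abs(rhs))
```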
Using this relation (\ref{eq:hyp1d}) and classical identities we can get
many relations for ${}_3\Phi_2,\,{}_4\Phi_3,\,{}_5\Phi_4,\ldots$,
including the star--triangle equation and the tetrahedron equation of
the Bazhanov--Baxter model. Some of these results have been given very
recently in different notation.\cite{SMS}
\subsection{Outline of proof of Baxter's identity}
\noindent
As (\ref{eq:hyp1d}) is crucial in the theory, we briefly sketch a
proof here. Also we note that in this subsection, we choose the
normalization $W(0)=1$ for the weights given by (\ref{eq:w}), rather
than $\prod W(j)=1$.
We note that the $W^{({\rm f})}(k)$ are eigenvalues of the cyclic
$N\times N$ matrix
\begin{equation}
{\mathsf{M}}\equiv\left(\matrix{W(0)&W(1)&\ldots&W(N-1)\cr
W(-1)&W(0)&\ldots&W(N-2)\cr
\vdots&\vdots&\ddots&\vdots\cr
W(1-N)&W(2-N)&\ldots&W(0)\cr}\right),
\label{eq:matr}
\end{equation}
and
\begin{equation}
\det{\mathsf{M}}=
\prod_{j=0}^{N-1}W^{({\rm f})}(j)=[W^{({\rm f})}(0)]^N\prod_{j=1}^{N-1}
\left[{W^{({\rm f})}(j)\over W^{({\rm f})}(0)}\right].
\label{eq:det}
\end{equation}
So if we can calculate $\det\mathsf{M}$ directly, we obtain a result
for $[W^{({\rm f})}(0)]^N$, giving us the proof of the identity.
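The circulant structure behind this step is easily illustrated: for any periodic weights, the eigenvalues of (\ref{eq:matr}) are the Fourier transforms $W^{({\rm f})}(j)$ of (\ref{eq:wf}), so the determinant is their product. A small numerical check (ours, with random weights standing in for $W$):

```python
import numpy as np

N = 6
rng = np.random.default_rng(4)
W = rng.normal(size=N) + 1j * rng.normal(size=N)  # any mod-N periodic weights
om = np.exp(2j * np.pi / N)

# cyclic matrix (eq:matr): M_{k,l} = W(l - k mod N)
M = np.array([[W[(l - k) % N] for l in range(N)] for k in range(N)])

# its eigenvalues are the Fourier transforms W^f(j) of (eq:wf) ...
Wf = np.array([sum(om**(n * j) * W[n] for n in range(N)) for j in range(N)])
ev = np.linalg.eigvals(M)
for wj in Wf:
    assert np.abs(ev - wj).min() < 1e-8

# ... so det M = prod_j W^f(j), eq. (eq:det)
assert np.isclose(np.linalg.det(M), np.prod(Wf))
```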
Using
\begin{equation}
W(n)=\gamma^n\,\prod_{j=0}^{n-1}{1-\alpha\omega^j\over1-\beta\omega^j},
\quad W(N-n)=W(-n)=
\gamma^{-n}\,\prod_{j=-n}^{-1}{1-\beta\omega^j\over1-\alpha\omega^j},
\label{eq:detsub}
\end{equation}
we can rewrite
\begin{equation}
\det{\mathsf{M}}=\prod_{l=0}^{N-1}\Biggl[\prod_{j=0}^{N-2-l}
(1-\omega^{j}\beta)^{-1}
\prod_{j=-l}^{-1}(1-\omega^{j}\alpha)^{-1}\Biggr]\,
\det{\mathsf{E}}^{(0)},
\label{eq:d}\end{equation}
where the elements of ${\mathsf{E}}^{(0)}$ are polynomials, so that
$\det{\mathsf{E}}^{(0)}$ is also a polynomial. We can define more
general matrix elements ${\mathsf{E}}^{(m)}$ for $m=0,\dots,N-1$ by
\begin{equation}
E^{(m)}_{k,l}=\prod_{j=-k+1}^{l-k-1}
(1-\omega^{j}\alpha)
\prod_{j=l+m-k}^{N-k-1}(1-\omega^{j}\beta),
\label{eq:ed}
\end{equation}
which satisfy the recursion relation
\begin{equation}
E^{(m)}_{k,l}-E^{(m)}_{k,l+1}=\omega^{l-k}\,(\alpha-\omega^m\beta)\,
E^{(m+1)}_{k,l}.
\label{eq:em}
\end{equation}
Subtracting the pairs of consecutive columns of $\det{\mathsf{E}}^{(0)}$
in (\ref{eq:d}), and using (\ref{eq:em}), we can pull out some of the
zeros leaving a determinant with $N-1$ columns from ${\mathsf{E}}^{(1)}$
and the last column from ${\mathsf{E}}^{(0)}$. Repeating this process,
we arrive at
\begin{equation}
\det{\mathsf{E}}^{(0)}=
\prod_{m=0}^{N-2}(\alpha-\omega^{m}\beta)^{N-1-m}
\cdot\det{\mathsf{F}}.
\label{eq:dete2}
\end{equation}
Here the matrix $\mathsf{F}$ is defined such that its $j$th column is
the $j$th column of matrix ${\mathsf{E}}^{(N-j)}$.
{}From a simple polynomial degree count we conclude that
$\det{\mathsf{F}}$ has to be a constant. Noting that
${\mathsf{E}}^{(0)}$ is triangular in the limit $\alpha\to1$, we find
\begin{eqnarray}
\det{\mathsf{F}}=\prod_{j=1}^{N-1}(1-\omega^{j})^j=
\Phi_0^N\,N^{{1\over2}N},
\label{eq:fv}
\end{eqnarray}
in which $\Phi_0$ is given by (\ref{eq:phi0}). Hence, we can complete
the proof of the identity.
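The closed form (\ref{eq:fv}) is also easy to confirm numerically for small $N$ (the check is ours):

```python
import numpy as np

# eq. (eq:fv): prod_{j=1}^{N-1} (1 - omega^j)^j = Phi_0^N N^{N/2}
for N in range(2, 9):
    om = np.exp(2j * np.pi / N)
    lhs = np.prod([(1 - om**j) ** j for j in range(1, N)])
    Phi0 = np.exp(1j * np.pi * (N - 1) * (N - 2) / (12 * N))  # (eq:phi0)
    assert np.isclose(lhs, Phi0**N * N ** (N / 2))
```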
\subsection{Further identities}
\noindent
Several other identities can be derived using the above identity and
the classical Jackson identity.\cite{Bailey,Slater} One thus generates
the fundamental identities for the weights of the Baxter--Bazhanov model
and the sl$(n)$ chiral Potts model. More precisely, the Boltzmann
weights of a cube in the Baxter--Bazhanov model are proportional to
${}_3\Phi_2$'s, so all identities for the weights of this model are
also identities for cyclic ${}_3\Phi_2$'s.
Without any restriction on the parameters, we can derive
\begin{equation}
\hypp{2}{\alpha_1,\alpha_2}{\beta_1,\beta_2}{z}=N^{-1}
\sum_{k=0}^{N-1}\hypp{1}{\alpha_1}{\beta_1}{\omega^{-k}\gamma_1}
\hypp{1}{\alpha_2}{\beta_2}{{\omega^k \gamma_2}},
\label{eq:hyp2}\end{equation}
where $z=\gamma_1\gamma_2$ and $\gamma_i$ is defined in (\ref{eq:defg}).
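Identity (\ref{eq:hyp2}) is an exact finite Fourier (convolution) statement and can be checked by direct summation; the sketch below (helper names and branch choices are ours) does so for one random parameter set:

```python
import numpy as np

N = 5
om = np.exp(2j * np.pi / N)

def poch(x, l):
    return np.prod([1 - x * om**j for j in range(l)]) if l > 0 else 1.0

def Phi1(a, b, g):
    return sum(poch(a, l) / poch(b, l) * g**l for l in range(N))

def Phi2(a1, a2, b1, b2, z):
    return sum(poch(a1, l) * poch(a2, l) / (poch(b1, l) * poch(b2, l))
               * z**l for l in range(N))

rng = np.random.default_rng(6)
a1, a2, b1, b2 = rng.normal(size=4) + 1j * rng.normal(size=4)
g1 = ((1 - b1**N) / (1 - a1**N)) ** (1 / N)   # branches of (eq:defg)
g2 = ((1 - b2**N) / (1 - a2**N)) ** (1 / N)

# eq. (eq:hyp2): the convolution theorem with z = gamma_1 gamma_2
lhs = Phi2(a1, a2, b1, b2, g1 * g2)
rhs = sum(Phi1(a1, b1, om**(-k) * g1) * Phi1(a2, b2, om**k * g2)
          for k in range(N)) / N
assert np.isclose(lhs, rhs)
```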
We have only used the convolution theorem so far. Next, we can use
(\ref{eq:rec1}) so that we can perform the sum in (\ref{eq:hyp2}). This
way we obtain the transformation formula
\begin{equation}
\hypp{2}{\alpha_1,\alpha_2}{\beta_1,\beta_2}{z}=A\,\,\,
\hypp{2}{{z/\gamma_1},
{\hphantom{\omega}{\beta_1/\alpha_1\gamma_1}\hphantom{\omega}}}
{{\omega/\gamma_1},{\omega\alpha_2z/\beta_2\gamma_1}}
{{\omega\alpha_1\over\beta_2}},
\label{eq:h2t1}
\end{equation}
where the constant $A$ can be written in several different forms,
either with ${}_2\Phi_1$'s or with $p(x)$'s using (\ref{eq:hyp1d}).
{}From this identity one can generate the symmetry relations of
the cube in the Baxter--Bazhanov model under the 48 elements of the
symmetry group of the cube, see also the recent work of Sergeev
{\it et al}.\cite{SMS}
In addition, one can work out relations for ${}_4\Phi_3$ and higher.
One of these many\break relations, {\it i.e.}\ a Saalsch\"utzian
${}_4\Phi_3$ identity, is the star--triangle equation of the\break
integrable chiral Potts model,\cite{BPA} or its Fourier
transform\cite{AMPTY}
\begin{equation}
V_{prq}(a,b;n)\,\lbar{W}_{qr}^{({\rm f})}(n)=
R_{pqr}\,V_{pqr}(a,b;n)\,W_{qr}(a-b).
\label{eq:STEV}
\end{equation}
More detail will be presented elsewhere. We also have to refer the
reader to the recent work of Stroganov's group\cite{SMS} which uses
fairly different notations in their appendix. Their higher identities
also involve Saalsch\"utzian cyclic hypergeometric functions, albeit
in a form that is hard to recognize.
In conclusion, we may safely state that the existence of all these
cyclic\break hypergeometric identities is the mathematical reason
behind the integrable chiral Potts family of models.
\nonumsection{Acknowledgements}
\noindent
We thank Professor F.\ Y.\ Wu for his hospitality and Professors
R.\ J.\ Baxter and Yu.\ G.\ Stroganov for much advice.
This work has been supported in part by NSF
Grants No.\ PHY-93-07816 and PHY-95-07769.
\nonumsection{References}
\noindent
\section{Acknowledgments}
We thank S.F. Pessoa for kindly providing us with the tight-binding
parameters of BCC Cu, D.M. Edwards for helpful discussions and T.J.P.
Penna for helping with the figures. This work has been financially
supported by CNPq and FINEP of Brazil, and SERC of U.K.
\begin{figure}
\caption{Calculated bilinear exchange coupling
at T=300K for BCC Fe/Cu/Fe (001) trilayer versus Cu thickness (solid
circles). The lines are contributions from the extremal points (see
text) corresponding to: the Cu FS belly (full line), the neck wave
vectors of set 2 (dashed line) and the neck M-points (dotted line).
The inset shows the total contribution from the three sets of
extrema; the tick labels are the same as those of the main figure. }
\label{fig1}
\end{figure}
\begin{figure}
\caption{Calculated BCC Cu FS (a), and its relevant cross sections
for (001) (b). The arrows are the critical vectors
$k^{\perp}(\vec{k}^0_{||})$.}
\label{fig2}
\end{figure}
\begin{figure}
\caption{Band structures of bulk BCC Cu and Fe in the relevant
$[001]$ direction for the wave vectors $\vec{k}^0_{||}=(0,0)$ (a),
$\vec{k}^0_{||}=(0,0.324)$ (b) and $\vec{k}^0_{||}=(0.5,0.5)$ (c).}
\label{fig3}
\end{figure}
\begin{figure}
\caption{Calculated temperature dependence of the bilinear exchange
coupling for Fe/12Cu/Fe (solid circles) and Fe/14Cu/Fe (open
circles). The inset is for Fe/13Cu/Fe; the tick labels are the same
as those of the main figure. The lines are simply linear fits.}
\label{fig4}
\end{figure}
\section{Introduction}
The first pioneering measurements of the deep inelastic diffractive
cross section at HERA have yielded great insight into the mechanism of
diffractive exchange and the partonic structure of the
pomeron~\cite{H1F2D93,ZEUS_MASS93,H1F2D94,H1WARSAW}.
The precision of present measurements is, however, inadequate for the
study of many quantitative aspects of this picture. The origin of the gluonic
structure of the pomeron and the interface between soft and hard
physics are unresolved questions that would benefit from the higher
luminosity offered by HERA in the years to come. In addition, the
substantially different partonic structure of the
pomeron in comparison to usual hadrons makes diffraction
a challenging test for perturbative QCD~\cite{MCDERMOTT}.
Furthermore, it has been suggested that the emerging data from HERA
may provide fundamental insight into the non-perturbative aspects
of QCD, leading to a complete understanding of
hadronic interactions in terms of a Reggeon Field Theory~\cite{ALAN}.
In this paper we study the measurements that can be made of inclusive
diffractive cross sections with the luminosities which may be achieved
in the future running of HERA. We evaluate what precision is possible
for these measurements, and attempt to highlight those measurements
which will be of particular relevance to achieving significant
progress in our understanding of QCD.
\section{Experimental Signatures of Diffractive Dynamics}
A prerequisite for any precision measurement of a hadronic cross section
is that it must be defined purely in terms of physically observable
properties of the outgoing particles. Since there is, as yet, only the
beginnings of a
microscopic understanding of the mechanism responsible for diffractive
dynamics, and since there are a plethora of wildly different predictions for
how the diffractive cross section should depend on all of the kinematic
variables,
it is the experimentalist's job to provide a well defined measurement
unfettered by any {\em ad hoc} assumptions about how the cross section
should behave.
For the interaction {\mbox{$ep\rightarrow eX$ ({\em $X=$Anything})}}
the cross section is usually measured differentially as a function
of the two kinematic variables $x$ and $Q^2$ defined as follows:
\begin{equation}
x=\frac{-q^2}{2P\cdot q},
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
Q^2=-q^2,
\end{equation}
where $q$ and $P$ are the 4-momenta of the exchanged virtual boson and
incident proton respectively, and $y$ is the fractional energy loss
of the electron in the rest frame of the proton.
The differential cross section
$\frac{{\rm d}^2\sigma}{{\rm d}x{\rm d}Q^2}$ may then be
related to the proton structure functions $F_2$ and
$F_L=F_2\cdot R/(1+R)$ by:
\begin{equation}
\frac{{\rm d}^2\sigma}{{\rm d}x{\rm d}Q^2}=
\frac{2\pi\alpha_{em}^2}{xQ^4}
\left[ 2(1-y) + \frac{y^2}{1+R}\right] \cdot F_2(x,Q^2).
\end{equation}
Such an interaction may be further characterised by dividing the total
hadronic final state (i.e. the final state system excluding the
scattered lepton) into two systems, $X$ and $Y$, separated by the
largest gap in true rapidity distribution of particles in the
$\gamma^*$-$p$ centre of mass system, as shown in
figure \ref{fig:diffXY}.
\begin{figure}[htb]
\begin{center}
\epsfig{file=difXYdef.eps,angle=270,width=0.4\textwidth}
\end{center}
\scaption {Schematic illustration of the method of selecting events
to define a diffractive cross-section, showing the definitions of
the kinematic variables discussed in the text. The systems $X$ and
$Y$ are separated by the largest gap in true rapidity in the
$\gamma^*$-$p$ centre of mass system. The system $Y$ is nearest to
the direction of the incident proton in this frame of reference.}
\label{fig:diffXY}
\end{figure}
When the masses of these two systems, $M_X$ and $M_Y$, are both much
smaller than the $\gamma^*p$ centre of mass energy $W$ then a large
rapidity gap between the two systems is kinematically inevitable, and
the interactions are likely to be dominantly diffractive.
The two additional kinematic degrees of freedom may be specified
by introducing the variables $t$, $\beta$ and $x_{_{I\!\!P}}$:
\begin{equation}
t = (P-Y)^2,
\end{equation}
\begin{equation}
x_{_{I\!\!P}} = \frac{Q^2+M_X^2-t}{Q^2+W^2-M_p^2} \simeq \frac{Q^2+M_X^2}{Q^2}\cdot x,
\end{equation}
\begin{equation}
\beta = \frac{Q^2}{Q^2+M_X^2-t} \simeq \frac{Q^2}{Q^2+M_X^2}.
\end{equation}
The hadronic cross section to be measured is therefore
$\frac{{\rm d}^4\sigma_{ep\rightarrow eXY}}{{\rm d}\beta\,{\rm d}Q^2\,{\rm d}x_{_{I\!\!P}}\,{\rm d}t}$. The
presence of a rapidity gap may then be identified experimentally by
use of the calorimeters and forward detectors of the experiments. For
sufficiently small values of $M_X$, the system $X$ will be completely
contained in the central calorimeters, allowing an accurate
determination of $\beta$ and $x_{_{I\!\!P}}$. Tagging of energy in the very
forward direction allows both the mass of system $Y$ and the magnitude
of $|t|$ to be constrained, but does not allow a direct measurement of
$t$.
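With the exact definitions above one has $\beta\,x_{_{I\!\!P}}=x$ identically, since $2P\cdot q=W^2+Q^2-M_p^2$. The following Python sketch (our illustration, with arbitrary but kinematically sensible four-vectors and diffractive masses) makes this explicit:

```python
import numpy as np

def mdot(a, b):
    """Minkowski dot product, metric (+,-,-,-)."""
    return a[0] * b[0] - np.dot(a[1:], b[1:])

Mp = 0.938                                   # proton mass, GeV
P = np.array([820.0, 0.0, 0.0, np.sqrt(820.0**2 - Mp**2)])  # HERA proton
q = np.array([10.0, 5.0, 4.0, -20.0])        # a spacelike photon, q^2 < 0

Q2 = -mdot(q, q)
x = Q2 / (2 * mdot(P, q))
W2 = mdot(P + q, P + q)

MX2, t = 90.0, -0.3                          # illustrative values, GeV^2
xpom = (Q2 + MX2 - t) / (Q2 + W2 - Mp**2)
beta = Q2 / (Q2 + MX2 - t)

# with the exact definitions, beta * x_pom = x identically, which is
# why x_pom ~ x (Q^2 + M_X^2)/Q^2 when |t| << Q^2 + M_X^2
assert np.isclose(beta * xpom, x)
```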
A leading proton spectrometer (LPS) ``tags'' particles which follow
closely the path of the outgoing proton beam separated by only a very
small angle. By tagging the particle at several points along its
trajectory and with detailed knowledge of the magnetic field optics
between the interaction point and the tagging devices, it is possible
to reconstruct the original vector momentum of the particle. Thus it
is possible to select a sample of events containing a ``leading''
proton with $E_p\sim E_p^{beam}$. An LPS permits an accurate
measurement of $t$, and provides an independent method of selecting
interactions sensitive to diffractive dynamics. Since there is no
need for all the hadrons of system $X$ to be contained within the
main calorimeter it is possible to make measurements at higher values
of $x_{_{I\!\!P}}$, so that subleading contributions to the cross section
may be confirmed and further investigated.
Regge phenomenology has been highly successful in correlating many
features of the data for both elastic and inelastic hadron--hadron
cross sections~\cite{ReggeFits}. At high energies (low $x_{_{I\!\!P}}$) the
data are dominated by the contribution from the leading trajectory
of intercept $\alpha(0)\stackrel{>}{_{\sim}} 1$. The most simple prediction of
Regge phenomenology is that at fixed $M_X$ the
cross section should vary $\propto x_{_{I\!\!P}}^{-n}$ where $n$ is related
to the intercept of the leading trajectory by $n=2\alpha(t)-1$.\footnote{
It is worth pointing out that for $Q^2>0$, $x_{_{I\!\!P}}$ is the basic
Regge variable, not $W^2$ as is sometimes assumed.} The size of
the rapidity gap is kinematically related to the value of $x_{_{I\!\!P}}$
by \mbox{$\Delta \eta \sim -{\rm ln} x_{_{I\!\!P}}$}~\cite{XPDETA}.
It is easy to show that at fixed $M_X$ the rapidity gap distribution
is then related to $\alpha(t)$ by:
\begin{equation}
\frac{{\rm d}N}{{\rm d}\Delta\eta} \propto (e^{-\Delta\eta})^{2-2\alpha(t)}.
\end{equation}
Hence for $\alpha(t)<1$ the production of large rapidity gaps is
exponentially suppressed, whilst for $\alpha(t)>1$ it is exponentially
enhanced~\cite{Bjorken}. With $x_{_{I\!\!P}}$ fixed, then extensions of the
simple Regge model are necessary to predict the $M_X$ dependence. For
sufficiently large masses, it is perhaps possible to use Mueller's
extension of the optical theorem~\cite{Mueller} (``Triple Regge'') to
achieve this aim, and for small masses the vector meson dominance
model~\cite{VMD} may be appropriate.
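The exponential form of the gap distribution quoted above is just the Jacobian of $\Delta\eta=-\ln x_{_{I\!\!P}}$ applied to the power law $x_{_{I\!\!P}}^{-n}$ with $n=2\alpha-1$; a mechanical check (ours, with an arbitrary pomeron-like intercept) is:

```python
import numpy as np

# change of variables Delta_eta = -ln x_pom applied to the Regge
# form dN/dx ~ x^{-n}, n = 2*alpha(t) - 1, at fixed M_X and t
alpha = 1.2                      # an illustrative intercept
n = 2 * alpha - 1

deta = np.linspace(2.0, 8.0, 50)
x = np.exp(-deta)
dN_deta = x**(-n) * x            # dN/dx times |dx/d(Delta eta)|

# quoted form: dN/d(Delta eta) ~ (e^{-Delta eta})^{2 - 2 alpha}
expected = np.exp(-deta) ** (2 - 2 * alpha)
ratio = dN_deta / expected
assert np.allclose(ratio, ratio[0])   # proportionality: constant ratio

# alpha > 1: large gaps exponentially enhanced (suppressed for alpha < 1)
assert dN_deta[-1] > dN_deta[0]
```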
There is much evidence from hadron--hadron data~\cite{ReggeFits},
and some from $ep$ data~\cite{H1WARSAW} that additional
(``sub--leading'') trajectories are necessary to reproduce the
measured hadronic cross sections. The Regge formalism permits $t$
dependent interference between some of the different contributions
rendering problematic any theoretical predictions for the
dependence of the measured hadronic cross sections on $M_X$,
$x_{I\!\!P}$, $W^2$ or $t$.
Estimates of the diffractive contribution to both deep--inelastic
and photoproduction cross sections have been made using the
$M_X$ dependence at fixed $W$~\cite{ZEUS_MASS93}.
\section{Future Measurements of {\boldmath $F_2^{D(3)}$}}
The 1994 H1 data \cite{H1F2D94,H1WARSAW}, corresponding to
$\sim 2$~${\rm pb^{-1}}$, have allowed a measurement of $F_2^{D(3)}$ to
be made in the kinematic region $2.5 < Q^2 < 65 $ GeV${}^2$,
$0.01 < \beta <0.9$ and $0.0001 < x_{I\!\!P} < 0.05$.
For $Q^2<8.5\,{\rm GeV^2}$ this
was achieved by taking data with the nominal interaction point shifted
by $+70\,{\rm cm}$ allowing lower angles to be covered by the backward
electromagnetic calorimeter.
The dependence of $F_2^{D(3)}$ on $x_{I\!\!P}$ was found not to depend
on $Q^2$ but to depend on $\beta$, demonstrating that the
factorisation of the $\gamma^*p$ cross section into a universal
diffractive flux (depending only on $x_{I\!\!P}$) and a structure
function (depending only on $\beta$ and $Q^2$) is not tenable. These
deviations from factorisation were demonstrated to be consistent with
an interpretation in which two individually factorisable components
contribute to $F_2^{D(3)}$. These two components could be identified
with pomeron ($I\!\!P$) and meson contributions $\propto
x_{_{I\!\!P}}^{-n_{I\!\!P}}$,$x_{_{I\!\!P}}^{-n_{M}}$ where
$n_{I\!\!P}=1.29\pm0.03\pm 0.07$ and
$n_{M}=0.3\pm0.3\pm 0.6$. Scaling violations, positive
with increasing $\log Q^2$ for all $\beta$, were observed and
could be interpreted in terms of a large gluon component in the
diffractive exchange, concentrated near $x_{g/I\!\!P}=1$
at $Q^2\sim 2.5\,{\rm GeV^2}$~\cite{QCD93,H1PARIS,H1F2D94,H1WARSAW}.
Given the significant progress in the understanding of diffractive
dynamics that has been achieved with the existing data, the goal of
future measurements is twofold: to extend the kinematic regime of the
existing measurements, and to achieve the highest possible precision,
particularly where existing measurements have uncovered
interesting physics. In the 1994
measurement, with $5$ bins per decade in both $Q^2$ and $x_{_{I\!\!P}}$, and
$7$ bins in $\beta$ between $0.01$ and $0.9$, there was an average of
$\sim100$ events per bin in the interval $8.5<Q^2<65\,{\rm GeV^2}$,
corresponding to a statistical accuracy of $\sim
10\%$\footnote{The variation of the cross section with $Q^2$ of
approximately $Q^{-4}$ was partially offset by an increasing bin
size with increasing $Q^2$.}. The different sources of systematic
error for the 1994 measurement are shown in table \ref{tab:syst},
along with an estimate of the level to which they may be reduced in
the future.
\begin{table}
\begin{small}
\begin{center}
\begin{tabular}{|c|c|c|}\hline
\multicolumn{2}{|c|}{H1 1994 Preliminary} &
H1 Future \\ \hline
Error Source & $\delta F_2^{D(3)}/F_2^{D(3)}$ &
$\delta F_2^{D(3)}/F_2^{D(3)}$ \\ \hline
Main Calo' Hadronic E Scale: $\pm 5\%$ & $3\%$ &
$\stackrel{<}{_{\sim}} 1\%$ \\ \hline
Backward Calo' Hadronic E Scale: $\pm 15\%$ & $3\%$ &
$\stackrel{<}{_{\sim}} 0.5\% $ \\ \hline
Backward Calo' Elec. E Scale: $\pm 1.5\% $ & $5\%$ &
$\stackrel{<}{_{\sim}} 3\%$ \\ \hline
Tracking Momentum Scale $\pm3\%$ & $2\%$ &
$\stackrel{<}{_{\sim}} 1\%$ \\ \hline
Scattered Lepton Angle: $\pm 1\,{\rm mrad}$ &
$2\%$ & $1\%$ \\ \hline
$t$ dependence: $e^{6\pm2t}$ & $1\%$ &
$0.5\%$ \\ \hline
$x_{I\!\!P}$ dependence: $x_{_{I\!\!P}}^{n\pm0.2}$ & $3\%$ &
$1\%$ \\ \hline
$\beta$ dependence & $3\% $ &
$1\%$ \\ \hline
$M_X$/$x_{I\!\!P}$ resolution & $4\%$ & $2\%$ \\ \hline
Background (photoproduction and non-$ep$) & $0.1\%$ & $<0.1\%$ \\ \hline
MC Statistics/Model Dependence & $14\%$ & $4\%$ \\ \hline
\end{tabular}
\end{center}
\end{small}
\caption{\label{tab:syst} Sources of systematic error for the H1 1994
Preliminary measurement of $F_2^{D(3)}$. The results of calculations
to estimate the extent to which these uncertainties may be reduced
are shown in the right hand column. These calculations take cognisance
of the new SPACAL backward calorimeter installed by H1~\cite{SPACAL},
and rely upon future measurements made using the LPS and forward
neutron calorimeter based upon a minimum luminosity
of $10\,{\rm pb}^{-1}$.}
\end{table}
The largest single error arises from the combination of limited Monte
Carlo statistics ($10\%$) and different possible final state topologies
leading to varying corrections for finite efficiency and
resolution ($10\%$). The latter contribution was estimated from the
difference in the correction factors calculated for two
possible $\gamma^*I\!\!P$ interaction mechanisms:
\begin{itemize}
\item a quark parton model process in which the $\gamma^*$ couples
directly to a massless quark in the exchange with zero transverse momentum,
\item a boson--gluon fusion process in which the $\gamma^*$ couples
to a gluon in the diffractive exchange through a quark box.
\end{itemize}
Numerous experimental measurements of diffractive final state
topologies at HERA are now available~\cite{H1WARSAW}
which constrain the data to lie between these two possibilities.
Therefore, it is reasonable to assume that the error arising from
a lack of knowledge of the final state topologies may be reduced by a
factor of $\sim2$ such that it no longer dominates the total error.
Monte Carlo statistics can obviously be increased. Therefore, it is
reasonable to expect that in the future a measurement may be made with a total
systematic uncertainty of $5\%$ or less. To reduce the statistical
error to $5\%$ ($3\%$) would require $8\,{\rm pb^{-1}}$
($22\,{\rm pb^{-1}}$) of data.
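These luminosity estimates are consistent with simple $1/\sqrt{N}$
scaling of the statistical error from the $\sim10\%$ accuracy obtained
with the $\sim2\,{\rm pb^{-1}}$ 1994 sample:
\begin{displaymath}
{\cal L}(5\%) \approx 2\,{\rm pb^{-1}}\times\left(\frac{10}{5}\right)^{2}
= 8\,{\rm pb^{-1}},
\qquad
{\cal L}(3\%) \approx 2\,{\rm pb^{-1}}\times\left(\frac{10}{3}\right)^{2}
\approx 22\,{\rm pb^{-1}}.
\end{displaymath}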
Less luminosity is required to achieve the same statistical precision
at lower $Q^2$. However, to reach the lowest possible $Q^2$ with
the widest possible range in $x$ (and hence in $x_{_{I\!\!P}}$) it is
advantageous to run with the interaction vertex shifted forwards along
the proton beam direction. We calculate that $2$~${\rm pb^{-1}}$ of such
data would give a statistical accuracy of $5\%$ in the region of overlap
between nominal and shifted vertex data, allowing a precise cross
check of the two analyses. It is important to note that
the HERA magnets are not optimised for a shifted vertex configuration,
and so such data take twice as long to collect as for the nominal
configuration.
Theoretically, and consequently experimentally, the region of high
$\beta$ is of particular interest.
The evolution of $F_2^{D(3)}$ with $Q^2$ at high $\beta$ is expected
to depend crucially upon which evolution equations are pertinent to
diffractive dynamics. In particular, a DGLAP~\cite{DGLAP} QCD
analysis~\cite{H1PARIS,H1F2D94,H1WARSAW} demonstrates that the H1 data are
consistent with a large gluon distribution, concentrated near
$x_{g/I\!\!P}=1$ at $Q^2\sim2.5\,{\rm GeV^2}$. In this case $F_2^{D(3)}$
should begin to fall with increasing $Q^2$ at $\beta=0.9$ for
$Q^2\gg10\,{\rm GeV^2}$. The presence of a sufficiently large
``direct'' term in the evolution equations would lead instead to an
indefinite increase with
$Q^2$~\cite{GS}. Thus a measurement
of $F_2^{D(3)}$ at high $\beta$ to the highest possible
$Q^2$ is desirable.
At fixed $ep$ centre of mass energy $\sqrt{s}$ the range of
$\beta$ that may be accessed decreases with increasing $Q^2$ such that:
\begin{equation}
\beta > \frac{Q^2}{s x_{I\!\!P}^{max}y^{max}}.
\end{equation}
The acceptance for a measurement of $F_2^{D(3)}$ at $\beta=0.9$,
based upon the interval $0.8<\beta<1$, therefore extends to
a maximum $Q^2$ of $\sim3500\,{\rm GeV^2}$ for $x_{I\!\!P}<0.05$, where
the cross section is likely to be dominantly diffractive. To
achieve $10\%$ statistical precision in a measurement in the interval
$2000<Q^2<3500\,{\rm GeV^2}$ (bin centre $Q^2=2600\,{\rm GeV^2}$)
would require $\sim 200$~${\rm pb^{-1}}$. However, a measurement of
$30\%$ statistical precision, requiring only $\sim 20\,{\rm pb^{-1}}$,
would already be significant in light of theoretical calculations
which differ by more than a factor of $2$ in this
region.
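The quoted maximum $Q^2$ may be verified from the acceptance relation
above. Taking $s\approx9\times10^{4}\,{\rm GeV^2}$ for HERA,
$x_{I\!\!P}^{max}=0.05$, $y^{max}\approx1$ and the lower edge
$\beta=0.8$ of the measurement interval,
\begin{displaymath}
Q^2_{max} \approx \beta\, s\, x_{I\!\!P}^{max}\, y^{max}
\approx 0.8\times 9\times10^{4}\times0.05
\approx 3.6\times10^{3}\,{\rm GeV^2},
\end{displaymath}
in agreement with the $\sim3500\,{\rm GeV^2}$ quoted above.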
\begin{boldmath}
\section{Measurement of $F_2^{D(4)}$}
\end{boldmath}
For particles with $E\ll E_P$ (here $E_P$ is the proton beam energy)
the HERA magnets separate them from the proton beam, allowing
particles to be tagged for $t\sim t_{min}$. For particles with $E\sim
E_P$, only those with transverse momentum
$P_T^2\stackrel{>}{_{\sim}}0.07\,{\rm GeV^2}$ may be tagged~\cite{ZEUS_LPS1}.
Consequently, for diffractive measurements the acceptance in $t$
is limited to $t\stackrel{<}{_{\sim}} -P_T^2/(1-x_{_{I\!\!P}})=-0.07\,{\rm GeV^2}$. For a
process with a highly peripheral $t$ dependence this limitation
results in the loss of $30$ to $40\%$ of the total cross section. The
geometrical acceptance of the current ZEUS detectors is $\sim
6\%$~\cite{ZEUSFPS}, giving an overall acceptance in the region
$x_{_{I\!\!P}}<0.05$ of $\sim 4\%$.
Tantalising first measurements of the $t$ dependence of deep--inelastic
events with a leading proton have been presented by the ZEUS
collaboration~\cite{ZEUS_LPS1}. The observed dependence on
$t$ of ${\rm d}\sigma/{\rm d}t \propto e^{bt}$,
$b=5.9\pm1.2^{+1.1}_{-0.7}\,{\rm GeV}^{-2}$, lends strong
support to a diffractive interpretation of such interactions.
The measurements of $b$ differential in $x_{_{I\!\!P}}$, $\beta$ and $Q^2$
that will be possible with increased luminosity are eagerly awaited.
A preliminary measurement of $F_2^{D(3)}$ using an LPS has also been
made~\cite{ZEUS_LPS2}. In order to use measurements of $F_2^{D(4)}$
to estimate $F_2^{D(3)}=\int_{t_{min}}^{t_{max}}{\rm d}t\,F_2^{D(4)}$
it is necessary to extrapolate from the measured region at high $|t|$
into the region of no acceptance to $t_{min}$. To make this
extrapolation with the minimal assumptions about the $t$ dependence,
it is necessary to have at least three bins in $t$ for each bin in
$\beta$, $Q^2$ and $x_{_{I\!\!P}}$. This means that $\sim 150\,{\rm pb}^{-1}$
of data are required to make a measurement of similar accuracy to the
existing data for $F_2^{D(3)}$. Obviously it would be possible to
study the $t$ dependence in aggregate volumes of phase space and make
a measurement with a factor of $3$ fewer events, relying upon model
assumptions about the variation of the $t$ dependence with $\beta$,
$Q^2$ and $x_{_{I\!\!P}}$. Even with $150\,{\rm pb^{-1}}$ of data, the
extrapolation into the $t$ region of no acceptance relies on the
assumption that the $t$ dependence is the same in the measured and
unmeasured regions. It is not clear that the resultant theoretical
uncertainty will be less than the 5\% uncertainty in the overall
normalisation resulting from proton dissociation for existing
measurements of $F_2^{D(3)}$.
The primary purpose of the LPS is the diffractive ``holy grail'' of
measuring $F_2^{D(4)}(\beta,Q^2,x_{_{I\!\!P}},t)$. Measurements of any
possible ``shrinkage'' of the forward diffractive scattering
amplitude (increasing peripherality of the $t$ dependence with
decreasing $x_{_{I\!\!P}}$) are likely to have unique power for
discriminating between different theoretical models of diffractive
dynamics~\cite{MCDERMOTT}. In addition, the ability to select
subsamples of events in which there is an additional hard scale at
the proton vertex is of great theoretical interest~\cite{MCDERMOTT}.
It is therefore of the utmost importance that the $100$ to $150\,{\rm
pb^{-1}}$ of data necessary to make an accurate measurement of
$F_2^{D(4)}(\beta,Q^2,x_{_{I\!\!P}},t)$ are collected by both collaborations
with the LPS devices fully installed.
The LPS also allows measurements at higher $x_{_{I\!\!P}}$ for which the
rapidity gap between the photon and proton remnant systems $X$ and
$Y$ becomes too small to observe in the main calorimeter.
This will allow the contributions to the measured hadronic cross
section from subleading trajectories to be further investigated.
Combining information from the scattered lepton and proton will allow the
invariant mass $M_X$ to be reconstructed in the region $x_{_{I\!\!P}}\gg0.05$,
where it would otherwise be impossible.
Tagged particles with $E\ll E_P$ will provide information
about the way in which protons dissociate into higher mass systems.
Particles thus produced are kinematically unlikely to generate
a significant background in the region $x_{_{I\!\!P}}<0.05$, but at
larger $x_{_{I\!\!P}}$ this background will become increasingly important.
{\boldmath
\section{On the Determination of $F_L^{I\!\!P}$}}
A fundamental question in the study of hard diffractive processes is
the extent to which perturbative QCD dynamics may be factorised from
the proton vertex. Some recent calculations of diffractive cross
sections~\cite{BUCHMUELLER,SCI} consider the interaction in terms of a
hard (perturbative) phase producing a coloured partonic system which
subsequently interacts with the colour field of the proton via the
exchange of non-perturbative gluons. Such ``soft colour
interactions''~\cite{SCI} allow colour to be exchanged between
the photon and proton remnant systems such that in some fraction
of the events both will become colour singlet states separated by
a potentially large rapidity gap. In such models the evolution of
the effective parton distribution functions (PDFs) will be driven by the
evolution of the PDFs of the proton. Alternatively, it is possible that
the presence of a large rapidity gap will confine any evolution dynamics
to within the photon remnant system $X$. The effective PDFs will then
depend only upon $\beta$ and $Q^2$, and not upon $x_{_{I\!\!P}}$ as in the
former case.
The 1994 H1 data have been demonstrated to be consistent with a factorisable
approach in which a large gluon distribution is attributed to the
pomeron in the region $x_{_{I\!\!P}}<0.01$ or over the whole kinematic
range for the sum of two individually factorisable
components~\cite{H1WARSAW}. At next to
leading order (NLO) a large gluon distribution
$G^{I\!\!P}(\beta,Q^2)$ necessitates
a large longitudinal structure function, $F_L^{I\!\!P}(\beta,Q^2)$:
\begin{equation}
F_L^{I\!\!P}(\beta,Q^2) = \frac{\alpha_s(Q^2)}{4\pi}\cdot
\beta^2\int_{\beta}^{1}\frac{{\rm d}\xi}{\xi^3}
\left[\frac{16}{3}F_2^{I\!\!P}(\xi,Q^2) + 8\sum_{i}e_i^2(1-\frac{\beta}{\xi})
\,\xi G^{I\!\!P}(\xi,Q^2)\right].
\end{equation}
A prediction for $F_L(\beta,Q^2)$ based upon a NLO QCD analysis of the
data could be tested directly since the wide range in $x_{_{I\!\!P}}$ accessible
at HERA leads to a wide variation in the $eI\!\!P$
centre of mass energy $\sqrt{s_{eI\!\!P}}=\sqrt{x_{_{I\!\!P}} s}$. This means that
in factorisable models, at fixed $\beta$ and fixed $Q^2$ the same
partonic structure of the pomeron may be probed at different values of
$x_{_{I\!\!P}}$, corresponding to different values of $y$:
\begin{equation}
y = \frac{Q^2}{s\beta} \cdot \frac{1}{x_{_{I\!\!P}}}.
\end{equation}
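As a numerical illustration (taking $s\approx9\times10^{4}\,{\rm
GeV^2}$), at the values $Q^2=10\,{\rm GeV^2}$ and $\beta=0.5$ used in
figure \ref{fig:rpom} this gives
\begin{displaymath}
y \approx \frac{10}{9\times10^{4}\times0.5}\cdot\frac{1}{x_{_{I\!\!P}}}
\approx \frac{2.2\times10^{-4}}{x_{_{I\!\!P}}},
\end{displaymath}
so that the high-$y$ region $y\stackrel{>}{_{\sim}}0.7$ corresponds to
$x_{_{I\!\!P}}\stackrel{<}{_{\sim}}3\times10^{-4}$.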
If the dependence of the diffractive structure function can be factorised
such that
\begin{eqnarray}
F_2^{D(3)}(\beta,Q^2,x_{_{I\!\!P}})=f(x_{_{I\!\!P}}) F_2^{I\!\!P}(\beta,Q^2)
\end{eqnarray}
and $F_L^{I\!\!P}$ is non-zero, then a measurement of
$F_2^{D(3)}$ made assuming $F_L^{I\!\!P}=0$ is lower than the correct value
of $F_2^{D(3)}$ by the factor $\delta(y,\beta,Q^2)$:
\begin{equation}
\delta = \frac{F_2^{D(3)}(Measured)}{F_2^{D(3)}(True)}
= \frac{2(1-y) + y^2/[1+R^{I\!\!P}(\beta,Q^2)]}{2(1-y)+y^2}
\end{equation}
Thus under the assumption of factorisation, a measurement of
$F_L^{I\!\!P}(\beta,Q^2)$ is possible by measuring the extent of the
deviation from a simple $x_{_{I\!\!P}}^{-n}$ extrapolation from higher $x_{_{I\!\!P}}$.
The size of this effect is shown for different values of
$R^{I\!\!P}=F_L^{I\!\!P}/(F_2^{I\!\!P}-F_L^{I\!\!P})$
in figure \ref{fig:rpom}.
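As an indication of the size of the effect, evaluating the expression
above at $y=0.8$ for $R^{I\!\!P}=0.5$ gives
\begin{displaymath}
\delta = \frac{2(1-0.8)+0.8^{2}/1.5}{2(1-0.8)+0.8^{2}}
= \frac{0.40+0.43}{1.04} \approx 0.80,
\end{displaymath}
i.e.\ a suppression of the measured $F_2^{D(3)}$ of $\sim20\%$.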
\begin{figure}[htb]\unitlength 1mm
\begin{center}
\begin{picture}(120,60)
\put(-15,-75){\epsfig{file=r_pom.eps,width=0.9\textwidth}}
{\footnotesize
\put(23,39){$0$}
\put(28,25){$0.5$}
\put(32,20){$1$}
\put(36,15){$R^{I\!\!P}=5$}
\put(20,58){$y=1$}}
\end{picture}
\end{center}
\caption{\label{fig:rpom} The expected fractional deviation $\delta(x_{_{I\!\!P}})$
of $F_2^{D(3)}$
from an extrapolation from high $x_{_{I\!\!P}}$ for $R^{I\!\!P}=0$ (horizontal
solid line), $R^{I\!\!P}=0.5$ (dashed line),
$R^{I\!\!P}=1.0$ (dotted line) and $R^{I\!\!P}=5.0$ (dash-dotted line)
for $Q^2=10\,{\rm GeV^2}$, $\beta=0.5$. The vertical solid line
indicates the kinematic limit of $y=1$. The larger box
indicates the area covered by the preliminary 1994 H1
measurements, which extended
up to $y\sim 0.5$ with an accuracy of $\sim20\%$. The smaller box
represents a future measurement with a total error of 7\%
extending to $y=0.8$.}
\end{figure}
Also shown is the range covered by the existing 1994
data and the range that could be covered by future measurements at HERA.
It is clear that in order to see this effect it is necessary to
make a measurement at $y$ values above $0.7$ with a total error of less
than $10\%$. For high values of $\beta$ the extrapolation
of $F_2^{D(3)}$ into the high $y$ region of interest should not be
significantly compromised by the possible presence of subleading
trajectories, which contribute only at high $x_{_{I\!\!P}}$ (low $y$).
A comparison between the values of $F_L^{I\!\!P}$ determined from
such apparent deviations from factorisation and the values
expected from a QCD analysis of $F_2^{D(3)}$ constitutes a powerful
test of both the validity of factorisation and the applicability of
NLO QCD to diffraction at high $Q^2$.
{\boldmath
\section{Measurements of $R^{D(3)}$ and $R^{D(4)}$}}
Since an evaluation of $F_L^{I\!\!P}$ using the techniques
described in the previous section requires theoretical
assumptions concerning factorisation, such an analysis is
clearly no substitute for a direct measurement of the ratio of
the longitudinal to transverse diffractive cross sections, $R^D(x_{_{I\!\!P}},
\beta)$. A good measurement of this quantity is vital for a full
understanding of the diffractive mechanism and should provide an
exciting testing ground for QCD. There is at present no theoretical
consensus on what values to expect for $R^{D}$, although all models
suggest a substantial dependence on $\beta$ with most suggesting an
extreme rise as $\beta \rightarrow 1$~\cite{MCDERMOTT}.
A measurement of $R^{D}$ to any precision leads us into unexplored
territory.
Measurements of $R^D$ have so far been restricted to exclusive DIS
vector meson production~\cite{VDM} by a direct measurement of the
polarisation of the final state resonance. This method could perhaps
be used for the bulk data if the directions of the final state partons
could be inferred, but is likely to be difficult due to the problems
of running jet analyses on low mass final states. Instead we
investigate a slightly modified version of the method used to
determine $R(x, Q^2)$ for inclusive DIS~\cite{FLHWS}.
The general form relating the structure functions $F^D_2$ and $F^D_1$
to the $ep$ differential diffractive cross section can be written in
analogy to the inclusive cross sections~\cite{GUNNAR}.
\begin{eqnarray}
\frac{{\rm d}^4 \sigma^D_{ep}}{{\rm d} x_{I\!\!P}\,{\rm d} t\,{\rm
d} x\,{\rm d}Q^2}=
\frac{4\pi\alpha^2}{xQ^4}(1-y+\frac{y^2}{2(1+R^{D(4)}(x,Q^2,x_{I\!\!P},t))}){F_2}^{D(4)}(x,Q^2,x_{I\!\!P},t),
\label{eqn:sig4}
\end{eqnarray}
where
$R^{D(4)}=(F_2^{D(4)}-2xF_1^{D(4)})/(2xF_1^{D(4)})=\sigma^D_L/\sigma^D_T$.
Although a measurement of $R^{D(4)}$ as a function of all 4 variables
is the most desirable and must be an experimental goal,
statistical limitations are likely to mean that initial measurements
must be made without a reliance on a leading proton spectrometer (LPS) and so
$t$ is not measured. In this case we define $F_2^{D(3)}$ and
$R^{D(3)}$ as
\begin{eqnarray}
\frac{{\rm d}^3 \sigma^D_{ep}}{{\rm d} x_{I\!\!P}\,{\rm d} x\,{\rm d}Q^2}=
\frac{4\pi\alpha^2}{xQ^4}(1-y+\frac{y^2}{2(1+R^{D(3)}(x,Q^2,x_{I\!\!P}))}){F_2}^{D(3)}(x,Q^2,x_{I\!\!P}).
\label{eqn:sig3}
\end{eqnarray}
Here $R^{D(3)}$ is the ratio of the longitudinal to transverse
cross sections only if $R^{D(4)}$ has no dependence on $t$.
Analysis of equation~\ref{eqn:sig3} reveals that in order to make a
measurement of $R^{D}$ independent of $F^D_2$ at least two $ep$ cross
sections must be compared at the same values of $x$, $Q^2$ and $x_{_{I\!\!P}}$
but different values of $y$. This is achieved by varying the $ep$
centre of mass energy, $\sqrt{s}$. There are of course many possible
running scenarios for which either or both beam energies are changed
to a variety of possible values. A full discussion on this point is
given in~\cite{FLHWS}. For the present study we examine the case when
the proton beam is roughly halved in energy from 820~GeV to 500~GeV
and the electron beam remains at a constant energy of $27.5$~GeV, so
that data are taken at the two values $s=90200$~${\rm GeV}^2$
and $s=55000$~${\rm GeV}^2$. This setup allows for a reasonable
luminosity at the low proton beam energy and enables systematic
uncertainties concerned with detection of the scattered electron to be
minimised. In this scheme we make a measurement of the ratio of the
$ep$ differential cross sections, $r=\sigma^D_{hi}/ \sigma^D_{lo}$, for
two values of $y$, $y_{hi}$ and $y_{lo}$ (corresponding to the high
and low values of $s$) for fixed $x$, $Q^2$, $x_{_{I\!\!P}}$ and (if measuring
$R^{D(4)}$) $t$. Equation~\ref{eqn:sig4} or \ref{eqn:sig3} is then
used to determine $R^D$.
It is also apparent from equation~\ref{eqn:sig3} that, in order to have
the greatest sensitivity to $R^{D}$, measurements must be made at the
highest possible $y_{lo}$ (and thus lowest scattered electron energy). This is
illustrated in figure~\ref{fig:flycut}, where it can be seen that for
values of $y_{lo}=0.5$ (or lower) there is little change of $r$ for
different values of $R^{D}$. The upper limit in $y_{lo}$ is crucially
dependent on the ability of the detectors to discriminate and veto
against photoproduction events in which a pion is misidentified as an
electron. Experience has shown, however, that for diffractive
events the low mass of the final state reduces the chance of a fake
electron when compared to the more energetic non-diffractive events.
For this study we take a central value of $y_{lo}=0.8$ with a lower
(upper) bin limit of 0.75 (0.85) so that good electron identification
for energies above 4.15~GeV is assumed.
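The quoted energy is consistent with the approximate kinematics of
backward-scattered electrons, for which $E_e^{\prime}\approx(1-y)E_e$
(neglecting the small correction for $\theta_e$ slightly below
$180^{\circ}$): at the upper bin limit $y=0.85$ with $E_e=27.5$~GeV,
\begin{displaymath}
E_e^{\prime} \approx (1-y)\,E_e = 0.15\times27.5\,{\rm GeV}
\approx 4.1\,{\rm GeV}.
\end{displaymath}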
\begin{figure}[htb]
\begin{center}
\epsfig{file=flycut.eps,width=0.7\textwidth}
\end{center}
\caption {Dependence of the ratio, $r$, of the $ep$ cross sections at
$s=90200$ and $s=55000$ with $R^D$ for various values of $y$ at $s=55000$.}
\label{fig:flycut}
\end{figure}
The kinematic range of the measurement projected onto the $x$--$Q^2$
plane is shown in figure~\ref{fig:flbins} for both CMS energies. To
ensure that the scattered electrons are well contained within the
backward detectors we place a maximum $\theta_e$ cut of $174^\circ$.
This restricts us to $Q^2>5$~${\rm GeV}^2$ and $x>10^{-4}$. In order to
ensure good acceptance in the forward region we impose a cut
of $x_{_{I\!\!P}}<0.01$.
\begin{figure}[htb]
\begin{center}
\epsfig{file=flbins.eps,width=\textwidth}
\end{center}
\vspace{-1cm}
\caption {A projection of the kinematic range onto the $x$--$Q^2$ plane
for a) a proton beam of $E_p=500$~GeV and b) a proton beam of
$E_p=820$~GeV for an electron beam of 27.5~GeV. The shaded region
represents the proposed region of study. Also shown is the
restriction on the kinematic range imposed by a maximum $\theta_e$
cut of $174^\circ$.}
\label{fig:flbins}
\end{figure}
For low electron energies the kinematic variables are well
reconstructed and have good resolutions if the electron only method
is used~\cite{H1F2,ZEUSF2}. Since the major problem with this
measurement will be in reducing the statistical error we select bins
as large as possible whilst maintaining enough bins to investigate any
variation of $R^D$ with the kinematic quantities. A suitable choice
would be 4 bins per decade in $x$, 4 bins in $\beta$ and if the LPS
is used 2 bins in $t$. The bins in $\beta$ and $t$ are optimised so as
to contain approximately equal numbers of events at each $x$ value.
Identical bins in these variables are used for both CMS energies.
In order to estimate the statistical errors on the measurement we used
the RAPGAP generator~\cite{RAPGAP} with a fit to the measured H1
$F_2^{D(3)}$~\cite{H1F2D94} at $s=90200$~${\rm GeV}^2$ and used
equation~\ref{eqn:sig4} to determine the statistics at $s=55000$~${\rm
GeV}^2$. We assumed 100\% efficiency for measurements made without
any LPS and 4\% efficiency for those with. The expected number of
events per integrated luminosity in each bin is summarised in
table~\ref{tab:noevents} for an example $R^D=0.5$.
\begin{table}
\begin{center}
\begin{tabular}{|r|c|c|c|c|} \hline
& \multicolumn{2}{c|}{Number of Events} &
\multicolumn{2}{c|}{Number of Events} \\
$\log_{10} x$ & \multicolumn{2}{c|}{without LPS} &
\multicolumn{2}{c|}{with LPS} \\ \cline{2-5}
& $E_P=820$~GeV & $E_P=500$~GeV & $E_P=820$~GeV & $E_P=500$~GeV \\ \hline
-4.125 & 41 & 28 & 0.82 & 0.56 \\ \hline
-3.875 & 36 & 25 & 0.72 & 0.50\\ \hline
-3.625 & 19 & 13 & 0.38 & 0.27\\ \hline
-3.375 & 9 & 6 & 0.18 & 0.12\\ \hline
\end{tabular}
\end{center}
\caption{The estimated number of events in each bin
for an integrated luminosity of 1~${\rm pb}^{-1}$ for the 2 proton
beam energies and an electron beam energy of $27.5$~GeV, assuming 4
bins per decade in $x$, 4 bins in $\beta$ and (for measurements with
the LPS) 2 bins in $t$. $R^D$ was set to 0.5.}
\label{tab:noevents}
\end{table}
For systematic errors we estimate a relative error $\delta(r)/r$ of 5\%.
This error is conservatively evaluated by taking the estimated error
on $F^D_2$ (see above) and assuming any improvement that arises from
taking a ratio is offset by increased uncertainty in the
photoproduction background and radiative corrections.
An example of the sort of precision which may be obtained for a
measurement of $R^{D(3)}$ and $R^{D(4)}$ is shown in
figure~\ref{fig:flrlum}. For this study we assumed that many more data
would be obtained at the high CMS energy. It can be seen that for an
integrated luminosity of 10~${\rm pb}^{-1}$ at the lower $s$ value a
measurement of $R^{D(3)}$ is statistically dominated with an error
around 60\% if $R^{D(3)}=0.5$ for the lowest value of $x$. For an integrated
luminosity of 50~${\rm pb}^{-1}$ at the lower $s$ value statistical
and systematic errors become comparable and $R^{D(3)}$ can be measured
to 40\% accuracy. For measurements of $R^{D(4)}$ very high integrated
luminosities are required -- at least a factor of 50 is needed for a
similar precision to $R^{D(3)}$.
\begin{figure}[htb]
\begin{center}
\epsfig{file=flrlum.eps,width=0.7\textwidth}
\end{center}
\caption{The estimated errors for an example central value
of $R^D=0.5$ for a) 10(500)~${\rm pb}^{-1}$ at $s=55000$~${\rm
GeV}^2$ and 50(2500)~${\rm pb}^{-1}$ at $s=90200$~${\rm GeV}^2$
and b) 50(2500)~${\rm pb}^{-1}$ at $s=55000$~${\rm GeV}^2$ and
250(12500)~${\rm pb}^{-1}$ at $s=90200$~${\rm GeV}^2$ for a
measurement of $R^{D(3)}$ ($R^{D(4)}$). The inner error bar
represents the statistical error and the outer the statistical and
systematic error added in quadrature.}
\label{fig:flrlum}
\end{figure}
\boldmath
\section{Measuring $F_{2~{\it charm}}^{D}$}
\unboldmath
Since the leading mechanism in QCD for the production of charm quarks
is the boson gluon fusion process, the diffractive charm structure
function $F_{2~{\it charm}}^{D}$ is very sensitive to the gluonic component
of the diffractive exchange. It is important to establish whether
the measured $F_{2~{\it charm}}^{D}$ is consistent with that
expected from a QCD analysis of the scaling violations in $F_2^{D(3)}$.
In addition, it has already been observed in the photoproduction
of $J/\psi$ mesons that the charm quark mass provides a sufficiently
large scale to generate the onset of hard QCD dynamics. The extent to
which the charm component of $F_2^{D(3)}$ exhibits a different
energy ($x_{_{I\!\!P}}$) dependence to that of the total will provide insight
into the fundamental dynamics of diffraction.
The method used here for tagging charm events uses the $D^{*+}$
decay\footnote{Charge conjugate states are henceforth implicitly included.}
$D^{*+} \rightarrow D^0 \pi_{\mathit{slow}}^+ \rightarrow (K^-\pi^+) \pi_{\mathit{slow}}^+$.
The tight kinematic constraint imposed by the small difference between
the $D^{*+}$ and $D^0$ masses means that the mass difference
$\Delta M = M(K\pi\pi_{\mathit{slow}}) - M(K\pi)$ is better resolved than the
individual masses, and the narrow peak in $\Delta M$ provides a clear
signature for $D^{*+}$ production. The chosen $D^0$ decay mode is the
easiest to use because it involves only charged tracks and because the
low multiplicity means that the combinatorial background is small
and that the inefficiency of the tracker does not cause a major problem.
A prediction of the observed number of events is obtained using
RAPGAP with a hard gluon dominated
pomeron structure function taken from a QCD analysis of
$F_2^{D(3)}$~\cite{H1F2D94}.
The cross section predicted by this model for $D^{*\pm}$ production
in diffractive DIS is compatible with the value measured
in~\cite{H1WARSAW}.
The acceptance of the detector is simulated by applying cuts on
the generated direction ($\theta$) and transverse momentum ($p_{\perp}$)
of the decay products and on the energy of the scattered lepton
($E_e^{\prime}$).
The $p_{\perp}$ cut used is 150~MeV, which is approximately the
value at which the H1 central and forward trackers reach full efficiency.
This cut has a major influence on the acceptance for $D^{*+}$ mesons,
because the momentum of the slow pion $\pi_{\mathit{slow}}$ is strongly correlated
with that of the $D^{*+}$, so a $D^{*+}$ with $p_{\perp}$ much less than
150~MeV$ \times M_{D^{*+}} / M_{\pi^+} \approx 2$~GeV cannot be
detected. The $p_{\perp}$-dependence of the acceptance is shown in
figure~\ref{fig:acc}a.
There is no obvious way of extending the tracker acceptance to lower
$p_{\perp}$, so this cut is not varied.
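The $\approx2$~GeV figure quoted above follows from scaling the
$\pi_{\mathit{slow}}$ threshold by the mass ratio (using
$M_{D^{*+}}\approx2.010$~GeV and $M_{\pi^+}\approx0.140$~GeV):
\begin{displaymath}
p_{\perp}^{min}(D^{*+}) \approx 0.150\,{\rm GeV}\times
\frac{M_{D^{*+}}}{M_{\pi^+}}
\approx 0.150\times\frac{2.010}{0.140}\,{\rm GeV}
\approx 2.2\,{\rm GeV},
\end{displaymath}
since the slow pion carries a fraction $\approx M_{\pi^+}/M_{D^{*+}}$
of the $D^{*+}$ momentum.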
\begin{figure}[htb]
\setlength{\unitlength}{1cm}
\begin{picture}(16.0,5.5)
\put(0.0,0.0){\epsfig{file=ptacc.eps,height=5.5cm}}
\put(8.0,0.0){\epsfig{file=acc_allq2.eps,height=5.5cm}}
\put(6.9,5.0){\small (a)}
\put(14.9,5.0){\small (b)}
\end{picture}
\caption{\label{fig:acc}
The acceptance for $D^{*+}$ in the region
$10~\mathrm{GeV}^2<Q^2<100~\mathrm{GeV}^2$, shown as a function of
(a)~$p_{\perp}$ and (b)~$x_{_{I\!\!P}}$.
The continuous line in (b) shows the results with central tracking
only and a requirement $E_e^{\prime}>10$~GeV.
The other lines show the effect of extending track
coverage in the backward direction and including $E_e^{\prime}$
down to 5~GeV.
}
\end{figure}
Figure~\ref{fig:acc}b shows the average acceptance for a $D^{*+}$ over
the region $10~\mathrm{GeV}^2<Q^2<100~\mathrm{GeV}^2$ and all values of $\beta$
and for $p_{\perp}>2\,{\rm GeV}$. It can be seen that extending the
angular coverage from the present $25^{\circ}<\theta<155^{\circ}$ range
in the backward direction to $170^{\circ}$ in conjunction with
lowering the scattered lepton energy cut used in present analyses
significantly improves the acceptance, especially at low $x_{_{I\!\!P}}$.
Figure~\ref{fig:num} shows the number
of $D^{*\pm}$ which one might expect to detect in the low- and high-$Q^2$
regions with a total integrated luminosity of 750~pb$^{-1}$.
It can be seen that even with this large integrated luminosity,
cross section measurements can only be made with an accuracy of 10\%
in this binning.
\begin{figure}[htb]
\setlength{\unitlength}{1cm}
\begin{picture}(16.0,5.5)
\put(0.0,0.0){\epsfig{file=num_loq2.eps,height=5.5cm}}
\put(8.0,0.0){\epsfig{file=num_hiq2.eps,height=5.5cm}}
\put(6.9,5.0){\small (a)}
\put(14.9,5.0){\small (b)}
\end{picture}
\caption{\label{fig:num}
The number of $D^{*\pm}$ expected to be observed in
750~pb$^{-1}$ in the range
(a)~$10~\mathrm{GeV}^2<Q^2<25~\mathrm{GeV}^2$ and
(b)~$50~\mathrm{GeV}^2<Q^2<100~\mathrm{GeV}^2$,
predicted using RAPGAP with a hard-gluon-dominated
pomeron.}
\end{figure}
Whilst one waits for $750\,{\rm pb^{-1}}$, it may be worthwhile
attempting to increase the statistics by investigating other decay
modes which are experimentally more demanding.
The $D^{*+}$ decay to $D^0 \pi_{\mathit{slow}}^+$ has a branching fraction of
nearly 70\%~\cite{pdg96} and is the only decay mode giving
a charged track in addition to the $D^0$ decay products.
However, the $D^0$ decay to $K^-\pi^+$ has a branching fraction
of slightly less than 4\%~\cite{pdg96}, so there is
clearly room for a large improvement in statistics if other channels
can be used.
For example, the use of a silicon vertex detector close to the
interaction point,
such as that already partly installed in H1, should enable the secondary
vertex from the decay of the $D^0$, which has a decay length
$c\tau = 124~\mu$m~\cite{pdg96}, to be tagged. This could be used to
extend the analysis to other channels, including semileptonic decays,
which are otherwise difficult to use.
The gain in statistics, neglecting
the vertex-tagging inefficiency, can be up to a factor of $\sim 20$
(if all channels are used), with a further factor of $\sim 2$ available
if inclusive $D^0$ production is used rather than relying on the $D^{*+}$
decay.
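The gain estimates above follow from simple branching-fraction
arithmetic; a sketch using the rounded values quoted in the text
(see the PDG for exact numbers):

```python
# Branching fractions as quoted above (rounded approximations)
br_dstar = 0.70    # D*+ -> D0 pi_slow+, "nearly 70%"
br_d0_kpi = 0.04   # D0 -> K- pi+, "slightly less than 4%"

combined = br_dstar * br_d0_kpi
print(combined)        # ~0.028: under 3% of D*+ decays are reconstructed
print(1 / br_d0_kpi)   # ~25: ceiling on the gain from opening all D0 modes,
                       # consistent with the quoted factor of ~20 since not
                       # every mode is usable in practice
```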
\section{Summary and Conclusions}
We have argued that a precise hadron level definition of the cross section
to be measured is essential in order that future high precision
measurements of diffractive structure functions may be treated in
a consistent theoretical manner. Although $20\,{\rm pb}^{-1}$ of
integrated luminosity will be enough to achieve precision in
the measurement of $F_2^{D(3)}$ at moderate $Q^2$, a luminosity in excess
of $100\,{\rm pb^{-1}}$ is necessary to make a comprehensive
survey of diffractive structure within the kinematic limits of HERA.
An attempt to determine $F_L^{I\!\!P}$ will be an important
tool in establishing the validity of both factorisation and
NLO QCD in diffractive interactions. A direct measurement of
$R^{D(3)}$ is shown to be feasible with $10\,{\rm pb^{-1}}$ of
integrated luminosity taken at lower proton beam energy.
A substantial integrated luminosity is demonstrated to
be necessary to complete an exhaustive study of the diffractive
production of open charm, although statistics can be markedly
improved by exploiting the full range of possible charm decays.
\section{Introduction}
The Standard Model (SM) is a theory of spin-$1\over 2$ matter fermions
which interact via the exchange of spin-1 gauge bosons, where the bosons
and fermions live in independent representations of the gauge symmetries.
Supersymmetry (SUSY) is a symmetry which
establishes a one-to-one correspondence between bosonic and fermionic
degrees of freedom, and provides a relation between their
couplings~\badcite{bagger}. Relativistic quantum field
theory is formulated to
be consistent with the symmetries of the Lorentz/Poincar\'e group -- a
non-compact Lie algebra. Mathematically, supersymmetry is formulated
as a generalization of the Lorentz/Poincar\'e group of space-time
symmetries to include
spinorial generators which obey specific anti-commutation relations; such
an algebra is known as a graded Lie algebra. Representations of the SUSY
algebra include both bosonic and fermionic degrees of freedom.
The hypothesis that nature is supersymmetric is very compelling to many
particle physicists for several reasons.
\begin{itemize}
\item It can be shown that the SUSY algebra is the only non-trivial
extension of the set of spacetime symmetries which forms one of the
foundations of relativistic quantum field theory.
\item If supersymmetry is formulated as a {\it local} symmetry, then one is
necessarily forced into introducing a massless spin-2 (graviton) field into
the theory. The resulting supergravity theory reduces to Einstein's
general relativity theory in the appropriate limit.
\item Spacetime supersymmetry appears to be a fundamental ingredient
of superstring theory.
\end{itemize}
These motivations say nothing about the {\it scale} at which nature might be
supersymmetric. Indeed, there are additional motivations for
{\it weak-scale supersymmetry}.
\begin{itemize}
\item Incorporation of supersymmetry into the SM leads to a solution of the
gauge hierarchy problem. Namely, quadratic divergences in loop
corrections to the Higgs boson mass will cancel between fermionic and bosonic
loops. This mechanism works only if the superpartner particle masses are
roughly of order or less than the weak scale.
\item There exists an experimental hint: the three gauge couplings
can unify at the
Grand Unification scale if there exist weak-scale supersymmetric particles,
with a desert between the weak scale and the GUT scale.
This is not the case with the SM.
\item Electroweak symmetry breaking is a derived consequence of
supersymmetry breaking in many particle physics models with weak-scale
supersymmetry, whereas electroweak symmetry breaking in the SM is put in
``by hand.'' The SUSY radiative electroweak symmetry-breaking
mechanism works best if the top quark has mass
$m_t\sim 150-200$ GeV. The recent discovery of the top quark with
$m_t=176\pm 4.4$ GeV is consistent with this mechanism.
\item As a bonus, many particle physics models with weak-scale
supersymmetry contain
an excellent candidate for cold dark matter (CDM): the lightest neutralino.
Such a CDM particle seems necessary to describe many aspects of cosmology.
\end{itemize}
Finally, there is a historical precedent for supersymmetry. In 1928,
P. A. M. Dirac incorporated the symmetries of the Lorentz group into
quantum mechanics. He found as a natural consequence that each known
particle had to have a partner particle -- namely, antimatter.
The matter--antimatter symmetry was not revealed until high enough
energy scales were reached to create a positron. In a similar manner,
incorporation of supersymmetry into particle physics once again predicts
partner particles for all known particles. Will nature prove to be
supersymmetric at the weak scale? In this report, we try to
shed light on some of the many possible ways that weak-scale supersymmetry
might be revealed by colliders operating at sufficiently high energy.
\subsection{Minimal Supersymmetric Standard Model}
The simplest supersymmetric model of particle physics which is consistent
with the SM is called
the Minimal Supersymmetric Standard Model (MSSM). The recipe for this
model is to start with the SM of particle physics, but in addition
add an extra Higgs doublet of opposite hypercharge. (This ensures
cancellation of triangle anomalies due to Higgsino partner contributions.)
Next, proceed with supersymmetrization, following well-known rules to
construct supersymmetric gauge theories. At this stage one has a globally
supersymmetric SM theory. Supersymmetry breaking is incorporated by adding
to the Lagrangian explicit soft SUSY-breaking terms consistent with the
symmetries of the SM.
These consist of scalar and gaugino
mass terms, as well as trilinear ($A$ terms) and bilinear ($B$ term) scalar
interactions. The resulting theory has $>100$ parameters, mainly from the
various soft SUSY-breaking terms.
Such a model is the most conservative approach to
realistic SUSY model building, but the large parameter space leaves little
predictivity. What is needed as well is a theory of how the soft
SUSY-breaking terms arise. The fundamental field content of the MSSM is
listed in Table 1,
for one generation of quark and lepton (squark and slepton) fields.
Mixings and symmetry breaking lead to the actual physical mass eigenstates.
\begin{table}
\begin{center}
\begin{tabular}{c|cc}
\hline
\ & Boson fields & Fermionic partners \cr
\hline
Gauge multiplets & & \cr
$SU(3)$ & $g^a$ & $\tilde g^a$ \cr
$SU(2)$ & $W^i$ & $\tilde{W}^i$ \cr
$U(1)$ & $B$ & $\tilde B$ \cr
\hline
Matter multiplets & & \cr
Scalar leptons & $\tilde{L}^j=(\tilde{\nu} ,\tilde{e}^-_L)$ & $(\nu
,e^-)_L$\cr
\ & $\tilde{R}=\tilde{e}^+_R$ & $e_L^c$ \cr
Scalar quarks & $\tilde{Q}^j=(\tilde{u}_L,\tilde{d}_L)$ & $(u,d)_L$ \cr
\ & $\tilde{U}=\tilde{u}_R^*$ & $u_L^c$ \cr
\ & $\tilde{D}=\tilde{d}_R^*$ & $d_L^c$ \cr
Higgs bosons & $H_1^j$ & $(\tilde{H}_1^0,\tilde{H}_1^-)_L$ \cr
\ & $H_2^j$ & $(\tilde{H}_2^+,\tilde{H}_2^0)_L$ \cr
\hline
\end{tabular}
\caption{Field content of the MSSM for one generation of
quarks and leptons.}
\label{mssm}
\end{center}
\end{table}
The goal of this report is to create a mini-guide to some of the possible
supersymmetric models that occur in the literature, and to provide a
bridge between SUSY model builders and their experimental colleagues. The
following sections each contain a brief survey of six classes of
SUSY-breaking models studied at this workshop;
contributing group members are listed in {\it italics}.
We start with the most popular framework for
experimental searches, the paradigm
\begin{itemize}
\item minimal supergravity model (mSUGRA) ({\it M. Drees and M. Nojiri}),
\end{itemize}
and follow with
\begin{itemize}
\item models with additional D-term contributions to scalar masses,
({\it C. Kolda, S. Martin and S. Mrenna})
\item models with non-universal GUT-scale soft SUSY-breaking terms,
({\it G. Anderson, R. M. Barnett, C. H. Chen, J. Gunion, J. Lykken, T.
Moroi
and Y. Yamada})
\item two MSSM scenarios which use the large parameter freedom of
the MSSM to fit to various collider zoo events, ({\it G. Kane and S. Mrenna})
\item models with $R$ parity violation, ({\it H. Baer, B. Kayser and X.
Tata})
and
\item models with gauge-mediated low energy SUSY breaking (GMLESB),
({\it J. Amundson, C. Kolda, S. Martin, T. Moroi, S. Mrenna, D. Pierce,
S. Thomas, J. Wells and B. Wright}).
\end{itemize}
Each section contains a brief description of the model, qualitative
discussion of some of the associated phenomenology, and finally some
comments on event generation for the model under discussion. In this way,
it is hoped that this report will be a starting point for future
experimental SUSY searches, and that it will provide a flavor for the
diversity of ways that weak-scale supersymmetry might manifest itself
at colliding beam experiments. We note that a survey of some additional
models is contained in Ref.~\badcite{peskin}, although under a somewhat
different format.
\section{Minimal Supergravity Model}
The currently most popular SUSY model is the minimal supergravity
(mSUGRA) model~\badcite{sug1,sug2}. Here one assumes that SUSY is broken
spontaneously in a ``hidden sector,'' so that some auxiliary field(s)
get vev(s) of order $M_Z \cdot M_{\rm Planck} \simeq (10^{10} \ {\rm
GeV})^2$. Gravitational-strength interactions then {\em automatically}
transmit SUSY breaking to the ``visible sector,'' which
contains all the SM fields and their superpartners; the effective mass
splitting in the visible sector is by construction of order of the
weak scale, as needed to stabilize the gauge hierarchy. In
{\em minimal} supergravity one further assumes that the kinetic terms for the
gauge and matter fields take the canonical form: as a result,
all scalar fields
(sfermions and Higgs bosons) get the same contribution $m_0^2$ to
their squared masses, and all trilinear $A$ parameters
have the same value $A_0$, by virtue of an approximate
global $U(n)$ symmetry of
the SUGRA Lagrangian ~\badcite{sug2}.
Finally, motivated by the apparent
unification of the measured gauge couplings within the MSSM
~\badcite{sug3} at scale $M_{\rm GUT} \simeq 2 \cdot 10^{16}$ GeV, one assumes
that SUSY-breaking gaugino masses have a common value $m_{1/2}$
at scale $M_{\rm GUT}$. In practice, since little is known about physics
between the scales $M_{\rm GUT}$ and $M_{\rm Planck}$,
one often uses $M_{\rm GUT}$ as the scale
at which the scalar masses and $A$ parameters unify. We note that
$R$ parity is assumed to be conserved within the mSUGRA framework.
This ansatz has several advantages. First, it is very economical; the
entire spectrum can be described with a small number of free
parameters. Second, degeneracy of scalar masses at scale $M_{\rm GUT}$ leads
to small flavor-changing neutral currents.
Finally, this model predicts radiative breaking of the
electroweak gauge symmetry~\badcite{sug4} because of the large top-quark
mass.
Radiative symmetry breaking together with the precisely known value of
$M_Z$ allows one to trade two free parameters, usually taken to be the
absolute value of the supersymmetric Higgsino mass parameter $|\mu |$ and
the $B$ parameter appearing in the scalar Higgs potential, for the
ratio of vevs, $\tan \beta$. The model then has four continuous and one
discrete free parameter not present in the SM:
\begin{equation}
m_0, m_{1/2}, A_0, \tan \beta, {\rm sign}(\mu).
\end{equation}
This model is now incorporated in several publicly available MC codes,
in particular {\tt ISAJET}~\badcite{isajet}.
An approximate version is incorporated into {\tt Spythia}~\badcite{spythia},
which reproduces {\tt ISAJET} results to 10\%.
Most SUSY spectra studied at this workshop have been
generated within mSUGRA; we refer to the various accelerator subgroup
reports for
the corresponding spectra. One ``generically'' finds the following
features:
\begin{itemize}
\item $|\mu|$ is large, well above the masses of the $SU(2)$ and
$U(1)$ gauginos. The lightest neutralino is therefore mostly a
Bino (and an excellent candidate for cosmological CDM -- for related
constraints, see {\it e.g.} Ref.~\badcite{cosmo}),
and the second neutralino and lighter chargino are dominantly
$SU(2)$ gauginos. The heavier neutralinos and
charginos are only rarely produced in the decays of gluinos and
sfermions (except possibly for stop decays). Small regions of parameter
space with $|\mu | \simeq M_W$ are possible.
\item If $m_0^2 \gg m_{1/2}^2$, all sfermions of the first two
generations are close in mass. Otherwise, squarks are significantly
heavier than sleptons, and $SU(2)$ doublet sleptons are heavier than singlet
sleptons. Either way, the lighter stop and sbottom eigenstates are
well below the first generation squarks; gluinos therefore have
large branching ratios into $b$ or $t$ quarks.
\item The heavier Higgs bosons (pseudoscalar $A$, scalar $H^0$,
and charged $H^\pm$) are usually heavier than $|\mu|$ unless
$\tan\beta \gg 1$. This also implies that the light scalar $h^0$
behaves like the SM Higgs.
\end{itemize}
These features have already become something like folklore. We want to
emphasize here that even within this restrictive framework, quite
different spectra are also possible, as illustrated by the following
examples.
Example A is for $m_0 = 750$ GeV, $m_{1/2} = 150$ GeV, $A_0 = -300$
GeV, $\tan \beta = 5.5$, $\mu<0$, and $m_t=165$ GeV (pole mass). This
yields $|\mu| = 120$ GeV, very similar to the $SU(2)$ gaugino mass
$M_2$ at the weak scale, leading to strong Higgsino -- gaugino
mixing. The neutralino masses are 60, 91, 143 and 180 GeV, while
charginos are at 93 and 185 GeV. They are all considerably lighter
than the gluino (at 435 GeV), which in turn lies well below the
squarks (at $\simeq$ 815 GeV) and sleptons (at 750-760 GeV). Due to
the strong gaugino -- Higgsino mixing, all chargino and neutralino
states will be produced with significant rates in the decays of
gluinos and $SU(2)$ doublet sfermions, leading to complicated decay
chains. For example, the $l^+ l^-$ invariant mass spectrum in gluino
pair events will have many thresholds due to $\tilde{\chi}^0_i
\rightarrow \tilde{\chi}^0_j l^+l^-$ decays. Since first and second
generation squarks are almost twice as heavy as the gluino, there
might be a significant gluino ``background'' to squark production at
the LHC. A 500 GeV $e^+e^-$ collider will produce all six chargino and
neutralino states. Information about $\tilde{e}_L, \ \tilde{e}_R$ and
$\tilde{\nu}_e$ masses can be gleaned from studies of neutralino and
chargino production, respectively; however, $\sqrt{s}>$ 1.5 TeV is
required to study sleptons directly. Spectra of this type can already
be modelled reliably using {\tt ISAJET}: the above parameter set can be
entered via the {\tt SUGRA} keyword.
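The chargino masses quoted for example A can be reproduced from the
tree-level chargino mass matrix, whose singular values are the physical
masses. The sketch below assumes a weak-scale $M_2 \simeq 0.8\,m_{1/2}
\simeq 122$ GeV (an assumed one-loop value, not given in the text),
together with the quoted $\mu=-120$ GeV and $\tan\beta=5.5$:

```python
import numpy as np

mW, tan_beta = 80.4, 5.5
beta = np.arctan(tan_beta)
M2, mu = 122.0, -120.0   # M2 assumed ~0.8*m_half; mu = -120 GeV as quoted

# tree-level chargino mass matrix in the (wino, Higgsino) basis
X = np.array([[M2, np.sqrt(2) * mW * np.sin(beta)],
              [np.sqrt(2) * mW * np.cos(beta), mu]])
masses = np.sort(np.linalg.svd(X, compute_uv=False))
print(masses)  # roughly [92, 184] GeV, close to the quoted 93 and 185 GeV
```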
As example B, we have chosen $m_0=m_{1/2}=200$ GeV, $A_0=0$, $\tan
\beta =48$, $\mu < 0$ and $m_t=175$ GeV. Note the large value of $\tan
\beta$, which leads to large $b$ and $\tau$ Yukawa couplings, as
required in models where all third generation Yukawa couplings are
unified at scale $M_{\rm GUT}$. Here the gluino (at 517 GeV) lies slightly
above first generation squarks (at 480-500 GeV), which in turn lie
well above first generation sleptons (at 220-250 GeV). The
light neutralinos (at 83 and 151 GeV) and light chargino (at 151 GeV)
are mostly gauginos, while the heavy states (at 287, 304 and
307 GeV) are mostly Higgsinos, because $\vert\mu\vert=275 \
{\rm GeV}\gg m_{1/2}$.
The masses of $\tilde{t}_1$ (355 GeV), $\tilde{b}_1$ (371 GeV) and
$\tilde{\tau}_1$ (132 GeV) are all significantly below those of the
corresponding first or second generation sfermions. As a result, more
than 2/3 of all gluinos decay into a $b$ quark and a $\tilde{b}$
squark. Since (s)bottoms have large Yukawa couplings, $\tilde{b}$
decays will often produce the heavier, Higgsino-like chargino and
neutralinos. Further, all neutralinos (except for the lightest one,
which is the LSP) have two-body decays into $\tilde{\tau}_1 + \tau$;
in case of $\tilde{\chi}^0_2$ this is the only two-body mode, and for
the Higgsino-like states this mode will be enhanced by the large
$\tau$ Yukawa coupling. Chargino decays will also often produce real
$\tilde{\tau}_1$. Study of the $l^+l^-$ invariant mass spectrum will
not allow direct determination of neutralino mass differences, as the
$l^\pm$ are secondaries from tau decays. Even $\tilde{e}_L$ pair
events at $e^+e^-$ colliders will contain up to four tau leptons!
Further, unless the $e^-$ beam is almost purely right-handed, it
might be difficult to distinguish between $\tilde{\tau}_1$ pair
production and $\tilde{\chi}_1^\pm$ pair production. Finally, the
heavier Higgs bosons are quite light in this case, e.g. $m_A = 126$
GeV. There will be a large number of $A \rightarrow
\tau^+ \tau^-$ events at the LHC. However, because most SUSY events
will contain $\tau$ pairs in this scenario, it is not clear whether
the Higgs signal will remain visible. At present, scenarios with
$\tan \beta \gg 1$ cannot be simulated with {\tt ISAJET}, since the $b$
and $\tau$ Yukawa couplings have not been included in all relevant
decays. This situation should be remedied soon.
\section{$D$-term Contributions to Scalar Masses}
We have seen that the standard mSUGRA framework predicts
a testable pattern of squark and slepton masses.
In this section we
describe a class of models in which a quite distinctive modification
of the mSUGRA predictions can arise, namely contributions
to scalar masses associated with the
$D$-terms of extra spontaneously broken gauge symmetries
~\badcite{dterm}.
As we will see, the modification of squark, slepton and Higgs masses can
have a profound effect on phenomenology.
In general, $D$-term contributions to scalar masses will arise in
supersymmetric models whenever a gauge symmetry is spontaneously
broken with a reduction of rank.
Suppose,
for example, that the SM gauge group $SU(3)\times SU(2)\times U(1)_Y$
is supplemented by an additional $U(1)_X$ factor broken far above
the electroweak scale. Naively, one might suppose that if the
breaking scale is sufficiently large, all direct effects of
$U(1)_X$ on TeV-scale physics are negligible. However, a simple
toy model shows that this is not so.
Assume that ordinary MSSM scalar fields, denoted generically by
$\varphi_i$, carry $U(1)_X$ charges $X_i$ which are not all 0.
In order to break $U(1)_X$,
we also assume the existence of a pair of additional
chiral superfields $\Phi$ and $\overline \Phi$ which are SM singlets,
but carry $U(1)_X$ charges
which are normalized (without loss of generality) to be $+1$ and $-1$
respectively.
Then VEV's for $\Phi$ and $\overline\Phi$ will spontaneously
break $U(1)_X$ while leaving the SM gauge group intact.
The scalar potential whose minimum determines
$\langle\Phi\rangle,\langle\overline\Phi\rangle$
then has the form
\begin{equation}
V=V_{0}
+ m^2 |\Phi|^2 + {\overline m}^2 |{\overline \Phi}|^2
+ {g_X^2\over 2 } \left [ |\Phi|^2 - |{\overline \Phi}|^2
+ X_i |\varphi_i |^2 \right ]^2.
\end{equation}
Here $V_0$ comes from the superpotential
and involves only $\Phi$ and $\overline\Phi$;
it is symmetric under $\Phi\leftrightarrow\overline\Phi$, but otherwise its
precise form need not concern us.
The pieces involving $m^2$ and ${\overline m}^2$
are soft breaking terms; $m^2$ and ${\overline m}^2$ are
of order $M_Z^2$ and in general unequal.
The remaining piece is the square of the
$D$-term associated
with $U(1)_X$, which forces the minimum of the potential to occur along
a nearly $D$-flat direction
$\langle \Phi \rangle\approx \langle \overline\Phi \rangle $.
The scale of these vevs can be much larger than 1 TeV with
natural choices of $V_0$, so that the $U(1)_X$ gauge boson
is very heavy and plays no role in collider physics.
However, there is also a deviation from $D$-flatness given by
$\langle \Phi \rangle^2
- \langle \overline\Phi \rangle^2
\approx D_X/g_X^2$, with $D_X =({\overline m}^2 - m^2)/2$, which directly
affects the
masses of the remaining light MSSM fields. After integrating out $\Phi$
and $\overline\Phi$, one finds that each MSSM scalar (mass)$^2$
receives a correction given by
\begin{equation}
\Delta m_i^2 = X_i D_X
\label{dtermcorrections}
\end{equation}
where $D_X$ is again typically of order
$M_Z^2$
and may have either sign.
This result does not depend on the scale at which $U(1)_X$ breaks; this
turns out to be a general feature,
independent of assumptions about the precise mechanism
of symmetry breaking.
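The deviation from $D$-flatness can be checked numerically in a toy
version of the potential above. The sketch assumes a simple
$V_0=\lambda(\Phi\overline\Phi-v^2)^2$ to drive the large vevs (an
assumption; the text leaves $V_0$ unspecified) and verifies that
$\langle\Phi\rangle^2-\langle\overline\Phi\rangle^2\approx D_X/g_X^2$:

```python
import numpy as np

lam, v2 = 1.0, 100.0   # V0 = lam*(phi*phibar - v2)^2, an assumed toy V0
m2, mbar2 = 1.0, 2.0   # unequal soft masses (order MZ^2 in these toy units)
gX = 0.5

def V(phi, phibar):
    return (lam * (phi * phibar - v2) ** 2
            + m2 * phi ** 2 + mbar2 * phibar ** 2
            + 0.5 * gX ** 2 * (phi ** 2 - phibar ** 2) ** 2)

# brute-force minimisation on a grid around the D-flat point phi = phibar = 10
grid = np.linspace(9.0, 10.5, 1001)
P, Pb = np.meshgrid(grid, grid, indexing="ij")
i, j = np.unravel_index(np.argmin(V(P, Pb)), P.shape)
diff = grid[i] ** 2 - grid[j] ** 2

DX = (mbar2 - m2) / 2.0
print(diff, DX / gX ** 2)  # both close to 2.0
```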
Thus $U(1)_X$ manages to leave its ``fingerprint" on the masses of the
squarks,
sleptons, and Higgs bosons, even if it is broken at an arbitrarily high
energy. From a TeV-scale point of view, the parameter $D_X$ might
as well be taken as a parameter of our ignorance regarding physics
at very high energies. The important point is that $D_X$ is universal,
so that each MSSM scalar (mass)$^2$ obtains a contribution
simply proportional
to $X_i$, its charge under $U(1)_X$.
Typically the $X_i$ are
rational numbers and do not all have the same sign, so that a particular
candidate $U(1)_X$
can leave a quite distinctive pattern of mass splittings on the squark and
slepton spectrum.
The extra $U(1)_X$ in this discussion may stand alone, or may be
embedded in a larger non-abelian gauge group,
perhaps together with the SM
gauge group (for example in an
$SO(10)$ or $E_6$ GUT).
If the gauge group contains more than one $U(1)$ in addition to $U(1)_Y$, then
each $U(1)$ factor can contribute a set of corrections exactly analogous to
(\ref{dtermcorrections}). Additional $U(1)$ groups are endemic
in superstring models, so at least from that point of view one
may be optimistic about the existence of corresponding $D$-terms
and their potential importance in the study of the squark and slepton
mass spectrum at future colliders.
It should be noted that once one assumes the existence of additional
gauged $U(1)$'s at very high energies, it is quite unnatural to
assume that $D$-term contributions to scalar masses can be avoided
altogether. (This would require an exact symmetry
enforcing $m^2 = {\overline m}^2$ in the example above.)
The only question is whether or not
the magnitude of the $D$-term contributions
is significant compared to the usual mSUGRA contributions.
Note also that as long as the charges $X_i$ are
family-independent, it follows from (\ref{dtermcorrections})
that squarks and sleptons
with the same electroweak quantum numbers remain degenerate,
maintaining the natural suppression of flavor changing neutral currents.
It is not difficult to implement the effects of $D$-terms in simulations,
by imposing the corrections (\ref{dtermcorrections}) to a particular
``template" mSUGRA model.
After choosing the $U(1)_X$ charges of the MSSM fields, our remaining
ignorance of the mechanism of $U(1)_X$ breaking is
parameterized by $D_X$ (roughly of order $M_Z^2$).
The $\Delta m_i^2$ corrections should be imposed at the scale
$M_X$ where one chooses to assume that $U(1)_X$ breaks.
(If $M_X < M_{\rm Planck}$ or $M_{\rm GUT}$,
one should also in principle incorporate
renormalization group effects due to $U(1)_X$ above $M_X$,
but these can often be shown to be small.)
The other parameters of the theory are
unaffected.
One can then run these parameters down to the electroweak
scale, in exactly the same way as in mSUGRA models, to find the
spectrum of sparticle masses.
(The solved-for parameter $\mu$ is then indirectly affected by $D$-terms,
through the requirement of correct electroweak symmetry breaking.)
The only subtlety involved is an apparent ambiguity in choosing the
charges $X_i$, since any linear combination of $U(1)_X$ and $U(1)_Y$
charges might be used.
These charges should be picked to correspond to the basis
in which there is no mixing in the kinetic
terms of the $U(1)$ gauge bosons. In particular models where $U(1)_X$
and/or $U(1)_Y$ are embedded in non-abelian groups, this linear
combination is uniquely determined; otherwise it can be arbitrary.
A test case which seems particularly worthy of study is that of
an additional gauged $B-L$ symmetry. In this case the $U(1)_X$ charges
for each MSSM scalar field are a linear combination of $B-L$ and $Y$.
If this model is embedded in $SO(10)$ (or certain of its subgroups),
then the unmixed linear combination of $U(1)$'s appropriate
for (\ref{dtermcorrections}) is
$X = -{5\over 3}(B-L) + {4\over 3} Y$. The $X$ charges for the
MSSM squarks and sleptons are $-1/3$ for $Q_L,u_R,e_R$ and $+1$
for $L_L$ and
$d_R$. The MSSM Higgs fields have charges $+2/3$ for $H_u$ and
$-2/3$ for $H_d$. Here we consider the modifications to a
mSUGRA model defined by the parameters
$(m_0,m_{1/2},A_0)=(200,100,0)$~GeV, $\mu<0$, and $\tan\beta=2$,
assuming $m_t=175$ GeV.
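As a quick numerical illustration of Eq.~(\ref{dtermcorrections}) with
these charges: a negative $D_X$ pushes $\tilde{e}_L$ down and
$\tilde{e}_R$ up. The selectron input masses below are assumed,
illustrative values for this template point (no RG running is performed
in the sketch):

```python
# U(1)_X charges from the text: X = -(5/3)(B-L) + (4/3)Y
X = {"L_L": 1.0, "e_R": -1 / 3}

# illustrative weak-scale selectron masses (GeV) for the (200,100,0) template;
# these input numbers are assumed for the sketch
m = {"e_L": 212.0, "e_R": 204.0}

def shifted(name, charge, DX):
    # Delta m_i^2 = X_i * D_X applied to the squared mass
    return (m[name] ** 2 + charge * DX) ** 0.5

DX = -1.5e4  # GeV^2: of order MZ^2 and negative
print(shifted("e_L", X["L_L"], DX), shifted("e_R", X["e_R"], DX))
# e_L drops below e_R, inverting the usual mSUGRA slepton hierarchy
```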
The effect of $D$-term contributions on the scalar mass spectrum is
illustrated in Fig.~1, which shows the masses of
$\tilde{e}_L,\tilde{e}_R$, the lightest Higgs boson $h$, and the
lightest bottom squark $\tilde{b}_1$ as a function of $D_X$.
The unmodified mSUGRA prediction is found at
$D_X=0$. A particularly dramatic
possibility is that $D$-terms could invert the usual hierarchy of slepton
masses, so that $m_{{\tilde e}_L},m_{{\tilde \nu}} < m_{{\tilde e}_R}$.
In the test model, this occurs for negative $D_X$; the negative endpoint
of $D_X$ is set by the experimental lower bound on $m_{\tilde \nu}$.
The relative change of the squark masses is smaller, while the change
to the lightest Higgs boson mass is almost negligible except near
the positive $D_X$ endpoint where it reaches the experimental lower
bound. The complicated mass spectrum perhaps can be probed most
directly at the NLC with precision measurements of squark and
slepton masses.
Since the usual MSSM renormalization group contributions to scalar
masses are much larger for squarks than for sleptons, it is likely
that the effects of $D$-term contributions are relatively
larger for sleptons.
At the Tevatron and LHC, it has been suggested in these proceedings that
SUSY parameter determinations can be obtained by making global fits
of the mSUGRA parameter space to various observed signals. In this
regard it should be noted that significant $D$-term contributions
could invalidate such strategies unless they are generalized.
This is because adding
$D$-terms (\ref{dtermcorrections}) to a given template mSUGRA model
can dramatically change certain branching fractions
by altering the kinematics of decays involving squarks and especially
sleptons. This is demonstrated for the test model in Fig.~2.
Thus we find for example that the product
$BR({\tilde\chi}^+_1 \rightarrow l^+X)\times
BR({\tilde\chi}_2^0 \rightarrow l^+l^- X)$ can change
by up to an order of magnitude or more as one varies $D$-terms
(with all other parameters held fixed).
Note that the branching ratios of Fig.~2 include the leptons
from two-body and three-body decays, e.g. ${\tilde\chi}^+_1 \rightarrow
l^+\nu {\tilde\chi}_1^0$
and ${\tilde\chi}^+_1 \rightarrow {\tilde l}^+\nu
\rightarrow l^+ {\tilde\chi}_j^0 \nu$.
On the other hand,
the $BR({\tilde g}\rightarrow b X)$ is fairly insensitive to
$D$-terms over most, but not all, of parameter space.
Since the squark masses are generally much less affected by
the $D$-terms, and the gluino mass only indirectly,
the production cross sections for squarks and gluinos
should be fairly stable. Therefore, the variation of
$BR({\tilde g}\rightarrow b X)$ is an accurate gauge of
the variation of observables such as the $b$ multiplicity
of SUSY events. Likewise, the ${\tilde\chi}^\pm_1 {\tilde\chi}_2^0$
production cross section does not change much as the
$D$-terms are varied, so the
expected trilepton signal can vary like the product
of branching ratios -- by orders of magnitude.
While the results presented are for a specific,
and particularly simple, test model,
similar variations can be observed in other
explicit models.
The possible presence of $D$-terms should be considered when
interpreting a SUSY signal at future colliders. An
experimental analysis
which proves or disproves their existence would be a unique
insight into physics at very high energy scales.
To facilitate event generation, approximate expressions for the modified
mass spectra are implemented in the {\tt Spythia} Monte Carlo,
assuming the $D$-terms are added in at the unification scale.
Sparticle spectra from models with extra $D$-terms can be incorporated
into {\tt ISAJET} simply via the {\tt MSSMi} keywords, although the user must
supply a program to generate the relevant spectra via RGE's or
analytic formulae.
\begin{figure}[h]
\hskip 1cm\psfig{figure=mass.eps,width=7cm}
\caption{Mass spectrum as a function of $D_{X}$.}
\end{figure}
\begin{figure}[h]
\hskip 1cm\psfig{figure=br.eps,width=7cm}
\caption{Branching ratios as a function of $D_{X}$.}
\end{figure}
\section{Non-Universal GUT-Scale Soft SUSY-Breaking Parameters}
\subsection{Introduction}
We consider models in which the gaugino masses and/or the
scalar masses are not universal at the GUT scale, $M_{\rm GUT}$.
We study the extent to which non-universal boundary conditions
can influence experimental signatures and
detector requirements,
and the degree to which experimental data can distinguish
between different models for the
GUT-scale boundary conditions.
\subsubsection{Non-Universal Gaugino Masses at $M_{\rm GUT}$}
We focus on two well-motivated types of models:
\noindent $\bullet$ Superstring-motivated models
in which SUSY breaking is
moduli dominated. We consider
the particularly attractive O-II model of Ref.~\badcite{Ibanez}.
The boundary conditions at $M_{\rm GUT}$ are:
\begin{equation}
\begin{array}{l}
M_a^0\sim \sqrt 3 m_{3/2}[-(b_a+\delta_{GS})K\eta] \\
m_0^2=m_{3/2}^2[-\delta_{GS}K^\prime] \\
A_0=0
\end{array}
\label{bcs}
\end{equation}
where $b_a$ are SM beta function coefficients, $\delta_{GS}$
is a mixing parameter,
which would be a negative integer in the O-II model,
and $\eta=\pm1$.
{}From the estimates of Ref.~\badcite{Ibanez},
$K \simeq 4.6\times 10^{-4}$ and $K^\prime \simeq 10^{-3}$, we
expect slepton and squark masses to be very much
larger than gaugino masses.
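This hierarchy can be made quantitative with Eq.~(\ref{bcs}). The
sketch below takes $\eta=+1$ and one-loop $b_a$ in the sign convention
that reproduces the O-II row of Table~\ref{masses} (both assumptions of
the sketch):

```python
import math

# one-loop beta coefficients; sign convention chosen (an assumption) so that
# the gaugino mass ratios reproduce the O-II row of Table 2
b = {"M3": 3.0, "M2": -1.0, "M1": -33.0 / 5.0}
dGS, K, Kp = -4.0, 4.6e-4, 1.0e-3
m32 = 1.0  # gravitino mass, arbitrary units

M = {a: math.sqrt(3.0) * m32 * (-(ba + dGS)) * K for a, ba in b.items()}
m0 = m32 * math.sqrt(-dGS * Kp)

print({a: round(v / M["M3"], 2) for a, v in M.items()})  # ratios 1 : 5 : 10.6
print(round(m0 / M["M3"], 1))  # ~79: scalars far above the gaugino masses
```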
\noindent $\bullet$ Models
in which SUSY breaking occurs via an $F$-term that is not
an $SU(5)$ singlet. In this class of models, gaugino masses are generated
by a chiral superfield $\Phi$ that appears linearly in the gauge
kinetic function, and whose auxiliary $F$ component acquires an
intermediate-scale vev:
\begin{equation}
{\cal L}\sim \int d^2\theta W^a W^b {\Phi_{ab}\over M_{\rm Planck}} + h.c.
\sim {\langle F_{\Phi} \rangle_{ab}\over M_{\rm Planck}}
\lambda^a\lambda^b\, +\ldots ,
\end{equation}
where the $\lambda^{a,b}$ are the gaugino fields.
$F_{\Phi}$
belongs to an $SU(5)$ irreducible representation which
appears in the symmetric product of two adjoints:
\begin{equation}
({\bf 24}{\bf \times}
{\bf 24})_{\rm symmetric}={\bf 1}\oplus {\bf 24} \oplus {\bf 75}
\oplus {\bf 200}\,,
\label{irrreps}
\end{equation}
where only $\bf 1$ yields universal masses.
Only the component of $F_{\Phi}$ that is ``neutral'' with respect to
the SM gauge group should acquire a vev,
$\langle F_{\Phi} \rangle_{ab}=c_a\delta_{ab}$, with $c_a$
then determining the relative magnitude of
the gaugino masses at $M_{\rm GUT}$: see Table~\ref{masses}.
\begin{table}
\begin{center}
\begin{small}
\begin{tabular}{|c|ccc|ccc|}
\hline
\ & \multicolumn{3}{c|} {$M_{\rm GUT}$} & \multicolumn{3}{c|}{$m_Z$} \cr
$F_{\Phi}$
& $M_3$ & $M_2$ & $M_1$
& $M_3$ & $M_2$ & $M_1$ \cr
\hline
${\bf 1}$ & $1$ &$\;\; 1$ &$\;\;1$ & $\sim \;6$ & $\sim \;\;2$ &
$\sim \;\;1$ \cr
${\bf 24}$ & $2$ &$-3$ & $-1$ & $\sim 12$ & $\sim -6$ &
$\sim -1$ \cr
${\bf 75}$ & $1$ & $\;\;3$ &$-5$ & $\sim \;6$ & $\sim \;\;6$ &
$\sim -5$ \cr
${\bf 200}$ & $1$ & $\;\; 2$ & $\;10$ & $\sim \;6$ & $\sim \;\;4$ &
$\sim \;10$ \cr
\hline
$\stackrel{\textstyle O-II}{\delta_{GS}=-4}$ & $1$ & $\;\;5$ & ${53\over 5}$ &
$\sim 6$ & $\sim 10$ & $\sim {53\over5}$ \cr
\hline
\end{tabular}
\end{small}
\caption{Relative gaugino masses at $M_{\rm GUT}$ and $m_Z$
in the four possible $F_{\Phi}$ irreducible representations,
and in the O-II model with $\delta_{GS}\sim -4$.}
\label{masses}
\end{center}
\end{table}
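The relative values at $m_Z$ in Table~\ref{masses} can be understood from the standard one-loop result that $M_a/\alpha_a$ is a renormalization-group invariant, so that (a rough check, neglecting two-loop and threshold effects)
\begin{equation}
M_a(m_Z)\simeq {\alpha_a(m_Z)\over \alpha_{\rm GUT}}\,M_a(M_{\rm GUT})\,,\qquad
\alpha_3(m_Z):\alpha_2(m_Z):\alpha_1(m_Z)\sim 6:2:1\,,
\end{equation}
whence, for example, the ${\bf 75}$ pattern $(1,\,3,\,-5)$ at $M_{\rm GUT}$ maps to roughly $(6,\,6,\,-5)$ at $m_Z$.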
Physical masses of the gauginos are
influenced by $\tan\beta$-dependent off-diagonal terms in the mass
matrices and by corrections which boost $m_{\gl}({\rm pole})$ relative to $m_{\gl}(m_{\gl})$.
If $\mu$ is large, the lightest neutralino (which is the LSP)
will have mass $m_{\cnone}\sim {\rm min}(M_1,M_2)$ while the lightest
chargino will have $m_{\cpmone}\sim M_2$. Thus, in the ${\bf 200}$
and O-II scenarios with
$M_2\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} M_1$, $m_{\cpmone}\simeq m_{\cnone}$ and the $\tilde{\chi}^{\pm}_1$
and $\tilde{\chi}^0_1$ are both Wino-like. The
$\tan\beta$ dependence of the masses at $m_Z$ for the universal,
${\bf 24}$, ${\bf 75}$, and ${\bf 200}$ choices appears in Fig.~\ref{mtanb}.
The $m_{\gl}$-$m_{\cnone}$ mass splitting becomes increasingly
smaller in the sequence ${\bf 24}$, ${\bf 1}$, ${\bf 200}$,
${\bf 75}$, O-II,
as could be anticipated from Table~\ref{masses}.
It is interesting to note that at
high $\tan\beta$, $\mu$ decreases to a level comparable to $M_1$ and
$M_2$, and there is substantial degeneracy among the $\tilde{\chi}^{\pm}_1$, $\tilde{\chi}^0_2$
and $\tilde{\chi}^0_1$.
\begin{figure}[htb]
\leavevmode
\centerline{\psfig{file=nonu_mtanb.ps,width=3.5in}}
\caption{Physical (pole) gaugino masses as a function of $\tan\beta$
for the ${\bf 1}$ (universal), ${\bf 24}$, ${\bf 75}$, and ${\bf 200}$
$F$ representation choices. Also plotted are $|B|$ and $|\mu|$.
We have taken $m_0=1\,{\rm TeV}$ and $M_3=200,400,200,200\,{\rm GeV}$,
respectively.}
\label{mtanb}
\end{figure}
\subsubsection{Non-Universal Scalar Masses at $M_{\rm GUT}$}
We consider
models in which the SUSY-breaking scalar masses at $M_{\rm GUT}$ are influenced
by the Yukawa couplings of the corresponding quarks/leptons.
This idea is exemplified in the model of Ref.~\badcite{hallrandall}
based on perturbing about the $[U(3)]^5$ symmetry that is present
in the absence of Yukawa couplings. One finds, for example:
\begin{equation}
{\bf m}_{\tilde Q}^2=m_0^2(I+c_Q\lam_u^\dagger\lam_u+c_Q^\prime\lam_d^\dagger\lam_d+\ldots)
\end{equation}
where $Q$ represents the squark partners of the left-handed quark doublets.
The Yukawas $\lam_u$ and $\lam_d$ are $3\times 3$ matrices
in generation space.
The $\ldots$ represent terms of order $\lambda^4$ that we will neglect.
A priori, $c_Q$ and $c_Q^\prime$ should be similar in size,
in which case the large top-quark Yukawa coupling implies that
the primary deviations from universality will occur in $m_{\stl}^2$,
$m_{\sbl}^2$ (equally and in the same direction).\footnote{In this discussion
we neglect an analogous, but independent, shift in $m_{\str}^2$.}
It is the fact that $m_{\stl}^2$ and $m_{\sbl}^2$ are shifted equally
that will distinguish $m^2$ non-universality from the effects of a large
$A_0$ parameter at $M_{\rm GUT}$; the latter would primarily
introduce ${\tilde{t}_L}-{\tilde{t}_R}$ mixing and yield a low
$m_{\stopone}$ compared to $m_{\sbotone}$.
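The two effects act on different entries of the stop mass-squared matrix, which in the standard MSSM form (shown schematically, with $D$-terms and the analogous sbottom matrix suppressed) reads
\begin{equation}
{\cal M}^2_{\tilde t}\sim \left(\begin{array}{cc}
m_{\stl}^2+m_t^2 & m_t(A_t-\mu\cot\beta) \\
m_t(A_t-\mu\cot\beta) & m_{\str}^2+m_t^2
\end{array}\right)\,.
\end{equation}
A nonzero $c_Q$ shifts the diagonal $m_{\stl}^2$ entry (and, identically, $m_{\sbl}^2$ in the sbottom matrix), while a large $A_t$ feeds only the off-diagonal mixing entry, lowering $m_{\stopone}$ without touching $m_{\sbotone}$.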
\subsection{Phenomenology}
\subsubsection{Non-Universal Gaugino Masses}
We examined the phenomenological
implications for the standard Snowmass comparison point
({\it e.g.} NLC point \#3) specified by $m_t=175\,{\rm GeV}$, $\alpha_s=0.12$,
$m_0=200\,{\rm GeV}$, $M_3^0=100\,{\rm GeV}$, $\tan\beta=2$, $A_0=0$ and $\mu<0$.
In treating the O-II model
we take $m_0=600\,{\rm GeV}$, a value that yields a (pole)
value of $m_{\gl}$ not unlike that for the other scenarios.
The masses of the supersymmetric particles for
each scenario are given in Table~\ref{susymasses}.
\begin{table}
\begin{center}
\begin{small}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\ & ${\bf 1}$ & ${\bf 24}$ & ${\bf 75}$ & ${\bf 200}$ &
$\stackrel{\textstyle O-II}{\delta_{GS}=-4.7}$ \cr
\hline
$m_{\gl}$ & 285 & 285 & 287 & 288 & 313 \cr
${m_{\sur}}$ & 302 & 301 & 326 & 394 & - \cr
$m_{\stopone}$ & 255 & 257 & 235 & 292 & - \cr
$m_{\stoptwo}$ & 315 & 321 & 351 & 325 & - \cr
$m_{\sbl}$ & 266 & 276 & 307 & 264 & - \cr
$m_{\sbr}$ & 303 & 303 & 309 & 328 & - \cr
$m_{\slepr}$ & 207 & 204 & 280 & 437 & - \cr
$m_{\slepl}$ & 216 & 229 & 305 & 313 & - \cr
$m_{\cnone}$ & 44.5 & 12.2 & 189 & 174.17 & 303.09 \cr
$m_{\cntwo}$ & 97.0 & 93.6 & 235 & 298 & 337 \cr
$m_{\cpmone}$ & 96.4 & 90.0 & 240 & 174.57 & 303.33 \cr
$m_{\cpmtwo}$ & 275 & 283 & 291 & 311 & - \cr
$m_{\hl}$ & 67 & 67 & 68 & 70 & 82 \cr
\hline
\end{tabular}
\end{small}
\caption{Sparticle masses for the Snowmass comparison point
in the different gaugino mass scenarios. Entries marked `-' for the O-II
model indicate very large masses.}
\label{susymasses}
\end{center}
\end{table}
The phenomenology of these scenarios for $e^+e^-$ collisions
is not entirely straightforward.
\begin{itemize}
\item
In the $\bf 75$ model, $\tilde{\chi}^+_1\tilde{\chi}^-_1$ and $\tilde{\chi}^0_2\cntwo$
pair production at $\sqrt s=500\,{\rm GeV}$ are
barely allowed kinematically; the phase space for $\tilde{\chi}^0_1\tilde{\chi}^0_2$
is only somewhat better. All the signals would be rather weak,
but could probably be extracted with sufficient integrated luminosity.
\item
In the $\bf 200$ model, $e^+e^-\to \tilde{\chi}^+_1\tilde{\chi}^-_1$ production would be
kinematically allowed at a $\sqrt s=500\,{\rm GeV}$ NLC, but not easily observed
because the (invisible) $\tilde{\chi}^0_1$ would take essentially all of
the energy in the $\tilde{\chi}^{\pm}_1$ decays. However,
according to the results of Ref.~\badcite{moddominated},
$e^+e^-\to \gamma\tilde{\chi}^+_1\tilde{\chi}^-_1$ would be observable at $\sqrt s=500\,{\rm GeV}$.
\item
The O-II model with $\delta_{GS}$ near $-4$ predicts that $m_{\cpmone}$
and $m_{\cnone}$ are both rather close to $m_{\gl}$, so that
$e^+e^-\to \tilde{\chi}^+_1\tilde{\chi}^-_1,\tilde{\chi}^0_1\cnone$ would {\it not} be kinematically allowed
at $\sqrt s = 500\,{\rm GeV}$.
The only SUSY ``signal'' would be the presence of a very SM-like
light Higgs boson.
\end{itemize}
At the LHC, the strongest signal for SUSY would arise from ${\tilde g}\gl$
production. The different models lead to very distinct signatures
for such events. To see this, it is sufficient to list the primary
easily identifiable decay chains of the gluino for each of the five scenarios.
(In what follows, $q$ denotes any quark other than a $b$.)
\begin{eqnarray*}
{\bf 1:} && {\tilde g} \stackrel{90\%}{\to} {\tilde{b}_L}\overline b \stackrel{99\%}{\to}
\tilde{\chi}^0_2 b\overline b\stackrel{33\%}{\to} \tilde{\chi}^0_1(e^+e^-~{\rm or}~\mu^+\mu^-)b\overline b\\
\phantom{{\bf 1:}} &&
\phantom{{\tilde g} \stackrel{90\%}{\to} {\tilde{b}_L}\overline b \stackrel{99\%}{\to}
\tilde{\chi}^0_2 b\overline b}
\stackrel{8\%}{\to} \tilde{\chi}^0_1 \nu\overline\nu b\overline b\\
\phantom{{\bf 1:}} &&
\phantom{{\tilde g} \stackrel{90\%}{\to} {\tilde{b}_L}\overline b \stackrel{99\%}{\to}
\tilde{\chi}^0_2 b\overline b}
\stackrel{38\%}{\to} \tilde{\chi}^0_1 q\overline q b\overline b\\
\phantom{{\bf 1:}} &&
\phantom{{\tilde g} \stackrel{90\%}{\to} {\tilde{b}_L}\overline b \stackrel{99\%}{\to}
\tilde{\chi}^0_2 b\overline b}
\stackrel{8\%}{\to} \tilde{\chi}^0_1 b\overline b b\overline b\\
{\bf 24:} && {\tilde g} \stackrel{85\%}{\to} {\tilde{b}_L}\overline b
\stackrel{70\%}{\to} \tilde{\chi}^0_2 b\overline b\stackrel{99\%}{\to}
h^0 \tilde{\chi}^0_1 b\overline b\stackrel{28\%}{\to} \tilde{\chi}^0_1 b\overline b b\overline b \\
\phantom{ {\bf 24:}} &&
\phantom{ {\tilde g} \stackrel{85\%}{\to} {\tilde{b}_L}\overline b
\stackrel{70\%}{\to} \tilde{\chi}^0_2 b\overline b\stackrel{99\%}{\to}
h^0 \tilde{\chi}^0_1 b\overline b}
\stackrel{69\%}{\to} \tilde{\chi}^0_1 \tilde{\chi}^0_1\cnone b\overline b \\
{\bf 75:} && {\tilde g} \stackrel{43\%}{\to} \tilde{\chi}^0_1 g~{\rm or}~\tilde{\chi}^0_1 q\overline q \\
\phantom{{\bf 75:}} && \phantom{{\tilde g}}\stackrel{10\%}{\to} \tilde{\chi}^0_1 b\overline b \\
\phantom{{\bf 75:}} && \phantom{{\tilde g}}\stackrel{20\%}{\to}
\tilde{\chi}^0_2 g~{\rm or}~\tilde{\chi}^0_2 q\overline q \\
\phantom{{\bf 75:}} && \phantom{{\tilde g}}\stackrel{10\%}{\to} \tilde{\chi}^0_2 b\overline b \\
\phantom{{\bf 75:}} && \phantom{{\tilde g}}\stackrel{17\%}{\to} \tilde{\chi}^{\pm}_1 q\overline q \\
{\bf 200:} && {\tilde g} \stackrel{99\%}{\to} {\tilde{b}_L}\overline b
\stackrel{100\%}{\to} \tilde{\chi}^0_1 b\overline b \\
{\bf O-II:} && {\tilde g} \stackrel{51\%}{\to} \tilde{\chi}^{\pm}_1 q\overline q \\
\phantom{{\bf O-II:}} && \phantom{{\tilde g}} \stackrel{17\%}{\to} \tilde{\chi}^0_1 g \\
\phantom{{\bf O-II:}} &&
\phantom{{\tilde g}} \stackrel{26\%}{\to} \tilde{\chi}^0_1 q\overline q \\
\phantom{{\bf O-II:}} &&
\phantom{{\tilde g}} \stackrel{6\%}{\to} \tilde{\chi}^0_1 b\overline b
\end{eqnarray*}
Gluino pair production will then lead to the following strikingly
different signals.
\begin{itemize}
\item
In the $\bf 1$ scenario we expect a very large
number of final states with missing energy,
four $b$-jets and two lepton-antilepton pairs.
\item
For $\bf 24$, an even larger number of events will have missing energy
and eight $b$-jets, four of which reconstruct to two pairs with
mass equal to (the known) $m_{\hl}$.
\item
The signal for ${\tilde g}\gl$ production in the case of $\bf 75$ is
much more traditional; the primary decays yield
multiple jets (some of which are
$b$-jets) plus $\tilde{\chi}^0_1$, $\tilde{\chi}^0_2$ or $\tilde{\chi}^{\pm}_1$.
Additional jets, leptons and/or neutrinos
arise when $\tilde{\chi}^0_2\to\tilde{\chi}^0_1$ + two jets,
two leptons or two neutrinos or $\tilde{\chi}^{\pm}_1\to\tilde{\chi}^0_1$ +
two jets or lepton+neutrino.
\item
In the $\bf 200$ scenario, we find missing energy plus four $b$-jets;
only $b$-jets appear in the primary decay -- any other
jets present would have to come from initial- or final-state radiation,
and would be expected to be softer on average. This is almost
as distinctive a signal as the $8b$ final state found in the $\bf 24$
scenario.
\item
In the final O-II scenario, $\tilde{\chi}^{\pm}_1\to \tilde{\chi}^0_1$ + very soft
spectator jets or leptons that would not be easily detected. Even
the $q\overline q$ or $g$ from the primary decay would not be very energetic
given the small mass splitting between $m_{\gl}$ and $m_{\cpmone}\sim m_{\cnone}$.
Soft jet cuts would have to be used to dig out this signal,
but it should be possible given the very high ${\tilde g}\gl$ production rate
expected for this low $m_{\gl}$ value; see Ref.~\badcite{moddominated}.
\end{itemize}
Thus, for the Snowmass comparison point,
distinguishing between the different boundary condition scenarios
at the LHC will be easy. Further, the event rate
for a gluino mass this low is such that the end-points of
the various lepton, jet or $h^0$ spectra will allow relatively good
determinations of the mass differences between the sparticles appearing
at various points in the final state decay chain.
We are optimistic that this will prove to be
a general result so long as event rates are large.
\subsubsection{Non-Universal Scalar Masses}
Once again we focus on the Snowmass overlap point. We maintain
gaugino mass universality at $M_{\rm GUT}$, but allow for non-universality
for the squark masses. Of the many possibilities, we focus
on the case where only $c_Q\neq 0$ with $A_0=0$ (as assumed
for the Snowmass overlap point). The phenomenology for this case
is compared to that which would emerge if we take $A_0\neq 0$
with all the $c_i=0$.
Consider the ${\tilde g}$ branching ratios as a function
of $m_{\stl}=m_{\sbl}$ as $c_Q$ is varied from negative to positive values.
As the common mass crosses the threshold above which
the ${\tilde g}\to \sbot_1 b$ decay becomes kinematically disallowed,
we revert to a more standard SUSY scenario in which ${\tilde g}$
decays are dominated by modes such as $\tilde{\chi}^{\pm}_1 q\overline q$,
$\tilde{\chi}^0_1 q\overline q$, $\tilde{\chi}^0_2 q\overline q$ and $\tilde{\chi}^0_2 b\overline b$.
For low enough $m_{\stl}$, the ${\tilde g}\to \stop_1 t$ mode opens
up, but must compete with the ${\tilde g}\to\sbot_1 b$ mode
that has even larger phase space.
In contrast, if $A_t$ is varied,
the ${\tilde g}$ branching ratios remain essentially constant until
$m_{\stopone}$ is small enough that ${\tilde g}\to \stop_1 t$ is kinematically
allowed. Below this point, this latter mode quickly dominates
the $\sbot_1 b$ mode which continues to have very small phase
space given that the $\sbot_1$ mass remains essentially constant
as $A_t$ is varied.
\subsection{Event Generation}
A thorough search and determination
of the rates (or lack thereof) for the full panoply of possible
channels is required to distinguish the many possible
GUT-scale boundary conditions from one another. In the program {\tt ISAJET},
independent weak-scale gaugino masses may be input using the $MSSM4$
keyword. Independent third generation
squark masses may be input via the $MSSM2$ keyword.
The user must supply a program to generate the relevant weak-scale parameter
values from the specific GUT-scale assumptions.
Relevant weak-scale MSSM parameters can also be input to {\tt Spythia};
as with {\tt ISAJET}, the user must provide a program for the specific
model.
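As an illustration of the user-supplied program mentioned above, the following sketch converts GUT-scale gaugino mass ratios (as in Table~\ref{masses}) to approximate weak-scale inputs via the one-loop scaling $M_a(m_Z)\simeq[\alpha_a(m_Z)/\alpha_{\rm GUT}]\,M_a(M_{\rm GUT})$. The coupling values and function names are our own illustrative assumptions, not part of {\tt ISAJET} or {\tt Spythia}:

```python
# Illustrative sketch (not part of ISAJET/Spythia): map GUT-scale gaugino
# mass ratios to approximate weak-scale masses via one-loop scaling,
#   M_a(mZ) ~ [alpha_a(mZ)/alpha_GUT] * M_a(M_GUT).
# The coupling values below are rough, assumed numbers.

ALPHA_GUT = 1.0 / 24.0                       # assumed unified coupling
ALPHA_MZ = {1: 0.017, 2: 0.034, 3: 0.118}    # GUT-normalized alpha_a(mZ), approximate

# Relative (M3, M2, M1) at M_GUT for each F_Phi representation (Table values).
GUT_RATIOS = {
    "1":   (1.0,  1.0,  1.0),
    "24":  (2.0, -3.0, -1.0),
    "75":  (1.0,  3.0, -5.0),
    "200": (1.0,  2.0, 10.0),
}

def weak_scale_masses(irrep, m3_gut):
    """Approximate (M3, M2, M1) at mZ, given M3 at M_GUT in GeV."""
    r3, r2, r1 = GUT_RATIOS[irrep]
    # The Table normalizes each pattern to M3 = 1 (or 2) at M_GUT.
    return tuple((ALPHA_MZ[a] / ALPHA_GUT) * r * m3_gut
                 for a, r in ((3, r3), (2, r2), (1, r1)))

M3, M2, M1 = weak_scale_masses("1", 100.0)   # universal case, M3^0 = 100 GeV
```

For the universal case this reproduces the rough $6:2:1$ pattern of Table~\ref{masses}; the resulting weak-scale masses can then be supplied to the generators through the keywords above.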
\newcommand{ \PLBold }[3]{Phys. Lett. {\bf #1B} (#2) #3}
\newcommand{ \PRold }[3]{Phys. Rev. {\bf #1} (#2) #3}
\newcommand{ \PREP }[3]{Phys. Rep. {\bf #1} (#2) #3}
\newcommand{ \ZPC }[3]{Z. Phys. C {\bf #1} (#2) #3}
\def\slashchar#1{\setbox0=\hbox{$#1$}
\dimen0=\wd0
\setbox1=\hbox{/} \dimen1=\wd1
\ifdim\dimen0>\dimen1
\rlap{\hbox to \dimen0{\hfil/\hfil}}
#1
\else
\rlap{\hbox to \dimen1{\hfil$#1$\hfil}}
/
\fi} %
\section{MSSM Scenarios Motivated by Data}
An alternative procedure for gleaning
information about the SUSY soft terms is to use the full
parameter-space freedom of the MSSM ($>$100 parameters) and match
to data, assuming one has a supersymmetry signal. This approach has been
used in the following two examples.
\subsection{The CDF $e^+e^-\gamma\gamma +\slashchar{E}_T$ Event}
Recently a candidate for sparticle production has been
reported~\badcite{event} by the CDF collaboration.
This has been interpreted in several
ways~\badcite{PRL,DimopoulosPRL,DimopoulosSecond,Grav}
and later with additional variations~\badcite{kolda,LopezNanopoulos,Hisano}.
The main two paths are whether
the LSP is the lightest neutralino~\badcite{PRL,sandro}, or a
nearly massless gravitino~\badcite{DimopoulosPRL,DimopoulosSecond,
Grav,kolda,LopezNanopoulos} or axino~\badcite{Hisano}. In the
gravitino or axino case the LSP is not a candidate for cold dark matter,
SUSY can have no effect on $R_b$ or $\alpha^Z_s$ or $BR(b\to s\gamma),$
and stops and gluinos are not being observed at FNAL. In the case where
the lightest neutralino is the LSP, the opposite holds for all of
these observables, and we will pursue this case in detail here.
The SUSY Lagrangian depends on a number of parameters, all of
which have the dimension of mass. That should not be viewed as a weakness
because at present we have no theory of the origin of mass parameters.
Probably getting such a theory will depend on understanding how
SUSY is broken. When there is no data on sparticle masses and
couplings, it is appropriate to make simplifying assumptions,
based on theoretical prejudice,
to reduce the number of parameters. However, once there may be data,
it is important to constrain the most general set of parameters and
see what patterns emerge.
We proceed by making no assumptions about soft breaking parameters.
In practice, even though the full theory has over a hundred such
parameters, that is seldom a problem since any given observable depends
on at most a few.
The CDF event ~\badcite{event} has a 36 GeV $e^-$, a 59 GeV $e^+$, photons of
38 and 30 GeV, and $\slashchar{E}_T = 53$ GeV.
A SUSY interpretation is $q\bar q\to \gamma^{*}, Z^{*} \to {\tilde e}^+
{\tilde e}^-$, followed by each ${\tilde e}^\pm \to e^\pm \tilde{\chi}_2^0,$
$\tilde{\chi}_2^0 \to \gamma\tilde{\chi}_1^0.$ The second lightest
neutralino, $\tilde{\chi}_2^0$, must be
photino-like since it couples strongly to $\tilde ee$. Then the
LSP = $\tilde{\chi}_1^0$ must be
Higgsino-like ~\badcite{Komatsu,HaberWyler,AmbrosMele1} to
have a large $BR(\tilde{\chi}_2^0 \to\tilde{\chi}_1^0 \gamma).$
The range of parameter choices for
this scenario are given in Table \ref{eeggtab}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|} \hline
\multicolumn{2}{|c|}{$e^+e^-\gamma\gamma + \slashchar{E}_T$
constraints on supersymmetric parameters}
\\ \hline \hline
$\tilde e_L$ & $\tilde e_R$ \\ \hline
$100 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} m_{\tilde e_L} \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 130 \; {\rm GeV}$
& $100 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} m_{\tilde e_R} \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 112 \; {\rm GeV}$ \\
$50 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} M_1 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 92 \; {\rm GeV}$
& $60 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} M_1 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 85 \; {\rm GeV}$ \\
$50 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} M_2 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 105 \; {\rm GeV}$
& $40 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} M_2 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 85 \; {\rm GeV}$ \\
$0.75 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} M_2/M_1 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 1.6$
& $0.6 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} M_2/M_1 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 1.15$ \\
$-65 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \mu \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} -35 \; {\rm GeV}$
& $-60 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \mu \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} -35 \; {\rm GeV}$ \\
$0.5 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} |\mu|/M_1 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 0.95$
& $0.5 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} |\mu|/M_1 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 0.8$ \\
$1 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \tan \beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 3 $
& $1 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \tan \beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 2.2$ \\ \hline
\end{tabular}
\caption{Constraints on the MSSM parameters and masses in the
neutralino LSP scenario.
}
\label{eeggtab}
\end{center}
\end{table}
If light superpartners indeed exist, FNAL and LEP will
produce thousands of them, and measure their properties very well.
The first thing to check at FNAL is whether the produced selectron is
${\tilde e}_L$ or ${\tilde e}_R.$ If ${\tilde e}_L,$ then the charged current
channel $u\overline{d} \to W^+ \to {\tilde e}_L \tilde \nu$ has 5--10 times
the rate of ${\tilde e}_L^+ {\tilde e}_L^-$.
We expect ${\tilde e}_L \to e\tilde{\chi}_2^0(\to \gamma\tilde{\chi}_1^0).$
Most likely
~\badcite{sandro} $\tilde \nu \to e \tilde{\chi}_1^{\pm},$
where $\tilde{\chi}_1^{\pm}$ is the lightest
chargino. If the stop mass $m_{\tilde t} < m_{\tilde{\chi}_1^{\pm}}$, then
$\tilde{\chi}_1^{\pm} \to \tilde t (\to c\tilde{\chi}_1^0)b$
so $\tilde \nu \to ebc\tilde{\chi}_1^0$; if
$m_{\tilde t} > m_{\tilde{\chi}_1^{\pm}}$
then $\tilde{\chi}_1^{\pm} \to W^* (\to jj)\tilde{\chi}_1^0$ so $\tilde \nu
\to ejj\tilde{\chi}_1^0,$ where $j=u,d,s,c.$
Either way, dominantly ${\tilde e}_L \tilde \nu
\to ee\gamma jj \slashchar{E}_T$ where $j$ may be light or heavy quarks.
If no such signal is found, probably the produced selectron was ${\tilde e}_R$.
Also, $\sigma (\tilde\nu\tilde\nu) \cong \sigma({\tilde e}_L {\tilde e}_L)$.
Cross sections for many channels are given in Ref.~\badcite{sandro}.
The most interesting channel (in our opinion)
at FNAL is $u\overline{d} \to W^+ \to \tilde{\chi}^+_i
\tilde{\chi}_2^0.$ This gives a signature $\gamma jj \slashchar{E}_T,$ for
which
there are only small parton-level SM backgrounds.
If $m_{\tilde t} < m_{\tilde{\chi}^{\pm}_i}$, one of the jets
is a $b$. If $t\to \tilde t \tilde{\chi}_2^0$ (expected about 10\%
of the time) and
if $\tilde q$ are produced at FNAL, there are additional sources of such
events (see below).
If charginos, neutralinos and sleptons are light, then gluinos and
squarks may not be too heavy. If stops are light ($m_{{\tilde t}_1}\simeq
M_W$), then $BR(t\to
\tilde t \tilde{\chi}^0_i) \simeq 1/2$ ~\badcite{wells}.
In this case, an extra source
of tops must exist beyond SM production,
because $\sigma \times BR(t\to Wb)^2$ is near or above its SM
value with $BR(t\to Wb)=1.$
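The required enhancement is simple counting (our arithmetic, with $\sigma^{\rm obs}$ denoting the rate observed in the $Wb\,Wb$ channels): if $BR(t\to \tilde t \tilde{\chi}^0_i)\simeq 1/2$ then $BR(t\to Wb)\simeq 1/2$, so with SM production alone
\begin{equation}
\sigma^{\rm obs}\simeq \sigma^{\rm SM}_{t\bar t}\,BR(t\to Wb)^2
\simeq {1\over 4}\,\sigma^{\rm SM}_{t\bar t}\,,
\end{equation}
and matching a measured rate near or above $\sigma^{\rm SM}_{t\bar t}$ itself requires roughly four times the SM amount of top production, i.e.\ a substantial SUSY source of tops.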
With these motivations,
the authors of ~\badcite{KaneMrenna}
have suggested that one assume $m_{\tilde g} \geq m_t + m_{\tilde t}$
and $m_{\tilde q} \geq m_{\tilde g}$, with
$m_{\tilde q}\simeq 250$--$300$ GeV. Then there are several pb of top
production via channels $\tilde q \tilde g, \tilde g \tilde g, \tilde q
\bar{\tilde q}$ with $\tilde q \to q \tilde g,$ and $\tilde g \to t
\tilde t$ since $t\tilde t$ is the gluino's only two-body
decay mode.
This analysis points out that
$P_T(t\bar t)$ should peak at smaller $P_T$ for the SM than
for the SUSY scenario, since the system is recoiling against extra jets in
the SUSY case.
The SUSY case suggests that if
$m_t$ or $\sigma_{t\bar t}$ is measured in different channels one will
obtain different values, which may be consistent with reported data.
This analysis also argues that the present data is consistent with
$BR(t\to \tilde t \tilde{\chi}^0_i)= 1/2.$
At present~\badcite{Rapporteur} $R_b$ and $BR(b\to s\gamma)$ differ from their
SM predictions by 1.5--2$\sigma$, and $\alpha_s$ measured by the $Z$
width differs by about 1.5--2$\sigma$ from its value measured in DIS and
other ways. If these effects are real they can be explained by
$\tilde{\chi}^{\pm}_i$ - $\tilde t$ loops,
using the same SUSY parameters
deduced from the $ee\gamma\gamma$ event ($+$ a light, mainly
right-handed, stop). Although $\tan\beta, \mu,$ and $M_2$ a priori could be
anything, they come out the same from the analysis of these loops as
from $ee\gamma\gamma$ ($\tan\beta \leq 1.5, \mu \sim -m_Z/2, M_2 \sim
60$--$80$ GeV).
The LSP=$\tilde{\chi}_1^0$ apparently escapes the CDF detector in the
$ee\gamma\gamma$ event, suggesting it is stable (though only proving it
lives longer than $\sim 10^{-8}$ sec). If so it is a candidate for CDM.
The properties of $\tilde{\chi}_1^0$ are deduced from the
analysis ~\badcite{sandro} so the calculation of the relic density
~\badcite{KaneWells} is highly constrained. The analysis shows that the
s-channel annihilation of $\tilde{\chi}_1^0\tilde{\chi}_1^0$ through the $Z$
dominates, so the needed
parameters are $\tan\beta$, $m_{\tilde{\chi}_1^0}$
and the Higgsino fraction for
$\tilde{\chi}_1^0$, which is large.
The results are encouraging, giving $0.1
\leq \Omega h^2 \leq 1,$ with a central value $\Omega h^2 \simeq 1/4.$
The parameter choices of Table \ref{eeggtab} can be
input to event generators such as {\tt Spythia} or {\tt ISAJET}
(via $MSSMi$ keywords) to check that
the event rate and kinematics of the $ee\gamma\gamma$ event are
satisfied and then to determine other related signatures. {\tt Spythia}
includes the $\tilde\chi_2^0\rightarrow\tilde{\chi}_1^0\gamma$ branching
ratio for low $\tan\beta$ values; for {\tt ISAJET}, the
$\tilde{\chi}_2^0\rightarrow\tilde{\chi}_1^0\gamma$ branching must be input
using the $FORCE$ command, or must be explicitly added into the decay table.
\subsection{CDF/D0 Dilepton Plus Jets Events}
Recently, CDF and D0 have reported various dilepton plus multi-jet
events which are presumably top-quark candidate events. For several of
these events, however, the
event kinematics do not match well with those expected from a top
quark with mass $m_t\sim 175$ GeV. The authors of Ref.~\badcite{barnett}
have shown that the match to event kinematics can be improved by
hypothesizing a supersymmetry source for the recalcitrant events.
The supersymmetry source is best matched by considering
$\tilde q\tilde q$ production, where each
$\tilde q\rightarrow q\tilde{\chi},\tilde{\chi}\rightarrow\nu\tilde{l},
\tilde{l}\rightarrow l\tilde{\chi}_1^0$. A recommended set of parameters
is as follows~\badcite{barnett}:
$m_{\tilde g}\simeq 330$ GeV, $m_{\tilde q}\simeq 310$ GeV,
$m_{\tilde{l}_L}\simeq 220$ GeV, $m_{\tilde\nu}\simeq 220$ GeV,
$m_{\tilde{l}_R}\simeq 130$ GeV, $\mu\simeq -400$ GeV,
$M_1\simeq 50$ GeV and $M_2\simeq 260$ GeV. Note that this parameter
set discards the common hypothesis of gaugino mass unification.
These parameters can be input into {\tt Spythia} or
{\tt ISAJET} (via $MSSMi$ keywords), taking care to use the non-unified
gaugino masses as inputs.
\section{$R$ Parity Violation}
\def\dofig#1#2{\centerline{\epsfxsize=#1\epsfbox{#2}}%
\vskip-.2in}
$R$ parity ($R$) is a quantum number which is $+1$ for any ordinary
particle, and $-1$ for any sparticle. $R$-violating $(\slashchar{R})$
interactions occur naturally in supersymmetric theories, unless they
are explicitly forbidden. Each $\slashchar{R}$ coupling also violates
either lepton number $L$, or baryon number $B$. Together, these
couplings violate both $L$ and $B$, and lead to tree-level diagrams
which would make the proton decay at a rate in gross violation
of the observed bound. To forbid such rapid decay, such
$\slashchar{R}$ couplings are normally set to zero. However, what if such
couplings are actually present?
In supersymmetry with minimal field content, the allowable
$\slashchar{R}$ part of the superpotential is
\begin{equation}
W_{\slashchar{R}}=\lambda_{ijk}L_iL_j\bar E_k
+ \lambda^{\prime}_{ijk} L_iQ_j \bar D_k
+ \lambda^{\prime\prime}_{ijk} \bar U_i\bar D_j\bar D_k.
\end{equation}
Here, $L$, $Q$, $\bar E$, $\bar U$, and $\bar D$ are superfields
containing, respectively, lepton and quark doublets, and charged
lepton, up quark, and down quark singlets. The indices $i,j,k$, over
which summation is implied, are generational indices. The first term
in $W_{\slashchar{R}}$ leads to $L$-violating $(\slashchar{L})$
transitions such as $e+\nu_\mu\to\tilde e$. The second one leads to
$\slashchar{L}$ transitions such as $u+\bar d\to\bar{\tilde e}$. The
third one produces $\slashchar{B}$ transitions such as $\bar u+\bar
d\to\tilde d$. To forbid rapid proton decay, it is often assumed that
if $\slashchar{R}$ transitions are indeed present, then only the
$L$-violating $\lambda$ and $\lambda^{\prime}$ terms occur, or only
the $B$-violating $\lambda^{\prime\prime}$ term occurs, but not both.
While the flavor components of $\lambda '\lambda ''$ involving $u,\ d,\ s$
are experimentally constrained to be $<10^{-24}$ from proton decay limits,
the other components of $\lambda '\lambda ''$ and $\lambda \lambda ''$ are
significantly less tightly constrained.
Upper bounds on the $\slashchar{R}$ couplings $\lambda$,
$\lambda^{\prime}$, and $\lambda^{\prime\prime}$ have been inferred
from a variety of low-energy processes, but most of these bounds are
not very stringent. An exception is the bound on
$\lambda^{\prime}_{111}$, which comes from the impressive lower limit
of $9.6\times 10^{24}\,{\rm yr}$~\badcite{1} on the half-life for the
neutrinoless double beta decay ${}^{76}{\rm Ge}\to {}^{76}{\rm
Se}+2e^-$. At the quark level, this decay is the process $2d\to
2u+2e^-$. If $\lambda^{\prime}_{111}\ne O$, this process can be
engendered by a diagram in which two $d$ quarks each undergo the
$\slashchar{R}$ transition $d\to\tilde u+e^-$, and then the two
produced $\tilde u$ squarks exchange a $\tilde g$ to become two $u$
quarks. It can also be engendered by a diagram in which $2d\to
2\tilde d$ by $\tilde g$ exchange, and then each of the $\tilde d$
squarks undergoes the $\slashchar{R}$ transition
$\tilde d\to u+e^-$. Both of these diagrams are proportional to
$\lambda^{\prime 2}_{111}$. If we assume that the squark masses
occurring in the two diagrams are equal, $m_{{\tilde u}_{L}}\simeq
m_{{\tilde d}_{R}}\equiv m_{\tilde q}$, the previously quoted limit on
the half-life implies that ~\badcite{2}
\begin{equation}
\vert\lambda^{\prime}_{111}\vert < 3.4\times 10^{-4}
\left(\frac{m_{\tilde q}}{100\ {\rm GeV}}\right)^2
\left(\frac{m_{\tilde g}}{100\ {\rm GeV}}\right)^{1/2}.
\end{equation}
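For orientation (our arithmetic), for $m_{\tilde q}=m_{\tilde g}=200\ {\rm GeV}$ this bound relaxes only to
\begin{equation}
\vert\lambda^{\prime}_{111}\vert < 3.4\times 10^{-4}\times (2)^2\times (2)^{1/2}
\simeq 1.9\times 10^{-3}\,,
\end{equation}
still far below the $\sim 0.1$ level typical of the other couplings discussed below.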
It is interesting to recall that if the amplitude for neutrinoless
double beta decay is, for whatever reason, nonzero, then the electron
neutrino has a nonzero mass ~\badcite{3}. Thus, if
$\lambda^{\prime}_{1jj}\ne 0$, SUSY interactions lead to nonzero
neutrino mass ~\badcite{3p5}.
The way ~\badcite{4} in which low-energy processes constrain many of the
$\slashchar{L}$ couplings $\lambda$ and $\lambda^{\prime}$ is
illustrated by consideration of nuclear $\beta^-$ decay and $\mu^-$
decay. In the Standard Model (SM), both of these decays result from
$W$ exchange alone, and the comparison of their rates tells us about
the CKM quark mixing matrix. However, in the presence of
$\slashchar{R}$ couplings, nuclear $\beta^-$ decay can receive a
contribution from $\tilde d$, $\tilde s$, or $\tilde b$ exchange, and
$\mu^-$ decay from $\tilde e$, $\tilde \mu$, or $\tilde \tau$
exchange. The information on the CKM elements which has been inferred
assuming that only $W$ exchange is present bounds these new
contributions, and it is found, for example, that ~\badcite{4}
\begin{equation}
\vert \lambda_{12k}\vert < 0.04
\left(\frac{m_{{\tilde e}^k_R}}{100\ {\rm GeV}}\right),
\end{equation}
for each value of the generation index $k$. In a similar fashion, a
number of low-energy processes together imply ~\badcite{4} that for many of the
$\slashchar{L}$ couplings $\lambda_{ijk}$ and
$\lambda^{\prime}_{ijk}$,
\begin{equation}
\vert \lambda^{(\prime)}_{ijk}\vert < (0.03 \to 0.26)
\left(\frac{m_{\tilde f}}{100\ {\rm GeV}}\right).
\end{equation}
Here, $m_{\tilde f}$ is the mass of the sfermion relevant to the bound
on the particular $\lambda^{(\prime)}_{ijk}$.
Bounds of order 0.1 have also been placed on the $\slashchar{L}$ couplings
$\lambda^{\prime}_{1jk}$ by searches for squarks formed through the action of
these couplings in $e^+p$ collisions at
HERA ~\badcite{4h}.
Constraints on the $\slashchar{B}$ couplings $\lambda^{\prime\prime}$
come from nonleptonic weak processes which are suppressed in the SM,
such as rare $B$ decays and $K-\bar K$ and $D-\bar D$ mixing ~\badcite{5}.
For example, the decay $B^+\to\overline{K^0}K^+$ is a penguin (loop)
process in the SM, but in the presence of $\slashchar{R}$ couplings
could arise from a tree-level diagram involving ${\tilde u}^k_R$
($k=1,2$, or $3$) exchange. The present upper bound on the branching
ratio for this decay ~\badcite{6} implies that ~\badcite{5}
\begin{equation}
\vert \lambda^{\prime\prime}_{k12}\lambda^{\prime\prime}_{k23}\vert^{1/2}
< 0.09 \left(\frac{m_{\tilde{u}^k_R}}{100\ {\rm GeV}}\right);\ k=1,2,3.
\end{equation}
Recently, bounds $\lambda'_{12k}<0.29$ and $\lambda'_{22k}<0.18$ for
$m_{\tilde q}=100$ GeV have been obtained from data on $D$ meson
decays ~\badcite{3p5}. For a recent review of constraints on $R$-violating
interactions, see Ref. ~\badcite{6p5}.
We see that for many of the $\slashchar{R}$ couplings
$\lambda_{ijk}$, $\lambda^{\prime}_{ijk}$ and
$\lambda^{\prime\prime}_{ijk}$, the existing upper bound,
quoted at a sfermion mass of 100 GeV, is $\sim$ 0.1.
We note that this upper bound is comparable to
the values of some of the SM gauge
couplings. Thus, $\slashchar{R}$ interactions could still prove to
play a significant role in high-energy collisions.
What effects of $\slashchar{R}$ might we see, and how would
$\slashchar{R}$ interactions affect future searches for SUSY? Let us
assume that $\slashchar{R}$ couplings are small enough that sparticle
production and decay are still dominated by gauge interactions, as in
the absence of $\slashchar{R}$. The main effect of $\slashchar{R}$ is
then that the LSP is no longer
stable, but decays into ordinary particles, quite possibly within the
detector in which it is produced. Thus, the LSP no longer carries
away transverse energy, and the missing transverse energy $(\slashchar{E}_T)$
signal, which is the mainstay of searches for SUSY when $R$ is assumed
to be conserved, is greatly degraded. (Production of SUSY particles
may still involve missing $E_T$, carried away by neutrinos.)
At future $e^+e^-$ colliders, sparticle production may include the processes
$e^+e^-\to\tilde{\chi}_i^+\tilde{\chi}_j^-$,
$\tilde{\chi}^0_i\tilde{\chi}^0_j$,
$\tilde{e}^+_L\tilde{e}^-_L$, $\tilde{e}^+_L\tilde{e}^-_R$,
$\tilde{e}^+_R \tilde{e}^-_L$, $\tilde{e}^+_R\tilde{e}^-_R$,
$\tilde{\mu}^+_L \tilde{\mu}^-_L$, $\tilde{\mu}^+_R\tilde{\mu}^-_R$,
$\tilde{\tau}^+_L\tilde{\tau}^-_L$, $\tilde{\tau}^+_R\tilde{\tau}^-_R$,
$\tilde{\nu}_L\bar{\tilde{\nu}}_L$. Here, the $\tilde{\chi}^{\pm}_i$ are
charginos, and the $\tilde{\chi}^0_i$ are neutralinos. Decay of the
produced sparticles will often yield high-$E_T$ charged leptons, which
can be exploited in searches for SUSY. Now, suppose the LSP is
the lightest neutralino, $\tilde{\chi}^0_1$. If the $\slashchar{L}$,
$\slashchar{R}$ couplings $\lambda$ are nonzero, the $\tilde{\chi}^0_1$
can have the decays $\tilde{\chi}^0_1\to\mu\bar e\nu,\ e\bar e\nu$.
These yield high-energy leptons, so the strategy of looking for the
latter to seek evidence of SUSY will still work. However, if the
$\slashchar{B}$, $\slashchar{R}$ couplings $\lambda^{\prime\prime}$
are nonzero, the $\tilde{\chi}^0_1$ can have the decays
$\tilde{\chi}^0_1\to cds, \bar c\bar d\bar s$. When followed by these
decays, the production process $e^+e^-\to\tilde{\chi}^0_1\tilde{\chi}^0_1$
yields six jets which form a pair of three-jet systems. The
invariant mass of each system is $m_{{\tilde{\chi}^0_1}}$, and there is
no missing energy. This is quite an interesting signature.
Nonvanishing $\slashchar{L}$ and $\slashchar{R}$ couplings
$\lambda$ would also make possible resonant sneutrino production in $e^+e^-$
collisions~\badcite{4}. For example, we could have
$e^+e^-\to\tilde{\nu}_\mu\to\tilde{\chi}^{\pm}_1\mu^{\mp},
\tilde{\chi}^0_1\nu_\mu$. At the resonance peak, the cross section
times branching ratio could be large ~\badcite{4}.
In future experiments at hadron colliders, one can seek evidence of
gluino pair production by looking for the multilepton signal that may
result from cascade decays of the gluinos. This signal will be
affected by the presence of $\slashchar{R}$ interactions. The worst
case is where the LSP decays via $\slashchar{B}$, $\slashchar{R}$
couplings to yield hadrons. The presence of these hadrons can cause
leptons in SUSY events to fail the lepton isolation criteria,
degrading the multilepton signal ~\badcite{7}. This reduces considerably the reach
in $m_{\tilde g}$ of the Tevatron. At the
Tevatron with an integrated luminosity of 0.1 fb$^{-1}$, there is {\it no}
reach in $m_{\tilde g}$, while for 1 fb$^{-1}$
it is approximately 200 GeV ~\badcite{7}, if $m_{\tilde q}=2m_{\tilde g}$.
At the LHC with an
integrated luminosity of 10 fb$^{-1}$, the reach extends beyond
$m_{\tilde g}=1\ {\rm TeV}$, even in the presence of $\slashchar{B}$ and
$\slashchar{R}$ interactions ~\badcite{8}.
If $\slashchar{R}$ couplings are large, then many production and decay
mechanisms in conventional SUSY event generators will need to be recomputed.
The results would be very model dependent, owing to the large parameter space
in the $\slashchar{R}$ sector. If $\slashchar{R}$ couplings are assumed
small, so that gauge and Yukawa interactions still dominate production
and decay mechanisms, then event generators can be used by simply
adding in the appropriate expected decays of the LSP (see the approach in
Ref. ~\badcite{7,8}). For {\tt ISAJET}, the relevant LSP decays must be
explicitly added (by hand) to the {\tt ISAJET} decay table.
\section{Gauge-Mediated Low-Energy Supersymmetry Breaking}
\subsection{Introduction}
Supersymmetry breaking must be transmitted from the
supersymmetry-breaking sector to the visible sector through
some messenger sector.
Most phenomenological studies of supersymmetry implicitly assume
that messenger-sector interactions are of gravitational strength.
It is possible, however, that the messenger scale for transmitting
supersymmetry breaking is anywhere between the Planck and just
above the electroweak scale.
The possibility of
supersymmetry breaking at a low scale has two important
consequences.
First,
it is likely that
the standard-model gauge interactions play some role in the
messenger sector.
This is because standard-model gauginos couple
at the renormalizable level only through
gauge interactions.
If Higgs bosons received mass predominantly
from non-gauge interactions,
the standard-model gauginos would be unacceptably lighter than
the electroweak scale.
Second, the gravitino is naturally the lightest supersymmetric
particle (LSP).
The lightest standard-model superpartner
is then the next-to-lightest supersymmetric particle (NLSP).
Decays of the NLSP to its partner plus the Goldstino component
of the gravitino within a detector lead to very distinctive
signatures.
In the following subsections the minimal model of gauge-mediated
supersymmetry breaking, and the experimental
signatures of decay to the Goldstino, are presented.
\subsection{The Minimal Model of Gauge-Mediated
Supersymmetry Breaking}
The standard-model gauge interactions act as messengers of
supersymmetry breaking if fields within the supersymmetry-breaking
sector transform under the standard-model gauge group.
Integrating out these messenger-sector fields gives rise to
standard-model gaugino masses at one-loop, and scalar masses
squared at two loops.
Below the messenger scale the particle content is just that of
the MSSM plus the essentially massless Goldstino discussed in
the next subsection.
The minimal model of gauge-mediated supersymmetry breaking
(which preserves the successful predictions of perturbative
unification) consists of messenger fields which transform as a single
flavor of ${\bf 5} + {\bar{\bf 5}}$ of $SU(5)$, i.e. there are
triplets, $q$ and $\bar{q}$, and doublets, $l$ and $\bar{l}$.
These fields couple to a single gauge singlet field, $S$,
through the superpotential
\begin{equation}
W = \lambda_3 S q \bar{q} + \lambda_2 S l \bar{l}.
\end{equation}
A non-zero expectation value for the scalar component of
$S$ defines the messenger scale, $M = \lambda S$, while
a non-zero expectation value for the auxiliary component, $F$,
defines the supersymmetry-breaking scale within the messenger sector.
For ${F} \ll \lambda S^2 $,
the one-loop visible-sector gaugino masses at the messenger scale
are given by ~\badcite{gms}
\begin{equation}
m_{\lambda_i}=c_i\
{\alpha_i\over4\pi}\ \Lambda\
\end{equation}
where $c_1 =c_2=c_3=1$ (we define $g_1=\sqrt{5\over 3}g'$), and
$\Lambda = F / S $.
The two-loop squark and slepton masses squared at the messenger
scale are ~\badcite{gms}
\begin{equation}
\tilde{m}^2 ={2 \Lambda^2}
\left[
C_3\left({\alpha_3 \over 4 \pi}\right)^2
+C_2\left({\alpha_2\over 4 \pi}\right)^2
+{3 \over 5}{\left(Y\over2\right)^2}
\left({\alpha_1\over 4 \pi}\right)^2\right]
\end{equation}
where $C_3 = {4 \over 3}$
for color triplets and zero for singlets, $C_2= {3 \over 4}$ for
weak doublets and zero for singlets,
and $Y$ is the ordinary hypercharge normalized
as $Q = T_3 + {1 \over 2} Y$.
The gaugino and scalar masses go roughly as their gauge
couplings squared.
The Bino and right-handed sleptons gain masses only through
$U(1)_Y$ interactions, and are therefore lightest.
The Winos and left-handed sleptons, transforming under $SU(2)_L$,
are somewhat heavier.
The strongly interacting squarks and gluino are significantly
heavier than the electroweak states.
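The hierarchy just described can be made concrete with a small numerical sketch of the two mass formulas above. The gauge-coupling values used here are rough, assumed numbers near a 100 TeV messenger scale (GUT-normalized $\alpha_1$); they are illustrative only, not inputs from the text:

```python
import math

# Rough, assumed gauge-coupling values near a 100 TeV messenger scale
# (illustrative; alpha_1 is GUT-normalized).
ALPHA = {1: 0.017, 2: 0.033, 3: 0.07}
LAMBDA = 100e3  # Lambda = F / S, in GeV

def gaugino_mass(i, lam=LAMBDA):
    """One-loop gaugino mass m_i = (alpha_i / 4 pi) * Lambda, with c_i = 1."""
    return ALPHA[i] / (4 * math.pi) * lam

def scalar_mass(C3, C2, Y, lam=LAMBDA):
    """Two-loop scalar mass from the minimal gauge-mediation formula:
    m^2 = 2 Lambda^2 [C3 a3^2 + C2 a2^2 + (3/5)(Y/2)^2 a1^2], a_i = alpha_i/4pi."""
    s = (C3 * (ALPHA[3] / (4 * math.pi)) ** 2
         + C2 * (ALPHA[2] / (4 * math.pi)) ** 2
         + 0.6 * (Y / 2.0) ** 2 * (ALPHA[1] / (4 * math.pi)) ** 2)
    return math.sqrt(2 * s) * lam

# Bino lightest, wino intermediate, gluino heaviest:
print(gaugino_mass(1), gaugino_mass(2), gaugino_mass(3))
# Right-handed slepton (Y=-2), left-handed slepton doublet (Y=-1),
# squark doublet (Y=1/3): same ordering driven by the gauge couplings.
print(scalar_mass(0, 0, -2), scalar_mass(0, 0.75, -1), scalar_mass(4/3, 0.75, 1/3))
```

The ordering of the output reflects the statement in the text that masses scale roughly with the squares of the gauge couplings.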
Note that the parameter $\Lambda = F / S $ sets the scale for the
soft masses (independent of the $\lambda_i$ for ${F} \ll \lambda S^2$).
The messenger scale $M_i$ may be anywhere between roughly 100 TeV and
the GUT scale.
The dimensionful parameters within the Higgs sector,
$W = \mu H_u H_d$ and $V = m_{12}^2 H_u H_d + h.c.$,
do not follow from the ansatz of gauge-mediated supersymmetry breaking,
and require
additional interactions.
At present there is no good model which gives rise to these
Higgs-sector masses without tuning parameters.
The parameters $\mu$ and $m_{12}^2$ are therefore taken as free
parameters in the minimal model, and can be eliminated as usual in favor
of $\tan \beta$ and $m_Z$.
Electroweak symmetry breaking results (just as for high-scale
breaking) from the negative one-loop
correction to $m_{H_u}^2$ from stop-top loops
due to the large top quark Yukawa coupling.
Although this effect is formally three loops, it is larger in magnitude
than the electroweak contribution to $m_{H_u}^2$
due to the large squark masses.
Upon imposing electroweak symmetry breaking, $\mu$ is typically found to be
in the range $\mu \sim (1-2) m_{\tilde{l}_L}$ (depending on
$\tan \beta$ and the messenger scale).
This leads to a lightest neutralino, $\tilde{\chi}_1^0$, which is mostly
Bino, and a lightest chargino, $\tilde{\chi}_1^{\pm}$, which is mostly
Wino.
With electroweak symmetry breaking imposed,
the parameters of the minimal model may be taken to be
\begin{equation}
(~\tan \beta~,~ \Lambda = F/S~,~ {\rm sign}~\mu~,~ \ln M~)
\end{equation}
The most important parameter is $\Lambda$
which sets the overall scale for the superpartner spectrum.
It may be traded for a physical mass, such
as $m_{{\tilde{\chi}_1^0}}$ or $m_{\tilde{l}_L}$.
The low energy spectrum is only weakly sensitive to $\ln M_i$, and the
splitting between $\ln M_3$ and $\ln M_2$ may be neglected for most
applications.
\subsection{The Goldstino}
In the presence of supersymmetry breaking
the gravitino gains a mass by the super-Higgs mechanism
\begin{equation}
m_{G}=\frac{F}{\sqrt{3}M_{p}} \simeq 2.4\, \left(
\frac{F}{(100~{\rm TeV})^2} \right) {\rm eV}
\end{equation}
where $M_p \simeq 2.4 \times 10^{18}$ GeV is the reduced
Planck mass.
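A quick numerical check of this formula, with the unit conversion handled explicitly:

```python
import math

M_PLANCK_GEV = 2.4e18  # reduced Planck mass in GeV

def gravitino_mass_ev(sqrt_F_gev):
    """m_G = F / (sqrt(3) * M_p), converted from GeV to eV."""
    F = sqrt_F_gev ** 2  # supersymmetry-breaking scale squared, GeV^2
    return F / (math.sqrt(3) * M_PLANCK_GEV) * 1e9

# sqrt(F) = 100 TeV reproduces the quoted ~2.4 eV gravitino mass.
print(gravitino_mass_ev(100e3))
```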
With low-scale supersymmetry breaking the gravitino is naturally the
lightest supersymmetric particle.
The lowest-order couplings of the spin-${1 \over 2}$
longitudinal Goldstino component of the gravitino, $G_{\alpha}$,
are fixed by the supersymmetric Goldberger-Treiman low energy
theorem to be given by ~\badcite{gdecay}
\begin{equation}
L=-\frac{1}{F}j^{\alpha\mu}\partial_{\mu}G_{\alpha}+h.c.
\label{goldcoupling}
\end{equation}
where $j^{\alpha\mu}$ is the supercurrent.
Since the Goldstino couplings (\ref{goldcoupling})
are suppressed compared to electroweak
and strong interactions, decay to the Goldstino is only relevant for the
lightest standard-model superpartner (NLSP).
With gauge-mediated supersymmetry breaking it is natural that
the NLSP is either a neutralino (as occurs in the minimal model)
or a right-handed slepton (as occurs for a messenger sector
with two flavors of ${\bf 5} + \bar{\bf 5}$).
A neutralino NLSP can decay by
$\tilde{\chi}_1^0 \rightarrow (\gamma, Z^0, h^0, H^0, A^0) + G$, while
a slepton NLSP decays by $\tilde{l} \rightarrow l + G$.
Such decays
of a superpartner to its partner plus the Goldstino
take place over a macroscopic distance, and for
$\sqrt{F}$ below a few thousand TeV,
can take place within a detector.
The decay rates into the above
final states can be found in
Ref.~\badcite{DimopoulosPRL,DimopoulosSecond,Grav,kolda}.
\subsection{Experimental Signatures of Low-Scale Supersymmetry Breaking}
The decay of the lightest standard-model superpartner
to its partner plus the Goldstino within a detector
leads to very distinctive
signatures for low-scale supersymmetry breaking.
If such signatures were established experimentally, one of the
most important challenges would be to measure the distribution
of finite path lengths for the NLSP,
thereby giving a direct measure of the supersymmetry-breaking
scale.
\subsubsection{Neutralino NLSP}
In the minimal model of gauge-mediated supersymmetry breaking,
$\tilde{\chi}_1^0$ is the NLSP.
It is mostly gaugino and decays predominantly by
$\tilde{\chi}_1^0 \rightarrow \gamma + G$.
Assuming $R$ parity conservation, and decay within the detector,
the signature for supersymmetry at a collider is then
$\gamma \gamma X + {\not \! \! E_{T}}$,
where $X$ arises from cascade decays to $\tilde{\chi}_1^0$.
In the minimal model the strongly interacting states are
much too heavy to be relevant to discovery, and it is
the electroweak states which are produced.
At $e^+e^-$ colliders $\tilde{\chi}_1^0$ can
be probed directly by $t$-channel $\tilde e$ exchange, yielding
the signature $e^+e^-\to \tilde{\chi}^0_1\tilde{\chi}^0_1\to\gamma\gamma +
{\not \! \! E_{T}}$.
At a hadron collider the most promising signals include
$ q q' \rightarrow \tilde{\chi}^0_2\tilde{\chi}^\pm_1, \tilde{\chi}_1^+
\tilde{\chi}_1^- \rightarrow
\gamma \gamma X + {\not \! \! E_{T}}$, where
$ X = WZ, WW, Wl^+l^-,\dots$.
Another clean signature is $q q' \rightarrow
\tilde{l}^+_R \tilde{l}^-_R \rightarrow l^+l^-
\gamma\gamma+{\not \! \! E_{T}}$.
One event of this type has in fact been reported by the CDF
collaboration~\badcite{event}.
In all these signatures both the missing energy and photon energy
are typically greater than $m_{\tilde{\chi}^0_1}/2$.
The photons are also generally isolated.
The background from initial- and final-state
radiation typically has non-isolated photons with a much
softer spectrum.
In non-minimal models it is possible for $\tilde{\chi}_1^0$ to have
large Higgsino components, in which case $\tilde{\chi}_1^0 \rightarrow h^0+G$
can dominate.
In this case the signature $b\bar{b}b\bar{b}X + {\not \! \! E_{T}}$ arises
with the $b$-jets reconstructing $m_{h^0}$ in pairs.
This final state topology may be difficult to reconstruct
at the LHC -- a systematic study has not yet been attempted.
Detecting the finite path length associated with
$\tilde{\chi}_1^0$ decay represents a major experimental challenge.
For the case $\tilde{\chi}_1^0 \rightarrow \gamma + G$, tracking within
the electromagnetic calorimeter (EMC) is available.
A displaced photon
vertex can be detected as a non-zero impact parameter
with the interaction region.
For example, with a photon angular resolution of 40 mrad/$\sqrt{E}$
expected in the CMS detector with a preshower array covering
$| \eta | < 1$ ~\badcite{CMS},
a sensitivity to
displaced photon vertices of about 12 mm at the 3$\sigma$ level results.
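As a rough consistency sketch of that number: the pointing resolution scales with the photon energy and the distance from the interaction point to the preshower. The $\sim$1.3 m lever arm used below is an assumed, illustrative value, not a detector-validated figure:

```python
import math

def displaced_vertex_sensitivity_mm(E_gev, lever_arm_m=1.3, n_sigma=3.0):
    """Crude impact-parameter sensitivity from photon pointing, assuming
    an angular resolution of 40 mrad / sqrt(E) and an assumed
    (illustrative) lever arm from the interaction point."""
    sigma_theta = 0.040 / math.sqrt(E_gev)  # radians
    return n_sigma * lever_arm_m * sigma_theta * 1000.0  # in mm

# For a 100 GeV photon this gives a 3-sigma sensitivity of order 10 mm,
# the same ballpark as the 12 mm quoted in the text.
print(displaced_vertex_sensitivity_mm(100.0))
```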
Decays well within the EMC or hadron calorimeter (HC)
would give a particularly distinctive signature.
In the case of decays to charged particles, such as from
$\tilde{\chi}_1^0 \rightarrow (h^0, Z^0) +G$ or
$\tilde{\chi}_1^0 \rightarrow \gamma^* +G$ with $\gamma^* \rightarrow f
\bar{f}$,
tracking within a silicon vertex detector (SVX) is available.
In this case displaced vertices down to the 100 $\mu$m level should
be accessible.
In addition, decays outside the SVX, but inside the EMC, would
give spectacular signatures.
\subsubsection{Slepton NLSP}
It is possible within non-minimal models that a right-handed slepton
is the NLSP, which decays by $\tilde{l}_R \rightarrow l + G$.
In this case the signature for supersymmetry
is $l^+l^- X + {\not \! \! E_{T}}$.
At $e^+e^-$ colliders such signatures are fairly clean.
At hadron colliders
some of these signatures have backgrounds from $WW$ and $t \bar{t}$
production.
However, $\tilde{l}_L \tilde{l}_L$ production can give $X=4l$,
which has significantly reduced backgrounds.
In the case of $\tilde{l}_R \tilde{l}_R$ production
the signature is nearly identical to slepton pair production with
$\tilde{l} \rightarrow l + \tilde{\chi}_1^0$ with
$\tilde{\chi}_1^0$ stable.
The main difference here is that the missing energy is carried
by the massless Goldstino.
The decay $\tilde{l} \rightarrow l + G$ over a macroscopic distance
would give rise to the spectacular signature of
a greater than minimum ionizing track
with a kink to a minimum ionizing track.
Note that if the decay takes place well outside the
detector, the signature for supersymmetry is heavy charged particles
rather than the traditional missing energy.
\subsection{Event Generation}
For event generation by {\tt ISAJET}, the user must provide a program to
generate the appropriate spectra for a given point in the above
parameter space. The corresponding $MSSMi$ parameters can be entered into
{\tt ISAJET} to generate the decay table, except for the NLSP decays to
the Goldstino. If $NLSP\rightarrow G+\gamma$ occurs with a 100\% branching
fraction, the {\tt FORCE} command can
be used. Since the $G$ particle is not currently defined in {\tt ISAJET},
the same effect can be obtained by forcing the NLSP to decay to a neutrino
plus a photon. If several decays of the NLSP are relevant, then each decay
along with its branching fraction must be explicitly added to the {\tt ISAJET}
decay table. Decay vertex information is not saved in {\tt ISAJET}, so that
the user must provide such information.
In {\tt Spythia}, the $G$ particle is defined, and decay vertex
information is stored.
\section{Conclusions}
In this report we have looked beyond the discovery
of supersymmetry, to the even more exciting prospect of
probing the new physics (of as yet unknown type) which
we know must be associated with supersymmetry and
supersymmetry breaking.
The collider experiments which
disentangle one weak-scale SUSY scenario from another will
also be testing hypotheses about new physics at very high
energies: the SUSY-breaking scale, intermediate symmetry-breaking
scales, the GUT scale, and the Planck scale.
We have briefly surveyed the variety of ways that
weak-scale supersymmetry may manifest itself at colliding
beam experiments.
We have indicated for each SUSY scenario how Monte Carlo
simulations can be performed using existing event generators
or soon-to-appear upgrades. In most cases very little
simulation work has yet been undertaken. Even in the case
of minimal supergravity the simulation studies to date
have mostly focused on discovery reach, rather than the
broader questions of parameter fitting and testing key
theoretical assumptions such as universality.
Clearly more studies are needed.
We have seen that alternatives to the
minimal supergravity scenario often provide distinct
experimental signatures. Many of these signatures involve
displaced vertices: the various NLSP decays, LSP decays
from $R$ parity violation, chargino decays in the 200 and O-II
models, and enhanced $b$ multiplicity in the 24 model.
This observation emphasizes the crucial importance of
accurate and robust tracking capabilities in future collider
experiments.
The phenomenology of some scenarios is less dramatic and thus
harder to distinguish from the bulk of the mSUGRA parameter
space. In any event, precision measurements will be
needed in the maximum possible number of channels.
In the absence of a ``smoking gun'' signature like those
mentioned above, the most straightforward way to identify
variant SUSY scenarios will be to perform an overconstrained
fit to the mSUGRA parameters. Any clear inconsistencies in
the fit should point to appropriate alternative scenarios.
More study is needed of how to implement this procedure in
future experiments with real-world detectors and data.
\section{\@startsection {section}{1}{0pt}{-3.5ex plus -1ex minus
-.2ex}{2.3ex plus .2ex}{\raggedright\large\bf}}
\catcode`\@=12
\catcode`\@=11
\def\eqnarray{\stepcounter{equation}\let\@currentlabel=\theequation
\global\@eqnswtrue
\global\@eqcnt\z@\tabskip\@centering\let\\=\@eqncr
\gdef\@@fix{}\def\eqno##1{\gdef\@@fix{##1}}%
$$\halign to \displaywidth\bgroup\@eqnsel\hskip\@centering
$\displaystyle\tabskip\z@{##}$&\global\@eqcnt\@ne
\hskip 2\arraycolsep \hfil${##}$\hfil
&\global\@eqcnt\tw@ \hskip 2\arraycolsep $\displaystyle\tabskip\z@{##}$\hfil
\tabskip\@centering&\llap{##}\tabskip\z@\cr}
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}
\else \def\@tempa{&}\fi
\@tempa \if@eqnsw\@eqnnum\stepcounter{equation}\else\@@fix\gdef\@@fix{}\fi
\global\@eqnswtrue\global\@eqcnt\z@\cr}
\def Collaboration{Collaboration}
\def {\it ibid.}~{{\it ibid.}~}
\def \ibj#1#2#3{~{\bf#1}, #2 (#3)}
\newcommand{\prl}[3]{ Phys.\ Rev.\ Lett.\ {\bf #1} (#2) #3}
\newcommand{\jpg}[3]{ J.\ Phys.\ {\bf G#1} (#2) #3}
\newcommand{\prw}[3]{ Phys.\ Rev.\ {\bf #1} (#2) #3}
\newcommand{\prd}[3]{ Phys.\ Rev.\ {\bf D#1} (#2) #3}
\newcommand{\plb}[3]{ Phys.~Lett.\ {\bf B#1} (#2) #3}
\newcommand{\npb}[3]{ Nucl.~Phys.\ {\bf B#1} (#2) #3}
\newcommand{\rmp}[3]{ Rev.\ Mod.\ Phys.\ {\bf #1} (#2) #3}
\newcommand{\pol}[3]{ Acta Phys.\ Polon.\ {\bf #1} (#2) #3}
\newcommand{\rpp}[3]{ Rep.\ Prog.\ Phys.\ {\bf #1} (#2) #3}
\newcommand{\mpl}[3]{ Mod.\ Phys.\ Lett.\ {\bf #1} (#2) #3}
\newcommand{\rnc}[3]{ Riv.\ Nuovo\ Cim.\ {\bf #1} (#2) #3}
\newcommand{\for}[3]{ Fortschr.\ Phys.\ {\bf #1} (#2) #3}
\newcommand{\nci}[3]{ Nuovo\ Cimento\ {\bf #1} (#2) #3}
\newcommand{\apj}[3]{ Astrophys.~J.\ {\bf #1} (#2) #3}
\newcommand{\epl}[3]{ Europhys.\ Lett.\ {\bf #1} (#2) #3}
\newcommand{\ptp}[3]{ Progr.\ Theor.\ Phys.\ {\bf #1} (#2) #3}
\newcommand{\app}[3]{ Astropart.\ Phys.\ {\bf #1} (#2) #3}
\newcommand{\prp}[3]{ Phys.\ Rep.\ {\bf #1} (#2) #3}
\newcommand{\aas}[3]{ Astron.\ and~Astrophys.\ {\bf #1} (#2) #3}
\newcommand{\spj}[3]{ Sov.\ Phys.\ JETP\ {\bf #1} (#2) #3}
\newcommand{\sjn}[3]{ Sov.~J.\ Nucl.\ Phys.\ {\bf #1} (#2) #3}
\newcommand{\zpc}[3]{ Z.~Phys.\ {\bf C#1} (#2) #3}
\def \apny#1#2#3{Ann. Phys. (N.Y.) {\bf#1}, #2 (#3)}
\def \ijmpa#1#2#3{Int. J. Mod. Phys. A {\bf#1}, #2 (#3)}
\def {\it et al.}{{\it et al.}}
\def \jpb#1#2#3{J.~Phys.~B~{\bf#1}, #2 (#3)}
\def \mpla#1#2#3{Mod. Phys. Lett. A {\bf#1}, #2 (#3)}
\def \nc#1#2#3{Nuovo Cim. {\bf#1}, #2 (#3)}
\def \np#1#2#3{Nucl. Phys. {\bf#1}, #2 (#3)}
\def Particle Data Group, L. Montanet \ite, \prd{50}{1174}{1994}{Particle Data Group, L. Montanet {\it et al.}, \prd{50}{1174}{1994}}
\def \pl#1#2#3{Phys. Lett. {\bf#1}, #2 (#3)}
\def \pla#1#2#3{Phys. Lett. A {\bf#1}, #2 (#3)}
\def \plb#1#2#3{Phys. Lett. B {\bf#1}, #2 (#3)}
\def \pr#1#2#3{Phys. Rev. {\bf#1}, #2 (#3)}
\def \prd#1#2#3{Phys. Rev. D {\bf#1}, #2 (#3)}
\def \prl#1#2#3{Phys. Rev. Lett. {\bf#1}, #2 (#3)}
\def \prp#1#2#3{Phys. Rep. {\bf#1}, #2 (#3)}
\def \ptp#1#2#3{Prog. Theor. Phys. {\bf#1}, #2 (#3)}
\def \rmp#1#2#3{Rev. Mod. Phys. {\bf#1}, #2 (#3)}
\def \rp#1{~~~~~\ldots\ldots{\rm rp~}{#1}~~~~~}
\def \yaf#1#2#3#4{Yad. Fiz. {\bf#1}, #2 (#3) [Sov. J. Nucl. Phys. {\bf #1},
#4 (#3)]}
\def \zhetf#1#2#3#4#5#6{Zh. Eksp. Teor. Fiz. {\bf #1}, #2 (#3) [Sov. Phys. -
JETP {\bf #4}, #5 (#6)]}
\def \zpc#1#2#3{Zeit. Phys. C {\bf#1}, #2 (#3)}
\def \zpd#1#2#3{Zeit. Phys. D {\bf#1}, #2 (#3)}
\begin{document}
\renewcommand{\thetable}{\Roman{table}}
\setlength{\baselineskip}{0.8 cm}
\rightline{UNIBAS--MATH 6/96}
\vspace{2.5cm}
\begin{center}
{\Large \bf THE CLASSICAL ANALOGUE OF CP--VIOLATION}
\end{center}
\vspace{1cm}
\begin{center}
{\large Decio Cocolicchio$^{(1,2)}$ and Luciano Telesca$^{(3)}$}
\end{center}
\begin{center}
$^{(1)}$
{\it
Dipartimento di Matematica, Univ. Basilicata, Potenza, Italy\\
Via N. Sauro 85, 85100 Potenza, Italy}
\end{center}
\begin{center}
$^{(2)}$
{\it
Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Italy\\
Via G. Celoria 16, 20133 Milano, Italy}
\end{center}
\begin{center}
$^{(3)}$
{\it
Consiglio Nazionale delle Ricerche, Istituto di Metodologie
Avanzate\\
C/da S. Loya TITO, Potenza, Italy}
\end{center}
\vspace{0.5cm}
P.A.C.S. number(s):
11.30.Er,~~
13.20.Eb,~~
13.25.+m,~~
14.40.Aq
\vspace{1.5cm}
\begin{abstract}
\noindent
The phenomenological features of the mixing in the neutral
pseudoscalar mesons $K^0- \overline {K^0} $ can be illustrated in the classical
framework of mechanics and by means of electromagnetic coupled
circuits. The time-reversed not-invariant processes and the related
phenomenon of $CP$-nonconservation can be induced by dissipative
effects which yield a not vanishing imaginary part for the relevant
Hamiltonian. Thus, two coupled dissipative oscillators can resemble the
peculiar asymmetries which are so common in the realm
of high energy particle physics.
\end{abstract}
\normalsize
\vfill~\vfill~
\thispagestyle{empty}
\newpage
\baselineskip=12pt
\section{The Major Issues of CP Violation}
\bigskip
\noindent
Symmetries are one of the basic cornerstones in the formulation of the
laws of nature leading to conservative quantities. Unexpected
violations of symmetries indicate some dynamical mechanisms underlying
the current understanding of physics. In this context, the space-time
discrete symmetries and their violation represent one of the
most controversial topics. For a long time, the fact that Maxwell
equations were invariant under space-inversion or parity ($P$) and
time-reversal ($T$) bolstered the idea that all the laws of physics
are invariant under those discrete operations. It was easily seen that
electromagnetic equations possess another discrete symmetry since they
are unaffected by a charge conjugation ($C$) operation which reverses
the sign of all the charges and converts a particle into its
antiparticle. However, since 1957 \cite{WU}\
we know that parity is violated in
weak interactions among fundamental particles.
During the early sixties, $CP$ was believed to be conserved although $C$
and $P$ were individually violated. Since 1964 \cite{CCFT}, we know
that $CP$ is also violated although to a much lesser extent.
$CP$ globally stands for the operation that takes all particles into their
mirror-image antiparticles (and vice versa). If the universe started
out in a $CP$-symmetric state, and if the laws of physics were
$CP$-invariant, that is, if $CP$ were conserved, then the world would
not be in its present asymmetric state, and we would not be here.
On the
other hand, the origin of $CP$-violation is still not explained since
$CP$-violating tiny effects are known to be smaller than the usual
weak interaction strength by about three orders of magnitude and it
is not excluded that $CP$-violation could be an indication of some effects
of new physics at a much higher energy scale. The only
almost overwhelming theoretical prejudice comes against $CPT$
violation.
There are very strong reasons \cite{CPT} to believe that fundamental
interactions can never violate $CPT$ invariance. This, however, may
fail \cite{CPTnoft} in some extensions of Lorentz-invariant, locally
commutative field theories, such as string theory \cite{CPTnostr}.
We must also say that the experimental evidence on $CPT$ is very
limited at present. It is concerned with the equality of masses and
lifetimes, and the reversal of magnetic moments between particle and
antiparticle. Thus, except for some nonstandard theoretical models,
the validity of $CPT$ is assumed and consequently the $T$ violation is
supposed to be reflected immediately in a $CP$ counterpart.
However, it should be borne in mind that observation of a $T$ odd
asymmetry or correlation is not necessarily an indication of $CP$
(or even $T$) violation. The reason for this is the anti-unitary
nature of the time reversal operator in quantum mechanics. As a
consequence of this, a $T$ operation not only reverses spin and
three-momenta of all particles, but also interchanges initial and
final states. Put differently, this means that final-state
interactions can mimic $T$ violation, but not genuine $CP$ violation.
Presently, genuine experimental evidence of CP--violation comes from the
mixing and decays of the unstable two-level neutral kaon system. In
literature exists a large extension of successful modelling and
phenomenological computations~\cite{Buras}.
Although the discovery of $CP$-violation
indicated that the kaon system is somewhat more complex
than the typical two-state
problem and involves considerably more subtle complications,
there are many ways to illustrate this two-state system in
classical physics~\cite{BW}.
From the point of view of Quantum Mechanics,
the $ {\rm K}^ 0 - \overline{\rm K}{}^0 $ represents a complex system consisting
of two communicating metastable states.
Such a puzzling system can be related to the problem of two coupled
degenerate oscillators with dissipation.
In fact, the intrinsic dissipative nature of this unstable system
and of its decay sector leads to complex eigenvalues of
the Hamiltonian, and therefore to an extension of the Hilbert space,
which is a common tool to deal with open systems far from equilibrium.
\noindent
In particle physics, the neutral kaon decays exhibit unusual and
peculiar properties arising from the degeneracy of $K^0$ and
$ \overline{\rm K}{}^0 $. Although $K^0$ and $ \overline{\rm K}{}^0 $ are expected to be distinct
particles from the point of view of the strong interactions, they
could transform into each other through the action of the weak
interactions. In fact, the system $K^0- \overline {K^0} $ turns out to be degenerate due to
a coupling to common final states (direct $CP$-violation)
($K^0\leftrightarrow\pi\pi\leftrightarrow { \overline{\rm K}{}^0 } $) or by means of a
mixing ($K^0 \leftrightarrow \overline{\rm K}{}^0 $) (indirect $CP$
violation). Then, it is one of the most important tasks of physics
to understand the sources of $CP$-asymmetry~\cite{Coco}.
Even if this is an effect of second order in the weak interactions,
the transition from the $K^0$ to $ \overline{\rm K}{}^0 $
becomes important because of the degeneracy.
$K^0$ and $ \overline{\rm K}{}^0 $ mesons are charge $C$ conjugate of one another and
are states of definite strangeness $+1$ and $-1$ respectively (conserved
in strong productions $\Delta S=0$, violated in weak decays $\Delta S=1$).
However, they do not have definite lifetimes for weak decay nor do they
have any definite mass; this means that they are not mass eigenstates.
The mass eigenstates are linear combinations
of the states $ | {\rm K^ 0} \rangle $ and $ | {\rm \overline{K}{}^0} \rangle $ namely $|K_S^0>$ and $|K_L^0>$,
which have definite masses and lifetimes.
The short-lived $|K_S^0>$ meson decays with a characteristic time
$\tau_S = (0.8922 \pm 0.0020)\times {10}^{-10}~{\rm s}$
into the two predominant modes $\pi^+\pi^-$ and $\pi^0\pi^0$, each with
the CP eigenvalue $+1$, whereas the long-lived $|K_L^0>$ meson, with a
decay time
$\tau_L = (5.17 \pm 0.04)\times {10}^{-8}~{\rm s}$,
has among its decay modes $\pi^+\pi^-\pi^0$ and
$\pi^0\pi^0\pi^0$, which are eigenstates of CP with eigenvalue $-1$.
In 1964, it was observed~\cite{CCFT}
that there is a small but finite probability for the decay
$K^0_L\rightarrow\pi^+\pi^-$, in which the final state has the CP eigenvalue +1.
Thus we cannot identify $K^0_L$ with $K^0_2$ and $K^0_S$ with $K^0_1$,
the true eigenstates of $CP$-symmetry.
\noindent
Let $ | {\rm K^ 0} \rangle $ and $ | {\rm \overline{K}{}^0} \rangle $ be the stationary states of the
$K^0 $-meson and its antiparticle $ \overline{\rm K}{}^0 $, respectively. Both
states are eigenstates of the strong and electromagnetic interactions.
The $K^0 $ and $ \overline{\rm K}{}^0 $ states are connected through
CP transformations. Up to arbitrary phases,
we obtain
\begin{equation}
\begin{array}{c}
C\!P \, | {\rm K^ 0} \rangle = e^{i \, \theta _{\rm CP} } | {\rm \overline{K}{}^0} \rangle ~~{\rm and}~~
C\!P \, | {\rm \overline{K}{}^0} \rangle = e^{- \,i \, \theta_{\rm CP} } | {\rm K^ 0} \rangle ~~ \\
T \, | {\rm K^ 0} \rangle = e^{ i \, \theta_{\rm T} } | {\rm K^ 0} \rangle ~~{\rm and}~~
T \, | {\rm \overline{K}{}^0} \rangle = e^{ i \, \overline {\theta}{}_{\rm T } } | {\rm \overline{K}{}^0} \rangle
\end{array} \label{cp-phase}\end{equation}
where the $\theta$'s are arbitrary phases. Assuming
$C\!PT\, | {\rm K^ 0} \rangle = TC\!P \, | {\rm K^ 0} \rangle $, it follows that
\begin{equation}
2 \, \theta_{\rm CP} =\overline{\theta}{}_{\rm T}\, - \, \theta_{\rm
T}~.
\end{equation}
As stated before, $ | {\rm K^ 0} \rangle $ and $ | {\rm \overline{K}{}^0} \rangle $ are eigenstates of strange
flavour.
Since the strangeness quantum number is conserved in strong interactions,
their relative phase can never be measured. Hence every observable is
independent of the phase transformation $ | {\rm K^ 0} \rangle \rightarrow
e^{i\xi} | {\rm K^ 0} \rangle $.
In the presence of a new interaction
violating strangeness conservation,
the K-mesons can decay into final states with no strangeness ($|\Delta
S|= 1$) and $K^0 $ and $ \overline{\rm K}{}^0 $ can oscillate into each other
($|\Delta S| = 2$). A convenient discussion of this mixing
requires detailed knowledge of the perturbative effects of the new
interaction, which is very much weaker than the strong and
electromagnetic interactions \cite{Buras}.
Although problems of this kind require the density-matrix
formalism, and the notion of a
rest frame for an unstable degenerate system
is difficult to implement, perturbation theory
and the single pole approximation of Weisskopf-Wigner method
\cite{wiger-weis} can be applied to derive
the eigenstates $K_S$ and $K_L$
of a $2 \times 2$ effective Hamiltonian
${\cal H}$ \cite{Buras}.
The time evolution of the basis states $K^0$ and $\overline K^0$ can be
written as
\begin{equation}
i{d\over {dt}}|\Psi(t)>={\cal H}|\Psi(t)>=
\left(
\matrix {H_{11} & H_{12} \cr
H_{21} & H_{22} \cr}
\right)
|\Psi(t)>
\end{equation}
\par \noindent
where the effective Hamiltonian ${\cal H}$
is an arbitrary matrix
which can be
uniquely rewritten in terms of two Hermitian matrices
$M$ and $\Gamma$: ${\cal H} = M - i \Gamma /2$.
\par \noindent
$M$ is called the mass matrix and $\Gamma$ the decay matrix.
Their explicit expressions can be derived from the weak scattering matrix
responsible for the decay, ${S^w}_{\alpha ', \alpha }$:
\begin{equation}
{S^w}_{\alpha ', \alpha }=<\alpha '|Te^{-i \int{H_w'(t)dt}}|\alpha >
\simeq \delta_{\alpha' ,\alpha }+(2\pi )^4 \delta ^4(p-
p')iT_{\alpha' ,\alpha }(p)
\end{equation}
\noindent
where
$H_w'=e^{iHt}H_we^{-iHt}$, with $H_w$ the weak component
of the Hamiltonian, whereas the strong sector gives, at rest,
\begin{equation}
<\alpha '|H_{strong}|\alpha>=m_0\delta_{\alpha ,\alpha '}
\end{equation}
Summing up, the matrix elements are given by
\begin{equation}
<\alpha '|{\cal H}|\alpha>={\cal H}_{\alpha ',\alpha }=m_0\delta_{\alpha
,\alpha '}-T_{\alpha ',\alpha }(m_0)
\end{equation}
\par \noindent
and by means of the following
relations
\begin{equation}
\Theta ={{-1}\over {2\pi i}} \int{ d\omega {{e^{-i\omega t}}\over
{\omega + i\eta}}}
\ \ \ \ \ \ \ \ \
{1 \over {\chi - a + i\eta}}=P {1 \over {\chi - a}} - i\pi \delta (
\chi -a)
\end{equation}
one straightforwardly derives
\begin{equation}
\Gamma_{\alpha ',\alpha}=2\pi\sum_n{<\alpha'
|H_w|n><n|H_w|\alpha >\delta(E_n-m_0)}
\end{equation}
\par \noindent
\begin{equation}
M_{\alpha ',\alpha}=m_0\delta_{\alpha ',\alpha}+
<\alpha '|H_w|\alpha >+\sum_n {P{{<\alpha '|H_w|n><n|H_w|\alpha>}\over
{m_0-E_n}}}
\end{equation}
\par \noindent
Note that this last sum is taken over all
possible intermediate states whereas for the width we take only real
final states common to $K^0 $ and $ \overline{\rm K}{}^0 $.
\par \noindent
It can be shown that if CPT holds then the restriction ${\cal H}_{11}
= {\cal H}_{22}$ and hence $M_{11} = M_{22}, ~ \Gamma_{11} =
\Gamma_{22}$ must be adopted.
Furthermore, if $CP$ invariance holds too, then besides
$H_{11}=H_{22}$ we also get $H_{12}=H_{21}$ and consequently
$\Gamma_{12}=\Gamma_{21}={\Gamma_{21}}^*$, $M_{12}=M_{21}={M_{21}}^*$,
so that all the $\Gamma_{ij}$ and $M_{ij}$ are real numbers.
\par \noindent
Since $K^0 $ and $\overline{K^0} $ are eigenstates only of the strong
Hamiltonian, the eigenvalues of the effective Hamiltonian ${\cal H}$
can be found by diagonalization, which yields
\begin{equation}
\lambda_S = H_{11} -\sqrt {H_{12}H_{21}}=M_{11}-{i\over 2} \Gamma_{11}
-Q=m_S -{i\over 2} \Gamma_S
\end{equation}
\begin{equation}
\lambda_L = H_{11} +\sqrt {H_{12}H_{21}}=M_{11}-{i\over 2} \Gamma_{11}
+Q=m_L -{i\over 2} \Gamma_L
\end{equation}
\par \noindent
where
\begin{equation}
Q=\sqrt{ H_{12} H_{21} } =
\sqrt{(M_{12}-{i\over 2} \Gamma_{12}) ({M_{12}}^*-{i\over 2}
{\Gamma_{12}}^*)} .
\end{equation}
These real ($m$) and imaginary ($\Gamma$) components define the
masses and the decay widths of the ${\cal H}$ eigenstates $K_S$ and $K_L$.
\noindent
The mass difference and the decay-width difference are given by
\begin{equation}
\Delta m=m_L-m_S=2\,\hbox{Re}\,Q\simeq 2\, \hbox{Re} {M_{12}}=
(0.5351\pm0.0024)\cdot 10^{10}\ \hbar\, {\rm sec}^{-1}
\end{equation}
\begin{equation}
\Delta \Gamma =\Gamma_S - \Gamma_L = 4\, \hbox{Im}\,Q\simeq 2\, \hbox{Re}\,
\Gamma_{12} = (1.1189\pm 0.0025)\cdot 10^{10}\ {\rm sec}^{-1}
\end{equation}
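As a quick numerical cross-check of these central values (the common power
of ten divides out in the ratio), one recovers the ratio
$r=\Delta m/\Delta\Gamma$ quoted later in the text:

```python
# Central values quoted above; only the mantissas matter for the ratio.
delta_m = 0.5351       # Delta m, hbar/sec times a common power of ten
delta_gamma = 1.1189   # Delta Gamma, 1/sec times the same power of ten

r = delta_m / delta_gamma
print(f"r = Delta m / Delta Gamma = {r:.4f}")
assert abs(r - 0.477) < 0.003   # value of r quoted later in the text
```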
The experimental evidence \cite{CCFT} in 1964 that both the short-lived
$K_S$ and long-lived $K_L$ states decayed to $\pi \pi$ upset
this tidy picture.
It means that the states of definite mass and lifetime are no
longer states with a definite $CP$ character.
\noindent
With the conventional choice of phase, the $CP$ eigenstates $K_1$ and
$K_2$ come into play, so that the mass eigenstates
can be parameterized by means of a complex impurity parameter
$\epsilon$ which encodes the indirect mixing effect of CP violation
in the neutral kaon system.
The general expressions of the corresponding eigenstates are
obtained by the standard diagonalization procedure:
\begin{equation}
|K_S>= {1\over \sqrt{2(1+|\epsilon_S |^2)}} \left [ (1+ \epsilon_S )|K^0> +
(1- \epsilon_S )|\overline K^0>\right ]=
{|K_1> + \epsilon_S |K_2>\over \sqrt {1+|\epsilon_S |^2}}
\end{equation}
\begin{equation}
|K_L>= {1\over \sqrt{2(1+|\epsilon_L |^2)}} \left [ (1+ \epsilon_L )|K^0>
-(1- \epsilon_L )|\overline K^0>\right ]=
{|K_2> + \epsilon_L |K_1>\over \sqrt {1+|\epsilon_L |^2}}
\end{equation}
\par \noindent
where $\epsilon_S=\epsilon +\Delta$, $\epsilon_L=\epsilon -\Delta$, being
\begin{equation}
\epsilon={{[-ImM_{12}+iIm{{\Gamma_{12}} \over 2}]}\over {{{(\Gamma_L-
\Gamma_S)}\over 2}+i(m_L-m_S)}}
\qquad
\Delta={1\over 2}\,{{i\left[(M_{11}-M_{22})+{{(\Gamma_{11}-\Gamma_{22})}\over
2}\right]}\over {{{(\Gamma_L-\Gamma_S)}\over 2}+i(m_L-m_S)}}
\end{equation}
The $\Delta$ parameter vanishes if we assume CPT invariance.
Presently, the experimental data \cite{PDG}\ are quite consistent with the
expectation of the spin--statistics connection, which excludes CPT
violation in any local, Lorentz-invariant theory.
We can therefore assume $\Delta =0$ and consequently $\epsilon_L =
\epsilon_S$.
At present, the measured magnitude is
$|\epsilon| \ = \ (2.259 \ \pm \ 0.018)\cdot {10}^{-3} $
and its phase is $\Phi_{\epsilon} \ = \ {(43.67 \ \pm \ 0.13)}^o $.
Notice that, in contrast to $|K_1
\rangle$ and $|K_2 \rangle$, the states $|K_S \rangle$ and $|K_L \rangle$
are not orthogonal to one another but have a non-vanishing scalar product
\begin{equation}
<K_S|K_L>= {{2 Re\epsilon
+i Im \Delta} \over {1+|\epsilon |^2}}
\end{equation}
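As a numerical aside (taking $\Delta=0$ and the measured $|\epsilon|$ and
$\Phi_\epsilon$ quoted above), the overlap is of order $10^{-3}$ and in fact
coincides with the semileptonic asymmetry $A_{SL}$ discussed later:

```python
import math

eps_mod = 2.259e-3               # measured |epsilon|
phi_eps = math.radians(43.67)    # measured phase of epsilon

re_eps = eps_mod * math.cos(phi_eps)
overlap = 2 * re_eps / (1 + eps_mod**2)   # <K_S|K_L> with Delta = 0
print(f"<K_S|K_L> = {overlap:.3e}")
assert abs(overlap - 3.27e-3) < 5e-5      # matches A_SL quoted below
```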
Obviously, if CP were conserved, $\epsilon$ would vanish or would
reduce to a pure phase, which can be reabsorbed in the redefinition
of $K^0 $, $\overline{K^0} $ and $K_S=K_1$, $K_L=K_2$ would result.
A brief comment on the previous formula is required: the
impurity parameter $\epsilon$ proposed there does not take into
account the direct contribution to the mixing $K^0- \overline {K^0} $
due to the coupling via the common $\pi\pi$ decay sector.
Such a model, called superweak, is likely to be tested in the near
future \cite{Coco}.
The smallness of the $CP$ violation is indeed reflected in the
impurity parameter
\begin{equation}
\epsilon={e^{i{\pi \over 4}}\over {2\sqrt {2}}} \left( {{ImM_{12}}\over
{ReM_{12}}}\ -{i\over 2}{{Im\Gamma_{12}}\over
{ReM_{12}}} \right)=
{e^{i{\pi \over 4}}\over {\sqrt{2} \Delta m}} \left( { ImM_{12} + 2
\xi_0 \hbox{Re} M_{12} } \right)
\end{equation}
where $\xi_0$ represents the additional manifestations in the mixing of
the $CP$ violation due to the effective isospin decay amplitudes.
In order to derive the value of this complex parameter, it is
more convenient to write the eigenstates of ${\cal H}$ as
\begin{equation}\eqalign{
|K_L \rangle =& p | K^0 \rangle + q | \overline K^0 \rangle =
{1\over {\sqrt{1+\vert\alpha\vert^2}}} ( | {\rm K^ 0} \rangle +\alpha | {\rm \overline{K}{}^0} \rangle )\; ,\cr
|K_S \rangle =& p | K^0 \rangle - q | \overline K^0 \rangle =
{1\over {\sqrt{1+\vert\alpha\vert^2}}} ( | {\rm K^ 0} \rangle -\alpha | {\rm \overline{K}{}^0} \rangle )\; ,\cr }
\end{equation}
where $|p|^2 + |q|^2 = 1$. With a proper phase choice, we introduce
the new relevant parameter
\begin{equation}
\alpha = {q \over p} = {{1 - \epsilon}\over {1 + \epsilon}}=
\sqrt{\frac{{\cal H}_{21}}{{\cal H}_{12}}} = \sqrt{
{ {M_{12}^* - {i\over 2}\Gamma_{12}^*}\over
{M_{12} - {i\over 2}\Gamma_{12} } } }
\end{equation}
and then
\begin{equation} \label{eqn:eps}
\eqalign{
\epsilon =& \frac{p-q}{p+q} = \frac{1-\alpha}{1+\alpha}=
\frac{\sqrt{{\cal H}_{12}} - \sqrt{{\cal H}_{21}}}{\sqrt{{\cal H}_{12}} +
\sqrt{{\cal H}_{21}}}=\cr
&= { {2i \hbox{Im}M_{12} + \hbox{Im}\Gamma_{12}} \over
{ (2\hbox{Re}M_{12} - i \hbox{Re}\Gamma_{12}) +
(\Delta m -{i\over 2} \Delta\Gamma) } }
\simeq
\frac{i~{\rm Im} M_{12} + {\rm Im} (\Gamma_{12}/2)}{(\Delta m
- {i\over 2} \Delta\Gamma)} \cr }
\end{equation}
Thus it is evident that the CP-violation parameter $\epsilon$
arises from a relative imaginary part between the
off-diagonal elements $M_{12}$ and $\Gamma_{12}$, i.e.\ if
$\arg(M_{12}^*\Gamma_{12})\ne 0$.
If we rewrite in polar form the complex ratio of these
relevant components,
\begin{equation}
{{M_{12}}\over{\Gamma_{12}}}=
{ {\vert{M_{12}\vert} \over {\vert{\Gamma_{12}}\vert} } }e^{i\delta}
= r e^{i \delta}
\end{equation}
where, due to the smallness of $\vert \epsilon\vert$
($\hbox{Im}M_{12}\ll\hbox{Re}M_{12}$ and
$\hbox{Im}\Gamma_{12}\ll\hbox{Re}\Gamma_{12}$),
\begin{equation}
r\simeq {{\Delta m}\over{\Delta \Gamma}}=(0.477\pm 0.003)
\end{equation}
It is clear that only the parameter $|\alpha|$ is significant:
$\alpha\ne 1$ does not necessarily imply $CP$
violation, whereas $CP$ is violated in the mixing matrix if $|\alpha|\ne 1$.
Remember that, since flavour is conserved in strong interactions,
there is some freedom in defining the phases of the flavour eigenstates.
This means that $\alpha= q/p$ is a phase-dependent quantity,
manifesting its presence in the phase of $\epsilon$, which must only
satisfy
\begin{equation}
<K_S|K_L>= x = {{2 Re\epsilon} \over {1+|\epsilon |^2}}
\end{equation}
which reduces to the equation of a circle
\begin{equation}
(\hbox{Re}\epsilon - {1\over x})^2 +(\hbox{Im}\epsilon)^2 = ({1\over
x})^2 -1
\end{equation}
of radius $\sqrt{({1\over x})^2 -1}\simeq {1\over x}$ centred at
($1\over x$, 0) in the complex $\epsilon$-plane.
The phase $\delta$ can be derived from the expansion
\begin{equation}
\alpha = {q \over p} = {{1 - \epsilon}\over {1 + \epsilon}}\simeq
1 - { {2r}\over {4r^2 +1}} ( 1+ 2 i r)\delta
\end{equation}
Its value can then be extracted by analyzing the
experimental results for the semileptonic asymmetry
\begin{equation}
A_{SL} =
{{\Gamma(K_L\rightarrow \ell^+\nu X)-\Gamma(K_L\rightarrow \ell^-\nu X)}
\over
{\Gamma(K_L\rightarrow \ell^+\nu X) +\Gamma(K_L\rightarrow \ell^-\nu X)}}
=
{{1-|\alpha|^2}\over{1+|\alpha|^2}}
\simeq { {2r}\over{4r^2 +1}}\delta .
\end{equation}
Its experimental value is $A_{SL} = (3.27\pm 0.12)\cdot 10^{-3}$
and then the relative phase $\delta=(6.53\pm 0.24)\cdot 10^{-3}$.
In the Wu--Yang convention ($\hbox{Im}\Gamma_{12}=0$) we obtain
\begin{equation}
\arg(\epsilon)\simeq\cases{
\pi -\Phi_{SW}\quad\hbox{for}\quad\hbox{Im}{M_{12}}>0\cr
\Phi_{SW}\quad\hbox{for}\quad\hbox{Im}{M_{12}} < 0\cr}
\end{equation}
where $\Phi_{SW}=\tan^{-1}(2 r)$ is the superweak phase.
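As a hedged numerical cross-check of the last two relations, using the
central values quoted above: $\Phi_{SW}=\tan^{-1}(2r)$ indeed reproduces the
measured phase of $\epsilon$, and inverting the asymmetry formula recovers
the quoted $\delta$.

```python
import math

r = 0.477                      # Delta m / Delta Gamma, quoted above
phi_sw = math.degrees(math.atan(2 * r))
print(f"superweak phase = {phi_sw:.2f} deg")
assert abs(phi_sw - 43.67) < 0.3       # measured phase of epsilon

A_SL = 3.27e-3                 # measured semileptonic asymmetry
delta = A_SL * (4 * r**2 + 1) / (2 * r)
print(f"delta = {delta:.3e}")
assert abs(delta - 6.53e-3) < 5e-5     # value quoted in the text
```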
The matrices $\Gamma$ and $M$ may be expressed perturbatively
\cite{Buras} in terms of sums
over states connected to $K^0$ and $\overline{K^0} $ by means of the weak Hamiltonian $H_W$.
By considering specific electroweak models for
the kaon decays into $2 \pi,~3 \pi,~\pi l \nu$, and other final states, one
can then compare theory and experiments.
In summary,
there are three complementary ways to describe the evolution of the
complex neutral kaon system:
\medskip
{1)}{ In terms of the mass eigenstates $K_{L,S}$, which do not possess
definite strangeness
\begin{equation}
\eqalign{
|K_S(t)>=&|K_S(0)>e^{-\alpha_S t}\qquad\alpha_S={\it i}m_S+{{\gamma_S}\over 2}\cr
|K_L(t)>=&|K_L(0)>e^{-\alpha_L t}\qquad\alpha_L={\it i}m_L+{{\gamma_L}\over 2}\cr}
\end{equation}
\smallskip
{2)}{ In terms of the flavour eigenstates, whose time evolution
is more complex
\begin{equation}
\eqalign{
|K^0(t)>=&f_{+}(t) | {\rm K^ 0}(0) \rangle +\alpha f_{-}(t) | {\rm \overline{K}{}^0}(0) \rangle \cr
|{\overline K}^0(t)>=&{1\over\alpha}f_{-}(t) | {\rm K^ 0}(0) \rangle +f_{+}(t) | {\rm \overline{K}{}^0}(0) \rangle \cr}
\end{equation}
with
\begin{equation}
\eqalign{
f_{+}(t)=&{1\over 2}(e^{-\alpha_S t}+e^{-\alpha_L t})={1\over 2}e^{-\alpha_S t}\left[1+e^{-(\alpha_L-\alpha_S)t}\right]\cr
f_{-}(t)=&{1\over 2}(e^{-\alpha_S t}-e^{-\alpha_L t})={1\over 2}e^{-\alpha_S t}\left[1-e^{-(\alpha_L-\alpha_S)t}
\right]\quad.
\cr}
\end{equation}
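These interference functions are easy to explore numerically. A minimal
sketch with purely illustrative (non-physical) parameters, showing that an
initial $K^0$ is pure at $t=0$ and then develops an oscillating
$\overline{K}{}^0$ component of weight $|\alpha|^2|f_-(t)|^2\simeq|f_-(t)|^2$
for $|\alpha|\simeq 1$:

```python
import cmath

# Illustrative parameters only (arbitrary units, not the physical values):
m_S, gamma_S = 10.0, 2.0
m_L, gamma_L = 10.5, 0.02
a_S = 1j * m_S + gamma_S / 2     # alpha_S = i m_S + gamma_S/2
a_L = 1j * m_L + gamma_L / 2     # alpha_L = i m_L + gamma_L/2

def f_plus(t):
    return 0.5 * (cmath.exp(-a_S * t) + cmath.exp(-a_L * t))

def f_minus(t):
    return 0.5 * (cmath.exp(-a_S * t) - cmath.exp(-a_L * t))

# At t = 0 a pure K0 is still a pure K0 ...
assert abs(f_plus(0) - 1) < 1e-12 and abs(f_minus(0)) < 1e-12
# ... while at later times the K0bar component builds up and oscillates:
for t in (0.5, 1.0, 2.0):
    print(t, abs(f_plus(t))**2, abs(f_minus(t))**2)
```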
{3)}{ In terms of the $CP$--eigenstates $K_1$ and $K_2$
\begin{equation}
\eqalign{
|K_1^0>=&{1\over{\sqrt 2}}( | {\rm K^ 0} \rangle + | {\rm \overline{K}{}^0} \rangle )\cr
|K_2^0>=&{1\over{\sqrt 2}}( | {\rm K^ 0} \rangle - | {\rm \overline{K}{}^0} \rangle )\cr}\qquad\qquad
\eqalign{
CP|K_1^0>=&+|K_1^0>\cr
CP|K_2^0>=&-|K_2^0>\cr}
\end{equation}
which let us express the mass eigenstates as
\begin{equation}
\eqalign{
|K_S^0>=&{1\over{\sqrt 2}}\left[ (p+q) |K_1^0>+(p-q)|K_2^0>\right]\cr
|K_L^0>=&{1\over{\sqrt 2}}\left[ (p-q) |K_1^0>+(p+q)|K_2^0>\right]\cr}
\end{equation}
\medskip
The three bases $\{K_S,K_L\}$, $\{K^0,{\overline K}^0\}$
and $\{ K_1$, $K_2\} $, are completely equivalent.
\section{\bf The Mechanical Analogue of the $K^0- \overline {K^0} $
Complex System.}
\bigskip
We seek in classical physics an analogue of the two-state mixing problem which
leads to a non-zero value of $\epsilon$.
The problem is quite difficult since the equations of motion in
classical mechanics are time-reversal invariant. The main features of
irreversibility enter only when the effects of dissipation are considered.
These results seem to reflect the well-known requirement
of additional complementary relations,
occurring at the classical level,
to make the equations of motion of dissipative systems derivable from
a variational principle \cite{Bate}.
The neutral kaon system closely resembles the typical problems met in
classical physics with coupled degenerate oscillators with dissipation.
A system of two coupled pendula~\cite{BW} provides the simplest analogy to the
two level system. The requirement of CPT invariance is satisfied by taking
the two pendula to have equal natural frequencies $\omega_0$ (and, for
simplicity, equal lengths and masses). If one couples them by a connecting
spring, the two normal modes will consist of one with frequency $\omega_1 =
\omega_0$ in which the two pendula oscillate in phase, and another with
$\omega_2 > \omega_0$ in which the two oscillate 180$^{\circ}$ out of phase,
thereby compressing and stretching the connecting spring. If the connection
dissipates energy, the mode with frequency $\omega_2$ will eventually decay
away, leaving only the in-phase mode with frequency $\omega_1 = \omega_0$.
\noindent
Another mechanical analogue of the two-level system can be obtained
using two coupled oscillators, each consisting of an identical mass $m$
attached to a spring of constant $k_1$, the two masses being coupled by
a spring of constant $k_2$.
Taking unit masses, the equations of motion of the two coupled
oscillators are
\begin{equation}
\ddot x_1 = -k_1 x_1 - k_2 ( x_1 -x_2 )~~~
\end{equation}
\begin{equation}
\ddot x_2 = -k_1 x_2 + k_2 ( x_1 -x_2 )~~~
\end{equation}
\noindent
Now we assume a harmonic behavior,
$x_i = X_i e^{-i \omega t}$, and solve the two coupled equations in
$x_1$ and $x_2$ for the characteristic values of $\omega$:
\begin{equation}
( k_1 +k_2 - \omega^2 )^2 = k_2^2~~~.
\end{equation}
\noindent
One solution has the natural frequency $\omega^2 = k_1$ independently of the
coupling. This is the solution with $x_1 = x_2$, the symmetric mode.
The other solution has a frequency $\omega^2 = k_1 + 2 k_2$, which corresponds
to the asymmetric mode, $x_1 \ne x_2$. The spring constant $k_2$ produces a
frequency difference between the two normal modes.
The in-phase (symmetric) mode is thus unaffected by the coupling,
while the out-of-phase (asymmetric) mode feels it at full strength.
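This normal-mode analysis can be sketched numerically (arbitrary
illustrative spring constants): diagonalizing the stiffness matrix yields
exactly $\omega^2=k_1$ and $\omega^2=k_1+2k_2$.

```python
import numpy as np

k1, k2 = 4.0, 1.0                    # illustrative spring constants
# Stiffness matrix of the coupled system: x'' = -K x
K = np.array([[k1 + k2, -k2],
              [-k2, k1 + k2]])

omega2 = np.sort(np.linalg.eigvalsh(K))
print(omega2)                        # symmetric mode, then asymmetric mode
assert np.allclose(omega2, [k1, k1 + 2 * k2])
```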
We now consider the effects of dissipation on the
two-state oscillator system by introducing a small dissipative
coupling
between the two masses in the form of
an air dash-pot (Fig.~1).
This device gives a velocity-dependent force of the form
\begin{equation}
f= -k \dot x .
\end{equation}
In the particular case of the two coupled oscillators the dissipation force on
a mass can be written as a velocity-dependent force due to the other
mass.
The equations of motion contain a dissipative term and can be
expressed as
\begin{equation}
\ddot x_1 = -( k_1 + k_2 ) x_1 +k_2 x_2 + a \dot x_2
\end{equation}
\begin{equation}
\ddot x_2 = -( k_1 + k_2 ) x_2 +k_2 x_1 - a \dot x_1
\end{equation}
We can rewrite these equations by means of the standard formalism of
analytical mechanics
\begin{equation}
\frac{d}{dt} \frac{\partial T}{\partial \dot x_j} -\frac{\partial T}{\partial x_j}
= Q_j
\end{equation}
where
\begin{equation}
T= \frac{1}{2} \sum_{i,j}\delta_{ij} \frac{d x_i}{dt}\frac{dx_j}{dt}\; ,
\end{equation}
and of the generalized forces
\begin{equation}
Q_j = \sum_{i=1}^{2} \left( A_{i j} x_{i} + B_{i j} \frac{dx_i}{dt}\right)\; .
\end{equation}
where
\begin{equation}
\eqalign{
A_{11}&= A_{22} = - (k_1 +k_2)\cr
A_{12}&= A_{21} = k_2\cr
B_{11}&= B_{22} = 0\cr
-B_{12}&= B_{21}=a\cr}
\end{equation}
A normal mode solution of the kind $x_i = X_i e^{-i \omega t}$
can be derived from the secular equation
\begin{equation}
\vert \delta_{ij} \omega^2 + A_{ij} + i \omega B_{ij}\vert=0
\end{equation}
In order to recast the mechanical problem in a form closer to the previous
relations, we rearrange it in a complex vector space with
basis $\ket{\mbox{{\boldmath $e_1$}}}$ and $\ket{\mbox{{\boldmath $e_2$}}}$, so that any vector state
\begin{equation} {\boldmath x} \equiv [ \begin{array}{c} x_1 \\ x_2 \end{array}
]~~~.
\end{equation}
can be expressed as
\begin{equation}
\ket{{\boldmath x}} = x_1 \ket{\mbox{{\boldmath $e_1$}}} + x_2 \ket{\mbox{{\boldmath $e_2$}}}.
\end{equation}
Thus the equations of motion can be rewritten as
\begin{equation}
{\cal I} \frac{d^2 | x >}{dt^2} = {\cal K} | x >
- {\cal A} \frac{d| x >}{dt}
\end{equation}
with
\begin{equation}
{\cal K} \equiv [ \begin{array}{c c}
-( k_1 + k_2 )& k_2 \\
k_2 & -(k_1 +k_2 ) \\
\end{array} ]~~~,
\end{equation}
\begin{equation}
{\cal A} \equiv [ \begin{array}{c c}
0 & -a \\
a & 0 \\
\end{array} ]~~~.
\end{equation}
Its physical solutions derive from the characteristic equation
\begin{equation}
{\cal H} {| \bf x >} = \omega^2 {| \bf x >}~~~,
\end{equation}
where
\begin{equation}
{\cal H} = i \omega {\cal A} - {\cal K} =
[ \begin{array}{c c}
k_1 + k_2 & -k_2 -i \sqrt{k_1} a \\
-k_2 +i \sqrt{k_1} a & k_1 +k_2 \\
\end{array} ]~~~.
\end{equation}
Here we have replaced $\omega$ with $\sqrt{k_1}$ in the coupling term.
The violation of ``CP'' invariance is parameterized by
the presence of small antisymmetric off-diagonal terms in ${\cal H}$.
The characteristic equation now becomes
\begin{equation}
(k_1 - \omega^2)^2 + 2 k_2 (k_1- \omega^2) - a^2 k_1= 0~~~.
\end{equation}
The physical solutions are normal modes with $\omega^2 \simeq k_1 +2 k_2$,
corresponding to $K_S$ (whose mass is affected strongly by the
``CP-conserving'' coupling $k_2$),
and with $\omega^2 = k_1 - \frac{a^2 k_1}{2 k_2}$, corresponding
to $K_L$ (whose mass now also receives a small contribution from the
``CP-violating'' coupling $a$). Thus the general solution
recovers the notation of the kaon system
with the identification
\begin{equation}
\epsilon \simeq \frac{-i a \sqrt {k_1}}{2 k_2}~~~.
\end{equation}
and
\begin{equation}
\frac{q}{p} = \sqrt{ \frac{-k_2 + i a \sqrt {k_1}}
{-k_2 - i a \sqrt {k_1}}}~~~.
\end{equation}
Here we have assumed $a$ sufficiently small so that $|\epsilon| \ll 1$.
The relative strength and phase of the antisymmetric and symmetric couplings
thus govern $\epsilon$, just as in the case of neutral kaons.
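As a numerical sanity check of this weak-damping limit (with illustrative
spring constants and a small dissipative coupling $a$, not values from the
text), diagonalizing ${\cal H}$ reproduces the two quoted normal modes and
the smallness of $\epsilon$:

```python
import numpy as np

k1, k2, a = 4.0, 1.0, 0.05           # illustrative; a is the weak coupling
H = np.array([[k1 + k2, -k2 - 1j * np.sqrt(k1) * a],
              [-k2 + 1j * np.sqrt(k1) * a, k1 + k2]])

omega2 = np.sort(np.linalg.eigvalsh(H))   # H is Hermitian here
# "K_L"-like mode: k1 - a^2 k1/(2 k2);  "K_S"-like mode: ~ k1 + 2 k2
assert abs(omega2[0] - (k1 - a**2 * k1 / (2 * k2))) < 1e-3
assert abs(omega2[1] - (k1 + 2 * k2)) < 1e-2

eps = -1j * a * np.sqrt(k1) / (2 * k2)
print(omega2, abs(eps))              # |epsilon| << 1 for small a
```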
\bigskip
\section{\bf The Electrical Analogue of CP Violation}
\bigskip
The dynamics of the two-state kaon system can also be reproduced by a
generalization of the engineer's transmission-line load problem and,
as such, by any problem with degenerate constituents
involving standing waves and dissipation.
A simple electrical analogue (Fig. 2) can be built up
using two $L-C$ ``tank'' circuits, each consisting of a capacitor $C_i$ in
parallel with an inductor $L_i$ and having resonant frequency $\omega_{0i} =
(L_iC_i)^{-1/2}$ $(i = 1,2)$. The two tank circuits are coupled through a
two-port device $G$, so that the voltages and the currents on each circuit
are related to each other by a linear relation, that is representative of the
2-port device,
\begin{equation}
\eqalign{
f_1(I_1,I_2 ; V_1,V_2)&=0\cr
f_2(I_1,I_2 ; V_1,V_2)&=0\; ,\cr}
\end{equation}
with $f_1$ and $f_2$ linear functions.
A 2-port component admits up to six representations, each a $2\times 2$ matrix.
If we designate as $h_1$ and $h_2$ two of the four electrical variables
$I_1$, $I_2$,
$V_1$, $V_2$, and as $h^\prime_1$ and $h^\prime_2$
the remaining two variables, we can write a
matrix relation
\begin{equation}
{\cal G}\, {\it h_{inp}} = {\it h_{out}}~~~,
\end{equation}
where
\begin{equation}
{\cal G} \equiv \left[ \begin{array}{c c}
g_{11} & g_{12} \\
g_{21} & g_{22} \\
\end{array} \right]~~~,
\quad\hbox{with}\quad
{\it h_{inp}} \equiv \left[ \begin{array}{c} h_1 \\ h_2 \end{array}
\right]~~~
\qquad\hbox{and}\;\;
{\it h_{out}} \equiv \left[ \begin{array}{c} h^\prime_1 \\
h^\prime_2 \end{array}
\right]~~~.
\end{equation}
If we choose $h_1=V_1$, $ h_2=V_2$, and $h^\prime_1=I_1$,
$h^\prime_2=I_2$,
then ${\cal G}$ represents the admittance matrix,
and the equations of the circuit become:
\begin{equation}
\eqalign{
I_1&= g_{11} V_1 + g_{12} V_2\cr
I_2&= g_{21} V_1 + g_{22} V_2\; ,\cr}
\end{equation}
If we assume, for simplicity, $L_1 = L_2 = L$ and $C_1 = C_2 = C$,
and we calculate the currents
flowing in the two tank circuits, the symbolic solutions
$V_i = v_i e^{-i \omega t}$, $I_i = i_i e^{-i \omega t}$
give the voltages $v_i$:
\begin{equation}
v_i = \frac{i \omega L}{1- \omega^2 L C} i_i \quad .
\end{equation}
Substituting in the equation of the 2-port device, it is straightforward
to deduce the following matrix equation:
\begin{equation}
{\cal G'} {\bf i} = \omega^2 {\bf i}~~~,
\end{equation}
where
\begin{equation}
{\cal G'} \equiv \left[ \begin{array}{c c}
\omega_0 ^2 - \frac{i g_{11} \omega_0}{C} & \frac{-i g_{12} \omega_0 }{C} \\
\frac{-i g_{21} \omega_0}{C} & \omega_0 ^2 - \frac{i g_{22} \omega_0}{C} \\
\end{array} \right]~~~,
\end{equation}
and we have replaced $\omega$ by $\omega_0$ in the matrix representation.
We see that CP-violation can be obtained if $g_{12} \ne g_{21}$
(a 2-port with this property is usually said to be non-reciprocal).
If we take $g_{11} \simeq g_{22}$ , $g_{12} \simeq g_{11} - g^* $ and
$g_{21} \simeq g_{11} + g^*$, with small antisymmetric off-diagonal terms
$g^*$, we can easily recognize that the solution
$\omega^2 = \omega_0 ^2 - 2 i g_{11} \frac{\omega_0}{C}$
corresponds to $K_S$ if
\begin{equation}
\epsilon \simeq \frac{g^*}{2 g_{11}}~~~.
\end{equation}
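A numerical sketch of this non-reciprocal circuit (illustrative component
values, assuming $g_{22}=g_{11}$ and $g_{21}=g_{11}+g^*$): the eigenvalues of
${\cal G'}$ split into a strongly damped ``$K_S$''-like mode near
$\omega_0^2-2ig_{11}\omega_0/C$ and a weakly damped ``$K_L$''-like mode near
$\omega_0^2$.

```python
import numpy as np

omega0, C = 1.0, 1.0                 # illustrative resonance and capacitance
g11 = 0.1
g_star = 0.01                        # small antisymmetric part
g22, g12, g21 = g11, g11 - g_star, g11 + g_star

Gp = np.array([[omega0**2 - 1j * g11 * omega0 / C, -1j * g12 * omega0 / C],
               [-1j * g21 * omega0 / C, omega0**2 - 1j * g22 * omega0 / C]])

ev = np.linalg.eigvals(Gp)
ev = ev[np.argsort(ev.imag)]         # most damped mode first
assert abs(ev[0] - (omega0**2 - 2j * g11 * omega0 / C)) < 2e-3   # "K_S"
assert abs(ev[1] - omega0**2) < 1e-3                             # "K_L"
print(ev, g_star / (2 * g11))        # epsilon ~ g*/(2 g11)
```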
\vfill
\bigskip
\centerline{\bf ACKNOWLEDGMENTS}
\bigskip
One of us (L. T.) wishes to thank the warm hospitality of
the Institute of Advanced Methodologies of Environmental
Analysis and gratefully acknowledges the financial
support of a grant of the National
Research Council (Consiglio Nazionale delle Ricerche).
\bigskip
\section{Introduction}
Conditional independence graphs are of vital importance in the
structuring, understanding and computing of high dimensional complex
statistical models. For a review of early work in this area, see
\cite{lslcprob}, the references and the discussion, and also
\cite{apdprop}. The above mentioned work is concerned
with updating in discrete probability networks. For a discussion of
updating in networks with continuous random variables, see
\cite{sllprop}, for example. For a general overview
of the theory of graphical models, see \cite{sllbook}.
Also relevant to this paper is the work on graphical Gaussian
models. \cite{sllbook}, \cite{whit} and \cite{tpshk}
discuss the properties of such models.
\cite{ntbayesnet} examine
data propagation through a graphical Gaussian network, and apply their
results to a dynamic linear model (DLM).
Here, the aim is to link the
theory of local computation over graphical Gaussian networks to the
Bayes linear framework for
subjective statistical inference, and in particular to the many
interpretive and diagnostic features associated with that methodology.
\section{Bayes linear methods}
\subsection{Overview}
In this paper, a Bayes linear approach is taken to subjective statistical inference,
making expectation (rather than probability) primitive. An overview of
the methodology is given in \cite{fgcross}.
The foundations of the theory are quite general, and are outlined in
the context of second-order exchangeability in \cite{mgrevexch},
and discussed for more general situations in \cite{mgpriorinf}.
Bayes
linear methods may be used in order to learn about any quantities of
interest, provided only that a mean and variance specification is made
for all relevant quantities, and a specification for the covariance
between all pairs of quantities is made. No distributional assumptions
are necessary.
There are many interpretive and
diagnostic features of the Bayes linear methodology. These are
discussed with reference to \ensuremath{[B/D]}\ (the Bayes linear computer
programming language) in \cite{gwblincomp}.
\subsection{Bayes linear conditional independence}
Conventional graphical models are defined via strict probabilistic
conditional independence \cite{apdci}. However, as \cite{jqsid}
demonstrates, all that is actually required is a ternary operator
$\ci[\cdot]{\cdot}{\cdot}$ satisfying some simple properties. Any
relation satisfying these properties is known as a \emph{generalised
conditional independence} relation. Bayes
linear graphical models are based on what \cite{jsstatgraph}
refers to as \emph{weak conditional independence}. In this paper, the
relation will be referred to as \emph{adjusted orthogonality}, in
order to emphasise the linear structure underlying the relation.
Bayes
linear graphical models based upon the concept of adjusted
orthogonality are described in \cite{mginfl}. For completeness,
and to introduce some notation useful in the context of local
computation, the most important elements of the methodology are
summarised here, and the precise form of the adjusted orthogonality
relation is defined.
For vectors of random quantities, $X$ and $Y$, define
$\cov{X}{Y}=\ex{XY\mbox{}^\mathrm{T}}-\ex{X}\ex{Y}\mbox{}^\mathrm{T}$ and
$\var{X}=\cov{X}{X}$. Also, for any matrix, $A$, $A\mbox{}^{\dagger}$ represents
the Moore-Penrose generalised inverse of $A$.
\begin{defn}
For all vectors of random quantities $B$ and $D$, define
\begin{align}
\proj[D]{B} =& \cov{B}{D}\var{D}\mbox{}^{\dagger} \\
\transf[D]{B} =& \proj[D]{B}\proj[B]{D}
\end{align}
\end{defn}
These represent the fundamental operators of the
Bayes linear methodology. $\proj[D]{B}$ is the operator which updates
the expectation vector for $B$ based on the observation of $D$, and
$\transf[D]{B}$ updates the variance matrix for $B$ based on
observation of $D$. Local computation over Bayes linear graphical
models is made possible by local computation of these operators.
\begin{defn}
For all vectors of random quantities $B$, $C$ and $D$, define
\begin{align}
\ex[D]{B} =& \ex{B}+\proj[D]{B}[D-\ex{D}] \label{eq:exbd}\\
\cov[D]{B}{C} =& \cov{B-\ex[D]{B}}{C-\ex[D]{C}} \label{eq:covdbc}
\end{align}
\end{defn}
$\ex[D]{B}$ is the \emph{expectation for $B$ adjusted by
$D$}. It represents the linear combination of a constant and the
components of $D$ \emph{closest to} $B$ in the sense of expected
squared loss. It corresponds to $\ex{B|D}$ when $B$ and $D$ are
jointly multivariate normal. $\cov[D]{B}{C}$ is the
\emph{covariance between $B$ and $C$ adjusted by $D$}, and represents
the covariance between $B$ and $C$ given
observation of $D$. It corresponds to $\cov{B}{C|D}$ when $B$, $C$ and
$D$ are jointly multivariate normal.
\begin{lemma}
For all vectors of random quantities $B$, $C$ and $D$
\begin{align}
\cov[D]{B}{C} =& \cov{B}{C}-\cov{B}{D}\proj[D]{C}\mbox{}^\mathrm{T} \label{eq:covdbc2}\\
\var[D]{B} =& (\mathrm{I}-\transf[D]{B})\var{B}\label{eq:vardb2}
\end{align}
\end{lemma}
\begin{proof}
Substituting \eqref{eq:exbd} into \eqref{eq:covdbc} we get
\begin{align}
\cov[D]{B}{C} =& \cov{B}{C-\ex[D]{C}} \\
=& \cov{B}{C-\proj[D]{C}D}
\end{align}
which gives \eqref{eq:covdbc2}, and replacing $C$ by $B$ gives
\eqref{eq:vardb2}.
\end{proof}
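These operators are straightforward to compute numerically. A minimal
sketch (with an arbitrary, illustrative joint specification, not taken from
the paper) verifies that $(\mathrm{I}-\transf[D]{B})\var{B}$ agrees with the
Schur-complement form $\var{B}-\cov{B}{D}\var{D}\mbox{}^{\dagger}\cov{D}{B}$
of \eqref{eq:covdbc2}:

```python
import numpy as np

# Arbitrary joint second-order specification for B (2-vector) and D (2-vector):
# V is the joint variance matrix of (B, D), block-partitioned below.
V = np.array([[4.0, 1.0, 1.0, 0.5],
              [1.0, 3.0, 0.2, 0.4],
              [1.0, 0.2, 2.0, 0.3],
              [0.5, 0.4, 0.3, 1.0]])
var_B, cov_BD = V[:2, :2], V[:2, 2:]
var_D = V[2:, 2:]

P_D_B = cov_BD @ np.linalg.pinv(var_D)      # P_D(B) = Cov(B,D) Var(D)^+
P_B_D = cov_BD.T @ np.linalg.pinv(var_B)    # P_B(D) = Cov(D,B) Var(B)^+
T_D_B = P_D_B @ P_B_D                       # T_D(B)

adj_var = (np.eye(2) - T_D_B) @ var_B       # adjusted variance of B
schur = var_B - cov_BD @ np.linalg.pinv(var_D) @ cov_BD.T
assert np.allclose(adj_var, schur)
print(adj_var)
```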
Note that \eqref{eq:vardb2} shows that $\transf[D]{B}$ is responsible
for the updating of variance matrices. Adjusted orthogonality is now defined.
\begin{defn}
\label{def:ci}
For random vectors $B$, $C$ and $D$
\begin{align}
\ci[D]{B}{C} \iff& \cov[D]{B}{C}=0
\end{align}
\end{defn}
\cite{mginfl} shows that this relation does indeed define a
generalised conditional independence property, and hence that all the
usual properties of graphical models based upon such a relation hold.
\subsection{Bayes linear graphical models}
\cite{mginfl} defines a Bayes linear influence
diagram based upon the adjusted orthogonality relation.
\cite{gfsbeer} illustrate the use of Bayes linear influence diagrams in
a multivariate forecasting problem.
Relevant graph
theoretic concepts can be found in the appendix of
\cite{apdsllhyper}. The terms \emph{moral graph} and \emph{junction
tree} are explained in \cite{sdlc}.
Briefly, an undirected moral graph is formed from a directed acyclic
graph by \emph{marrying} all pairs of parents of each node (adding an arc
between them), and
then dropping the arrows from all arcs. A junction tree is the \emph{tree} of
\emph{cliques} of a \emph{triangulated} moral graph. A tree is a graph
without any cycles. A graph is
triangulated if no cycle of length at least four is without a chord. A
clique is a maximal fully connected subset of nodes of a triangulated graph.
In this paper,
attention will focus on undirected graphs. An undirected graph
consists of a collection of nodes $B=\{B_i|1\leq i\leq n\}$ for some
$n$, together with a collection of undirected arcs. Every pair
of nodes, $\{B_i,B_j\}$ is joined by an undirected arc unless
$\ci[B\backslash\{B_i,B_j\}]{B_i}{B_j}$. Here, the standard
set theory notation, $B\backslash A$ is used to mean the set of elements of $B$
which are not in $A$. An
undirected graph may be obtained from a Bayes linear influence diagram
by forming the
moral graph of the influence diagram in the usual way.
In fact, \emph{local computation} (the computation of global
influences of particular nodes of the graph, using only information
local to adjacent nodes) requires that the undirected graph
representing the conditional independence structure is a
tree. This
tree may be formed as the junction
tree of a triangulated moral graph, or better, by grouping together
related variables ``by hand'' in order to get a tree structure for the
graph. For the rest of this paper, it will be assumed that the model
of interest is represented by an undirected tree defined via adjusted
orthogonality.
\section{Local computation on Bayes linear graphical models}
\subsection{Transforms for adjusted orthogonal belief structures}
\begin{lemma}
\label{lem:covbc}
If $B$, $C$ and $D$ are random vectors such that $\ci[D]{B}{C}$, then
\begin{equation}
\cov{B}{C}=\cov{B}{D}\proj[D]{C}\mbox{}^\mathrm{T}
\end{equation}
\end{lemma}
This follows immediately from Definition \ref{def:ci} and \eqref{eq:covdbc2}.
\begin{lemma}
\label{lem:covxyz}
If $X$, $Y$ and $Z$ are random vectors such that $\ci[Y]{X}{Z}$, then
\begin{align}
\cov[X]{Y}{Z} =& (\mathrm{I}-\transf[X]{Y})\cov{Y}{Z}
\end{align}
\end{lemma}
\begin{proof}
From \eqref{eq:covdbc2}
\begin{align}
\cov[X]{Y}{Z} =& \cov{Y}{Z}-\cov{Y}{X}\var{X}\mbox{}^{\dagger}\cov{X}{Z} \\
=& \cov{Y}{Z} - \proj[X]{Y}\cov{X}{Y}\proj[Y]{Z}\mbox{}^\mathrm{T} \quad
\mathrm{by\ Lemma\ \ref{lem:covbc}}
\end{align}
and the result follows.
\end{proof}
\begin{thm}
\label{thm:lc}
If $X$, $Y$ and $Z$ are random vectors such that $\ci[Y]{X}{Z}$, then
\begin{align}
\proj[X]{Z} =& \proj[Y]{Z}\proj[X]{Y} \label{eq:projxz} \\
\transf[X]{Z} =& \proj[Y]{Z}\transf[X]{Y}\proj[Z]{Y} \label{eq:transfxz}
\end{align}
\end{thm}
\begin{proof}
\begin{align}
\proj[X]{Z} =& \cov{Z}{X}\var{X}\mbox{}^{\dagger} \\
=& \cov{Z}{Y}\proj[Y]{X}\mbox{}^\mathrm{T}\var{X}\mbox{}^{\dagger} \quad \mathrm{by\ Lemma\
\ref{lem:covbc}}
\end{align}
which gives \eqref{eq:projxz}. Also
\begin{align}
\transf[X]{Z} =& \proj[X]{Z}\proj[Z]{X} \\
=& \cov{Z}{X}\var{X}\mbox{}^{\dagger}\cov{X}{Z}\var{Z}\mbox{}^{\dagger} \\
=&
\cov{Z}{Y}\proj[Y]{X}\mbox{}^\mathrm{T}\var{X}\mbox{}^{\dagger}\cov{X}{Y}\proj[Y]{Z}\mbox{}^\mathrm{T}\var{Z}\mbox{}^{\dagger}
\end{align}
which gives \eqref{eq:transfxz}.
\end{proof}
Theorem \ref{thm:lc} contains the two key results which allow local
computation over Bayes linear belief networks.
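Both identities can be checked numerically on any structure in which $X$ and $Z$ are linear in a common vector $Y$ plus uncorrelated residuals, so that $\ci[Y]{X}{Z}$ holds by construction. A NumPy sketch (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# X = F Y + e and Z = H Y + w, with e, w uncorrelated with Y and each other,
# so that X and Z are adjusted orthogonal given Y.
VY = np.array([[2.0, 0.3], [0.3, 1.0]])
F = rng.normal(size=(2, 2))
H = rng.normal(size=(2, 2))
Ve = np.eye(2)
Vw = 0.5 * np.eye(2)

VX = F @ VY @ F.T + Ve
VZ = H @ VY @ H.T + Vw
CZX = H @ VY @ F.T          # Cov(Z, X)
CZY = H @ VY                # Cov(Z, Y)
CYX = VY @ F.T              # Cov(Y, X)

P_XZ = CZX @ np.linalg.pinv(VX)   # proj_X(Z), computed directly
P_YZ = CZY @ np.linalg.pinv(VY)   # proj_Y(Z)
P_XY = CYX @ np.linalg.pinv(VX)   # proj_X(Y)
# P_XZ agrees with P_YZ @ P_XY, as in eq. (projxz)
```

The corresponding check of \eqref{eq:transfxz} follows by forming each transform as a product of the appropriate projections.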
\subsection{Local computation on trees}
\begin{figure}
\epsfig{file=path.eps,width=6in}
\caption{Local computation along a path}
\label{fig:path}
\end{figure}
The
implications of Theorem \ref{thm:lc} for Bayes linear trees should be
clear from examination of Figure \ref{fig:path}.
To examine the effect of observing node $X$, it is sufficient to
compute the operators $\proj[X]{Z}$ and
$\transf[X]{Z}$ for every node, $Z$ on the
graph, since these operators contain all necessary
information about the adjustment of $Z$ by $X$.
There is a unique path
from $X$ to $Z$ which is shown in
Figure \ref{fig:path}. The direct predecessor of $Z$ is denoted by
$Y$. Note that it is a property of the graph that
$\ci[Y]{X}{Z}$. Hence, by Theorem \ref{thm:lc}, the
transforms $\proj[X]{Z}$ and $\transf[X]{Z}$ can be computed using
$\proj[X]{Y}$ and $\transf[X]{Y}$ together with information local to
nodes $Y$ and $Z$. This provides a recursive method for the
calculation of the transforms, which leads to the algorithm for the
propagation of transforms throughout the tree, which is described in
the next section.
\subsection{Algorithm for transform propagation}
Consider a tree with nodes $B=\{B_1,\ldots,B_n\}$ for some
$n$. Each node, $B_i$, represents a vector of random quantities. The
tree also has an edge set $G$,
where each $g\in G$ is of the form $g=\{B_k,B_l\}$ for some $k,l$. The
resulting tree should represent a conditional independence graph over
the random variables in question. It
is assumed that each node, $B_i$ has an expectation vector $E_{B(i)}=\ex{B_i}$
and variance matrix $V_{B(i)}=\var{B_i}$ associated with it. It is further
assumed that each edge, $\{B_k,B_l\}$ has the covariance matrix,
$C_{B(k),B(l)}=\cov{B_k}{B_l}$ associated with it. This is the only
information required in order to carry out Bayes linear local
computation over such structures.
Now consider the effect of
adjustment by the vector $X$, which consists of some or all of the
components of node $B_j$ for some $j$. Then, starting with node $B_j$,
calculate and store $T_{B(j)}=\transf[X]{B_j}$ and
$P_{B(j)}=\proj[X]{B_j}$. Then, for each node
$B_k\in b(B_j)\equiv\left\{B_i|\{B_i,B_j\}\in G\right\}$ calculate and
store $T_{B(k)}$ and $P_{B(k)}$, then for each node
$B_l\in b(B_k)\backslash B_j$, do the same, using Theorem \ref{thm:lc} to
calculate
\begin{align}
P_{B(l)} =& \proj[B_k]{B_l}P_{B(k)} \\
T_{B(l)} =& \proj[B_k]{B_l}T_{B(k)}\proj[B_l]{B_k}
\end{align}
In this way, recursively step outward through the tree, at
each stage computing and storing the transforms using the transforms
from the predecessor and the variance and covariance information over and
between the current node and its predecessor.
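The recursion above can be sketched as follows (a minimal NumPy sketch; the data structures and function names are illustrative assumptions, not from any original implementation):

```python
import numpy as np

def propagate(tree, var, cov, root, P_root, T_root):
    """Propagate P_{B(i)} and T_{B(i)} outward from the adjusted node.

    tree : dict mapping each node to a list of its neighbours
    var  : dict mapping each node to its variance matrix Var(B_i)
    cov  : dict mapping an edge (l, k) to Cov(B_l, B_k); the stored
           "direction" of each covariance matrix matters, so the
           transpose is taken when the edge is traversed the other way
    """
    P = {root: P_root}
    T = {root: T_root}

    def visit(k, parent):
        for l in tree[k]:
            if l == parent:
                continue
            C = cov[(l, k)] if (l, k) in cov else cov[(k, l)].T
            proj_k_l = C @ np.linalg.pinv(var[k])    # proj_{B_k}(B_l)
            proj_l_k = C.T @ np.linalg.pinv(var[l])  # proj_{B_l}(B_k)
            P[l] = proj_k_l @ P[k]                   # Theorem (projxz)
            T[l] = proj_k_l @ T[k] @ proj_l_k        # Theorem (transfxz)
            visit(l, k)

    visit(root, None)
    return P, T
```

Each node is visited once, and each step uses only information local to the current node and its predecessor.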
Once this process is completed, associated with every node, $B_i\in
B$, there are matrices
$T_{B(i)}=\transf[X]{B_i}$ and $P_{B(i)}=\proj[X]{B_i}$. These
operators represent all information about the adjustment of the
structure by $X$. Note, however, that $X$ has not yet been observed, and
that expectations, variances and covariances associated
with nodes and edges have not been updated.
It is a crucial part of the Bayes linear methodology that \emph{a
priori} analysis of the model takes place, and that the expected
influence of potential observables is examined. Examination of the
eigen structure of the belief transforms associated with nodes of
particular interest is the key to understanding the structure of the
model, and the benefits of observing particular nodes. It is important
from a design perspective that such analyses can take place before any
observations are made. See \cite{mgcomp} for a more complete
discussion of such issues, and \cite{gwblincomp} for a
discussion of the technical issues it raises.
\subsection{Updating of expectation and covariance structures after
observation}
After observation of $X=x$, updating of the expectation,
variance and covariance structure over the tree is required. Start at
node $B_j$ and calculate
\begin{align}
E_{B(j)}^\prime =& E_{B(j)} + P_{B(j)}\left(x-\ex{X}\right) \\
V_{B(j)}^\prime =& (\mathrm{I}-T_{B(j)})V_{B(j)}
\end{align}
(using \eqref{eq:vardb2}).
Replace $E_{B(j)}$ by $E_{B(j)}^\prime$ and $V_{B(j)}$ by
$V_{B(j)}^\prime$. Then for each $B_k\in b(B_j)$ do the same, and also
update the arc between $B_j$ and $B_k$ by calculating
\begin{align}
C_{B(j),B(k)}^\prime =& (\mathrm{I}-T_{B(j)})C_{B(j),B(k)}
\end{align}
(using Lemma \ref{lem:covxyz}), and replacing $C_{B(j),B(k)}$ by
$C_{B(j),B(k)}^\prime$. Again, step outwards through the tree, updating
nodes and edges using the transforms previously calculated.
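The node and edge update rules may be sketched as follows (NumPy assumed; function names are illustrative):

```python
import numpy as np

def update_node(E, V, P, T, x, EX):
    """Update the moments of one node after observing X = x."""
    E_new = E + P @ (x - EX)                 # adjusted expectation
    V_new = (np.eye(V.shape[0]) - T) @ V     # eq. (vardb2)
    return E_new, V_new

def update_edge(C, T_j):
    """Update Cov(B_j, B_k) on an arc out of node B_j (Lemma covxyz)."""
    return (np.eye(T_j.shape[0]) - T_j) @ C
```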
\subsection{Pruning the tree}
Once the expectations, variances and covariances over the structure
have been updated, the tree should be pruned. If the adjusting node
was completely observed (\emph{i.e.} $X=B_j$), then $B_j$ should be
removed from $B$, and $G$ should have any arcs involving $B_j$
removed. Further, leaf nodes and their edges may always
be dropped without affecting the conditional independence structure of
the graph. This is important if a leaf node is partially observed and
the remaining variables are unobservable and of little diagnostic
interest, since it means that the whole node may be dropped after
observation of its observable components.
If a non-leaf node is
partially observed, or a leaf node is observed, but its remaining
components are observable or of interest, then the graph itself should
remain unaffected, but the expectation, variance and covariance
matrices associated with the node and its arcs should have the
observed (and hence redundant) rows and columns removed (for reasons
of efficiency --- use of the Moore-Penrose generalised inverse ensures
that no problems will arise if observed variables are left in the system).
\subsection{Sequential adjustment}
As data becomes available on various nodes, it should be incorporated
into the tree one node at a time. For each node with observations, the
transforms should be computed, and then the beliefs
updated in a sequential fashion. The fact that such sequential
updating provides a coherent method of adjustment is demonstrated in
\cite{mgadjbel}.
\subsection{Local computation of diagnostics}
Diagnostics for Bayes linear adjustments are a crucial part of the
methodology, and are discussed in \cite{mgtraj}. It follows that
for local computation over Bayes linear networks to be of practical
value, methodology must be developed for the local computation of
Bayes linear diagnostics such as the \emph{size}, \emph{expected size}
and \emph{bearing} of
an adjustment.
The bearing represents the magnitude and direction of changes
in belief; the squared magnitude of the bearing is known as the
\emph{size} of the
adjustment.
Consider the observation of data, $X=x$, and
the partial bearing of the adjustment it
induces on some node, $B_p$. Before observation of $X$, record $E=E_{B(p)}$
and $V=V_{B(p)}$. Also calculate the Cholesky factor, $A$ of $V$, so
that $A$ is lower triangular, and $V=AA\mbox{}^\mathrm{T}$. Once the observed
value $X=x$ is known, propagate the revised expectations, variances
and covariances through the Bayes linear tree. The new value of
$E_{B(p)}$ will be denoted $E_{B(p)}^\prime$. Now the quantity
\begin{align}
E^\prime =& A\mbox{}^{\dagger}(E_{B(p)}^\prime-E)
\label{eq:bear}
\end{align}
represents the adjusted expectation for an orthonormal basis,
$F=A\mbox{}^{\dagger}(B_p-E)$ for $B_p$
with respect to the \emph{a priori} beliefs, $E$ and $V$.
Therefore, $E^\prime$ gives the coordinates of
the \emph{bearing} of the adjustment with respect to that
basis.
The
size of the partial adjustment is given by
\begin{align}
\size[x]{B_p} =& ||E^\prime||^2
\end{align}
where $||\cdot||$ represents the Euclidean norm. The expected size is
given by
\begin{align}
\ex{\size[X]{B_p}} =& \trace{T_{B(p)}}
\end{align}
and so the \emph{size ratio} for the adjustment (often of most
immediate interest) is given by
\begin{align}
\sizeratio[x]{B_p} =& \frac{||E^\prime||^2}{\trace{T_{B(p)}}}
\end{align}
A size ratio close to one indicates changes in belief close to what
would be expected. A size ratio smaller than one indicates changes in
belief of smaller magnitude than would have been anticipated \emph{a
priori}, and a size ratio bigger than one indicates changes in belief
of larger magnitude than would have been expected \emph{a
priori}. Informally, a size ratio bigger than 3 is often taken to
indicate a diagnostic warning of possible conflict between \emph{a
priori} belief specifications and the observed data.
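A sketch of the diagnostic computation (NumPy assumed; note that \texttt{np.linalg.cholesky} returns the lower-triangular factor, matching the convention $V=AA\mbox{}^\mathrm{T}$ used above):

```python
import numpy as np

def size_and_ratio(E_prior, V_prior, E_adj, T):
    """Size and size ratio of an adjustment, from the prior moments,
    the adjusted expectation and the belief transform T."""
    A = np.linalg.cholesky(V_prior)                   # V = A A^T
    bearing = np.linalg.pinv(A) @ (E_adj - E_prior)   # eq. (bear)
    size = float(bearing @ bearing)                   # ||E'||^2
    return size, size / np.trace(T)                   # trace T = expected size
```

A size ratio well above one then flags a possible conflict between the prior specification and the data, as discussed above.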
Cumulative sizes and bearings may be calculated in exactly the same
way, simply by updating several times before computing
$E^\prime$. However, to calculate the expected size of the adjustment,
in order to compute the size ratio, the cumulative belief transform
must be recorded and updated at each stage, using the fact that
\begin{align}
\transf[X+Y]{B} =& \ \mathrm{I} - (\mathrm{I} - \transf[Y/]{B/})(\mathrm{I} - \transf[X]{B})
\end{align}
where $\transf[Y/]{B/}$ represents the partial transform for $B$ by
$Y$, with respect to the structure already adjusted by $X$. In other
words, $\mathrm{I}$ minus the transforms at each stage multiply together to
give $\mathrm{I}$ minus the cumulative transform. See \cite{mgcomp}
for a more complete discussion of the multiplicative properties of
belief transforms.
\subsection{Efficient computation for evidence from multiple nodes}
To adjust the tree given data at multiple nodes, it would be
inefficient to adjust the entire tree sequentially by each node in
turn, if the nodes in question are ``close together''. Here again,
ideas may be borrowed from theory for the updating of probabilistic expert
systems. It is possible to propagate transforms and projections from
each node for
which adjustment is required, to a \emph{strong root}, and then
propagate transforms from the
strong root out to the rest of the tree. A strong root is a node which
separates the nodes for which there is information, from as much as possible
of the rest of the tree. In practice, there are many ways in which one
can use the
strong root in order to control information flow through the tree. An
example of its use is given in Section \ref{sec:nse}.
\subsection{Geometric interpretation, and infinite collections}
In this paper, attention has focussed exclusively on finite
collections of quantities, and matrix representations for Bayes linear
operators. All of the theory has been developed from the perspective
of pushing matrix representations of linear operators around a
network. However, the Bayes linear methodology may be formulated and
developed from a purely geometric viewpoint, involving linear
operators on a (possibly infinite dimensional) Hilbert space. This is
not relevant to practical computer implementations of the theory and
algorithms -- hence the focus on matrix formulations in this
paper. However, from a conceptual viewpoint, it is very important,
since one sometimes has to deal, in principle, with infinite
collections of quantities, or probability measures over an infinite
partition. In fact, all of the theory for local computation over Bayes
linear belief networks developed in this paper is valid for the local
computation of Bayes linear operators on an arbitrary Hilbert
space. Consequently, the results may be interpreted geometrically, as
providing a method of pushing linear operators around a Bayes linear
Hilbert space network. A geometric form of Theorem \ref{thm:lc} is derived
and utilised in \cite{gwaeb}.
\section{Example: A dynamic linear model}
\begin{figure}
\epsfig{file=dlm.eps,width=6in}
\caption{Tree for a dynamic linear model}
\label{fig:dlm}
\end{figure}
Figure \ref{fig:dlm} shows a Bayes linear graphical tree model for the
first four time points of a dynamic linear model. Local computation
will be illustrated using the
example model, beliefs and data from \cite{djwll}. Here,
$\forall t,\ \theta_t$ represents the vector $(M_t,N_t)\mbox{}^\mathrm{T}$ from
that paper.
The model takes the form
\begin{align}
X_t =& (1,0)\theta_t + \nu_t \\
\theta_t =& \left(\begin{array}{cc}
1 & 1\\
0 & 1
\end{array}\right)
\theta_{t-1} + \omega_t
\end{align}
where $\var{\theta_1}=\mathrm{diag}(400,9)$, $\ex{\theta_1}=(20,0)\mbox{}^\mathrm{T}$,
$\ex{\nu_t}=0$,
$\ex{\omega_t}=0,\ \var{\nu_t}=171,\ \var{\omega_t}=\mathrm{diag}(4.75,0.36)\
\forall t$, and the $\nu_t$ and $\omega_t$ are uncorrelated.
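The quantities attached to the first observation node can be reproduced numerically from this specification; a NumPy sketch (illustrative; to full precision the leading entry of $T_{\theta(1)}$ is $400/571\approx 0.700$):

```python
import numpy as np

G = np.array([[1.0, 1.0], [0.0, 1.0]])   # state evolution matrix
F = np.array([[1.0, 0.0]])               # observation matrix
V_theta1 = np.diag([400.0, 9.0])
V_nu = np.array([[171.0]])

V_X1 = F @ V_theta1 @ F.T + V_nu         # Var(X_1) = (571)
C = V_theta1 @ F.T                       # Cov(theta_1, X_1) = (400, 0)^T
P_theta1 = C @ np.linalg.pinv(V_X1)      # proj_{X_1}(theta_1)
T_theta1 = P_theta1 @ (C.T @ np.linalg.pinv(V_theta1))
# T_theta1 is approximately [[0.7, 0], [0, 0]], as quoted in the text
```

The remaining transforms follow by the recursive propagation described earlier.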
First, the nodes and arcs shown in Figure \ref{fig:dlm}
are defined. Then the expectation and variance of each node is
calculated and associated with each node, and the covariances between
pairs of nodes joined by an arc are also computed, and associated with
the arc. All of the expectations, variances and covariances are
determined by the model. For example, node $X_1$ has expectation
vector $(20)$ and variance matrix $(571)$ associated with it. Node
$\theta_1$ has expectation vector $(20,0)\mbox{}^\mathrm{T}$ and variance matrix
$\mathrm{diag}(400,9)$ associated with it. The arc between $X_1$ and $\theta_1$
has associated with it the covariance matrix $(400,0)$. Note that
though the arc is undirected, the ``direction'' with respect to which
the covariance matrix is defined is important, and needs also to be
stored.
The effect of the observation of $X_1$ on
the tree structure will be examined, and the effect on the node
$\theta_4$ in particular, which
has \emph{a priori} variance matrix $\left(\begin{array}{rr}
500.3 & 29.2 \\
29.2 & 10.1
\end{array}\right)$ associated with it. Before actual observation
of $X_1$, the belief transforms for the adjustment may be computed
across the tree structure. The transforms are computed recursively,
in the following order.
\begin{xalignat}{3}
T_{X(1)}=&\left(\begin{array}{r}
1
\end{array}\right) &
T_{\theta(1)}=&\left(\begin{array}{rr}
0.7 & 0\\
0 & 0
\end{array}\right) &
T_{\theta(2)}=&\left(\begin{array}{rr}
0.692 & -0.692\\
0 & 0
\end{array}\right) \\
T_{\theta(3)}=&\left(\begin{array}{rr}
0.684 & -1.342\\
0 & 0
\end{array}\right) &
T_{\theta(4)}=&\left(\begin{array}{rr}
0.674 & -1.949\\
0 & 0
\end{array}\right) &
T_{X(4)}=&\left(\begin{array}{r}
0.417
\end{array}\right) \\
T_{X(3)}=&\left(\begin{array}{r}
0.453
\end{array}\right) &
T_{X(2)}=&\left(\begin{array}{r}
0.479
\end{array}\right)
\end{xalignat}
The $P$ matrices are calculated similarly. In particular,
$P_{\theta(4)}=(0.7,0)\mbox{}^\mathrm{T}$. \emph{A priori} analysis of the belief
transforms is possible. For example, $\trace{T_{\theta(4)}}=0.674$,
indicating that observation of $X_1$ is expected to resolve a
proportion $0.674$ of the overall uncertainty about $\theta_4$. This is also the
expected size of the bearing for the adjustment of $\theta_4$ by
$X_1$.
Now, $X_1$ is observed to be $17$, and so the expectations, variances
and covariances may be updated across the structure. For example,
beliefs about node $\theta_4$ were updated so that
$E_{\theta(4)}=(17.9,0)\mbox{}^\mathrm{T}$ and $V_{\theta(4)}=\left(\begin{array}{rr}
220.0 & 29.2 \\
29.2 & 10.1
\end{array}\right)$ after propagation. Also, calculating the bearing
for the adjustment of $\theta_4$ by $X_1=17$, using \eqref{eq:bear}
gives $E^\prime = (-0.094,0.042)\mbox{}^\mathrm{T}$. Consequently, the size of the
adjustment is $0.011$ and the size ratio is $0.016$.
Once evidence from the observed value of $X_1$ has been taken into
account, the $X_1$ node, and the arc between $X_1$ and $\theta_1$ may
be dropped from the graph. Note also that $\theta_1$ then becomes an
unobservable leaf node, which may be of little interest, and so if
desired, the $\theta_1$ node, and the arc between $\theta_1$ and
$\theta_2$ may also be dropped. Observation of $X_2$ may now be
considered. Using the updated, pruned tree, projections and transforms
for the adjustment by $X_2$ may be calculated and propagated through
the tree. For example, the (partial) belief transform for the
adjustment of $\theta_4$ by $X_2$ is $\left(\begin{array}{rr}
0.46 & -0.87 \\
0.03 & -0.05
\end{array}\right)$. If cumulative diagnostics are of interest,
then it is necessary to calculate the cumulative belief transform,
$\left(\begin{array}{rr}
0.82 & -1.92 \\
0.01 & 0.00
\end{array}\right)$. This has trace $0.82$, and so the resolution for
the combined adjustment of $\theta_4$ by $X_1$ and $X_2$ is
0.82. Similarly, the expected size of the cumulative bearing is
$0.82$. $X_2$ is observed to be $22$. The new expectations, variances
and covariances may then be propagated through the tree. For example,
the new expectation vector for $\theta_4$ is $(19.95,0.13)\mbox{}^\mathrm{T}$. The
size of the cumulative bearing is $0.002$, giving a size ratio of
approximately $0.002$. Again, the tree may be pruned, and the whole
process may continue.
\section{Example: $n$-step exchangeable adjustments}
\label{sec:nse}
\begin{figure}[t]
\epsfig{file=3se.eps,width=6in}
\caption{Graphical models for 3-step exchangeable quantities}
\label{fig:3se}
\end{figure}
An ordered collection of random quantities, $\{X_1,X_2,\ldots\}$ is
said to be (second-order)
$n$-step exchangeable if (second-order) beliefs about the collection
remain invariant under an arbitrary translation or reflection of the
collection, and if the covariance between any two members of the
collection is fixed, provided only that they are a distance of at
least $n$ apart. Such quantities arise naturally in the context of
differenced time series \cite{djwll}. $n$-step exchangeable quantities may be
written in the form
\begin{align}
X_i =& M + R_i,\ \forall i
\end{align}
where the $R_i$ are a mean zero $n$-step exchangeable collection such
that the covariance between them is zero provided they are a distance
of at least $n$ apart. $M$ represents the underlying mean for the
collection, and $R_i$ represents the residual uncertainty which would
be left if the underlying mean became known.
Introduction of a mean
quantity helps to simplify a graphical model for an $n$-step
exchangeable collection. For example, Figure \ref{fig:3se} (top) shows an
undirected graphical model for a $3$-step exchangeable
collection. Note that without the introduction of the mean quantity,
$M$, all nodes on the graph would be joined, not just those a distance
of one and two apart. Figure \ref{fig:3se} (bottom) shows a
conditional independence graph for the same collection of quantities,
duplicated and grouped together so as to make the resulting graph
a tree. Note that each node contains $3$ quantities, and that there is
one less node than observables.
In general, for a collection of $N$,
$n$-step exchangeable quantities, the variables can be grouped
together to obtain a simple chain graph, in the obvious way, so that
there are $N-n+2$ nodes, each containing $n$ quantities. The resulting
graph for
$5$-step exchangeable quantities is shown in Figure \ref{fig:5se}
(with the first four nodes missing).
In \cite{djwll}, $3$-, $4$-, and $5$-step exchangeable
collections, $\{{X^{(1)}_3}^2,{X^{(1)}_4}^2,\ldots\}$,
$\{{X^{(2)}_4}^2,{X^{(2)}_5}^2,\ldots\}$ and
$\{{X^{(3)}_5}^2,\linebreak {X^{(3)}_6}^2,\ldots\}$ are used in order to learn
about the quantities, $V_1$, $V_2$ and $V_3$, representing the
variances underlying the DLM discussed in the previous section. Since
the observable sequences are $3$-, $4$-, and $5$-step exchangeable,
they may all be regarded as $5$-step exchangeable, and so Figure
\ref{fig:5se} represents a graphical model for the variables, where
$V=(V_1,V_2,V_3)\mbox{}^\mathrm{T}$, and $\forall i,\
Z_i=({X^{(1)}_i}^2,{X^{(2)}_i}^2,{X^{(3)}_i}^2)\mbox{}^\mathrm{T}$. Note that $V$
represents (a known linear function of) the mean of the $5$-step
exchangeable vectors, $Z_t$. Each node of the graph actually contains
$15$ quantities. For example, the first node shown contains $\{
V_1,V_2,V_3,{X^{(1)}_5}^2,{X^{(2)}_5}^2,{X^{(3)}_5}^2,
{X^{(1)}_6}^2,{X^{(2)}_6}^2,{X^{(3)}_6}^2,\linebreak
{X^{(1)}_7}^2,{X^{(2)}_7}^2,{X^{(3)}_7}^2
,{X^{(1)}_8}^2,{X^{(2)}_8}^2,{X^{(3)}_8}^2
\}$. Note that the fact that quantities are duplicated in other nodes
does not affect the analysis in any way. Observation of a particular
quantity in one node will reduce to zero the variance of that quantity
in any other node, as they will have a correlation of unity. Here,
the fact that the Moore-Penrose generalised inverse is used in the
definition of the projections and transforms becomes
important. Updating for this model may be locally computed over this
tree structure in the usual way.
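The role of the generalised inverse here can be seen in a toy example in which a node contains the same quantity twice (a NumPy sketch, illustrative only): the node's variance matrix is singular, yet adjustment by one copy proceeds without difficulty and drives the variance of both copies to zero.

```python
import numpy as np

# A node containing a quantity Q twice: B = (Q, Q), with Var(Q) = 1.
VB = np.array([[1.0, 1.0], [1.0, 1.0]])   # singular variance matrix
CBX = np.array([[1.0], [1.0]])            # Cov(B, X) for the observed X = Q
VX = np.array([[1.0]])

P = CBX @ np.linalg.pinv(VX)              # proj_X(B); pinv handles singularity
VB_adj = VB - P @ CBX.T                   # adjusted variance of B
# observing one copy reduces the variance of both copies to zero
```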
Suppose now that information for quantities $Z_5$ to $Z_9$ is to
become available simultaneously. This corresponds to information on
the first two nodes (and others, but this will be conveyed
automatically). The second node acts as a strong root for information
from the first two nodes. The transform for the first node may be
calculated using information on the first node, thus allowing
computation of the transform for the
second node given information on the first. Once the information from
the first node has been incorporated into the first two nodes, the
transform for the
second node given information
from the first two nodes may be calculated, and the resulting
transform for the second node given information on the first two may
be used in order to propagate information to the rest of the tree.
\begin{figure}[t]
\epsfig{file=5se.eps,width=4in}
\caption{A graphical model for 5-step exchangeable vectors}
\label{fig:5se}
\end{figure}
\section{Implementation considerations}
A test system for model building and computation over Bayes linear
belief networks has been developed by the author using the \textsf{\textsl{MuPAD}}\
computer algebra system, described in
\cite{mupadp} and \cite{mupadb}. \textsf{\textsl{MuPAD}}\ is a very high level
object-oriented mathematical programming language, with symbolic
computing capabilities, ideal for the rapid prototyping of
mathematical software and algorithms. The test system allows definition of
nodes and arcs of a tree, and attachment of relevant beliefs to nodes
and arcs. Recursive algorithms allow computation of belief transforms
for node adjustment, and propagation of updated means, variances and
covariances
through the tree.
Note that whilst propagating outwards through the tree, updating of
the different branches of the tree may proceed in parallel. \textsf{\textsl{MuPAD}}\
provides a ``parallel for'' construct which allows simple exploitation
of this fact on appropriate hardware.
Simple functions to allow computation of diagnostics
for particular nodes also exist.
\section{Conclusions}
The algorithms described in this paper are very simple and easy to
implement, and very fast compared to many other algorithms for
updating in Bayesian belief networks. Further, by linking the theory
with the machinery of the Bayes linear methodology, full \emph{a
priori} and diagnostic analysis may also take place. \emph{A priori}
analysis is particularly important in large sparse networks, where it
is often not clear whether or not it is worth observing particular
nodes, which may be ``far'' from nodes of interest. Similarly,
diagnostic analysis is crucial, both for diagnosing misspecified
node and arc beliefs, and for diagnosing an incorrectly structured
model.
For those who already appreciate the benefits of working within
the Bayes linear paradigm, the methodology described in this paper
provides a mechanism for the tackling of much larger structured
problems than previously possible, using local computation of belief
transforms, adjustments and diagnostics.
\section{Introduction}
Clusters of galaxies are the most recently assembled structures in the
universe, and the degree of observed substructure in a cluster is the
result of the complex interplay between the underlying cosmological
model (as has been demonstrated by many groups including
\citeN{bird93}, \citeN{evrard93} and \citeN{west90}) and the physical
processes by which clusters form and evolve. Many clusters
have more than one dynamical component in the velocity
structure in addition to spatial subclustering (\citeNP{colless95},
\citeNP{kriessler95}, \citeNP{bird93}, \citeNP{west90} and
\citeNP{fitchett88}). Substructure in the underlying cluster potential
and specifically the sub-clumping of mass on smaller-scales (galactic
scales) within the cluster can be directly mapped via lensing effects.
The observed gravitational lensing of the faint
background population by clusters is increasingly
becoming a promising probe of the detailed mass distribution within
a cluster as well as on larger scales (super-cluster scales). We
expect on theoretical grounds and do observe
local weak shear effects around individual bright galaxies in
clusters over and above the global shearing produced by the `smooth'
cluster potential. While there is ample evidence from lensing for the
clumping of dark matter on different scales within the cluster, the
spatial extent of dark halos of cluster galaxies is yet to be constrained.
The issue is of crucial importance as it addresses the key question
of whether the mass to light ratio of galaxies is a function of the
environment, and if it is indeed significantly different in the high density
regions like cluster cores as opposed to the field. Moreover, it is
the physical processes that operate within clusters like ram-pressure
stripping, merging and ``harassment'' that imply re-distribution of
mass on smaller scales and their efficiency can be directly probed using
accurate lensing mass profiles.
Constraining the fundamental parameters such as mass and halo size from
lensing effects for field galaxies was attempted first by
\citeN{tyson84} using plate material, the quality of which precluded
any signal detection. More recently, \citeN{brainerd95} used deep
ground-based imaging and detected the galaxy-galaxy lensing
signal and hence placed upper limits on the mean mass of an average
field galaxy. \citeN{griffiths96} used the Medium Deep Survey (MDS)
and HST archival data in a similar manner to extract the
polarization signal. Although the signal is unambiguously detected,
it is weak, and no strong constraints can yet be put on the mean profile of
field galaxies, but the prospects are promising for the near future.
On the other hand no such analysis has been pursued in dense regions
like clusters, and very little is known about the lensing effect of
galaxy halos superposed on the lensing effect of a cluster.
\shortciteN{kneib96} have demonstrated the importance of galaxy-scale
lenses in the mass modeling of the cluster A2218, where the effect of
galaxy-scale components (with a mean mass to light ratio $\sim$ 9 in
the R-band) needs to be included in order to reproduce the observed multiple
images. Mass modeling of several other clusters has also required the
input of smaller-scale mass components to consistently explain the
multiple images as well as the geometry of the arcs, for instance,
in the case of CL0024 (\citeN{kovner93}, \citeN{smail96}), where the
length of the three images of the cusp arc can only be explained if
the two nearby bright galaxies contribute mass to the system.
This strongly suggests that the dark matter associated with
individual galaxies is of consequence in accurately mapping
the mass distribution, and needs to be understood better, particularly
if clusters are to be used as gravitational telescopes to study
background galaxies.
The observed quantities in cluster lensing studies are the magnitudes
and shapes of
the background population in the field of the cluster. To reconstruct
the cluster mass distribution there are many
techniques currently available which allow the inversion of
the distortion map into a relative mass map or an absolute mass map if
(i) multiple arcs are observed (\citeNP{kneib96}) and/or (ii) magnification effects are included (\citeNP{broadhurst}). Recent theoretical
work (\citeNP{kaiser93}, \citeNP{kaiser95a}, \citeNP{peter95a},
\citeNP{caro95a} and \citeNP{squires96})
has focused on developing various algorithms to recover
the mass distribution on scales larger than 20-30
arcsec, which is roughly the smoothing scale employed
(corresponding to $\sim 100\,{\rm kpc}$ at a redshift of $z\,\sim\,0.2$). These
methods involve locally averaging the shear field produced by the
lensing mass, and cannot be used to probe galaxy-scale
perturbations to the shear field.
Our aim in this paper is to understand and determine the parameters
that characterize galaxy-scale perturbations within a cluster.
In order to do so, we delineate two regimes:\\
(i) the `strong' regime where the local surface density is close to critical
($\kappa \sim$ 1, where $\kappa$ is the ratio of the local surface
density to the critical surface density) and (ii) the `weak' regime
where the local surface density is small ($\kappa\,<\,1$).
The `strong' regime corresponds to the cores of clusters, and in general
involves only a small number of the cluster galaxies (typically 5-20),
whereas the `weak' regime encompasses a larger number ($\sim$
50-200). We restrict our treatment to early-type (E \& S0)
bright cluster galaxies throughout.
We compare in this analysis the relative merits of our two proposed
methods: a direct method to extract the strength of the averaged local
shear field in the vicinity of bright cluster galaxies by subtracting
the mean large-scale shear field, and a statistical maximum-likelihood
method. The former affords us a physical understanding, helps
establish the importance and role of the various relevant
parameters and yields a mean mass-to-light ratio; the latter correctly
takes into account the strong lensing regime and the ellipticity of
the mass distribution of galaxy halos. Both approaches are investigated
in detail in this paper using numerical simulations.
The outline of the paper is as follows. In Section 2, we present the
formalism that takes into account the effect of individual
galaxy-scale perturbations to the global cluster potential.
In Section 3, the direct method to recover these small-scale distortions
is outlined and in Section 4 we present the results of the application
of these techniques to a simulated cluster with substructure.
In Section 5, we examine the constraints that can be obtained on the
parameter space of models via the proposed maximum-likelihood method. We also
explore the feasibility criteria for application to cluster data given the
typical uncertainties. The conclusions of this study and the
prospects for application to real data and future work are discussed
in Section 6. Throughout this paper, we have assumed
$H_{0} = 50\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega = 1$ and $\Lambda = 0$.
\section{Galaxy-Scale Lensing Distortions in Clusters}
\subsection{Analysis of the local distortions}
The mass distribution in a cluster of galaxies can be modeled as
the linear sum of a global smooth potential (on scales larger than
20 arcsec) and perturbing mass distributions which can then be
associated with individual galaxies (with a scale length less
than 20 arcsec).
Formally we write the global potential as:
\begin{equation}
\phi_{\rm tot} = \phi_{\rm c} + \Sigma_i \,\phi_{\rm p_i},
\end{equation}
where $\phi_{\rm c}$ is the smooth potential of the cluster and
$\phi_{\rm p_i}$ are the potentials of the perturbers (galaxy halos).
Henceforth, the use of the subscripts c and p refer to quantities
computed for the cluster scale component and the perturbers
respectively. The deflection angle is then given by,
\begin{eqnarray}
\theta_S\,=\,\theta_I\,-\,\alpha_I(\theta_I)\ ;
\ \alpha_I\,=\,{{\bmath \nabla}\phi_{\rm c}}\,+\,\Sigma_i \,{{\bmath \nabla}\phi_{\rm p_i}},
\end{eqnarray}
where $\theta_I$ is the angular position of the image and $\theta_S$
the angular position of the source. The amplification matrix at any
given point is,
\begin{equation}
A^{-1}\,=\,I\,-\,{{\bmath \nabla\nabla} {\phi_{\rm c}}}\,-
\,\Sigma_i \,{{\bmath \nabla\nabla} {\phi_{\rm p_i}}}.
\end{equation}
Defining the generic symmetry matrix,
\begin{displaymath}
J_{2\theta}\,=\, \left(\begin{array}{lr}
\cos {2\theta}&\sin {2\theta}\\
\sin {2\theta}&-\cos {2\theta}\\
\end{array}\right)
\end{displaymath}
we decompose the amplification matrix as a linear sum:
\begin{eqnarray}
A^{-1}\,=\,(1\,-\,\kappa_{\rm c}\,-\,\Sigma_i \,\kappa_{\rm p_i})\,I
- \gamma_{\rm c}J_{2\theta_{\rm c}}
- \Sigma_i \,\gamma_{\rm p_i}J_{2\theta_{\rm p_i}},
\end{eqnarray}
where $\kappa$ is the convergence and $\gamma$ the shear.
In this framework, the shear $\gamma$ is taken to be a complex number
and is used to define the quantity $\overline{g}$ as follows:
\begin{eqnarray}
\overline{g_{pot}} = {\overline{\gamma} \over 1-\kappa} =
{{\overline\gamma_c} + \Sigma_i \,{\overline\gamma_{p_i}}
\over 1-\kappa_c -\Sigma_i
\,\kappa_{p_i}},\,\,{\overline{\tau_{pot}}}\,=\,
{ 2\overline{g_{pot}}
\over 1 - \overline{g_{pot}}^*\overline{g_{pot}}}
\end{eqnarray}
which simplifies in the frame of the perturber $j$ to (neglecting
effect of perturber $i$ if $i \neq j$):
\begin{eqnarray}
{\overline g_{pot}}|_j \,=\,
{{\overline \gamma_c} \,+\,{\overline \gamma_{p_j}} \over 1-\kappa_c -\kappa_{p_j}},
\end{eqnarray}
where $\overline g_{pot}|_j$ is the total complex shear induced by
the smooth cluster potential and the potentials of the perturbers.
Restricting our analysis to the weak regime, and thereby retaining
only the first order terms from the lensing equation for the shape
parameters (see \citeNP{kneib96}) we have:
\begin{equation}
{\overline \tau_I}= {\overline \tau_S}+{\overline \tau_{pot}},
\end{equation}
where ${\overline \tau_I}$ is the distortion of the image, ${\overline
\tau_S}$ the intrinsic shape of the source, ${\overline \tau_{\rm pot}}$
is the distortion induced by the lensing potentials or explicitly
in terms of $\overline g_{pot}$ in the frame of perturber $j$:
\begin{equation}
{\overline g_I}= {\overline g_S}+{\overline g_{pot}}|_j
= {\overline g_S} +
{ {\overline \gamma_c} \over 1-\kappa_c -\kappa_{p_j}} +
{ {\overline \gamma_{p_j}} \over 1-\kappa_c - \kappa_{p_j}}.
\end{equation}
In the local frame of reference of the perturbers, the mean
value of the quantity ${\overline g_I}$ and its dispersion can be
computed in circular annuli (of radius $r$ from the perturber center)
{\underline{strictly in the weak-regime}},
assuming a constant value $\gamma_c e^{i\theta_{c0}}$ for the smooth
cluster component over the area of integration (see Figure 1 for the
schematic diagram).
\begin{figure*}
\psfig{figure=pr.eps,width=1.0\textwidth}
\caption{Local frame of reference of the perturber: The vector diagram
illustrating the choice of coordinate system. The total shear is
decomposed into a large-scale component due to the smooth cluster
and a small-scale one due to the perturbing galaxy. In the frame of
the perturber, the averaging procedure allows efficient subtraction of
the large-scale component as shown in the right panel, enabling the
extraction of the shear component induced in the background galaxies
only by the perturber as shown in the left panel. The background
galaxies (shown in the left panel of this figure) are assumed to have
the same intrinsic ellipticity for simplicity, therefore, we plot only
the induced components.}
\end{figure*}
\begin{figure*}
\psfig{figure=p.eps,width=0.8\textwidth}
\caption{A schematic representation of the effect of the cluster
on the intrinsic ellipticity distribution of background sources as viewed from
the two different frames of reference. In the top panel, as viewed in
the image frame - the effect of the cluster is to cause a coherent
displacement $\bmath{\tau}$ and the presence of perturbers merely adds
small-scale noise to the observed ellipticity distribution. In the bottom panel,
as viewed in the perturbers frame - here the perturber component
causes a small displacement $\bmath{\tau}$ and the cluster component
induces the additional noise.}
\end{figure*}
The result of the integration does depend on the choice of coordinate system.
In Cartesian coordinates (averaging out the contribution of the perturbers):
\begin{eqnarray}
\nonumber \left<{\overline g_I}\right>_{xy} &=& \left<{\overline g_S}\right> +
\left<{\gamma_c e^{i\theta_{c0}} \over 1-\kappa_c -\kappa_{p_j}}\right> +
\left<{{\overline \gamma_{p_j}} \over 1-\kappa_c - \kappa_{p_j}}\right>,
\\ \nonumber
&=& {\gamma_c e^{i\theta_{c0}}} \left<{1 \over 1-\kappa_c -\kappa_{p_j}}\right>
\equiv {\overline g_c},\\
\end{eqnarray}
\begin{equation}
\sigma^2_{\overline g_I} = {\sigma^2_{\overline g_S} \over 2}
+ {\sigma^2_{\overline g_{p_j}} \over 2},
\end{equation}
where
\begin{eqnarray}
\sigma^2_{g_I}\,\approx\,{\sigma^2_{p(\tau_S)}\over 2 N_{bg} }
+ { \sigma^2_{\overline {g}_{p_j}} \over 2 N_{bg} }\,\approx\,
{\sigma^2_{p(\tau_S)}\over 2 N_{bg}}
\end{eqnarray}
${\sigma^2_{p(\tau_S)}}$ being
the variance of the intrinsic ellipticity distribution of the sources,
$N_{bg}$ the number of background galaxies averaged over and
$\sigma^2_{\overline {g}_{p_j}}$ the dispersion due to perturber effects,
which should be smaller than the width of the intrinsic ellipticity distribution.
In the polar $uv$ coordinates, on averaging out the smooth part:
\begin{eqnarray}
\nonumber \left<{\overline g_I}\right>_{uv} &=& \left<{\overline g_S}\right> +
\left<{{\overline \gamma_c} \over 1-\kappa_c - \kappa_{p_j}}\right> +
\left<{\gamma_{p_j}\over 1-\kappa_c -\kappa_{p_j}}\right>,
\\ \nonumber
&=& {\gamma_{p_j}}\left<{ 1 \over {1-\kappa_c -\kappa_{p_j}}}\right>\equiv g_{p_j},\\
\end{eqnarray}
\begin{eqnarray}
\left(\sigma^2_{\overline{g_I}}\right)_{uv} = {{\sigma^2_{\overline g_S}} \over 2}
+ {{\sigma^2_{\overline g_c}} \over 2},
\end{eqnarray}
where
\begin{eqnarray}
\sigma^2_{g_I}\,\approx\,{\sigma^2_{p(\tau_S)}\over 2 N_{bg} } +
{\sigma^2_{\overline{g}_{c}} \over 2 N_{bg} }.
\end{eqnarray}
From these equations, we clearly see the two effects of the
contribution of the smooth cluster component: it
boosts the shear induced by the perturber due to the
($\kappa_c+\kappa_{p_j}$) term in the denominator, which becomes non-negligible
in the cluster center,
and it simultaneously dilutes the regular galaxy-galaxy lensing signal
due to the ${\sigma^2_{\overline g_c} / 2}$ term (equation 11) in the dispersion of
the polarization measure. However, one can in principle reduce
the noise in the polarization by `subtracting' the measured cluster signal
and averaging in polar coordinates:
\begin{equation}
\left<{\overline g_I}-{\overline g_c}\right>_{uv} =
\left<{\gamma_{p_j}\over 1-\kappa_c -\kappa_{p_j}}\right>,
\end{equation}
which gives the same mean value as in equation (11) but with a reduced
dispersion:
\begin{equation}
\left(\sigma^2_{\overline g_I-\overline g_c}\right)_{uv}
= {\sigma^2_{\overline g_S} \over 2},
\end{equation}
where
\begin{eqnarray}
\sigma_{g_S}^2\,\approx\, {\sigma^2_{p(\tau_S)}\over 2 N_{bg}}.
\end{eqnarray}
This subtraction of the larger-scale component reduces the noise in
the polarization measure by about
a factor of two when $\sigma^2_{\overline g_S}\sim \sigma^2_{\overline
g_c}$, which is the case in cluster cores. Note that in subsequent
sections of the paper, we plot the averaged components of ${\overline{\tau}}$
(the quantity measurable from lensing observations) computed in the
(uv) frame. We reiterate here that the calculations above assume that
the cluster component is constant over the area of integration (a
reasonable assumption if we limit our analysis to small radii around
the centers of perturbers).
These results can be easily extended to the case when the cluster
component is linear (in $x$ and $y$) over the area of integration,
the likely case outside the core region. This direct averaging prescription for
extracting the distortions induced by the possible presence of dark halos
around cluster galaxies, by construction, does not require precise knowledge
of the center of the cluster potential well.
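The noise-reduction argument above can be illustrated with a minimal Monte Carlo sketch. The snippet below (Python) neglects the $(1-\kappa)$ denominators and treats the cluster contribution as a constant complex shear over the annulus, as assumed in the derivation; the values of $g_c$, $g_p$, $N_{bg}$ and the source-shape width are purely illustrative, not fits to any data.

```python
import cmath
import math
import random

random.seed(1)

def uv_average(n_bg, g_c, g_p, sig_s):
    """Mean image polarization in the perturber's polar (u,v) frame,
    with and without subtracting the (assumed known) cluster term g_c.
    Weak-regime sketch: the (1 - kappa) denominators are neglected and
    the cluster shear is constant over the annulus."""
    raw = 0j
    sub = 0j
    for _ in range(n_bg):
        theta = random.uniform(0.0, 2.0 * math.pi)    # position angle on the annulus
        g_s = complex(random.gauss(0.0, sig_s), random.gauss(0.0, sig_s))
        # Image polarization in the sky frame; the perturber term is
        # tangential, so it carries the e^(2 i theta) phase.
        g_i = g_s + g_c + g_p * cmath.exp(2j * theta)
        raw += g_i * cmath.exp(-2j * theta)           # rotate into the (u,v) frame
        sub += (g_i - g_c) * cmath.exp(-2j * theta)   # cluster-subtracted (eq. 12)
    return raw / n_bg, sub / n_bg

# Illustrative values only: both estimators recover g_p = 0.05 on average,
# but the subtracted one is free of the cluster-leakage noise term.
raw, sub = uv_average(2000, g_c=0.2 + 0.1j, g_p=0.05, sig_s=0.25)
print(abs(raw - 0.05), abs(sub - 0.05))
```

Averaging over uniform position angles makes the constant cluster term vanish in the mean, exactly as in equation (12); subtracting it beforehand removes its finite-$N_{bg}$ scatter as well.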
\subsection{Quantifying the lensing distortion}
To quantify the lensing distortion induced by the individual
galaxy-scale components using a minimal number of parameters to
characterize cluster galaxy halos, we model the density profile as a
linear superposition of two
pseudo-isothermal elliptical components (PIEMD models derived by \citeNP{kassiola93}):
\begin{eqnarray}
\Sigma(R)\,=\,{\Sigma_0 r_0 \over {1 - r_0/r_t}}
({1 \over \sqrt{r_0^2+R^2}}\,-\,{1 \over \sqrt{r_t^2+R^2}}),
\end{eqnarray}
with a model core-radius $r_0$ and a truncation radius $r_t\,\gg\, r_0$.
The useful feature of this model
is its ability to reproduce a large range of mass
distributions by varying only the ratio $\eta$, defined as
$\eta=r_t/r_0$. It also provides the following simple relation between
the truncation radius and the effective radius $R_{\rm e}$, $r_t\sim
(4/3) R_{\rm e}$.
Furthermore, this apparently circular model can be easily generalized
to the elliptical case by re-defining the radial coordinate $R$ as follows:
\begin{eqnarray}
R^2\,=\,({x^2 \over (1+\epsilon)^2}\,+\,{y^2 \over (1-\epsilon)^2})\,;
\ \ \epsilon= {a-b \over a+b}.
\end{eqnarray}
The mass enclosed within radius $R$ for the model is given by:
\begin{equation}
M(R)={2\pi\Sigma_0 r_0 \over {1-{{r_0} \over {r_t}}}}
[\,\sqrt{r_0^2+R^2}\,-\,\sqrt{r_t^2+R^2}\,+\,(r_t-r_0)\,],
\end{equation}
and the total mass, which is finite, is:
\begin{equation}
{M_{\infty}}\,=\,{2 \pi {\Sigma_0} {r_0} {r_t}}.
\end{equation}
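As a consistency check on the PIEMD relations above, the surface density, projected mass and total mass can be evaluated numerically. The sketch below (Python, with $\Sigma_0$ in arbitrary units and the fiducial $L_*$ values $r_0 = 0.15$ kpc, $r_t = 30$ kpc used later in Section 3) verifies that $M(R)$ tends to $M_{\infty} = 2\pi\Sigma_0 r_0 r_t$ at large radius.

```python
import math

def sigma_piemd(R, sigma0, r0, rt):
    """Surface density Sigma(R) of the circular PIEMD (eq. 16 form)."""
    return (sigma0 * r0 / (1.0 - r0 / rt)) * (
        1.0 / math.sqrt(r0**2 + R**2) - 1.0 / math.sqrt(rt**2 + R**2))

def mass_2d(R, sigma0, r0, rt):
    """Projected mass enclosed within radius R (eq. 18 form)."""
    return (2.0 * math.pi * sigma0 * r0 / (1.0 - r0 / rt)) * (
        math.sqrt(r0**2 + R**2) - math.sqrt(rt**2 + R**2) + (rt - r0))

def mass_total(sigma0, r0, rt):
    """Finite total mass, M_inf = 2 pi Sigma0 r0 rt (eq. 19)."""
    return 2.0 * math.pi * sigma0 * r0 * rt

# Illustrative values: Sigma0 in arbitrary units; r0, rt in kpc.
s0, r0, rt = 1.0, 0.15, 30.0
print(mass_2d(1e6, s0, r0, rt) / mass_total(s0, r0, rt))  # close to 1
```

The enclosed mass also vanishes at $R = 0$, as required by the cancellation of the bracketed terms in equation (18).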
Calculating $\kappa$, $\gamma$ and $g$, we have,
\begin{eqnarray}
\kappa(R)\,=\,{\kappa_0}\,{{r_0} \over {(1 - {r_0/r_t})}}\,
({1 \over {\sqrt{({r_0^2}+{R^2})}}}\,-\,{1
\over {\sqrt{({r_t^2}+{R^2})}}})\,\,\,,
\end{eqnarray}
\begin{eqnarray}
2\kappa_0\,=\,\Sigma_0\,{4\pi G \over c^2}\,{D_{\rm ls}D_{\rm ol}
\over D_{\rm os}},
\end{eqnarray}
where $D_{\rm ls}$, $D_{\rm os}$ and $D_{\rm ol}$ are respectively
the lens-source, observer-source and observer-lens angular diameter distances.
To obtain $g(R)$, knowing the convergence $\kappa(R)$, we solve
the two-dimensional Poisson equation ${\nabla^2 \phi_{\rm 2D}\,=\,2\kappa}$
for the projected potential $\phi_{\rm 2D}$,
evaluate the components of the amplification matrix and then proceed
to solve directly for $\gamma(R)$, $g(R)$ and $\tau(R)$.
\begin{eqnarray}
\phi_{2D}\,&=&\, \nonumber 2{\kappa_0}[\sqrt{r_0^2+R^2}\,-\,\sqrt{r_t^2+R^2}\,+
(r_0-r_t) \ln R\, \\ \nonumber \,\,
&-&
\,r_0\ln\,[r_0^2+r_0\sqrt{r_0^2+R^2}]\,+
\,r_t\ln\,[r_t^2+r_t\sqrt{r_t^2+R^2}] ].\\
\end{eqnarray}
To first approximation,
\begin{eqnarray}
\tau(R)\,\approx\,\gamma(R)\,
&=&\,\nonumber
\kappa_0[\,-{1 \over \sqrt{R^2 + r_0^2}}\,
+\,{2 \over R^2}(\sqrt{R^2 + r_0^2}-r_0)\,\\
\nonumber
&+&\,{1 \over {\sqrt{R^2 + r_t^2}}}\,-\,
{2 \over R^2}(\sqrt{R^2 + r_t^2} - r_t)\,].\\
\end{eqnarray}
Scaling this relation by $r_t$ gives for $r_0<R<r_t$:
\begin{equation}
\gamma(R/r_t)\propto {\Sigma_0 \over \eta-1} {{r_t} \over
R}\,\sim\,{\sigma^2 \over R},
\end{equation}
where $\sigma$ is the velocity dispersion and for $r_0<r_t<R$:
\begin{equation}
\gamma(R/r_t)\propto {\Sigma_0\over\eta} {{r_t}^2 \over
R^2}\,\sim\,{{M_{\rm tot}}
\over {R^2}},
\end{equation}
where ${M_{\rm tot}}$ is the total mass. In the limit that $R\,\gg\,r_t$, we have,
\begin{eqnarray}
\gamma(R)\,=\,{{3 \kappa_{0}} \over {2
{R^3}}}\,[{r_{0}^2}\,-\,{r_{t}^2}]\,+\,{{2 {\kappa_0}}
\over {R^2}} [{{r_t}\,-\,{r_0}}],
\end{eqnarray}
and as ${R\,\to\,\infty}$, $\gamma(R)\,\to\,0$, $g(R)\,\to\,0$ and
$\tau(R)\,\to\,0$ as expected.
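The asymptotic expansion can be checked numerically against the full shear profile. In the sketch below (Python, with $\kappa_0$ set to unity and the illustrative fiducial values $r_0 = 0.15$ kpc, $r_t = 30$ kpc), the two expressions agree closely once $R \gg r_t$.

```python
import math

def gamma_piemd(R, k0, r0, rt):
    """Full shear profile of the circular PIEMD (eq. 23 form)."""
    return k0 * (-1.0 / math.sqrt(R**2 + r0**2)
                 + (2.0 / R**2) * (math.sqrt(R**2 + r0**2) - r0)
                 + 1.0 / math.sqrt(R**2 + rt**2)
                 - (2.0 / R**2) * (math.sqrt(R**2 + rt**2) - rt))

def gamma_far(R, k0, r0, rt):
    """Large-R expansion of the shear (eq. 26 form)."""
    return (1.5 * k0 * (r0**2 - rt**2) / R**3
            + 2.0 * k0 * (rt - r0) / R**2)

# Illustrative values: kappa_0 = 1, fiducial r0 = 0.15 kpc, rt = 30 kpc.
k0, r0, rt = 1.0, 0.15, 30.0
print(gamma_piemd(300.0, k0, r0, rt), gamma_far(300.0, k0, r0, rt))
```

At $R = 10\,r_t$ the leading $R^{-2}$ term already dominates, consistent with the shear tracing the (finite) total mass at large radius.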
\begin{figure}
\psfig{figure=fig3.ps,width=0.5\textwidth}
\caption{The effect of the assumed scaling relations is examined in a plot of the
convergence log $\kappa$ vs. ${R/r_t}$ and the shear $g$
vs. ${R/r_t}$ for various values
of $(L/{L^{*}})$: 0.5, 1.0, 5.0 and 10.0. The curves on the left panel
are for $\alpha=0.5$ and on the right panel for $\alpha=0.8$, (i) solid
curves - $(L/{L^{*}})$ = 0.5, (ii) dotted curves - $(L/{L^{*}})$ = 1.0,
(iii) short-dashed curves - $(L/{L^{*}})$ = 5.0, (iv) long-dashed curves
- $(L/{L^{*}})$ = 10. The convergence is normalized so that at $r$ = 2
$r_0$, $\kappa$ = 1; the difference in the slope of $\kappa$ above and
below log $(r/r_t)$ = 0 can be clearly seen for both sets of scaling
laws. Note that a spike appears in the plots of log $g$ vs. log
$(R/r_t)$ at the radius where the mean enclosed surface density is
approximately equal to the critical surface density. For the mass
models studied here (cuspy with small core-radii) the surface mass
density has a large central value and hence a spike appears on a scale
that is roughly comparable to the core-radius.}
\end{figure}
\section{Recovering galaxy-scale perturbations}
In this section, we study the influence of the various parameters
using the direct averaging procedure on the synthetic data obtained
from simulations.
The numerical simulations involve modeling of the global cluster
potential, the individual perturbing cluster galaxies and calculating
their combined lensing effects on a catalog of
faint galaxies. We compute the
mapping between the source and image plane and hence solve the lensing
equation, using the lens tool
utility developed by \citeN{kneib93b}, which accounts consistently for
the displacement and distortion of images both in the strong and weak lensing regimes.
\subsection{Modeling the cluster galaxies}
\subsubsection{Spatial and Luminosity distribution}
A catalog of cluster galaxies was generated at random with the following
characteristics. The luminosities were drawn from a
standard Schechter function with ${L_*}\,=\,3\times{10^{10}}\,L_{\odot}$ and
$\alpha=-1.25$. The positions were assigned
consistent with the number density $\nu(r)$ of a modified Hubble law profile,
\begin{eqnarray}
\nu(r)\,=\,{\nu_0 \over {(1+{r^2 \over r_0^2})}^{1.5}},
\end{eqnarray}
with a core radius $r_0=250\,{\rm kpc}$, as well as a more generic
`core-less' profile of the form:
\begin{eqnarray}
\nu(r)\,=\,{\nu_0 \over {({r \over r_s})^{\alpha}}\,{(1+{r^2 \over r_s^2})}^{2-\alpha}},
\end{eqnarray}
with a scale-radius ${r_s}=200\,{\rm kpc}$ and $\alpha\,=\,0.1$, which was
found to be a good fit to the galaxy data of the moderate-redshift
lensing cluster A2218 by \citeN{natarajan96c}. We find, however, that
the results for the predicted shear from the simulations are
independent of this choice.
\subsubsection{Scaling laws}
The individual galaxies are then parameterized by the mass model of Section
2.2, using in addition, the following scalings with luminosity (see
\citeN{brainerd95} for an analogous treatment) for the central velocity dispersion ${\sigma_0}$,
the truncation radius $r_t$ and the core radius $r_0$:
\begin{eqnarray}
{\sigma_0}\,=\,{\sigma_{0*}}({L \over L_*})^{1 \over 4}; \\
{r_0}\,=\,{r_{0*}}{({L \over L_*}) ^{1 \over 2}}; \\
{r_t}\,=\,{r_{t*}}{({L \over L_*})^{\alpha}}.
\end{eqnarray}
These imply the following scaling for the $r_t/r_0$ ratio $\eta$:
\begin{equation}
{\eta}\,=\,{r_t\over r_0}={{r_{t*}} \over {r_{0*}}}
({L \over L_*})^{\alpha-1/2}.
\end{equation}
The total mass $M$ then scales with the luminosity as:
\begin{equation}
\,\,M\,=\,{2 \pi {\Sigma_0} {r_0} {r_t}}\,=\,{9\over 2G}(\sigma_0)^2 r_t=
{9\over 2G}{{\sigma_{0*}}^2}{r_{t*}}({L \over L_*})^{{1 \over 2}+\alpha},
\end{equation}
where $\alpha$ tunes the size of the galaxy halo, and the
mass-to-light ratio $\Upsilon$ is given by:
\begin{eqnarray}
{\Upsilon}\,=\, 12 \left( {\sigma_{0*}\over 240\,{\rm km\,s^{-1}}}\right)^2
\left( {r_{t*}\over 30\,{\rm kpc}} \right)
\left( {L\over L_*} \right )^{\alpha-1/2}.
\end{eqnarray}
Therefore, for $\alpha$ = 0.5 the assumed galaxy model has constant
$\Upsilon$ for each galaxy; if $\alpha>$ 0.5 ($\alpha<$ 0.5) then
brighter galaxies have larger (smaller) halos than the fainter
ones.
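These scaling laws are straightforward to tabulate. The sketch below (Python, using the fiducial calibration $\sigma_{0*} = 240$ km s$^{-1}$, $r_{0*} = 0.15$ kpc, $r_{t*} = 30$ kpc adopted later, which corresponds to $\Upsilon = 12$ for an $L_*$ galaxy) evaluates the halo parameters and mass-to-light ratio for both choices of $\alpha$.

```python
def halo_params(l_ratio, alpha, sigma0_star=240.0, r0_star=0.15, rt_star=30.0):
    """Halo parameters scaled with luminosity (eqs. 28-30); the default
    calibration (sigma0* = 240 km/s, rt* = 30 kpc) gives Upsilon = 12
    for an L* galaxy (eq. 33)."""
    sigma0 = sigma0_star * l_ratio**0.25          # km/s, eq. 28
    r0 = r0_star * l_ratio**0.5                   # kpc,  eq. 29
    rt = rt_star * l_ratio**alpha                 # kpc,  eq. 30
    upsilon = (12.0 * (sigma0_star / 240.0)**2 * (rt_star / 30.0)
               * l_ratio**(alpha - 0.5))          # eq. 33
    return sigma0, r0, rt, upsilon

# alpha = 0.5: constant M/L; alpha = 0.8: Upsilon ~ L^0.3 (FP-like).
for a in (0.5, 0.8):
    print(a, [round(halo_params(l, a)[3], 2) for l in (0.5, 1.0, 5.0, 10.0)])
```

For $\alpha = 0.5$ the printed $\Upsilon$ is 12 at every luminosity, while for $\alpha = 0.8$ it rises with $L$ as $L^{0.3}$.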
The physical motivation for exploring these scaling laws arises from
trying to understand the observed empirical correlations for early-type (E \&
S0) galaxies in the fundamental plane (FP). The following tight relation
between the effective radius $R_e$, the central velocity dispersion
${\sigma_0}$ and the mean surface brightness within $R_e$ is found for
cluster galaxies (\citeN{inger96}, \citeN{djorgovski87} and
\citeN{dressler87}):
\begin{eqnarray}
\log {R_e}\,=\,1.24\,\log {{\sigma_0}}\,-\,0.82\,\log {{\left<{I}\right>}_e}
+ {\rm const.}
\end{eqnarray}
One of the important consequences of this relation is the fact that it
necessarily implies that the mass-to-light ratio is a weak function of
the luminosity, typically $\Upsilon\,\sim\,{L^{0.3}}$ (\citeNP{inger96}).
In terms of our scaling laws, this implies $\alpha=0.8$.
Henceforth, in this analysis we explore both scaling relations:
$\alpha\,=\,0.5$, the constant mass-to-light ratio case,
and $\alpha\,=\,0.8$, corresponding to the mass-to-light ratio being
proportional to ${L^{0.3}}$, consistent with the observed FP.
In Figure 3, we plot the scaling relations for
various values of $({L/L_{*}})$, ranging
from 0.5 $\to$ 10.0 for $\alpha$ = 0.5 and $\alpha$ = 0.8.
\begin{figure}
\psfig{figure=ml_plot.ps,width=0.5\textwidth}
\caption{The constant mass-to-light ratio curves are plotted in the
(${\sigma_{0*}}$,${r_{t*}}$) plane for an ${L_{*}}$ galaxy with $\eta$ =
200: (i) dot-dashed curve - $\Upsilon$ = 4, (ii) dotted curve -
$\Upsilon$ = 6, (iii) solid curve - $\Upsilon$ = 12, (iv)
short-dashed curve - $\Upsilon$ = 24 and (v) long-dashed curve -
$\Upsilon$ = 48.}
\end{figure}
Additionally, for the constant mass-to-light ratio case, we also plot
the iso-$\Upsilon$ curves in terms of the fiducial $\sigma_{0}^{*}$
and $r_{t}^{*}$ in Figure 4. The scaling laws are calibrated by defining an
$L_*$ (in the R-band) elliptical galaxy to have ${r_{0*}}\,=\,$0.15 kpc,
${r_{t*}}\,=\,$30.0 kpc and a fiducial ${\sigma_{0*}}$, then chosen to
assign the different mass-to-light ratios, [${\sigma_{0*}}\,=\,$100,
140, 170, 240, 340, 480 km s$^{-1}$ corresponding to $\Upsilon\,=\,$2,
4, 6, 12, 24, 48 respectively].
\subsection{Modeling the background galaxies}
\subsubsection{Luminosity distribution}
The magnitude and hence the luminosity for the background population
was generated consistent with the number count distribution measured
from faint field galaxy surveys like the MDS
as reported in \citeN{glazebrook95}, as well as the more recent
results of the number-magnitude relations obtained from the Hubble
Deep Field data (\citeNP{abraham96}). The slope of the number count
distribution used was 0.33 over the magnitude range $m_R = 18 - 26$.
This power law for the number counts implies a surface number density
of roughly 90 galaxies per square arcminute in the given
magnitude range (see \citeN{smail95f}), which over the area of the
simulation frame [8 arcmin $\times$ 8 arcmin] corresponds to having $\sim$
5000 background galaxies.
\subsubsection{Redshift distribution}
The background galaxy population of sources was also generated
consistent with the measured
redshift, magnitude and luminosity distributions (MODEL Z2 below) from
high-redshift surveys like the APM and CFRS
(\citeNP{efsta91a} and \citeNP{cfrs195} respectively).
For the normalized redshift distribution at a given magnitude
$m$ (in the R-band) we used the following fiducial forms:
\subsubsection*{{\bf MODEL Z1}:}
\begin{eqnarray}
N(z,m)\,=\,{N_0}{\delta(z-2)},
\end{eqnarray}
corresponding to the simple case of placing all the sources at
$z\,=\,$2.
\subsubsection*{{\bf MODEL Z2}:}
\begin{eqnarray}
N(z,m)\,=\,{{\beta\,({{z^2} \over {z_0^2}})\,
\exp(-({z \over {z_0}})^{\beta})} \over
{\Gamma({3 \over \beta})\,{{z_0}}}};
\end{eqnarray}
where $\beta\,=\,$1.5 and
\begin{eqnarray}
z_0\,=\,0.7\,[\,{z_{\rm median}}\,+\,{{d{z_{\rm median}}} \over
{dm_R}}{(m_R\,-\,{m_{R0}})}\,],
\end{eqnarray}
${z_{\rm median}}$ being the median redshift, $dz_{\rm median}/ dm_R$
the change in median redshift with R magnitude $m_R$.
We use for our simulations $m_{R0}=$22.0, $dz_{\rm median}/dm_R$=0.1
and $z_{\rm median}$= 0.58 (see \citeN{brainerd95} and \citeN{kneib96}).
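The normalization of MODEL Z2 can be verified numerically. The following sketch (Python, with the fiducial parameters $\beta = 1.5$, $z_{\rm median} = 0.58$, $dz_{\rm median}/dm_R = 0.1$ and $m_{R0} = 22$ quoted above; the sample magnitude $m_R = 24$ is illustrative) integrates $N(z,m)$ over redshift.

```python
import math

def n_z(z, m_r, beta=1.5, z_median=0.58, dz_dm=0.1, m_r0=22.0):
    """MODEL Z2 redshift distribution N(z, m) at R magnitude m_r
    (eqs. 35-36 with the fiducial parameter values of the text)."""
    z0 = 0.7 * (z_median + dz_dm * (m_r - m_r0))
    return (beta * (z**2 / z0**2) * math.exp(-(z / z0)**beta)
            / (math.gamma(3.0 / beta) * z0))

# Crude quadrature check that N(z, m) integrates to unity over redshift.
dz = 0.001
total = sum(n_z(i * dz, 24.0) for i in range(1, 20001)) * dz
print(total)  # close to 1
```

The prefactor $\Gamma(3/\beta)\,z_0$ is exactly what makes the distribution integrate to unity, since $\int_0^\infty u^2 e^{-u^\beta}\,du = \Gamma(3/\beta)/\beta$.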
\subsubsection{Ellipticity distribution}
Analysis of deep surveys such as the MDS
fields (\citeNP{griffiths94}) shows that the ellipticity distribution
of sources is a strong function of the sizes of individual galaxies as
well as their magnitude (\citeNP{kneib96}). For
the purposes of our simulations, since we assume `perfect seeing', we
ignore these effects and the ellipticities are assigned in concordance
with an ellipticity distribution $p(\tau_S)$ derived from fits to the MDS data
(\citeNP{ebbels96}) of the form,
\begin{equation}
p(\tau_S)\,=\,\tau_S\,\,\exp(-({\tau_S \over
\delta})^{\alpha});\,\,\,\alpha\,=\,1.15,\,\,\delta\,=\,0.25.
\end{equation}
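For reference, the normalization and width of this (unnormalized) distribution can be obtained by simple quadrature; the width is of interest because $\sigma^2_{p(\tau_S)}$ enters the shot-noise estimates of Section 2.1. The sketch below (Python) uses a plain Riemann sum, which suffices for this smooth integrand.

```python
import math

ALPHA, DELTA = 1.15, 0.25

def p_tau(tau):
    """Unnormalized source-shape distribution p(tau_S) (eq. 37)."""
    return tau * math.exp(-(tau / DELTA)**ALPHA)

# Normalization and mean-square tau_S by quadrature over 0 < tau < 5
# (the tail beyond is negligible for these parameters).
dt = 1e-4
norm = sum(p_tau(i * dt) for i in range(1, 50001)) * dt
tau2 = sum((i * dt)**2 * p_tau(i * dt) for i in range(1, 50001)) * dt / norm
print(norm, math.sqrt(tau2))
```

Dividing $p(\tau_S)$ by the computed normalization turns equation (37) into a proper probability density suitable for drawing source shapes in the simulations.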
\section{\bf Analysis of the Simulations}
We use the above as input distributions to simulate the background
galaxies and bright cluster galaxies in
addition to a model for the cluster-scale mass
distribution. Analogous to the mass model constructed for the
cluster Abell 2218 (\citeN{kneib96}), we set up an elliptical mass
distribution for the central clump with a velocity dispersion of
1100 km s$^{-1}$ placed at a redshift $z = 0.175$. The main clump was
modelled using a PIEMD profile
(as in equation (14)) with an ellipticity $\epsilon\,=\,0.3$,
core radius 70 kpc and a truncation radius $r_t$ = 700 kpc; therefore the
surface mass density of the clump falls off as
$r^{-3}$ for $r\,\gg\,r_t$.
The lens equation was then
solved for the specified configurations of
sources and lenses set-up as above and the corresponding image frames
were generated. The averaged components of the shear, binned in
circular annuli centred on the perturbing galaxies, were evaluated in
their respective local (u,v) frames. An important check on the entire
recovery procedure arises from the fact that by construction (choice
of the (u,v) coordinate system) the mean value of the v-component of
the shear $\left<\tau_v\right>$ is required to vanish.
In the following sub-sections, we explore the dependence of the
strength of the detected signal on the various input parameters.
\begin{figure}
\psfig{figure=plota.ps,width=0.5\textwidth}
\caption{Demonstrating the robustness of the signal extraction by
comparing the analytic prediction with the measured radially
averaged shear from the simulation. The signal was extracted
from a simulation run of a PIEMD model with ${r_{t*}}$ = 30 kpc,
${r_{0*}}$ = 0.15 kpc and velocity dispersion of 480 kms$^{-1}$:
solid curve - estimate from the analytic formula and overplotted are the
measured values of the averaged shear.}
\end{figure}
First of all, Figure 5 demonstrates the good agreement between the
analytic formula for the shear derived at a given radial distance R
produced by a PIEMD model as computed in Section 2.2 and the averaged value
extracted from the simulation on solving the lensing equation exactly
for the redshift distribution of MODEL Z1. In all subsequent plots
(Figures 6, 7, 8, 9, 10, 12 and 13) the annuli are scaled such
that for an $L_{*}$ galaxy, the width of each ring corresponds to a
physical scale of $\sim$ 20 kpc at $z = 0.175$.
\subsection{Error Estimate on the signal}
There are two principal sources of error in the computation of the
averaged value of the shear, aside from the observational errors
arising from the effects of seeing etc. (which are not taken into
account in these simulations): (i) shot noise
(due to a finite number of sources and the intrinsic width of their
ellipticity distribution) and (ii) in principle the unknown source
redshifts. Therefore, we require a minimum threshold number of
background objects to obtain a
significant level of detection. The unknown redshift
distribution of the sources also introduces noise and affects the
retrieval of the signal in a systematic way, for instance, the
obtained absolute value for the total mass estimate for cluster
galaxies is an under-estimate for a higher redshift population for a
given measured value of the shear.
The mean (or alternatively the median) and width of the redshift
distribution are the important
parameters that determine the errors incurred in the extraction
procedure.
For the simulation however, we need to obtain an
error estimate on the signal given that we measure the averaged shear for a
single realization. In order to do so, the simulation was set up with
a constant mass-to-light ratio ($\Upsilon\,=\,$12) for the 50 cluster
galaxies with 5000 background galaxies, and on solving the lens
equation the image frame was obtained. The averaging procedure as
outlined in Section 2.1 was then implemented to extract the output
signal with 1000 independent sets of random scrambled positions for
the cluster galaxies (in addition to the one set of 50 positions that
was actually used to generate the image); the results are plotted as
the lower solid curves in Figures 7 \& 8. This is a secure
estimate of the error arising for an individual realization, since
this error arises primarily from the dilution of the strength of the
measured shear due to uncorrelated sources and lensed images. We found that the
mean error in $\left<{\tau_{u}}\right>$
in the first annulus is $0.040\pm0.0012$ and $0.0048\pm0.0047$ in
$\left<{\tau_{v}}\right>$.
\subsection{Variation of the signal with mass-to-light ratio of
cluster galaxies}
The simulations were performed for mass-to-light ratios ($\Upsilon$)
ranging from 2 $\to$ 48 (see Figures 6, 7 \& 8). The velocity
dispersion of the fiducial galaxy model was adjusted to
give the requisite value for $\Upsilon$ keeping the scaling relations
intact. The detection is significant for mass-to-light ratios
$\Upsilon\,\geq\,$4 given the configuration with 50 cluster galaxies
and 5000 background galaxies. The strength of the signal varies with the
input $\Upsilon$ of the cluster galaxies, and increases with
increasing $\Upsilon$. As a test run, with $\Upsilon\,=\,0$, (i.e. no
cluster galaxies) and only the large-scale component of the shear, we
do recover the expected behavior for $\left<{\tau_{u}}\right>$. The signal was
extracted for both background source redshift distributions MODEL Z1 \&
MODEL Z2. While the amplitude of the signal is not very sensitive to
the details of the redshift distribution of the background population
and hence did not vary significantly, the error-bars are marginally
larger for MODEL Z2. This can be understood in terms of the
additional shot noise induced due to the change in
the relative number of objects `available' for lensing; in MODEL Z2 a
fraction of the galaxies in the low-z tail of the redshift
distribution end up as foreground
objects and are hence not lensed, thereby diluting the signal and
increasing the size of the error-bars marginally.
\begin{figure}
\psfig{figure=meantau.ps,width=0.5\textwidth}
\caption{Variation of the mean value of the signal in the first
annulus with mass-to-light ratio $\Upsilon$: for $\Upsilon$= 2, 4, 6,
12, 24, 48 of the
cluster galaxies plotted for MODEL Z1 (solid curve) and MODEL Z2
(dotted curve).}
\end{figure}
\begin{figure}
\psfig{figure=z1ml.ps,width=0.5\textwidth}
\caption{Recovering the signal for MODEL Z1: for various
values of the constant mass-to-light ratio $\Upsilon$ of the cluster galaxies
ranging from 2 - 48, (i) lower solid curve - the error estimate (ii) upper
solid curve - $\Upsilon$ = 2, (iii) dotted curve -
$\Upsilon$ = 4, (iv) dashed curve - $\Upsilon$ = 6, (v) long-dashed curve -
$\Upsilon$ = 12, (vi) dot-short dashed curve - $\Upsilon$ = 24,
(vii) dot-long dashed curve - $\Upsilon$ = 48. Note here that
$\left<{\tau_v}\right>$ is zero as expected by definition of the (u,v) coordinate
system.}
\end{figure}
\begin{figure}
\psfig{figure=z2ml.ps,width=0.5\textwidth}
\caption{Recovering the signal for MODEL Z2: for various values of the
constant mass-to-light ratio $\Upsilon$ of the cluster galaxies
ranging from 2 - 48,
(i) lower solid curve - the error estimate (ii) upper
solid curve - $\Upsilon$ = 2, (iii) dotted curve -
$\Upsilon$ = 4, (iv) dashed curve - $\Upsilon$ = 6, (v) long-dashed curve -
$\Upsilon$ = 12, (vi) dot-short dashed curve - $\Upsilon$ = 24,
(vii) dot-long dashed curve - $\Upsilon$ = 48.}
\end{figure}
\subsection{Variation with the number of background galaxies}
The efficiency of detection of the signal depends primarily on the
number of background galaxies averaged over in each annulus and
therefore on the number that are lensed by the individual
cluster galaxies. For a fixed value of $\Upsilon$, the total
number of background galaxies $N_{bg}$ was varied, assuming a redshift
distribution of the form of MODEL Z1. With increasing $N_{bg}$, 1000 $\to$
2500 $\to$ 5000, the detection becomes more secure and the error decreases
roughly as $1/\sqrt{N_{bg}}$, as shown in Figure 9. In principle, the larger
the number of background sources available for lensing, the more
significant the detection with tighter error bars; however, we find
that a ratio of 50 cluster galaxies to 2500 background galaxies
provides a secure detection for $\Upsilon\,\geq\,4$, while a larger number of
background sources is required to detect the corresponding signal induced by
lower mass-to-light ratio halos. A secure detection in this case refers to
the fact that the difference in the mean values of the detected signal
in the two cases (with $N_{bg}$ = 5000 and $N_{bg}$ = 2500 background sources) is
comparable to the mean estimated error per realization computed in
Section 4.1. The number count distribution used to generate the
background sources corresponds to a background surface number density
of $\sim$ 90 galaxies per square arcmin which we find provides a
secure detection for $\Upsilon\,\geq\,4$. It is useful to point out
here that for the standard Bruzual \& Charlot (95) spectral evolution
of stellar population synthesis models with solar metallicity, a galaxy that is roughly
10 Gyr old (a reasonable age estimate for a galaxy in a $z\,\sim\,0.3$
cluster), formed in a single 1Gyr burst of star formation and
having evolved passively, one obtains a stellar mass-to-light ratio
in the R band of $\sim 8$ with a single power-law Salpeter IMF with
lower mass limit $0.1\,M_\odot$ and upper mass limit $125\,M_\odot$.
With the same ingredients but a Scalo IMF one obtains a M/L ratio
about a factor 2 smaller ($\sim 4$) since there are a smaller proportion of
very low-mass stars. Therefore, an R-band M/L of 4 for a cluster
galaxy is consistent with the observed mass just in stars and does not
imply the presence of any dark mass in the system. Therefore, if dark
halos were indeed present around the bright cluster members, the
corresponding inferred mass-to-light ratios would be greater than 4,
and with 5000 background galaxies, we would be sensitive to the signal
as shown in the plots of Figures 6, 7 \& 8.
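The $1/\sqrt{N_{bg}}$ noise scaling invoked above can be illustrated with a toy Monte-Carlo sketch (not part of the original analysis; the intrinsic ellipticity dispersion of 0.3 is an assumed value):

```python
import math
import random

def mean_ellipticity_error(n_bg, sigma_eps=0.3, n_trials=800, seed=1):
    """Scatter of the mean of n_bg intrinsic ellipticities, over many trials."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0.0, sigma_eps) for _ in range(n_bg)) / n_bg
             for _ in range(n_trials)]
    mu = sum(means) / n_trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / n_trials)

# Quadrupling the number of background sources roughly halves the noise.
e1 = mean_ellipticity_error(400)
e2 = mean_ellipticity_error(1600)
print(e1 / e2)  # roughly 2
```

This isolates the pure shot-noise contribution; in the simulations the improvement is only approximate because not all background sources are lensed.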
\begin{figure}
\psfig{figure=g.ps,width=0.5\textwidth}
\caption{Variation of the signal with the number of background
galaxies for MODEL Z1:
for a given mass-to-light ratio $\Upsilon$= 12 of the cluster
galaxies. We find that the error bars and hence the noise decrease
as expected with increasing $N_{bg}$ (i) solid curve: $N_{bg}$ = 1000,
(ii) dashed curve: $N_{bg}$ = 2500, (iii) dotted curve: $N_{bg}$ = 5000.}
\end{figure}
\subsection{Variation with cluster redshift}
The lensing signal depends on the distance ratio $D_{ls}/D_{os}$,
the angular extent of the lensing objects, the number
density of faint objects and their redshift distribution.
We performed several runs with the cluster
(the lens) placed at different redshifts, ranging from $z\,=\,$0.01 to 0.5.
We scaled all the distances with the appropriate factors corresponding
to each redshift for both MODELS Z1 \& Z2. For MODEL Z1 (Figure 10 and
dotted curve in Figure 12), we
find that the signal (by which we refer to the
value of $\left<{\tau_{u}}\right>$ in the innermost annulus)
saturates at low redshifts; for $0.01\,<\,z_{\rm lens}\,<\,0.07$ the
measurements are consistent with no detection, but the strength
increases as $z_{\rm lens}$ is placed further away and remains
significant up to $z_{\rm lens}\,=\,$0.4, beyond which it
falls sharply once again at 0.5. On the other hand, we find that for
MODEL Z2 (Figure 11, and solid curve in Figure 12),
there is a well-defined peak
and hence an optimal lens redshift range for extracting the signal. Thus,
in general, cluster-lenses lying between redshifts 0.1 and 0.3 are
the most suitable ones for constraining the mean $\Upsilon$ of
cluster galaxies via this direct averaging procedure. These trends
with redshift can be understood easily: the shear produced is
proportional to the surface mass density and scales as
$D_{\rm ls}/D_{\rm os}$; the saturation at
high-redshift is due to the combination of two diluting effects (i)
the decrease in $D_{\rm ls}$ as the lens is placed at successively
higher redshifts (ii) the effect of
additional noise induced due to a reduction in the number of
background objects for MODEL Z2.
The drop-off at low z (for both models) is
primarily due to behavior of the angular scale factors at low-redshifts.
Additionally, the shape of these curves is independent of the total
mass of the cluster (the total mass being dominated by the smooth
component), therefore even for a subcritical cluster we obtain the same
variation with redshift.
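The geometric part of this trend can be sketched numerically. The snippet below is an illustration only, assuming an Einstein-de Sitter cosmology with distances in units of $c/H_0$ and a single source plane at $z_s = 1$ (neither of which is specified in the text); it evaluates the ratio $D_{ls}/D_{os}$ as the lens redshift increases:

```python
import math

def d_a(z1, z2):
    # Angular diameter distance between z1 < z2 in an Einstein-de Sitter
    # universe, in units of c/H0 (illustrative assumption: Omega_m = 1).
    return (2.0 / (1.0 + z2)) * (1.0 / math.sqrt(1.0 + z1)
                                 - 1.0 / math.sqrt(1.0 + z2))

def lensing_weight(z_lens, z_source=1.0):
    # The distance ratio D_ls / D_os that scales the shear.
    return d_a(z_lens, z_source) / d_a(0.0, z_source)

for z in (0.01, 0.1, 0.2, 0.3, 0.5):
    print(z, round(lensing_weight(z), 3))
```

This captures only diluting effect (i), the decline of $D_{\rm ls}/D_{\rm os}$ with lens redshift; the low-$z$ drop-off comes instead from the angular scale factors, and for MODEL Z2 there is the additional noise from the reduced number of background objects.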
\begin{figure}
\psfig{figure=clusz1.ps,width=0.5\textwidth}
\caption{Variation of the signal with cluster redshift for MODEL Z1:
for a given mass-to-light ratio $\Upsilon$= 12 of the cluster galaxies
placing the lens at different redshifts, right panel:(i) solid curve: z =
0.1, (ii) dotted curve: z = 0.2, (iii) dashed curve: z
= 0.3, (iv) long dashed curve: z = 0.4, (v) dot dashed
curve: z = 0.5 and the left panel:(i) solid curve: z = 0.01, (ii) dotted
curve: z = 0.02, (iii) dashed curve: z = 0.05, (iv) long dashed
curve: z = 0.07, (v) dot dashed curve: z = 0.10}
\end{figure}
\begin{figure}
\psfig{figure=clusz2.ps,width=0.5\textwidth}
\caption{Variation of the signal with cluster redshift for MODEL Z2:
for a given constant mass-to-light ratio $\Upsilon$= 12 of the cluster
galaxies placing the lens at different redshifts, right panel (i) solid curve:
z = 0.1, (ii) dotted curve: z = 0.2, (iii) dashed curve: z
= 0.3, (iv) long dashed curve: z = 0.4, (v) dot dashed
curve: z = 0.5; left panel (i) solid curve: z = 0.01, (ii) dotted
curve: z = 0.02, (iii) dashed curve: z = 0.05, (iv) long dashed
curve: z = 0.07, (v) dot dashed curve: z = 0.10 }
\end{figure}
\begin{figure}
\psfig{figure=taulensmax.ps,width=0.5\textwidth}
\caption{Variation of the maximum value of the signal with redshift:
for a given constant mass-to-light ratio $\Upsilon$= 12 of the cluster
galaxies placing the lens at different redshifts for the 2 background
redshift distributions for the sources (i) dotted curve: MODEL Z1,
(ii) solid curve: MODEL Z2}
\end{figure}
\subsection{Dependence on assumed scaling laws}
In section 3.1, we outlined the simple scaling relations that were used
to model the cluster galaxies. The choice of the exponent $\alpha$ in
equation (26) allows the modelling of the trends for different galaxy
populations: $\alpha\,=\,$0.5 corresponding to a constant $\Upsilon$ and
$\alpha\,=\,$0.8 corresponding to $\Upsilon$ being a weak function of
the luminosity.
\begin{figure}
\psfig{figure=scaling.ps,width=0.5\textwidth}
\caption{Examining the scaling relations - the 2 choices of
$\alpha$ the exponent of the scaling
relation for the truncation radius for $\Upsilon$= 12. We plot the
recovered signal (i) solid curve: $\alpha$ = 0.8,
(ii) dotted curve: $\alpha$ = 0.5.}
\end{figure}
Simulating both these cases above, we find that the mean value
of the signal does depend on the assumed exponent for the scaling law
and therefore the mass enclosed (Figure 13).
We find that the signal is stronger
for $\alpha = 0.8$, since that corresponds to a mass-to-light
ratio ${\Upsilon}\,\sim\,({L \over {L_*}})^{0.3}$, which on average
exceeds the constant mass-to-light ratio case; however, it is not
possible to distinguish between a correspondingly higher value of the
constant M/L and a higher value of $\alpha$. Therefore, the direct
averaging procedure cannot discriminate between the assumed detailed
fall-offs of the mass distribution.
\subsection{Examining the assumption of analysis in the weak regime}
While our mathematical formulation outlined in section 2 is strictly
valid only for $\kappa\,\ll\,1$, we examine how crucial this assumption
is to the implementation of the technique. For the output images from
the simulations, the convergence $\kappa$ is known at all points.
Prior to the averaging, we excised the high $\kappa$ regions
successively, by removing only the lenses in those regions.
The results are plotted in Figure 14 for input
$\Upsilon$ = 12, with the sources distributed as
specified by MODEL Z1. While the mean peak value of the signal does
not fluctuate much on removing the high-$\kappa$ regions, we find
that the cluster subtraction becomes progressively more efficient, as
evidenced by the sharp fall-off to zero of the signal in the second
annulus outward. Therefore, while the detectability and magnitude of
the signal is robust even in the `strong regime', the contribution
from the smooth cluster component, which for our purposes is a
contaminant, can be `removed' optimally only in the low $\kappa$ regions.
\begin{figure}
\psfig{figure=kappa_excise.ps,width=0.5\textwidth}
\caption{The effect of excising the high $\kappa$ regions in the image:
(for $\Upsilon=12$ of the cluster galaxies)
(i) solid curve: $\kappa \leq$ 0.1, (ii) dotted curve:$\kappa \leq$
0.2, (iii) dashed curve:$\kappa \leq$ 0.3, (iv) long dashed curve:
$\kappa \leq$ 0.4, (v) dot dashed curve:$\kappa \leq$ 0.5.}
\end{figure}
\section{Maximum-Likelihood Analysis}
\subsection{Limitations of the direct averaging method}
The simulations have enabled us to delineate the role of relevant
parameters and comprehend the trends with cluster redshift,
the redshift distribution of the sources and the mass-to-light ratio
of the cluster galaxies. The direct method of estimating the
mass-to-light ratio suffers from the following limitations, especially
in the cluster core: (i) being in the `strong' lensing regime, the
`cluster subtraction' is not very efficient, and (ii) the probability
of a background galaxy being sheared by the cumulative effect of two
or more cluster galaxies is enhanced, the core being a region with
a high number density of cluster galaxies. It does, however, provide
a robust estimate of the mass-to-light ratio modulo the assumed
model parameters.
We now explore applying a maximum-likelihood method
to obtain significance bounds on fiducial parameters that characterize
a `typical' galaxy halo in the cluster. \citeN{schneiderrix96} developed a
maximum-likelihood prescription for galaxy-galaxy lensing in the
field; here we develop one to study lensing by galaxy halos embedded in the cluster.
Schematically, we demonstrate the differences in the ellipticity
distribution that we are attempting to discern in Figure 15. Here we
have plotted the intrinsic ellipticity distribution of the unlensed
sources, sources lensed only by a cluster scale component
and sources sheared by both a cluster scale component and 50
cluster galaxies; from which it is obvious that the effect that we
intend to measure in terms of parameters that characterize the cluster
galaxies is indeed small, hence recovery of the fiducial parameters in
this case is considerably harder than in the case of purely galaxy-galaxy lensing.
\begin{figure}
\psfig{figure=elip_dist.ps,width=0.5\textwidth}
\caption{The ellipticity distribution $p_{\tau_S}$: (i) solid curve - intrinsic
input ellipticities of the sources, (ii) dotted curve - the
ellipticity distribution on being lensed by 50 galaxy-scale mass
components and one larger-scale smooth component and (iii) dashed
curve - the ellipticity distribution induced by lensing
only by the larger-scale smooth cluster component.}
\end{figure}
\subsection{Application of the maximum-likelihood method}
The basic idea is to maximize a likelihood function of the estimated
probability distribution of the source ellipticities for a set of
model parameters, given the functional form of the intrinsic
ellipticity distribution measured for faint galaxies.
We briefly outline the exact procedure below. From the simulated image
frames we extract the observed ellipticity $\tau_{\rm obs}$.
For each `faint' galaxy $j$, the source ellipticity can then be
estimated {\sl in the weak regime} by just subtracting the lensing
distortion induced by the smooth cluster and galaxy halos given the
parameters that characterize both these mass distributions, in other
words,
\begin{eqnarray}
\tau_{S_j} \,=\,\tau_{\rm obs_j}\,-{\Sigma_i^{N_c}}\,{\gamma_{p_i}}\,-\,
\gamma_{c},
\end{eqnarray}
where $\Sigma_{i}^{N_{c}}\,{\gamma_{p_i}}$ is the sum of the shear
contribution at a given position $j$ from $N_{c}$ perturbers, and the
term $\gamma_{c}$ is the shear induced by the smooth cluster component.
In the strong regime, similarly, one can compute the source ellipticity
using the inverse of equation (7). The lensing distortion depends on
the parameters of the smooth cluster potential, the perturbers and on
the redshift of the observed arclet (lensed image), which is in general
unknown. Therefore, in order to invert equation (7), for each lensed galaxy
we need to assign a redshift, from a distribution of the form
in equation (33) given the observed magnitude $m_{j}$ and take the mean of
many such realizations. In principle, one needs to also
correct the observed magnitude for amplification to obtain the true
magnitude prior to drawing a redshift from $N(z,m)$, but this
correction in turn depends on the redshift as well. An alternative
procedure is then to correct for the amplification using the median $z$
corresponding to the observed magnitude from the same distribution.
This entire inversion procedure is performed
within the lens tool utilities, which accurately takes into account
the non-linearities arising in the strong regime. As an input for this
calculation, we parameterize both the large-scale component and
perturbing galaxies as described in Section 2.2 and Section 3.1
respectively. Additionally, we assume that a well-determined `strong
lensing' model for the cluster-scale halo is known.
For our analysis, we also assume that the functional form of
$p(\tau_{S})$ from the field is known, and is specified by equation (34);
the likelihood for a guessed model can then be expressed as,
\begin{eqnarray}
{\cal L}({{\sigma_{0*}}},{r_{t*}}, ...) =
\Pi_j^{N_{gal}} p(\tau_{S_j}).
\end{eqnarray}
However, note that we ought to compute ${\cal L}$ for different
realizations of the drawn redshift for individual images (say about
10-20) and then compute the mean of the different realizations of
${z_{j}}$; but it is easily shown to be equivalent to constructing
the ${\cal L}$ for a single realization where the redshift ${z_{j}}$
of the arclet drawn is the median redshift corresponding to the
observed source magnitude. For the case when we perform a Monte-Carlo sum over $N_{\rm
MC}$ realizations of $z_j$, the likelihood is:
\begin{eqnarray}
{\cal L}({{\sigma_{0*}}},{r_{t*}}, ...) =
\Pi_j^{N_{gal}} {\Pi_k^{N_{\rm MC}}}p(\tau_{S_j^k}),
\end{eqnarray}
where $p(\tau_{S_j^k})$ is the probability of the source
ellipticity distribution at the position $j$ for $k$ drawings
for the redshift of the arclet of known magnitude $m_j$. The mean
value for $N_{\rm MC}$ realizations gives:
\begin{eqnarray}
\left<{p(\tau_{S_j})}\right>\,=\,{1 \over {N_{\rm
MC}}}\,{\Sigma_{k=1}^{N_{\rm MC}}}\,{p(\tau_{S_j^k})}
\end{eqnarray}
which written out in integral form is equivalent to
\begin{eqnarray}
\left<{p(\tau_{S_j})}\right>\,&=&\,{{\int {{p(\tau_{S_j}(z))}{N(z,{m_j})\,dz}}} \over
{\int {N(z,{m_j})\,dz}}}\,\\ \nonumber &=&\,{p(\tau_{S_j}({z_{\rm avg}}))}\,\sim\,{p(\tau_{S_j}({z_{\rm median}}))}
\end{eqnarray}
${z_{\rm avg}}$ being the average redshift corresponding to the magnitude
$m_j$.
Therefore the corresponding likelihood ${\cal L}$ is then simply,
\begin{eqnarray}
{\cal L}\,=\,{\Pi_j}{\left<{p({\tau_{S_j}})}\right>}
\end{eqnarray}
as before, and the log-likelihood is $l\,=\,\ln {\cal
L}\,=\,{\Sigma_j}\,\ln{\left<{p(\tau_{S_j})}\right>}$.
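The Monte-Carlo averaging above can be sketched as follows. This is a toy illustration only: the Gaussian form of $p(\tau_S)$, the redshift draw $N(z,m)$, and the shear correction are all hypothetical stand-ins, not the paper's actual MDS distribution or cluster lens model:

```python
import math
import random

rng = random.Random(0)

def p_tau(tau, sigma=0.25):
    # Assumed intrinsic ellipticity distribution (a Gaussian stand-in
    # for the MDS form of equation (34)).
    return math.exp(-0.5 * (tau / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def draw_z(m):
    # Hypothetical N(z, m): fainter objects lie at higher redshift on average.
    return max(0.05, rng.gauss(0.2 + 0.1 * (m - 22.0), 0.3))

def source_ellipticity(tau_obs, z, z_lens=0.3):
    # Toy lensing correction, vanishing when the source is in front of the lens.
    shear = 0.05 * max(0.0, 1.0 - z_lens / z)
    return tau_obs - shear

def log_likelihood(images, n_mc=20):
    # l = sum_j ln < p(tau_S_j) >, the mean taken over n_mc redshift draws.
    l = 0.0
    for tau_obs, m in images:
        mean_p = sum(p_tau(source_ellipticity(tau_obs, draw_z(m)))
                     for _ in range(n_mc)) / n_mc
        l += math.log(mean_p)
    return l

images = [(rng.gauss(0.05, 0.25), rng.uniform(22.0, 26.0)) for _ in range(200)]
print(log_likelihood(images))
```

In the actual analysis the averaging over draws is shown to be equivalent to a single evaluation at the median redshift for the observed magnitude.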
The best fitting model parameters are then obtained by maximizing this
log-likelihood function $l$ with respect to the parameters
${\sigma_{0*}}$ and ${r_{t*}}$, the characteristic central velocity
dispersion and truncation radius respectively. The results of the
maximization are presented in Figures 16 - 18. For all reasonable choices of
input parameters we find that the log-likelihood function has a
well-defined and broad maximum (interior to the innermost contour on
the plots). The contour levels are calibrated such that ${l_{\rm
max}}\,-\,l\,=\,1, 2, 3$ can be directly related to confidence
levels of 63\%, 86\%, 95\% respectively (we plot only the first 10
contours for each of the cases in Figures 16 - 18) and the value marked by the
dotted lines denotes the input values. In Figure 16, we plot the
likelihood contour for the MDS ellipticity distribution (equation
(34)) - the left panel for an assumed scaling law with $\alpha$ =
0.5 and a constant mass-to-light ratio $\Upsilon$ = 12. On the right
panel, the corresponding contours for $\alpha$ = 0.8 are plotted. For the MDS
ellipticity distribution, we find that the velocity dispersion ${\sigma_{0*}}$
can be more stringently constrained than the halo size, and
the contours are elongated along the constant mass-to-light ratio curves
and yield an output $\Upsilon$ very nearly equal to the input value. For narrower
ellipticity distributions both the parameters can be constrained
better and the inferred $\Upsilon$ is very nearly equal to the input
value. We find that there is very little perceptible difference in
the retrieval of parameters for the two cases with the different scaling laws.
For a sub-critical cluster (see bottom left panel in Figure 18), we find that the
parameters are recovered just as reliably, which is not surprising and in
some sense illustrates the robustness of the maximum-likelihood method.
Thus, the physical quantity of interest that can be estimated best
from the analysis above is the mass $M_*$ of a fiducial $L_*$ galaxy.
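The contour calibration quoted above follows from the fact that, for two fitted parameters, $\Delta\chi^2 = 2\,\Delta\ln{\cal L}$ is distributed as $\chi^2$ with two degrees of freedom, whose cumulative distribution is $1 - e^{-\Delta\ln{\cal L}}$. A quick numerical check (illustration only):

```python
import math

# For two fitted parameters, Delta chi^2 = 2 * Delta(ln L) follows a
# chi-square distribution with 2 degrees of freedom, whose CDF is
# 1 - exp(-Delta chi^2 / 2) = 1 - exp(-Delta ln L).
for dl in (1, 2, 3):
    print(dl, round(100 * (1 - math.exp(-dl))))  # 63, 86, 95
```

This reproduces the 63\%, 86\% and 95\% confidence levels associated with $l_{\rm max} - l = 1, 2, 3$.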
\begin{figure*}
\psfig{figure=pap_plot1.ps,width=1.0\textwidth}
\caption{Log-likelihood contours for the retrieval of the fiducial
parameters $\sigma_{0*}$ and $r_{t*}$ - the input values are indicated
by the intersection of the dashed lines. In the left panel: For
the MDS ellipticity distribution, with assumed scaling $\alpha$ = 0.5, right
panel: the same with $\alpha$ = 0.8}
\end{figure*}
\begin{figure*}
\psfig{figure=pap_plot3.ps,width=1.0\textwidth}
\caption{Sensitivity of log-likelihood contours to the strong lensing
input parameters: examining the tolerance of the
significance bounds obtained on $\sigma_{0*}$ and $r_{t*}$ with regard to the
accuracy with which the cluster velocity dispersion needs to be
known. All plots are for input $\Upsilon$ = 12, $\alpha$ = 0.5 and the
MDS source ellipticity distribution. Top left panel: given that
the exact value of the velocity dispersion is known (value in this
case is 1090 km\,s$^{-1}$), top right panel: the velocity dispersion
known to within 2\%, bottom left panel: attempt to
retrieve the incorrect scaling law - input $\alpha$ = 0.5,
log-likelihood maximized for $\alpha$ = 0.8, bottom right panel:
retrieval with fewer background galaxies}
\end{figure*}
\begin{figure*}
\psfig{figure=pap_plot2.ps,width=1.0\textwidth}
\caption{Sensitivity of the log-likelihood contours to input
parameters: examining the tolerance of the
significance bounds obtained on $\sigma_{0*}$ and $r_{t*}$ given the
accuracy to which the cluster center needs to be
known. All plots are for input $\Upsilon$ = 12, $\alpha$ = 0.5 and the
MDS source ellipticity distribution. Top left panel: knowing the cluster
center exactly for the critical cluster, top right panel: knowing
the center to within 5 arcsec, bottom left panel: knowing the
center exactly for the sub-critical cluster and the bottom right
panel: for the subcritical cluster, center known to within 5 arcsec.}
\end{figure*}
\subsection{Estimating the required number of background galaxies}
The largest source of noise in our analysis arises due to the finite number of
objects in the frame. To estimate the required signal-to-noise that would permit
obtaining reliable constraints on both $\sigma_{0*}$ and $r_{t*}$, we reduced the
number of background sources to 2500 keeping the number of lenses at
50 as before. We do not converge to a maximum in the log-likelihood,
and consequently no confidence limits can be obtained on
the parameters. Therefore, to apply this technique to the data we require the
ratio $r$ of the number of cluster galaxies to the number of background
galaxies to be roughly $r\,<\,0.2$, which can be achieved only by
stacking the data from many clusters. Also, as found from the direct
averaging procedure, we require $\sim 5000$ lensed images in order to
securely detect $\left<\Upsilon\right>\,\geq\,4$. Typical HST cluster data
fields, of order 3 arcmin $\times$ 3 arcmin, have $\sim 700$
background galaxies (with a 10-orbit exposure), of which the shape parameters
can be reliably measured for only about 200 galaxies; therefore, on
stacking the data from 20 (10-orbit) HST cluster fields, we shall be
able to constrain statistically the mean mass-to-light ratios as well
as the two fiducial parameters.
\subsection{Uncertainties in the smooth cluster component}
In all of the above, we have assumed that the parameters that
characterize the smooth cluster-scale component are very accurately known
which is unlikely to be the case for the real data. We investigate the
error incurred in retrieving the correct input parameters from not
knowing this central strong lensing model well enough. So we can now
place limits on the order of magnitude of errors that can be tolerated
due to the lack of knowledge of the exact position of the cluster
center and the velocity dispersion of the main clump. In Figure 18,
we see that an uncertainty of the order of 20 arcsec in
the position of the center yields unacceptably incorrect values for $\sigma_{0*}$
and $r_{t*}$. Conversely, if the center is off by only 5 arcsec or so, for
both the critical cluster and the sub-critical one, the results remain
unaffected and we obtain as good a retrieval of the input $r_{t}^{*}$
as when the position of
the center is known exactly. Similarly, in Figure 17, we demonstrate
that an error of $\sim\,$5\% in the velocity dispersion is enough
to make the maximum-likelihood analysis inconclusive, but an error of $\sim$ 2-3\%
at most would still enable getting sensible bounds on both parameters.
\section{\bf Conclusions and Prospects}
We conclude by asserting that both the maximum-likelihood
method and the direct averaging method developed in this paper can
feasibly be applied to real data on stacking a minimum of 20 WFPC2
deep cluster fields. These methods are well suited to being used
simultaneously as they are somewhat complementary: both yield the
statistical mass-to-light ratio reliably. The averaging method requires
neither knowledge of the cluster center nor any details of the strong
lensing model, but it cannot decouple the two fiducial parameters, so
no independent constraints on the velocity dispersion and the halo
size can be obtained. The maximum-likelihood approach, on the other
hand, permits estimation of the fiducial $\sigma_{0*}$ and $r_{t*}$
($\sigma_{0*}$ more reliably than $r_{t*}$), but necessarily requires
rather accurate knowledge of the cluster center and the central
velocity dispersion. In offset fields, however, where the gradient of
the smooth cluster potential is constant over the smaller scales that
we are probing, we expect both methods to perform rather well.
In this paper, we have not investigated the likely sources of error in
real data, which we do in detail in a subsequent paper \cite{natarajan96a}.
Our simulations have enabled a study of the feasibility of
application to HST cluster data, in so far as providing a statistical
estimate of the number of background galaxies required for a significant
detection, given the limitations in the accuracy to which the input
parameters (such as the strong lensing mass model and hence the
magnification) are presently known.
Our analysis points to the fact that the extraction of the signal
would therefore be feasible if approximately 20 -- 25 clusters are
stacked, and the enterprise is especially suited to using the new ACS
(Advanced Camera for Surveys) due to be installed on HST in 1999. Additionally,
since there exists a well-defined optimum lens redshift for signal
detection (0.1$\,<\,{z_{\rm lens}}\,<\,$0.3), it might be useful to
target clusters in this redshift range in future surveys in order to
apply the techniques developed here. In our proposed analysis with the
currently available HST data, we intend to incorporate parameters
characterizing the smooth cluster (main clump) along with those of the
perturbing galaxies into the maximum likelihood machinery.
In summary, we have presented a new approach to infer the possible
existence of dark halos around individual bright galaxies in clusters
by extracting their local lensing signal. The composite lensing effect
of a cluster is modeled in numerical simulations via a large-scale
smooth mass component with additional galaxy-scale masses as
perturbers. The correct choice of coordinate frame, i.e., the local frame
of each perturber, enables efficient subtraction of the shear induced
by the larger scale component, yielding the averaged shear field
induced by the smaller-scale mass component. Cluster galaxy halos
are modeled using simple scaling relations and the background
high redshift population is modeled in consonance with observations
from redshift surveys. For several configurations of the sources and
lens, the lensing equation was solved to obtain the resultant images.
Not surprisingly, we find that the strength of the signal varies most
strongly with the mass-to-light ratio of the cluster galaxies and is
only marginally sensitive to the assumed details of the precise
fall-off of the mass profile. We also find that there is an optimum
lens redshift range for detection of the signal. Although the entire
procedure works in the `strong lensing' regime as well, it is less
noisy in the `weak regime'. The proposed maximum-likelihood method
independently constrains the halo size and mass of a fiducial cluster
galaxy and we find that the velocity dispersion and hence the mass of a
fiducial galaxy can be more reliably constrained than the characteristic halo size.
Examining the feasibility of application to real data, we find that
stacking $\sim$ 20 clusters allows a first attempt at extraction
(\citeNP{natarajan96a}). The prospects for the application of this
technique are potentially promising,
especially with sufficient and high-quality data (either HST images or
ground-based observations under excellent seeing conditions of wider fields);
the mass-to-light ratios of the different morphological/color types in
clusters for instance can be probed. More importantly, comparing with similar
estimates in fields offset from the cluster center would allow us to
make the essential connections in order to understand the dynamical
evolution of galaxies in clusters and the redistribution of dark
matter on smaller scales within clusters. Application of this
approach affords the probing of the structure of cluster
galaxies as well as the efficiency of violent dynamical processes
like tidal stripping, mergers and interactions which modify them and
constitute the processes by which clusters assemble.
\section*{Acknowledgments}
PN thanks Martin Rees for his support and encouragement during the
course of this work. We acknowledge useful discussions with Alfonso
Aragon-Salamanca, Philip Armitage, Richard Ellis, Bernard Fort, Jens
Hjorth, Yannick Mellier, Ian Smail and Simon White. PN acknowledges
funding from the Isaac Newton Studentship and Trinity College, JPK
acknowledges support from an EC-HCM fellowship and from the CNRS.
\section{Introduction}
The importance of studying continuous but nowhere differentiable
functions was emphasized a long time ago by Perrin,
Poincar\'e and others (see Refs. \cite{1} and \cite{2}).
It is possible for a continuous function to be sufficiently irregular
so that its graph is a fractal. This observation points out to a
connection between the lack of differentiability of such a function and
the dimension of its graph.
Quantitatively one would like to convert the question concerning the lack of
differentiability into one concerning the amount of loss
of differentiability. In other words, one would
look at derivatives of fractional order rather than only those of
integral order and relate them to dimensions.
Indeed some recent papers~\cite{3,4,5,6} indicate a connection
between fractional calculus~\cite{7,8,9}
and fractal structure \cite{2,10} or fractal processes \cite{11,22,23}.
Mandelbrot and Van Ness \cite{11} have
used fractional integrals to formulate fractal processes such as fractional
Brownian motion. In Refs. \cite{4} and \cite{5}
a fractional diffusion equation has been
proposed for the diffusion on fractals.
Also Gl\"ockle and Nonnenmacher \cite{22} have formulated fractional
differential equations for some relaxation processes which are essentially
fractal time \cite{23} processes.
Recently Zaslavsky~\cite{46} showed that the Hamiltonian chaotic dynamics
of particles can be described by a fractional generalization of the
Fokker-Planck-Kolmogorov equation which is defined by two fractional
critical exponents $(\alpha, \beta)$ governing the space and
time derivatives of the distribution function, respectively.
However, to our knowledge, the precise nature of the
connection between the dimension of the graph of a fractal curve
and fractional differentiability
properties has not been established.
Irregular functions arise naturally in various
branches of physics. It is
well known that the graphs of projections of Brownian paths
are nowhere differentiable and have
dimension $3/2$. A generalization of Brownian motion called fractional
Brownian motion \cite{2,10} gives rise to graphs having dimension between 1 and 2.
Typical Feynman paths \cite{30,31}, like the Brownian paths, are continuous but nowhere
differentiable.
Also, passive scalars advected by a turbulent fluid
\cite{19,20} can have isoscalar surfaces which are highly irregular, in the limit
of the diffusion constant going to zero. Attractors of some dynamical systems
have been shown \cite{15} to be continuous but nowhere differentiable.
All these irregular functions are characterized at every point by a local H\"older
exponent typically lying between 0 and 1.
In the case of functions having the same H\"older exponent $h$ at every
point it is well known that the
box dimension of its graph is $2-h$. Not all functions have the same exponent $h$
at every point but have a range of H\"older exponents.
A set $\{x\vert h(x)=h \}$ may be a fractal set.
In such situations the corresponding functions are multifractal. These kinds of
functions also arise in various physical situations, for instance, the velocity
field of a turbulent fluid \cite{26} at low viscosity.
Also there exists a
class of problems where one has to solve a partial
differential equation subject to fractal boundary
conditions, e.g. the Laplace equation near a fractal conducting surface.
As noted in reference \cite{39} irregular boundaries may appear, down to
a certain spatial resolution, to be non-differentiable everywhere and/or
may exhibit convolutions over many length scales.
Keeping such problems in view, there is a need for a readily usable
characterization of pointwise behavior.
We consider the Weierstrass function as a prototype example of a function which is
continuous everywhere but differentiable nowhere and has an exponent which is
constant everywhere.
One form of the Weierstrass
function is
\begin{eqnarray}
W_{\lambda}(t) = \sum_{k=1}^{\infty} {\lambda}^{(s-2)k} \sin{\lambda}^kt,\;\;\;\;\;
t \;\;\;{\rm real}.
\end{eqnarray}
For this form, when $\lambda > 1$ it is well known \cite{12} that
$W_{\lambda}(t)$ is nowhere differentiable if $1<s<2$.
This curve has been extensively studied \cite{1,10,13,14} and
its graph is known to have a box dimension $s$, for sufficiently large $\lambda$.
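As a quick numerical illustration of this scaling (ours, not part of the original analysis; the parameter choices $\lambda=2$, $s=1.5$, the truncation depth $K$ and the averaging grid are illustrative assumptions), one can check that the mean increment of a truncated Weierstrass sum scales like ${\delta}^{2-s}$, consistent with a graph of box dimension $s$:

```python
import math

def weierstrass(t, lam=2.0, s=1.5, K=30):
    # Truncated Weierstrass sum: W(t) = sum_{k=1}^{K} lam^{(s-2)k} sin(lam^k t)
    return sum(lam ** ((s - 2) * k) * math.sin(lam ** k * t)
               for k in range(1, K + 1))

def mean_increment(delta, n_pts=200):
    # Average |W(t + delta) - W(t)| over a grid of base points t in [0, 1)
    ts = [i / n_pts for i in range(n_pts)]
    return sum(abs(weierstrass(t + delta) - weierstrass(t)) for t in ts) / n_pts

# Log-log slope of the mean increment between two dyadic scales;
# the Holder exponent 2 - s = 0.5 should govern it.
d1, d2 = 2.0 ** -6, 2.0 ** -10
slope = math.log(mean_increment(d1) / mean_increment(d2)) / math.log(d1 / d2)
print(round(slope, 2))
```

With $s=1.5$ the measured slope comes out close to $2-s=0.5$; since $\lambda=2$ and the scales are dyadic, the log-periodic prefactor largely drops out of the slope estimate.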
Incidentally, the Weierstrass functions are not just mathematical curiosities
but occur in several places. For instance,
the graph of this function is known \cite{10,15} to be a repeller
or attractor of some dynamical
systems.
This kind of function can also be recognized as
the characteristic function of a L\'evy
flight on a one dimensional lattice \cite{16}, which means that such a L\'evy
flight can be considered as a superposition of Weierstrass type functions.
This function has also been used \cite{10} to generate a fractional Brownian signal by multiplying
every term by a random amplitude and randomizing phases of every term.
The main aim of the present paper is to explore the precise nature of the
connection between fractional differentiability properties of irregular
(non-differentiable) curves and the dimensions/H\"older exponents of
their graphs. A second aim is to provide a possible tool to study
pointwise behavior.
The organization of the paper is as follows.
In section II we motivate and formally define what we call
local fractional differentiability,
and use the local fractional derivative to formulate a Taylor series.
Then in section III we
apply this definition to a specific example, viz., Weierstrass' nowhere
differentiable function and show that this function, at every point, is
locally fractionally differentiable for all orders below $2-s$
and it is not so for orders between $2-s$ and 1, where $s$, $1<s<2$
is the box dimension of the graph of the function. In section IV we prove a
general result showing the relation between local fractional differentiability
of nowhere differentiable functions and the local H\"older exponent/
the dimension of its graph. In section V we demonstrate the use of the local
fractional derivatives (LFD) in unmasking isolated singularities and
in the study of the pointwise behavior of multifractal functions.
In section VI we conclude after pointing out a few possible consequences of
our results.
\section{Fractional Differentiability}
We begin by recalling the Riemann-Liouville definition of the
fractional integral
of a real function, which is
given by \cite{7,9}
\begin{eqnarray}
{{d^qf(x)}\over{[d(x-a)]^q}}={1\over\Gamma(-q)}{\int_a^x{{f(y)}\over{(x-y)^{q+1}}}}dy
\;\;\;{\rm for}\;\;\;q<0,\;\;\;a\;\;{\rm real},\label{def1}
\end{eqnarray}
and of the fractional derivative
\begin{eqnarray}
{{d^qf(x)}\over{[d(x-a)]^q}}={1\over\Gamma(1-q)}{d\over{dx}}{\int_a^x{{f(y)}\over{(x-y)^{q}}}}dy
\;\;\;{\rm for}\;\;\; 0<q<1.\label{def2}
\end{eqnarray}
The case of $q>1$ is of no relevance in this paper.
For future reference we note \cite{7,9}
\begin{eqnarray}
{{d^qx^p}\over {d x^q}} = {\Gamma(p+1) \over {\Gamma(p-q+1)}} x^{p-q}\;\;\;
{\rm for}\;\;\;p>-1.\label{xp}
\end{eqnarray}
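The power-law rule (\ref{xp}) is easy to verify numerically. The sketch below (ours; the choice $q=-1/2$, i.e. a half-order fractional integral, and the test values of $p$ and $x$ are illustrative assumptions) evaluates the Riemann-Liouville integral (\ref{def1}) by the midpoint rule, after the substitution $x-y=u^{2}$ removes the endpoint singularity, and compares the result with the closed form:

```python
import math

def rl_half_integral(p, x, n=100000):
    # Riemann-Liouville fractional integral of f(y) = y^p of order 1/2
    # (i.e. q = -1/2, lower limit a = 0):
    #   (1/Gamma(1/2)) * int_0^x y^p (x - y)^(-1/2) dy
    # The substitution x - y = u^2 removes the endpoint singularity:
    #   = (2/Gamma(1/2)) * int_0^sqrt(x) (x - u^2)^p du
    h = math.sqrt(x) / n
    total = sum((x - ((i + 0.5) * h) ** 2) ** p for i in range(n))  # midpoint rule
    return 2.0 * total * h / math.gamma(0.5)

p, q, x = 2.0, -0.5, 1.3
numeric = rl_half_integral(p, x)
closed = math.gamma(p + 1) / math.gamma(p - q + 1) * x ** (p - q)
print(numeric, closed)  # the two values agree closely
```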
We also note that the fractional derivative has the property (see Ref.
\cite{7}), viz.,
\begin{eqnarray}
{d^qf(\beta x)\over{d x^q}}={{\beta}^q{d^qf(\beta x)\over{d(\beta x)^q}}}
\end{eqnarray}
which makes it suitable for the study of scaling.
One may note that except in the case of positive integral $q$, the $q$th derivative
will be nonlocal through its dependence
on the lower limit ``$a$''.
On the other hand we wish to
study local scaling properties and hence we need to introduce the notion
of local fractional differentiability.
Secondly from Eq. (\ref{xp}) it is clear that the fractional derivative of a constant
function is not zero.
These two features play an
important role in defining local fractional differentiability.
We note that changing the lower
limit or adding a constant to a function alters the value of the fractional
derivative. This forces one to choose the lower limit as well as the additive
constant beforehand. The most natural choices are as follows.
(1) We subtract, from the function, the value of the function at the point where
fractional differentiability is to be checked. This makes the value of the function
zero at that point, washing out the effect of any constant term.
(2) The natural choice of the lower limit is
the point itself at which we intend to examine the fractional differentiability.
This has an advantage in that it preserves local nature of
the differentiability property. With these motivations we now introduce
the following.
\begin{defn} If, for a function $f:[0,1]\rightarrow I\!\!R$, the limit
\begin{eqnarray}
I\!\!D^qf(y) =
{\lim_{x\rightarrow y} {{d^q(f(x)-f(y))}\over{d(x-y)^q}}},\label{defloc}
\end{eqnarray}
exists and is finite, then we say that the {\it local fractional derivative} (LFD)
of order $q$, at $x=y$,
exists.
\end{defn}
\begin{defn}
We define {\it critical order} $\alpha$, at $y$, as
$$
\alpha(y) = \sup \{q \,\vert\, {\rm all\; local\; fractional\; derivatives\; of\; order\; less\; than\;} q \;{\rm exist\; at}\; y\}.
$$
\end{defn}
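As a simple worked example of these two definitions (ours, using only Eq. (\ref{xp})), consider $f(x)=ax^{\alpha}$ with $0<\alpha<1$, $a\neq 0$, at $y=0$ (note $f(0)=0$, so the subtraction in Eq. (\ref{defloc}) is trivial):

```latex
I\!\!D^{q}f(0)
 = \lim_{x\rightarrow 0}\frac{d^{q}\,ax^{\alpha}}{dx^{q}}
 = \lim_{x\rightarrow 0} a\,\frac{\Gamma(\alpha+1)}{\Gamma(\alpha-q+1)}\,x^{\alpha-q}
 = \begin{cases}
     0, & q<\alpha,\\
     a\,\Gamma(\alpha+1), & q=\alpha,\\
     \text{divergent}, & q>\alpha,
   \end{cases}
```

so the critical order of $f$ at the origin is $\alpha$.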
Incidentally we note that Hilfer \cite{17,18} used a similar notion to extend
Ehrenfest's classification of phase
transition to continuous transitions. However, in his work only the singular part of
the free energy was considered, so the first of the above mentioned
conditions was automatically
satisfied. Also, no lower limit of the fractional derivative was considered;
by default it was taken as zero.
In order to see the information contained in the LFD we consider the
fractional Taylor's series with a remainder term for a real function $f$.
Let
\begin{eqnarray}
F(y,x-y;q) = {d^q(f(x)-f(y))\over{[d(x-y)]^q}}.
\end{eqnarray}
It is clear that
\begin{eqnarray}
I\!\!D^qf(y)=F(y,0;q).
\end{eqnarray}
Now, for $0<q<1$,
\begin{eqnarray}
f(x)-f(y)& =& {1\over\Gamma(q)} \int_0^{x-y} {F(y,t;q)\over{(x-y-t)^{-q+1}}}dt\\
&=& {1\over\Gamma(q)}[F(y,t;q) \int (x-y-t)^{q-1} dt]_0^{x-y} \nonumber\\
&&\;\;\;\;\;\;\;\;+ {1\over\Gamma(q)}\int_0^{x-y} {dF(y,t;q)\over{dt}}{(x-y-t)^q\over{q}}dt,
\end{eqnarray}
provided the last term exists. Thus
\begin{eqnarray}
f(x)-f(y)&=& {I\!\!D^qf(y)\over \Gamma(q+1)} (x-y)^q \nonumber\\
&&\;\;\;\;\;\;\;\;+ {1\over\Gamma(q+1)}\int_0^{x-y} {dF(y,t;q)\over{dt}}{(x-y-t)^q}dt,\label{taylor}
\end{eqnarray}
i.e.
\begin{eqnarray}
f(x) = f(y) + {I\!\!D^qf(y)\over \Gamma(q+1)} (x-y)^q + R_1(x,y),\label{taylor2}
\end{eqnarray}
where $R_1(x,y)$ is a remainder given by
\begin{eqnarray}
R_1(x,y) = {1\over\Gamma(q+1)}\int_0^{x-y} {dF(y,t;q)\over{dt}}{(x-y-t)^q}dt.
\end{eqnarray}
Equation (\ref{taylor2}) is a fractional Taylor expansion of $f(x)$ involving
only the lowest and the second leading terms. This expansion can be carried
to higher orders provided the corresponding remainder term is well defined.
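As a consistency check of the expansion (our example), take $f(x)=x^{q}$ with $y=0$. Then, by Eq. (\ref{xp}),

```latex
F(0,t;q)=\frac{d^{q}t^{q}}{dt^{q}}=\Gamma(q+1)
\quad\Longrightarrow\quad
\frac{dF(0,t;q)}{dt}=0
\quad\Longrightarrow\quad
R_{1}(x,0)=0,
```

and Eq. (\ref{taylor2}) reduces to the exact identity $f(x)=\Gamma(q+1)\,x^{q}/\Gamma(q+1)=x^{q}$.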
We note that the local fractional derivative as defined above
(not just the fractional derivative) provides
the coefficient $A$ in the approximation
of $f(x)$ by the function $f(y) + A(x-y)^q/\Gamma(q+1)$, for $0<q<1$,
in the vicinity of $y$.
We further note that the terms
on the RHS of Eq. (\ref{taylor}) are non-trivial and finite only in the case
$q=\alpha$.
Osler in Ref.\cite{21} has constructed a fractional Taylor
series using usual (not local in the present sense) fractional derivatives.
His results are, however, applicable to analytic functions and cannot be
used for non-differentiable scaling functions directly. Further, Osler's
formulation also involves terms with negative $q$ and hence is not suitable
for approximation schemes.
One may further notice that when $q$ is set equal to
one in the above approximation one gets
the equation of the tangent.
It may be recalled that all the curves passing through a point $y$ and having
the same tangent
form an equivalence class (which is modeled by a linear behavior).
Analogously all the functions (curves) with the same critical order $\alpha$
and the same $I\!\!D^{\alpha}$
will form an equivalence class modeled by $x^{\alpha}$ [if
$f$ differs from $x^{\alpha}$ by a logarithmic correction then the
terms on the RHS of Eq. (\ref{taylor})
do not make sense, precisely as in the case of ordinary calculus].
This is how one may
generalize the geometric interpretation of derivatives in terms of tangents.
This observation is useful when one wants to approximate an irregular
function by a piecewise smooth (scaling) function.
To illustrate the definitions of local fractional differentiability and critical order
consider an example of a polynomial of degree $n$ with its graph passing through the
origin and for which the first derivative at the origin
is not zero. Then all the local fractional derivatives of order
less than or equal to one exist at the origin. Also all derivatives
of integer order greater than
one exist, as expected. But local derivatives of any other order,
e.g. between 1 and 2 [see
equations (\ref{xp}) and (\ref{defloc})] do not exist.
Therefore the critical order for this function at
$x=0$ is one. In fact, except at the finite number of points
where the function has a
vanishing first derivative, the critical order
of a polynomial function will be one, since the linear term is expected to
dominate near all other points.
\noindent
{\bf Remark}: We would like to point out that
there is a multiplicity of definitions of a fractional derivative.
The Riemann-Liouville
definition, and other equivalent definitions such as Gr\"unwald's,
are suitable for our purpose.
The other definitions of fractional derivatives which
do not allow control over both the limits, such as Weyl's definition or the definition
using Fourier transforms, are not suitable since
it would not be possible to retrieve the local nature of
the differentiability property which is essential for the study of
local behavior. Also, an important difference between our work and
the work of \cite{4,22} is that while we study local scaling behavior, those works apply to asymptotic scaling properties.
\section{Fractional Differentiability of Weierstrass Function}
Consider a form of the Weierstrass function as given above, viz.,
\begin{eqnarray}
W_{\lambda}(t) = \sum_{k=1}^{\infty} {\lambda}^{(s-2)k}
\sin{\lambda}^kt,\;\;\;\;
\lambda>1.
\end{eqnarray}
Note that $W_{\lambda}(0)=0$.
Now
\begin{eqnarray}
{{d^qW_{\lambda}(t)}\over{dt^q}}
&=& {\sum_{k=1}^{\infty} {\lambda}^{(s-2)k}{{d^q\sin({\lambda}^kt)}\over {dt^q}}}\nonumber\\
&=& {\sum_{k=1}^{\infty} {\lambda}^{(s-2+q)k}{{d^q\sin({\lambda}^kt)}\over {d({\lambda}^kt)^q}}}, \nonumber
\end{eqnarray}
provided the right hand side converges uniformly. Using, for $0<q<1$,
\begin{eqnarray}
{{d^q\sin(x)}\over {d x^q}}={{d^{q-1}\cos(x)}\over{d x^{q-1}}}, \nonumber
\end{eqnarray}
we get
\begin{eqnarray}
{{d^qW_{\lambda}(t)}\over{dt^q}}
&=& {\sum_{k=1}^{\infty} {\lambda}^{(s-2+q)k}{{d^{q-1}\cos({\lambda}^kt)}\over {d({\lambda}^kt)^{q-1}}}}.\label{a}
\end{eqnarray}
From the second mean value theorem it follows that the fractional integral
of $\cos({\lambda}^kt)$ of order $q-1$ is
bounded uniformly for all values of ${\lambda}^kt$.
This implies that the series on the right
hand side will converge uniformly for $q<2-s$, justifying our action of taking
the fractional derivative operator inside the sum.
Also as $t \rightarrow 0$ for
any $k$ the fractional integral in the summation of equation (\ref{a}) goes to zero.
Therefore it is easy to see from this that
\begin{eqnarray}
I\!\!D^qW_{\lambda}(0) = {\lim_{t\rightarrow 0} {{d^qW_{\lambda}(t)}\over{dt^q}}}=0\;\;\;
{\rm for} \;\;\;q<2-s.
\end{eqnarray}
This shows that the $q$th local derivative of the Weierstrass function exists and
is continuous, at $t=0$, for $q<2-s$.
To check the fractional differentiability at any other point, say $\tau$,
we use $t'=t-\tau$ and $\widetilde{W} (t' )=W(t'+\tau )-W(\tau)$ so that
$\widetilde{W}(0)=0$. We have
\begin{eqnarray}
\widetilde{W}_{\lambda} (t' ) &=& \sum_{k=1}^{\infty} {\lambda}^{(s-2)k} \sin{\lambda}^k(t' +\tau)-
\sum_{k=1}^{\infty} {\lambda}^{(s-2)k} \sin{\lambda}^k\tau \nonumber\\
&=&\sum_{k=1}^{\infty} {\lambda}^{(s-2)k}(\cos{\lambda}^k\tau \sin{\lambda}^kt' +
\sin{\lambda}^k\tau(\cos{\lambda}^kt' -1)). \label{c}
\end{eqnarray}
Taking the fractional derivative of this with respect to $t'$ and following the
same procedure we can show that the fractional derivative of the Weierstrass
function of order $q<2-s$ exists at all points.
For $q>2-s$, the right hand side of equation (\ref{a}) appears to diverge.
We now prove that the LFD of order $q>2-s$ in fact does not
exist.
We do this by showing that there exists a sequence of
points approaching 0 along which
the limit of the fractional derivative of order $2-s < q <1$ does not exist.
We use the property of the Weierstrass function \cite{10}, viz.,
for each $t' \in [0,1]$ and $0 < \delta \leq {\delta}_0$
there exists $t$ such that $\vert t-t' \vert \leq \delta$ and
\begin{eqnarray}
c{\delta}^{\alpha} \leq \vert W(t)-W(t') \vert , \label{uholder}
\end{eqnarray}
where $c > 0$ and $\alpha=2-s$, provided $\lambda$ is sufficiently large.
We consider the case of $t'=0$ and
$t>0$.
Define $g(t)=W(t)-ct^{\alpha}$.
Now the above mentioned property, along with the continuity
of the Weierstrass function, assures us of a
sequence of points $t_1>t_2>...>t_n>...\geq 0$ such that
$t_n \rightarrow 0$ as $n \rightarrow \infty$ and $g(t_n) = 0$
and $g(t)>0$ on $(t_n,\epsilon)$ for some $\epsilon>0$, for all
$n$ (it is not ruled out that $t_n$ may be zero for finite $n$).
Define
\begin{eqnarray}
g_n(t)&=&0,\;\;\;{\rm if}\;\;\;t\leq t_n, \nonumber\\
&=&g(t),\;\;\; {\rm otherwise}.\nonumber
\end{eqnarray}
Now we have, for $0 <\alpha < q < 1$,
\begin{eqnarray}
{{d^qg_n(t)}\over{d(t-t_n)^q}}={1\over\Gamma(1-q)}{d\over{dt}}{\int_{t_n}^t{{g(y)}\over{(t-y)^{q}}}}dy,\nonumber
\end{eqnarray}
where $t_n \leq t \leq t_{n-1}$. We assume that the left hand side of the above
equation exists, for if it does not then there is nothing to prove.
Let
\begin{eqnarray}
h(t)={\int_{t_n}^t{{g(y)}\over{(t-y)^{q}}}}dy.\nonumber
\end{eqnarray}
Now $h(t_n)=0$ and $h(t_n+\epsilon)>0$, for a suitable $\epsilon$, as the integrand is positive.
Due to continuity there must exist an ${\epsilon}'>0$ and ${\epsilon}'<\epsilon $
such that $h(t)$ is increasing on $(t_n,{\epsilon}')$.
Therefore
\begin{eqnarray}
0 \leq {{d^qg_n(t)}\over{d(t-t_n)^q}} {\vert}_{t=t_n},\;\;\;\;n=1,2,3,... .
\end{eqnarray}
This implies that
\begin{eqnarray}
c{{d^qt^{\alpha}}\over{d(t-t_n)^q}} {\vert}_{t=t_n} \leq {{d^qW(t)}\over{d(t-t_n)^q}} {\vert}_{t=t_n},
\;\;\;\;n=1,2,3,... .
\end{eqnarray}
But we know from Eq. (\ref{xp}) that, when $0<\alpha <q<1$,
the left hand side in the above inequality approaches infinity as $t\rightarrow 0$.
This implies that the right hand side of the above inequality does not
exist as $t \rightarrow 0$. This argument can be generalized
for all non-zero $t'$ by
changing the variable $t''=t-t'$.
This concludes the proof.
Therefore the critical order of the Weierstrass function
will be $2-s$ at all points.
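The change of behavior at the critical order can also be seen numerically. In the sketch below (ours, not part of the proof; the Gr\"unwald-Letnikov discretization of the fractional derivative, the parameters $\lambda=2$, $s=1.5$, the base point $t_0$ and the truncation depths are all illustrative assumptions), the fractional derivative of $W(t_0+t)-W(t_0)$ of order $q=0.2<2-s$ stays small at small $t$, while for $q=0.8>2-s$ its magnitude is markedly larger:

```python
import math

def weier(t, lam=2.0, s=1.5, K=25):
    # truncated Weierstrass sum
    return sum(lam ** ((s - 2) * k) * math.sin(lam ** k * t)
               for k in range(1, K + 1))

def gl_deriv(f, q, x, n=1200):
    # Grunwald-Letnikov approximation of the fractional derivative of
    # order q at x, lower limit 0:  h^{-q} * sum_j w_j f(x - j h),
    # with w_0 = 1 and w_j = w_{j-1} * (j - 1 - q) / j.
    h = x / n
    w, total = 1.0, f(x)
    for j in range(1, n + 1):
        w *= (j - 1 - q) / j
        total += w * f(x - j * h)
    return total / h ** q

t0 = 0.4
g = lambda t: weier(t0 + t) - weier(t0)   # shifted so that g(0) = 0

# average magnitudes over several small scales t = 2^{-m}
scales = [2.0 ** -m for m in range(7, 12)]
low  = sum(abs(gl_deriv(g, 0.2, x)) for x in scales) / len(scales)  # q < 2-s
high = sum(abs(gl_deriv(g, 0.8, x)) for x in scales) / len(scales)  # q > 2-s
print(low, high)  # above the critical order the magnitude is markedly larger
```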
\noindent
{\bf Remark}: Shlesinger et al. \cite{16} have considered a
L\'evy flight on a one dimensional
periodic lattice where a particle jumps from one lattice site
to other with the probability given by
\begin{eqnarray}
P(x) = {{{\omega}-1}\over{2\omega}} \sum_{j=0}^{\infty}{\omega}^{-j}
[\delta(x, +b^j) + \delta(x, -b^j)],
\end{eqnarray}
where $x$ is the magnitude of the jump, $b$ is the lattice spacing, $b>\omega>1$,
and $\delta(x,y)$ is the Kronecker delta.
The characteristic function for $P(x)$ is given by
\begin{eqnarray}
\tilde{P}(k) = {{{\omega}-1}\over{2\omega}} \sum_{j=0}^{\infty}{\omega}^{-j}
\cos(b^jk).
\end{eqnarray}
This is nothing but the Weierstrass cosine function.
For this distribution the L\'evy index is $\log{\omega}/\log{b}$, which can be
identified as the critical order of $\tilde{P}(k)$.
More generally for the L\'evy distribution with index $\mu$
the characteristic function
is given by
\begin{eqnarray}
\tilde{P}(k) = A \exp(-c\vert k \vert^{\mu}).
\end{eqnarray}
The critical order of this function at $k=0$
also turns out to be the same as $\mu$. Thus the L\'evy index can be identified as
the critical order of the characteristic function at $k=0$.
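To see this (our short derivation; we take the standard symmetric stable form with $c>0$, $0<\mu<1$, and note $\tilde{P}(0)=A$), expand the characteristic function about $k=0$:

```latex
\tilde{P}(k)-\tilde{P}(0)
   = A\left(e^{-c\vert k\vert^{\mu}}-1\right)
   = -Ac\,\vert k\vert^{\mu} + O\!\left(\vert k\vert^{2\mu}\right),
```

so, by the power-law rule (\ref{xp}), the LFD of order $q$ at $k=0$ vanishes for $q<\mu$ and diverges for $q>\mu$; the critical order is exactly $\mu$.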
\section{Connection between critical order and the box dimension of the curve}
\begin{thm}
Let $f:[0,1]\rightarrow I\!\!R$ be a continuous function.
a) If
\begin{eqnarray}
\lim_{x\rightarrow y} {d^q(f(x)-f(y)) \over{[d(x-y)]^q}}=0,\;\;\;
{\rm for}\;\; q<\alpha,\nonumber
\end{eqnarray}
where $q,\alpha \in (0,1)$,
for all $y \in (0,1)$,
then ${\rm dim}_Bf \leq 2-\alpha$.
b) If there exists a sequence $x_n \rightarrow y$ as
$n \rightarrow \infty$ such that
\begin{eqnarray}
\lim_{n\rightarrow \infty} {d^q(f(x_n)-f(y)) \over{[d(x_n-y)]^q}}=\pm \infty,\;\;\;
{\rm for}\;\; q>\alpha,\nonumber
\end{eqnarray}
for all $y$,
then ${\rm dim}_Bf \geq 2-\alpha$.
\end{thm}
\noindent
{\bf Proof}: (a) Without loss of generality assume $y=0$ and $f(0)=0$.
We consider the case of $q<\alpha$.
As $0<q<1$ and $f(0)=0$ we can write \cite{7}
\begin{eqnarray}
f(x)&=&{d^{-q}\over{d x^{-q}}}{d^qf(x)\over{d x^q}}\nonumber\\
&=&{1\over\Gamma(q)}{\int_0^x{{d^qf(y)\over{dy^q}}\over{(x-y)^{-q+1}}}}dy. \label{comp}
\end{eqnarray}
Now
\begin{eqnarray}
\vert f(x)\vert \leq {1\over\Gamma(q)}{\int_0^x{\vert {d^qf(y)\over{dy^q}}\vert
\over{(x-y)^{-q+1}}}}dy. \nonumber
\end{eqnarray}
As, by assumption, for $q<\alpha$,
\begin{eqnarray}
\lim_{x\rightarrow 0}{d^qf(x)\over{d x^q}}=0,\nonumber
\end{eqnarray}
we have, for any $\epsilon > 0$, a $\delta > 0$ such that
$\vert {d^qf(x)/{d x^q}}\vert < \epsilon$ for all $x< \delta$, so that
\begin{eqnarray}
\vert f(x)\vert &\leq& {\epsilon\over\Gamma(q)}{\int_0^x{dy
\over{(x-y)^{-q+1}}}}\nonumber\\
&=&{\epsilon\over \Gamma(q+1)}x^q.\nonumber
\end{eqnarray}
As a result we have
\begin{eqnarray}
\vert f(x)\vert &\leq& K \vert x\vert ^q, \;\;\;\;{\rm for}\;\;\; x<\delta.\nonumber
\end{eqnarray}
Now this argument can be extended for general $y$ simply by considering
$x-y$ instead of $x$ and $f(x)-f(y)$ instead of $f(x)$. So finally
we get for $q<\alpha$
\begin{eqnarray}
\vert f(x)-f(y)\vert &\leq& K \vert x-y\vert ^q, \;\;\;\;{\rm for}
\;\;\vert x-y \vert < \delta,\label{holder}
\end{eqnarray}
for all $y \in (0,1)$. Hence we have \cite{10}
\begin{eqnarray}
{\rm dim}_Bf(x) \leq 2-\alpha.\nonumber
\end{eqnarray}
b) Now we consider the case $q>\alpha$. If we have
\begin{equation}
\lim_{x_n\rightarrow 0}{d^qf(x_n)\over{dx_n^q}}=\infty, \label{k0}
\end{equation}
then for given $M_1 >0$ and $\delta > 0$ we can find a positive integer $N$ such that $|x_n|<\delta$ and
$ {d^qf(x_n)}/{dx_n^q} \geq M_1$ for all $n>N$. Therefore by Eq. (\ref{comp})
\begin{eqnarray}
f(x_n) &\geq& {M_1\over\Gamma(q)}{\int_0^{x_n}{dy
\over{(x_n-y)^{-q+1}}}}\nonumber\\
&=&{M_1\over \Gamma(q+1)}x_n^q.\nonumber
\end{eqnarray}
If we choose $\delta=x_N$ then we can say that there exists $x<\delta$
such that
\begin{eqnarray}
f(x) \geq k_1 {\delta}^q. \label{k1}
\end{eqnarray}
If we have
\begin{eqnarray}
\lim_{x_n\rightarrow 0}{d^qf(x_n)\over{dx_n^q}}=-\infty, \nonumber
\end{eqnarray}
then for given $M_2 >0$ we can find a positive integer $N$ such that
$ {d^qf(x_n)}/{dx_n^q} \leq -M_2$ for all $n>N$. Therefore
\begin{eqnarray}
f(x_n) &\leq& {-M_2\over\Gamma(q)}{\int_0^{x_n}{dy
\over{(x_n-y)^{-q+1}}}}\nonumber\\
&=&{-M_2\over \Gamma(q+1)}x_n^q.\nonumber
\end{eqnarray}
Again if we write $\delta=x_N$, there exists $x<\delta$ such that
\begin{eqnarray}
f(x) \leq -k_2 {\delta}^q.\label{k2}
\end{eqnarray}
Therefore by (\ref{k1}) and (\ref{k2}) there exists $x<\delta$ such that, for $q>\alpha$,
\begin{eqnarray}
\vert f(x)\vert &\geq& K \delta^q.\nonumber
\end{eqnarray}
Again for any $y \in (0,1)$ there exists $x$ such that
for $q>\alpha$ and $|x-y|<\delta$
\begin{eqnarray}
\vert f(x)-f(y)\vert &\geq& k \delta^q.\nonumber
\end{eqnarray}
Hence we have \cite{10}
\begin{eqnarray}
{\rm dim}_Bf(x) \geq 2-\alpha.\nonumber
\end{eqnarray}
Notice that part (a) of the theorem above generalizes
the statement that $C^1$ functions are locally Lipschitz (hence their
graphs have dimension 1) to the case when the function has only a H\"older type
upper bound (hence the dimension of its graph may exceed one).
Here the function is required to
have the same critical order throughout the interval. We can weaken this
condition slightly. Since we are dealing with a box dimension which
is finitely stable \cite{10}, we can allow a finite number of points having a
different critical order, so that we can divide the set into finitely many parts,
each having the same critical order.
The example of a polynomial of degree $n$ having critical order one and dimension one is
consistent with the above result, as we can divide the graph of the polynomial
into a finite
number of parts such that at each point in every part the critical order is one.
Using the finite stability of the box dimension, the dimension of the whole curve
will be one.
We can also prove a partial converse of the above theorem.
\begin{thm}
Let $f:[0,1]\rightarrow I\!\!R$ be a continuous function.
a) Suppose
\begin{eqnarray}
\vert f(x)- f(y) \vert \leq c\vert x-y \vert ^{\alpha}, \nonumber
\end{eqnarray}
where $c>0$, $0<\alpha <1$ and $|x-y|< \delta$ for some $\delta >0$.
Then
\begin{eqnarray}
\lim_{x\rightarrow y} {d^q(f(x)-f(y)) \over{[d(x-y)]^q}}=0,\;\;\;
{\rm for}\;\; q<\alpha,\;\;
\nonumber
\end{eqnarray}
for all $y\in (0,1)$.
b) Suppose that for each $y\in (0,1)$ and for each $\delta >0$ there exists $x$ such that
$|x-y| \leq \delta $ and
\begin{eqnarray}
\vert f(x)- f(y) \vert \geq c{\delta}^{\alpha}, \nonumber
\end{eqnarray}
where $c>0$, $\delta \leq {\delta}_0$ for some ${\delta}_0 >0$ and $0<\alpha<1$.
Then there exists a sequence $x_n \rightarrow y$ as $n\rightarrow \infty$
such that
\begin{eqnarray}
\lim_{n\rightarrow \infty} {d^q(f(x_n)-f(y)) \over{[d(x_n-y)]^q}}=\pm \infty,\;\;\;
{\rm for}\;\; q>\alpha,\;\;
\nonumber
\end{eqnarray}
for all $y$.
\end{thm}
\noindent
{\bf Proof}:
a) Assume that there exists a sequence $x_n \rightarrow y$ as
$n \rightarrow \infty$ such that
\begin{eqnarray}
\lim_{n\rightarrow \infty} {d^q(f(x_n)-f(y)) \over{[d(x_n-y)]^q}}=\pm \infty,\;\;\;
{\rm for}\;\; q<\alpha\;\;,\nonumber
\end{eqnarray}
for some $y$. Then the arguments between Eq. (\ref{k0}) and Eq. (\ref{k1}) in part (b) of
the previous theorem lead to a contradiction.
Therefore
\begin{eqnarray}
\lim_{x\rightarrow y} {d^q(f(x)-f(y)) \over{[d(x-y)]^q}}={\rm const}\;\;
{\rm or}\;\; 0,\;\;\;
{\rm for}\;\; q<\alpha.
\nonumber
\end{eqnarray}
Now if
\begin{eqnarray}
\lim_{x\rightarrow y} {d^q(f(x)-f(y)) \over{[d(x-y)]^q}}={\rm const},\;\;\;
{\rm for}\;\; q<\alpha,\;\;
\nonumber
\end{eqnarray}
then we can write
\begin{eqnarray}
{d^q(f(x)-f(y)) \over{[d(x-y)]^q}}=K+\eta(x,y),\;\;\;
\nonumber
\end{eqnarray}
where $K={\rm const}$ and $\eta(x,y) \rightarrow 0$
sufficiently fast as $x\rightarrow y$. Now,
taking the derivative of order $\epsilon$ of both sides,
for sufficiently small $\epsilon$, we get
\begin{eqnarray}
{d^{q+\epsilon}(f(x)-f(y)) \over{[d(x-y)]^{q+\epsilon}}}={{K(x-y)^{-\epsilon}}
\over {\Gamma(1-\epsilon)}} + {d^{\epsilon}{\eta(x,y)}\over{[d(x-y)]^{\epsilon}}}
\;\;\;{\rm for}\;\; q+\epsilon <\alpha. \nonumber
\end{eqnarray}
As $x\rightarrow y$ the right hand side of the above equation goes
to infinity (the term involving $\eta$ does not matter since $\eta$ goes to 0
sufficiently fast)
which again is a contradiction. Hence the proof.
b) The proof follows by the method used in section III to show that
the fractional derivative of the Weierstrass
function of order greater than $2-s$ does not exist.
These two theorems give an equivalence between the H\"older exponent and the critical
order of fractional differentiability.
\section{Local Fractional Derivative as a tool to study pointwise regularity of functions}
Motivation for studying pointwise behavior of irregular functions
and its relevance in physical
processes was given in the Introduction.
There are several approaches to
studying the pointwise behavior of functions. Recently wavelet transforms \cite{29,38}
were used for this purpose and have met with some success.
In this section we argue that the LFD is a tool that can be used to characterize
irregular functions and that it has certain advantages over its
counterpart using wavelet transforms, in aspects explained below.
Various authors \cite{27,24} have used the following general definition
of H\"older exponent. The H\"older exponent $\alpha(y)$ of a function $f$
at $y$ is defined as the largest exponent such that there exists a polynomial
$P_n(x)$ of order $n$ that satisfies
\begin{eqnarray}
\vert f(x) - P_n(x-y) \vert = O(\vert x-y \vert^{\alpha}),
\end{eqnarray}
for $x$ in the neighborhood of $y$. This definition is equivalent to
equation (\ref{holder}), for $0<\alpha<1$, the range of interest in this work.
It is clear from theorem I that
LFDs provide an algorithm to calculate H\"older exponents and
dimensions. It may be noted that since there is a clear change
in behavior when order $q$ of the derivative crosses the critical order
of the function
it should be easy to determine the H\"older exponent numerically.
Previous methods using autocorrelations for fractal signals \cite{10}
involve an additional step of finding an autocorrelation.
\subsection{Isolated singularities and masked singularities}
Let us first consider the case of isolated
singularities. We choose the simplest example $f(x)=ax^{\alpha},\;\;\;0<\alpha
<1,\;\;\;x>0$. The critical order at $x=0$ gives the order of
singularity at that point whereas
the value of the LFD $I\!\!D^{q=\alpha}f(0)$, viz.
$a\Gamma(\alpha+1)$, gives the strength of the singularity.
Using LFD we can detect a weaker singularity masked by a stronger singularity.
As demonstrated below, we can estimate and subtract the contribution due to
the stronger singularity from the
function and find out the critical order of the remaining function.
Consider, for example, the function
\begin{eqnarray}
f(x)=ax^{\alpha}+bx^{\beta},\;\;\;\;\;\;0<\alpha <\beta <1,\;\;\;x>0.
\label{masked}
\end{eqnarray}
The LFD of this function at $x=0$ of the order $\alpha$ is
$I\!\!D^{\alpha}f(0)=a\Gamma(\alpha+1)$.
Using this estimate of stronger singularity we now write
$$
G(x;\alpha)=f(x)-f(0)-{I\!\!D^{\alpha}f(0)\over\Gamma(\alpha+1)}x^{\alpha},
$$
whose fractional derivative, for the function $f$ in Eq. (\ref{masked}), is
\begin{eqnarray}
{ {d^q G(x;\alpha) }
\over{d x^q}} = {b\Gamma(\beta+1)\over{\Gamma(\beta-q+1)}}x^{\beta-q}.
\end{eqnarray}
Therefore the critical order of the function $G$, at $x=0$, is $\beta$.
Notice that the estimation of the weaker singularity was possible in the
above calculation just because the LFD gave the coefficient of $x^{\alpha}/
{\Gamma(\alpha+1)}$. This suggests that using LFD, one should be able to extract the secondary singularity spectrum
masked by the primary singularity spectrum of strong singularities. Hence one
can gain more insight into the processes giving rise to irregular
behavior. Also, one may note that this procedure can be used to detect
singularities masked by regular polynomial behavior. In this way one can extend
the present analysis beyond the range $0<\alpha<1$, where $\alpha$ is a H\"older
exponent.
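This two-step unmasking can be sketched numerically (our illustration; the Gr\"unwald-Letnikov discretization and the parameter values $a=2$, $\alpha=0.3$, $b=1$, $\beta=0.7$, together with the evaluation points, are assumptions made for the demonstration):

```python
import math

def gl_deriv(f, q, x, n=4000):
    # Grunwald-Letnikov approximation of the fractional derivative of
    # order q at x, lower limit 0:  h^{-q} * sum_j w_j f(x - j h),
    # with w_0 = 1 and w_j = w_{j-1} * (j - 1 - q) / j.
    h = x / n
    w, total = 1.0, f(x)
    for j in range(1, n + 1):
        w *= (j - 1 - q) / j
        total += w * f(x - j * h)
    return total / h ** q

a, alpha, b, beta = 2.0, 0.3, 1.0, 0.7
f = lambda x: a * x ** alpha + b * x ** beta  # weaker singularity masked

# Step 1: the LFD of order alpha at 0 tends to a*Gamma(alpha+1); dividing
# by Gamma(alpha+1) recovers the strength of the dominant singularity.
a_est = gl_deriv(f, alpha, 1e-4) / math.gamma(alpha + 1)
print(a_est)  # close to a = 2

# Step 2: subtract the leading term; here we use the limiting strength a
# (the finite-x estimate a_est carries an O(x^(beta-alpha)) bias).  The
# log-log slope of the remainder near 0 exposes the weaker exponent beta.
G = lambda x: f(x) - a * x ** alpha
x1, x2 = 1e-3, 1e-4
slope = math.log(G(x1) / G(x2)) / math.log(x1 / x2)
print(slope)  # close to beta = 0.7
```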
A comparison of the two methods of studying pointwise behavior
of functions, one using wavelets and the other using LFD,
shows that characterization of H\"older classes of
functions using LFD is direct and involves fewer assumptions.
The characterization of a H\"older class of functions with
oscillating singularity,
e.g. $f(x)=x^{\alpha}\sin(1/x^{\beta})$ ($x>0$, $0< \alpha <1$ and $\beta>0$),
using wavelets needs two exponents \cite{25}.
Using the LFD, owing to theorems I and II, the critical order
directly gives the H\"older exponent for such a function.
It has
been shown in the context of wavelet transforms that
one can detect singularities masked by regular polynomial
behavior \cite{27} by choosing the analyzing wavelet with
its first $n$ (for suitable $n$)
moments vanishing. If one has to extend the wavelet method
for the unmasking of weaker singularities,
one would then require analyzing wavelets with fractional moments vanishing.
Notice that
one may require this condition along with the condition
on the first $n$ moments. Further the class of functions to be analyzed is in
general restricted in these analyses. These restrictions essentially arise
from the asymptotic properties of the wavelets used.
On the other hand, with the truly
local nature of LFD one does not have to bother about the behavior of functions
outside our range of interest.
\subsection{Treatment of multifractal functions}
Multifractal measures have been the object of many investigations
\cite{32,33,34,35,40}. This
formalism has met with many applications. Its importance also stems
from the fact such measures are natural measures to be used in the
analysis of many phenomenon \cite{36,37}. It may however happen that the object
one wants to understand is a function (e.g., a fractal or multifractal signal)
rather than a set or a measure. For instance one would like to
characterize the velocity of fully developed turbulence. We now proceed
with the analysis of such multifractal functions using LFD.
Since the LFD gives the local
and pointwise behavior of the function, conclusions of theorem I will
carry over even in the case of multifractal functions where we have
different H\"older exponents at different points.
Multifractal functions have been defined by Jaffard \cite{24}
and Benzi et al. \cite{28}.
However, as noted by Benzi et al., their functions are random in nature and
their pointwise behavior
cannot be studied. Since we are dealing with non-random
functions in this paper,
we shall consider a specific (but non-trivial) example of a function
constructed by Jaffard to illustrate the procedure. This function is a
solution $F$ of the functional equation
\begin{eqnarray}
F(x)=\sum_{i=1}^d {\lambda}_iF(S_i^{-1}(x)) + g(x),
\end{eqnarray}
where $S_i$'s are the affine transformations of the kind
$S_i(x)={\mu}_ix+b_i$ (with $\vert \mu_i \vert < 1$ and $b_i$'s real)
and $\lambda_i$'s
are some real numbers and $g$ is any sufficiently smooth function ($g$ and its
derivatives should have a fast decay). For the sake of illustration
we choose ${\mu}_1={\mu}_2=1/3$, $b_1=0$, $b_2=2/3$,
${\lambda}_1=3^{-\alpha}$, ${\lambda}_2=3^{-\beta}$ ($0<\alpha<\beta<1$) and
\begin{eqnarray}
g(x)&=& \sin(2\pi x),\;\;\;\;\;\;{\rm if}\;\;\;\; x\in [0,1],\nonumber\\
&=&0,\;\;\;\;\;\;\;\;\;{\rm otherwise}. \nonumber
\end{eqnarray}
Such functions are studied in detail in Ref. \cite{24} using wavelet transforms
where it has been shown that the above functional equation (with the
parameters we have chosen)
has a unique solution $F$ and at any point
$F$ either has H\"older exponents ranging from
$\alpha$ to $\beta$ or is smooth. A sequence of points $S_{i_1}(0),\;\;$
$\;S_{i_2}S_{i_1}(0),\;\;$
$\cdots,\;\;\; S_{i_n}\cdots S_{i_1}(0), \;\;\cdots$,
where $i_k$ takes values 1 or 2,
tends to a point in $[0,1]$ (in fact to a point of the triadic
Cantor set) and, for the values of
${\mu}_i$'s we have chosen, this correspondence between sequences and limits
is one to one.
The solution of the above functional equation is given by Ref. \cite{24} as
\begin{eqnarray}
F(x)=\sum_{n=0}^{\infty}\;\;\;\sum_{i_1,\cdots,i_n=1}^2{\lambda}_{i_1}\cdots{\lambda}_{i_n}
g(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(x)). \label{soln}
\end{eqnarray}
Note that with the above choice of parameters the inner sum in (\ref{soln})
reduces to a single term. Jaffard \cite{24} has shown that
\begin{eqnarray}
h(y)=\liminf_{n\rightarrow \infty}{{\log{({\lambda}_{{i_1}(y)}\cdots{\lambda}_{{i_n}(y)})}}
\over{\log{({\mu}_{{i_1}(y)}\cdots{\mu}_{{i_n}(y)})}}},
\end{eqnarray}
where $\{i_1(y)\cdots i_n(y)\}$ is the sequence of indices appearing in
the sum in equation (\ref{soln}) at the point $y$;
$h(y)$ is the local H\"older exponent at $y$.
It is clear that $h_{min}=\alpha$ and
$h_{max}=\beta$. At the points of the triadic Cantor set
the function $F$ has $h(x) \in [\alpha , \beta]$,
and at other points it is smooth (as smooth as $g$).
Now
\begin{eqnarray}
{d^q(F(x)-F(y))\over{[d(x-y)]^q}}&=&\sum_{n=0}^{\infty}\;\;\;\sum_{i_1,\cdots,i_n=1}^2{\lambda}_{i_1}\cdots{\lambda}_{i_n}\nonumber\\
&&\;\;\;\;\;\;\;\;\;{d^q[g(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(x))-g(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(y))]
\over{[d(x-y)]^q}}\nonumber\\
&=&\sum_{n=0}^{\infty}\;\;\;\sum_{i_1,\cdots,i_n=1}^2{\lambda}_{i_1}\cdots{\lambda}_{i_n}
({\mu}_{i_1}\cdots{\mu}_{i_n})^{-q} \nonumber\\
&&\;\;\;\;\;\;\;\;\;\;{d^q[g(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(x))-g(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(y))]
\over{[d(S_{i_n}^{-1}\cdots S_{i_1}^{-1}(x-y))]^q}}, \label{fdj}
\end{eqnarray}
provided the RHS is uniformly bounded.
Following the procedure described in section III the fractional
derivative on the RHS can easily be seen to be uniformly bounded and
the series is convergent if $q<\min\{h(x),h(y)\}$.
Further, it vanishes in the limit as $x\rightarrow y$. Therefore, if $q<h(y)$, then $I\!\!D^qF(y)=0$, as
in the case of the Weierstrass function, showing that $h(y)$ is a lower
bound on the critical order.
The procedure of finding an upper bound is technical and lengthy.
It is carried out in the Appendix below.
In this way an intricate analysis of finding the lower bound on the
H\"older exponent has been replaced by a calculation involving a few steps. This
calculation can easily be generalized to more general functions $g(x)$.
Summarizing, the LFD enables one to calculate the local H\"older exponent even
for the case of multifractal functions. This fact, proved in theorems I and II,
is demonstrated with a concrete illustration.
\section{Conclusion}
In this paper we have introduced the notion of a local fractional derivative
using the Riemann-Liouville formulation (or equivalents such as Gr\"unwald's)
of fractional calculus. This definition was found to appear naturally
in the Taylor expansion (with a remainder) of functions and thus is
suitable for approximating scaling functions. In particular
we have pointed out a possibility of replacing the notion of a
tangent as an equivalence class of curves passing through the same point
and having the same derivative with a more general one.
This more general notion is in terms of an equivalence class of curves
passing through the same point and having the same critical order and
the same LFD.
This generalization has the advantage
of being applicable to non-differentiable functions also.
We have established that (for sufficiently large $\lambda$) the critical order of the
Weierstrass function is related to the box dimension of its graph. If the dimension of
the graph of such a function is $1+\gamma$, the critical order is $1-\gamma$. When
$\gamma$ approaches unity the function becomes more and more irregular and local fractional
differentiability is lost accordingly. Thus there is a direct quantitative connection between the
dimension of the graph and the fractional differentiability property of the function.
This is one of the main conclusions of the present work.
A consequence of our result is that a classification of continuous paths
(e.g., fractional Brownian paths) or
functions according to local fractional differentiability properties is also
a classification according to dimensions (or H\"older exponents).
Also the L\'evy index of a L\'evy flight on a one dimensional
lattice is identified as
the critical order of the characteristic function of the walk. More generally,
the L\'evy index of a L\'evy distribution is identified as
the critical order of its characteristic function at the origin.
We have argued and demonstrated that LFDs are useful for studying isolated singularities and singularities masked by a stronger singularity (not just by
regular behavior). We have further shown that the pointwise
behavior of irregular, fractal or multifractal functions can be studied
using the methods of this paper.
We hope that future study in this direction will make random irregular
functions as well as multivariable irregular functions
amenable to analytic treatment, which is badly needed at this
juncture. Work is in progress in this direction.
\section*{Acknowledgments}
We acknowledge helpful discussions with Dr. H. Bhate and Dr. A. Athavale.
One of the authors (KMK) is grateful to CSIR (India) for financial assistance and the other author
(ADG) is grateful to UGC (India) for financial assistance during the initial stages of the work.
\section{Introduction}
\input{intro}
\section{KPZ equation}
\input{kpze}
\section{Ideal MBE}
\input{idmbe}
\section{Ballistic deposition}
\input{bd}
\section{Summary and discussion}
\input{sd}
\noindent
{\Large{\bf Acknowledgment}}
The author gratefully acknowledges useful correspondence with S. Pal,
J. Krug, H.W. Diehl, and D.P. Landau.
\setcounter{section}{0}
\renewcommand{\thesection}{Appendix \Alph{section}:}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
\section{Gaussian theory}
\input{gt}
\section{Perturbation theory}
\input{pt}
\section{Response and correlation functions}
\input{rcf}
\input{mkbibl}
\end{document}
\section{Introduction}
The Standard Model(SM) has provided a remarkably successful description of
almost all available data involving the strong and electroweak interactions.
In particular, the discovery of the top quark at the Tevatron with a
mass{\cite {tev}}, $m_t=175\pm 6$ GeV, close to that anticipated by fits to
precision electroweak data{\cite {blondel}} is indeed a great triumph.
However, we know that new physics beyond the SM must exist for many reasons
particularly those associated with the fermion mass generating process.
Since the top is the most massive
fermion, it is believed by many that the detailed physics of the top quark
may be significantly different from what is predicted by the SM. In this
scenario, the top provides a window into the new physics which lies beyond the
electroweak scale. This suggestion makes precision measurements of all of the
top quark's properties absolutely mandatory and will require the existence of
top quark factories.
One of the most obvious and easily imagined scenarios is one in which
the top's couplings to the SM gauge bosons, {\it i.e.}, the $W$, $Z$, $\gamma$,
and $g$, are altered.
In the case of the electroweak interactions involved in
top pair production in $e^+e^-$ collisions, the lowest dimensional
gauge-invariant operators representing new physics that we can introduce take
the form of dipole moment-type couplings to the $\gamma$ and $Z$.
In the case of strong interactions, the subject of the present work,
the corresponding lowest dimensional operator conserving $CP$
that we can introduce is the anomalous chromomagnetic moment
$\kappa${\cite {big,tgr}}. On the other hand, the corresponding chromoelectric
moment, $\tilde \kappa$, violates $CP$.
In this modified version of QCD for the top quark the $t\bar tg$ interaction
Lagrangian takes the form
\begin{equation}
{\cal L}=g_s\bar t T_a \left( \gamma_\mu+{i\over {2m_t}}\sigma_{\mu\nu}
(\kappa-i\tilde \kappa \gamma_5)q^\nu\right)t G_a^\mu \,,
\end{equation}
where $g_s$ is the strong coupling constant, $m_t$ is the top quark mass,
$T_a$ are the color generators, $G_a^\mu$ is the gluon field and
$q$ is the outgoing gluon momentum. Due to the non-Abelian nature of
QCD, a corresponding four-point $t\bar tgg$ interaction, proportional to
$\kappa$ and/or $\tilde \kappa$, is by necessity also generated.
Perhaps the most obvious place to probe for anomalous top couplings is at
hadron colliders. It is clear that the existence of a non-zero value for
$\kappa$ (and/or $\tilde \kappa$) would lead to a modification in {\it both}
the $gg\rightarrow t\bar t$ and $q\bar q \rightarrow t\bar t$ subprocess cross sections at
these machines. The
general expressions for these parton level cross sections are given in
Atwood {\it et al.}{\cite {big}}. Here we note only that the $q\bar q$ subprocess has
a quadratic $\kappa$ dependence while that for the corresponding $gg$
subprocess has a quartic dependence on $\kappa$. In our discussion of
anomalous top couplings at the
LHC, we will ignore for brevity the possibility of a non-zero $\tilde \kappa$.
Obviously, the observation of the $CP$-violation induced by non-zero
$\tilde \kappa$ is a more
sensitive probe for the anomalous chromoelectric moment of the top than the
kinematic distributions we consider below.
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=newtopkap.ps,height=14cm,width=14cm,angle=-90}}
\vspace*{-1cm}
\caption{\small Cross section for $t\bar t$ production as a function of
$\kappa$ at the Tevatron for
$m_t=175$ GeV. The dotted(dash-dotted) curve is the $q\bar q(gg)$ contribution
and the solid line is their sum. MRSA$'$ parton densities were assumed. The
horizontal dashed bands correspond to the $1\sigma$ world average top pair
cross section obtained by CDF and D0.}
\label{figtev}
\end{figure}
\vspace*{0.4mm}
\section{Effects of Anomalous Couplings}
At the Tevatron, it has been shown{\cite {big}} that for small values of
$|\kappa| \leq 0.25$, a range consistent with the current total cross
section measurements{\cite {tev}} by both CDF and D0, the dominant effect of
anomalous chromomagnetic moment couplings is to modify the total cross
section for top pair production with little
influence on the shape of the various distributions. Figure~\ref{figtev}
compares the $\kappa$-dependent cross section with the world average of that
obtained by the CDF and D0 Collaborations.
The essential reason why the various top quark kinematical distributions are
not much influenced
is that top pair production at the Tevatron is dominated by the invariant
mass region near threshold. Since, as is well known, the effects of
anomalous couplings grow
with the parton center of mass energy one sees little influence at these
energies. The significantly larger partonic center of mass energies accessible
at the LHC allows us to probe beyond this threshold region so that much higher
sensitivities to a possible non-zero $\kappa$ can be obtained. This is
particularly true for the various kinematic distributions.
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=singtop_fign1.ps,height=14cm,width=14cm,angle=-90}}
\vspace*{-1cm}
\caption{\small Cross section for $t\bar t$ production as a function of
$\kappa$ at the LHC for
$m_t=180$ GeV. The dotted(dash-dotted) curve is the $q\bar q(gg)$ contribution
and the solid line is their sum. MRSA$'$ parton densities were assumed.}
\label{figlhc}
\end{figure}
\vspace*{0.4mm}
As a result of the subprocess dependencies on $\kappa$ it is
clear that any of the
(unnormalized!) differential distributions for a generic observable, ${\cal O}$,
can then be written as
\begin{equation}
{d\sigma \over {d{\cal O}}}= \sum_{n=0}^{4} \kappa^n g_n({\cal O})
\end{equation}
where $g_n({\cal O})$ are a set of calculable functions which have been
completely determined to lowest order in QCD by Atwood {\it et al.}{\cite {big}}.
The QCD/SM result is just the familiar term with $n=0$. Of
course, the {\it total} cross section is also a quartic polynomial in
$\kappa$. The behaviour of the two individual contributing subprocesses as
well as the total cross sections under
variations of $\kappa$ at the LHC are shown in Fig.~\ref{figlhc}. Unlike the
Tevatron, the $gg$ initial state dominates the top pair production cross
section at the LHC. A reasonable sensitivity to $\kappa$ is again observed
in the total cross section as it was for the Tevatron. However,
as discussed in Ref.{\cite {big}}, unless the theoretical and systematic
uncertainties are well under control, a measurement of $\sigma_{t\bar t}$ at
the LHC will never do much better than to constrain $|\kappa|\leq 0.10-0.15$.
To further improve on this limit we must turn to the various top quark
kinematical distributions.
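Since the total cross section is a quartic polynomial in $\kappa$, five evaluations determine all of its coefficients exactly. The short Python sketch below illustrates this reconstruction with made-up subprocess polynomials (the numerical coefficients are purely illustrative stand-ins, not the actual lowest-order results of Atwood {\it et al.}):

```python
import numpy as np

# Hypothetical subprocess cross sections (arbitrary units) as polynomials
# in kappa: q qbar -> t tbar is quadratic, g g -> t tbar is quartic.
# The coefficients below are illustrative only.
def sigma_qq(kappa):
    return 2.0 + 1.1 * kappa + 0.9 * kappa**2

def sigma_gg(kappa):
    return 3.5 + 2.0 * kappa + 2.4 * kappa**2 + 0.8 * kappa**3 + 0.3 * kappa**4

def sigma_tot(kappa):
    return sigma_qq(kappa) + sigma_gg(kappa)

# Five sample points fix the quartic uniquely (an exact Vandermonde solve).
kappas = np.array([-0.5, -0.25, 0.0, 0.25, 0.5])
coeffs = np.polyfit(kappas, sigma_tot(kappas), deg=4)   # highest power first
```

Reversing {\tt coeffs} gives the coefficients $c_0,\dots,c_4$ of $\sigma(\kappa)=\sum_n c_n\kappa^n$, the analogue of the functions $g_n$ for the total cross section.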
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=singtop_fign2a.ps,height=9.1cm,width=9.1cm,angle=-90}
\hspace*{-5mm}
\psfig{figure=singtop_fign2b.ps,height=9.1cm,width=9.1cm,angle=-90}}
\vspace*{0.1cm}
\centerline{
\psfig{figure=singtop_fign2c.ps,height=9.1cm,width=9.1cm,angle=-90}
\hspace*{-5mm}
\psfig{figure=singtop_fign2d.ps,height=9.1cm,width=9.1cm,angle=-90}}
\vspace*{-0.5cm}
\caption{\small (a) $t\bar t$ invariant mass distribution at the LHC for
various values of $\kappa$ assuming $m_t=180$ GeV. (b) The same distribution
scaled to the SM result. (c) $t\bar t$ $p_t$ distribution at the LHC and (d)
the same distribution scaled to the SM. In all cases, the SM is represented
by the solid curve whereas the upper(lower) pairs of dotted(dashed,
dash-dotted) curves
corresponds to $\kappa=$0.5(-0.5), 0.25(-0.25), and 0.125(~-0.125),
respectively.}
\label{dislhc}
\end{figure}
\vspace*{0.4mm}
\section{Analysis}
As has been shown elsewhere{\cite {big}}, the $p_t$ and pair invariant
mass ($M_{tt}$) distributions for top quark pair production at the LHC are
highly sensitive to non-zero values of $\kappa$. Figures~\ref{dislhc}a
and ~\ref{dislhc}c show the
modifications in the SM expectations for both $d\sigma/dM_{tt}$ and
$d\sigma/dp_t$, respectively, for different values of $\kappa$.
Perhaps more revealingly, Figures~\ref{dislhc}b and~\ref{dislhc}d show the
ratio of the modified distributions to the corresponding SM ones. We see the
important results that a non-zero $\kappa$ leads to ($i$) enhanced cross
sections at large $p_t$ and $M_{tt}$ and ($ii$) the {\it shapes} of the
distributions are altered, {\it i.e.}, the effect is not just an overall change in
normalization. This is contrary to what was observed in the Tevatron case
where both $d\sigma/dM_{tt}$ and $d\sigma/dp_t$ were essentially just rescaled
by the ratio of the total cross sections. Clearly, data on these two
distributions at the LHC can lead to significant constraints on $\kappa$ or
observe a non-zero effect if $\kappa$ is sufficiently large. In
Ref.~{\cite {big}}, the $\cos\theta^*$ and rapidity ($\eta$) distributions
were also examined but they were found to be less sensitive to non-zero
$\kappa$ than the dramatic effects shown in Fig.~\ref{dislhc}.
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=histmtt.ps,height=9.1cm,width=9.1cm,angle=-90}
\hspace*{-5mm}
\psfig{figure=histpt.ps,height=9.1cm,width=9.1cm,angle=-90}}
\vspace*{-0.1cm}
\caption{Sample histograms of top quark data generated for the LHC, assuming
100 $fb^{-1}$ of integrated luminosity. On the left(right) is the top pair
invariant mass ($p_t$) distribution. MRSA$'$ parton densities and $m_t=175$
GeV have been assumed.}
\label{histlhc}
\end{figure}
\vspace*{0.4mm}
How sensitive are these distributions to non-zero $\kappa$ and what bounds can
be obtained at the LHC?
In order to answer these questions, we follow a Monte Carlo approach. We begin
by generating 100 $fb^{-1}$ `data' samples for both distributions
{\it assuming} the SM is correct. To be specific, since the next to leading
order(NLO) expressions for these distributions in the presence of anomalous
couplings do not yet exist, we use the leading order results rescaled by the
NLO/LO cross section ratios for both subprocesses as effective $K$-factors to
obtain a rough estimate of these higher order effects.
Sample histograms of this appropriately rescaled `data' are shown
in Fig.~\ref{histlhc}. Note that
there are 37 bins in $M_{tt}$ and 22 bins in $p_t$ of varying sizes
essentially covering the entire kinematically allowed ranges. Bin sizes are
adjusted to partially conform to changes in resolution and declining
statistics as we go to larger values of either kinematic variable. In addition
to the usual statistical errors, we attempted to include some estimate of the
systematic point-to-point errors. These were added in quadrature to the
statistical errors. Thus,
neglecting the overall normalization uncertainties, the
error in the number of events($N_i$) in a given bin($i$) was assumed to be
given by
\begin{equation}
\delta N_i= [N_i+aN_i^2]^{1/2}
\end{equation}
with the parameter $a$ setting the {\it a priori} unknown size of the
systematic error. Note that we
have made the simplifying assumption that the magnitude of $a$ is bin
independent. The total
error is thus generally systematics dominated. With these errors the Monte
Carlo generated data
was then fit to the known functional form of the relevant distribution:
\begin{equation}
{d\sigma \over {d{\cal O}}}= f\sum_{n=0}^{4} \kappa^n g_n({\cal O})
\end{equation}
where $f$ allows the overall normalization to float in the fit and the $g_n$
were those appropriate to either the $p_t$ or $M_{tt}$ distributions.
The results of this analysis are thus a set of $95\%$ CL allowed regions in the
$f-\kappa$ plane for
various assumed values of the anticipated size of the systematic errors. These
can be seen in Figure~\ref{reslhc}. Here we see that for systematic errors of
reasonable magnitude the value of $\kappa$ is constrained to lie in the range
$-0.09 \leq \kappa \leq 0.10$ from the $M_{tt}$ distribution and
$-0.06 \leq \kappa \leq 0.06$ from the corresponding $p_t$ distribution. Note
that the correlation between $f$ and $\kappa$ is much stronger in the case of
the $M_{tt}$ distribution. Increasing the integrated luminosity by a factor of
two will not greatly affect our results since the errors are systematics
dominated. Combining the results of multiple distributions in a global fit to
$\kappa$ will most likely result in even stronger bounds.
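The fitting procedure described above can be sketched compactly (Python; the per-bin coefficient shapes $g_n$, the bin count, and the scan ranges are illustrative stand-ins, not the lowest-order QCD functions of Ref.~[1]):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-bin coefficient functions g_n: smooth stand-ins for
# the calculable lowest-order functions, NOT the actual ones.
nbins = 20
xb = np.linspace(0.0, 1.0, nbins)
g = np.array([1.0e4 * np.exp(-3.0 * xb),   # n = 0: the SM term
              4.0e3 * np.exp(-2.0 * xb),
              6.0e3 * np.exp(-1.0 * xb),
              1.0e3 * np.ones(nbins),
              5.0e2 * (1.0 + xb)])

def expected(f, kappa):
    """The fit model of the text: f * sum_n kappa^n g_n."""
    return f * sum(kappa**n * g[n] for n in range(5))

# 'Data' generated assuming the SM (kappa = 0), with the error model
# delta N_i = sqrt(N_i + a N_i^2) of the text; a = 0.05 here.
a_sys = 0.05
truth = expected(1.0, 0.0)
err = np.sqrt(truth + a_sys * truth**2)
data = truth + rng.normal(0.0, err)

def chi2(f, kappa):
    # model-based variance mu + a mu^2 stands in for the data-based one
    mu = expected(f, kappa)
    return float(np.sum((data - mu) ** 2 / (mu + a_sys * mu**2)))

# Coarse grid scan; Delta chi2 < 5.99 marks the 95% CL region (2 params).
fs = np.linspace(0.8, 1.2, 81)
ks = np.linspace(-0.5, 0.5, 101)
grid = np.array([[chi2(f, k) for k in ks] for f in fs])
allowed = grid - grid.min() < 5.99
```

The boolean array {\tt allowed} is the analogue of the $95\%$ CL contours in the $f-\kappa$ plane shown below; enlarging $a$ widens the region.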
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=mttanom.ps,height=9.1cm,width=9.1cm,angle=-90}
\hspace*{-5mm}
\psfig{figure=ptanom.ps,height=9.1cm,width=9.1cm,angle=-90}}
\vspace*{-0.1cm}
\caption{$95\%$ CL two parameter $(f,\kappa)$ fits to the invariant mass(left)
and $p_t$(right) distributions at the LHC for a 175 GeV top quark assuming the
MRSA$'$ parton densities for different assumed values of the systematic errors
parameterized by $a$. From inside out the curves correspond to $a$= 0.03,
0.05, 0.10, 0.15, 0.20, and 0.30, respectively. }
\label{reslhc}
\end{figure}
\vspace*{0.4mm}
\vspace*{-0.5cm}
\noindent
\begin{figure}[htbp]
\centerline{
\psfig{figure=ttnewg.res1ps,height=9.1cm,width=9.1cm,angle=-90}
\hspace*{-5mm}
\psfig{figure=ttnewg.res2ps,height=9.1cm,width=9.1cm,angle=-90}}
\vspace*{-0.1cm}
\caption{$95\%$ CL allowed regions in the $\kappa-\tilde \kappa$ plane
obtained from fitting the gluon spectrum. On the left the fit is for
gluon jets above $E_g^{min}$=25 GeV at a
500 GeV NLC assuming an integrated luminosity of 50(solid) or 100(dotted)
$fb^{-1}$. On the right is the case of a 1 TeV collider with $E_g^{min}$=50
GeV and luminosities of 100(solid) and 200(dotted) $fb^{-1}$. Note that the
allowed region has been significantly compressed downward in comparison to
the 500 GeV case.}
\label{resnlc}
\end{figure}
\vspace*{0.4mm}
Using Figure~\ref{resnlc} we can make a direct comparison of the bounds
obtainable on $\kappa$ at the NLC by using the process
$e^+e^- \rightarrow t\bar tg$ as discussed in Ref.{\cite {tgr}} with those from the
LHC analysis above. In these Figures the influence of $\tilde \kappa$ is
also shown. These NLC results were obtained by fitting the
spectrum of very high
energy gluon jets produced in association with top pairs (above some cut,
$E_g^{min}$, used to avoid contamination from the radiation off final state
$b$-quarks in top decay). Only statistical errors were included in the
analysis. The resulting bounds are essentially statistics limited. We see
from these Figures that the constraints on $\kappa$ from the $\sqrt s$=500 GeV
NLC with an integrated luminosity of 50 $fb^{-1}$ are only slightly better
than what is achievable at the LHC from the top pair's $p_t$ distribution.
The constraints tighten at the 1 TeV NLC. Clearly the LHC and NLC have
comparable sensitivities to the anomalous chromomagnetic moment of the top.
\section{Acknowledgements}
The author would like to thank J. Hewett, A. Kagan, P. Burrows, L. Orr, and
R. Harris for discussions related to this work.
\def\MPL #1 #2 #3 {Mod.~Phys.~Lett.~{\bf#1},\ #2 (#3)}
\def\NPB #1 #2 #3 {Nucl.~Phys.~{\bf#1},\ #2 (#3)}
\def\PLB #1 #2 #3 {Phys.~Lett.~{\bf#1},\ #2 (#3)}
\def\PR #1 #2 #3 {Phys.~Rep.~{\bf#1},\ #2 (#3)}
\def\PRD #1 #2 #3 {Phys.~Rev.~{\bf#1},\ #2 (#3)}
\def\PRL #1 #2 #3 {Phys.~Rev.~Lett.~{\bf#1},\ #2 (#3)}
\def\RMP #1 #2 #3 {Rev.~Mod.~Phys.~{\bf#1},\ #2 (#3)}
\def\ZP #1 #2 #3 {Z.~Phys.~{\bf#1},\ #2 (#3)}
\def\IJMP #1 #2 #3 {Int.~J.~Mod.~Phys.~{\bf#1},\ #2 (#3)}
\def\sect#1{\setcounter{equation}{0}\section{#1}\indent}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\textwidth 159mm
\textheight 220mm
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\newcommand{\vs}[1]{\vspace{#1 mm}}
\newcommand{\hs}[1]{\hspace{#1 mm}}
\renewcommand{\a}{\alpha}
\renewcommand{\b}{\beta}
\renewcommand{\c}{\gamma}
\renewcommand{\d}{\delta}
\renewcommand{\Im}{{\rm Im}\,}
\newcommand{\NP}[1]{Nucl.\ Phys.\ {\bf #1}}
\newcommand{\PL}[1]{Phys.\ Lett.\ {\bf #1}}
\newcommand{\CMP}[1]{Comm.\ Math.\ Phys.\ {\bf #1}}
\newcommand{\PR}[1]{Phys.\ Rev.\ {\bf #1}}
\newcommand{\PRL}[1]{Phys.\ Rev.\ Lett.\ {\bf #1}}
\newcommand{\PTP}[1]{Prog.\ Theor.\ Phys.\ {\bf #1}}
\newcommand{\MPL}[1]{Mod.\ Phys.\ Lett.\ {\bf #1}}
\newcommand{\IJMP}[1]{Int.\ Jour.\ Mod.\ Phys.\ {\bf #1}}
\newcommand{\AJ}[1]{Astrophys.\ J.\ {\bf #1}}
\newcommand{\JMP}[1]{J.\ Math.\ Phys.\ {\bf #1}}
\newcommand{\ZETF}[1]{Zh.\ Eksp.\ Teor.\ Fiz.\ {\bf #1}}
\newcommand{\GRG }[1]{ Gen.\Rel.\ and Grav.\ { \bf #1 } }
\makeatletter
\def\eqnarray{%
\stepcounter{equation}%
\let\@currentlabel=\theequation
\global\@eqnswtrue
\global\@eqcnt\z@
\tabskip\@centering
\let\\=\@eqncr
$$\halign to \displaywidth\bgroup\@eqnsel\hskip\@centering
$\displaystyle\tabskip\z@{##}$&\global\@eqcnt\@ne
\hfil$\displaystyle{{}##{}}$\hfil
&\global\@eqcnt\tw@$\displaystyle\tabskip\z@{##}$\hfil
\tabskip\@centering&\llap{##}\tabskip\z@\cr
\makeatother
\begin{document}
\begin{titlepage}
\setcounter{page}{0}
\begin{flushright}
EPHOU 96-005\\
September 1996\\
\end{flushright}
\vs{6}
\begin{center}
{\Large Periods and Prepotential of N=2 SU(2) Supersymmetric
Yang-Mills
Theory with Massive Hypermultiplets}
\vs{6}
{\large
Takahiro
Masuda
\footnote{e-mail address: masuda@phys.hokudai.ac.jp}
\\ and \\
Hisao Suzuki\footnote{e-mail address: hsuzuki@phys.hokudai.ac.jp}}\\
\vs{6}
{\em Department of Physics, \\
Hokkaido
University \\ Sapporo, Hokkaido 060 Japan} \\
\end{center}
\vs{6}
\centerline{{\bf{Abstract}}}
We derive a simple formula for the periods associated with
the low energy effective action of $N=2$ supersymmetric $SU(2)$
Yang-Mills theory with massive $N_f\le 3$ hypermultiplets.
This is given by evaluating explicitly the integral associated to the
elliptic curve using various identities of hypergeometric functions.
Following this formalism,
we can calculate the prepotential with massive hypermultiplets both in the weak coupling region and in the strong coupling region.
In particular, we show how the Higgs field and its dual field are expressed as generalized
hypergeometric functions when the theory has a conformal point.
\end{titlepage}
\newpage
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\sect{Introduction}
Since Seiberg and Witten discovered how to determine exactly
the low-energy effective
theory of $N=2$ supersymmetric $SU(2)$ Yang-Mills theory by
using the elliptic curve\cite{SW},
many subsequent works have been made on the basis of their analysis
by extending the gauge group and
by introducing matter hypermultiplets\cite{KLTY,APS,HO,Hanany,DS,AS}.
The exact solution for
the prepotential which controls the low energy effective action,
can be obtained from the period
integrals on the elliptic curve. Associated with singularities of
the theories coming from the
massless states, these curves for
various kinds of $N=2$ supersymmetric Yang-Mills theories have been
studied extensively\cite{HO}.
The usual approach to obtaining the periods is to solve
the differential equation which the periods obey, the so-called
Picard-Fuchs equation\cite{CDF,KLT}. When the theory is
$SU(2)$ Yang-Mills with massless $N_f\le 3$ hypermultiplets, this approach works
successfully to solve for the periods\cite{IY} because these theories
have three singular points if we use appropriate
variables. Another, more direct approach
is known to be valid only in these cases\cite{Matone}.
However, when the hypermultiplets are massive, the
situation changes drastically;
additional massless states appear in the theory
and the number of singularities
becomes more than three. Therefore, the Picard-Fuchs equation
can no longer be solved by any special function
and the known solution is a perturbative
solution in the weak coupling region\cite{Ohta}.
In this article we derive a simple formula for the periods
from which we can obtain the prepotential both in the weak coupling region and in the strong coupling region;
we can evaluate the period integral of
holomorphic one-form on the elliptic curve
by using various identities of hypergeometric functions.
As a result, the periods
are represented as
hypergeometric functions whose argument is a function of $u$, $\Lambda$ and the masses,
which can be read off from the form of the elliptic curve. We show that the resulting expression agrees with the results for the massless case\cite{IY} and is also able to handle theories with conformal points.
\
\sect{Period Integrals}
We begin with reviewing some properties of the low-energy effective
action of the $N=2$ supersymmetric $SU(2)$ QCD.
In the $N=1$ superfield formulation\cite{SW},
the theory contains chiral multiplets
$\Phi^a$ and chiral field strength $W^a$ $(a=1,2,3)$ both in the adjoint
representation of $SU(2)$, and chiral superfield $Q^i$ in ${\bf 2}$ and
$\tilde{Q^i}$ $(i=1,\cdots, N_f)$ in $\bar{\bf 2}$ representation of
$SU(2)$. In $N=2$ formulation $Q^i$ and $\tilde{Q^i}$ are called
hypermultiplets. Along the flat direction, the scalar field $\phi$ of $\Phi$
get vacuum expectation values which break $SU(2)$ to $U(1)$, so that
the low-energy effective theory contains $U(1)$ vector multiplets
$(A,W_{\alpha})$, where $A^i$ are $N=1$ chiral superfields and $W_{\alpha}$
are $N=1$ vector superfields. The quantum low-energy effective theory
is characterized by effective Lagrangian $\cal L$ with
the holomorphic function ${\cal F}(A)$ called prepotential,
\begin{eqnarray}
{\cal L}={1\over 4\pi}\Im \left(\int d^2\theta
d^2\bar{\theta} {\partial {\cal F}\over \partial A}\bar{A}
+{1\over 2}\int d^2\theta {\partial^2{\cal F}\over \partial A^2}W_{\alpha}W^{\alpha}
\right).
\end{eqnarray}
The scalar component of $A$ is denoted by $a$, and that of $A_D={\partial {\cal F}
\over \partial A}$, which is dual to $A$,
by $a_D$. The pair $(a_D,a)$ is a section of
$SL(2,{\bf Z})$ and is obtained as the period integrals of
the elliptic curve parameterized by $u,\Lambda$ and $m_i$ $(1\le i\le N_f)$, where
$u=<$tr$\phi^2>$ is a gauge invariant moduli parameter,
$\Lambda$ is a dynamical scale and $m_i$ are bare masses of
hypermultiplets. Once we know $a$ and $a_D$ as a holomorphic
function of $u$, we can calculate the prepotential ${\cal F}(a)$
by using the relation
\begin{eqnarray}
a_D={\partial {\cal F}(a)\over \partial a}.
\end{eqnarray}
General elliptic curves of $SU(2)$ Yang-Mills theories with
massive $N_f \le 3$ hypermultiplets are\cite{HO}
\begin{eqnarray}
y^2=C^2(x)-G(x)
\end{eqnarray}
\begin{tabular}{lll}
$C(x)=x^2-u$,&$G(x)=\Lambda^4$,&$(N_f=0)$\\
$C(x)=x^2-u$,&$G(x)=\Lambda^3(x+m_1)$,&$(N_f=1)$\\
$C(x)=x^2-u+{\Lambda^2\over 8}$,&$G(x)=\Lambda^2(x+m_1)(x+m_2)$,&$(N_f=2)$\\
$C(x)=x^2-u+{\Lambda\over 4}(x+{m_1+m_2+m_3\over 2})$,&$
G(x)=\Lambda(x+m_1)(x+m_2)(x+m_3)$,&$(N_f=3)$\\
\end{tabular}
\
\noindent
These curves are formally denoted by
\begin{eqnarray}
y^2=C^2(x)-G(x)=(x-e_1)(x-e_2)(x-e_3)(x-e_4),
\end{eqnarray}
where $e_1=e_4,\ e_2=e_3$ in the classical limit.
In order to calculate the
prepotential, we consider $a$ and $a_D$ as the integrals of
the meromorphic differential $\lambda$ over two
independent cycles of these curves,
\begin{eqnarray}
a&=&\oint_{\alpha}\lambda, \ \
a_{D}=\oint_{\beta}\lambda,\\
\lambda&=&{x\over 2\pi i}
\hbox{d} \ln\left({C(x)-y\over C(x)+y}\right).
\end{eqnarray}
where the $\alpha$ cycle encloses $e_2$ and $e_3$, the $\beta$ cycle encloses
$e_1$ and $e_3$, and $\lambda$ is related
to the holomorphic one-form as
\begin{eqnarray}
{\partial\lambda\over \partial u}={1\over 2\pi i}{dx\over y}
+d(*).
\end{eqnarray}
Since there are poles coming from the mass parameters in the
integrand of $a$ and $a_D$,
we instead evaluate the period integrals of holomorphic one-form;
\begin{eqnarray}
{\partial a\over \partial u}=\oint_{\alpha}{dx\over y},\ \
{\partial a_D\over \partial u}=\oint_{\beta}{dx\over y}.
\end{eqnarray}
First of all, we consider ${\partial a\over \partial u}$;
\begin{eqnarray}
{\partial a\over \partial u}={\sqrt 2\over 2\pi}
\int^{e_3}_{e_2}{dx\over y}={\sqrt 2\over 2\pi}
\int^{e_3}_{e_2}{dx\over \sqrt{(x-e_1)
(x-e_2)(x-e_3)(x-e_4)}},
\end{eqnarray}
where the normalization is fixed so as to be
compatible with the asymptotic behavior of $a$ and $a_D$
in the weak coupling region
\begin{eqnarray}
a&=&{\sqrt {2u}\over 2}+\cdots,\nonumber \\
a_D&=&i{4-N_f\over 2\pi}a\ln a+\cdots.\label{eq:asym}
\end{eqnarray}
After changing the variable and using the integral representation of the hypergeometric function,
\begin{eqnarray}
F(a,b;c;x)={\Gamma(c)\over \Gamma(b)\Gamma(c-b)}\int_0^1 ds\, s^{b-1}(1-s)^{c-b-1}
(1-sx)^{-a}
\end{eqnarray}
where
\begin{eqnarray}
F(a,b;c;z)=\sum_{n=0}^{\infty}{(a)_n(b)_n\over (c)_n}{z^n\over n!},
\ \ \ \ (a)_n={\Gamma(a+n)\over \Gamma(a)},
\end{eqnarray}
we obtain ${\partial a\over \partial u}$ as
\begin{eqnarray}
{\partial a\over \partial u}={\sqrt 2\over 2} (e_2-e_1)^{-1/2}(e_4-e_3)^{-1/2}
F\left({1\over 2},{1\over 2};1;z\right),\label{eq:a1}
\end{eqnarray}
where
\begin{eqnarray}
z={(e_1-e_4)(e_3-e_2)\over (e_2-e_1)(e_4-e_3)}.
\end{eqnarray}
Similarly we get the following expression for ${\partial a_D\over \partial u}$;
\begin{eqnarray}
{\partial a_{D}\over \partial u}&=&{\sqrt 2\over 2\pi}
\int^{e_3}_{e_1}{dx\over y}\nonumber \\
&=&{\sqrt 2\over 2}
\left[(e_1-e_2)(e_4-e_3)\right]^{-1/2}
F\left({1\over 2},{1\over 2};1;1-z\right).
\label{eq:aD1}
\end{eqnarray}
In this case $a_D$ is obtained as a hypergeometric function around
$z=1$, so we have to do the analytic continuation which gives
the logarithmic asymptotic in the weak coupling region.
Since elliptic curves are not factorized in general,
it is difficult to obtain their roots in a simple form. Even if we know the form of
roots, the variable $z$ in (\ref{eq:a1}) and (\ref{eq:aD1})
is very complicated in terms of $u$ in these
representations.
So we will transform the variable to the symmetric form
with respect to roots, by using the identity of the
hypergeometric functions, so that the
new variable is given easily from the curve directly without
knowing the form of roots.
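Before transforming variables, the representations (\ref{eq:a1}) and (\ref{eq:aD1}) can be checked numerically. The following Python sketch is our own illustration (the point $u=2$, $\Lambda=1$ on the $N_f=0$ moduli space is an arbitrary choice with four real roots); it compares (\ref{eq:a1}) with a direct quadrature of the period integral, computing $F({1\over 2},{1\over 2};1;z)$ for the negative $z$ that arises here via a Pfaff transformation and the Gauss series:

```python
import math

def hyp2f1_series(a, b, c, z, nmax=20000):
    """Gauss series for F(a,b;c;z); converges for |z| < 1."""
    term, total = 1.0, 1.0
    for n in range(nmax):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z
        total += term
        if abs(term) < 1e-16 * abs(total):
            break
    return total

def F_half(z):
    """F(1/2,1/2;1;z) for z < 0, via the Pfaff transformation."""
    return hyp2f1_series(0.5, 0.5, 1.0, z / (z - 1.0)) / math.sqrt(1.0 - z)

def dadu_quadrature(e1, e2, e3, e4, nsteps=4000):
    """(sqrt(2)/2pi) * int_{e2}^{e3} dx/y by Simpson's rule; the map
    x = e2 + (e3 - e2) sin^2(t) removes the endpoint singularities,
    since dx / sqrt((x - e2)(e3 - x)) = 2 dt."""
    h = (math.pi / 2.0) / nsteps
    total = 0.0
    for i in range(nsteps + 1):
        t = i * h
        x = e2 + (e3 - e2) * math.sin(t) ** 2
        f = 2.0 / math.sqrt((x - e1) * (e4 - x))
        w = 1.0 if i in (0, nsteps) else (4.0 if i % 2 else 2.0)
        total += w * f
    return math.sqrt(2.0) / (2.0 * math.pi) * total * h / 3.0

# N_f = 0 curve y^2 = (x^2 - u)^2 - Lambda^4 at u = 2, Lambda = 1
u, Lam = 2.0, 1.0
e1, e2 = -math.sqrt(u + Lam**2), -math.sqrt(u - Lam**2)
e3, e4 = math.sqrt(u - Lam**2), math.sqrt(u + Lam**2)

z = (e1 - e4) * (e3 - e2) / ((e2 - e1) * (e4 - e3))
dadu_hyp = math.sqrt(2.0) / 2.0 * F_half(z) / math.sqrt((e2 - e1) * (e4 - e3))
dadu_num = dadu_quadrature(e1, e2, e3, e4)
```

The two evaluations of ${\partial a\over \partial u}$ agree to numerical precision, as the cycle structure requires.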
\
\sect{Quadratic and cubic transformation}
\subsection{Quadratic transformation}
Before we treat a variety of $SU(2)$ Yang-Mills theory
with hypermultiplets, we consider the case
where the elliptic curve is of the form
\begin{eqnarray}
y^2&=&(x^2+a_1x+b_1)(x^2+a_2x+b_2).\label{eq:curve1}
\end{eqnarray}
There are two possibilities: either $e_1$ and $e_2$ are roots of the
first quadratic polynomial, or $e_1$ and $e_4$ are.
First of all, we consider the former case.
If the variable of the hypergeometric function becomes
symmetric with respect to the pairs $e_1$, $e_2$ and $e_3$, $e_4$, it is quite easy to
read off the variable from the form of this curve. To this end, we
apply the quadratic transformation\cite{HTF}
for hypergeometric functions to (\ref{eq:a1})
\begin{eqnarray}
F\left(2a,2b;a+b+1/2;z\right)=F\left(
a,b;a+b+1/2;4z(1-z)\right),\label{eq:quad1}
\end{eqnarray}
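As a quick numerical sanity check (not part of the original derivation), the transformation (\ref{eq:quad1}) with $a=b=1/4$ can be tested in plain Python. The truncated Gauss-series helper `hyp2f1` below is an illustrative sketch, not a library routine:

```python
# Spot-check of F(2a,2b; a+b+1/2; z) = F(a,b; a+b+1/2; 4z(1-z)) at a=b=1/4.

def hyp2f1(a, b, c, z, terms=200):
    """Truncated Gauss hypergeometric series, adequate for |z| < 1."""
    s = term = 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += term
    return s

z = 0.1
lhs = hyp2f1(0.5, 0.5, 1.0, z)             # F(1/2,1/2;1;z)
rhs = hyp2f1(0.25, 0.25, 1.0, 4*z*(1-z))   # F(1/4,1/4;1;4z(1-z))
assert abs(lhs - rhs) < 1e-12
```

The check holds for any $z$ small enough that both arguments lie well inside the unit disk, which is exactly the weak-coupling regime used below.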
where $a=b=1/4$, so that the new
variable $z'=4z(1-z)$ of hypergeometric function is symmetric with respect to
$e_1$, $e_2$ and $e_3$, $e_4$;
\begin{eqnarray}
z'=4z(1-z)={4(e_1-e_3)(e_2-e_4)(e_1-e_4)(e_3-e_2)\over
(e_2-e_1)^2(e_4-e_3)^2},
\end{eqnarray}
and $z'$ can be easily expressed by $a_1,\ b_1,\ a_2,\ b_2$ as
\begin{eqnarray}
z'=-4{(b_1-b_2)^2-(b_1+b_2)a_1a_2+a_1^2b_2+a_2^2b_1
\over (a_1^2-4b_1)(a_2^2-4b_2)}.
\end{eqnarray}
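As an illustrative check (the root values below are arbitrary, not from the paper), one can confirm numerically that this coefficient expression for $z'$ reproduces $4z(1-z)$ built directly from the roots:

```python
# For a factorized quartic (x^2+a1 x+b1)(x^2+a2 x+b2) with chosen roots,
# the coefficient formula for z' matches 4z(1-z) formed from the roots e_i.
e1, e2, e3, e4 = 0.0, 10.0, 10.1, 0.1      # (e1,e2) roots of the first factor
a1, b1 = -(e1 + e2), e1 * e2
a2, b2 = -(e3 + e4), e3 * e4

z = (e1 - e4) * (e3 - e2) / ((e2 - e1) * (e4 - e3))
zp_roots = 4 * z * (1 - z)

num = (b1 - b2)**2 - (b1 + b2) * a1 * a2 + a1**2 * b2 + a2**2 * b1
zp_coeff = -4 * num / ((a1**2 - 4 * b1) * (a2**2 - 4 * b2))
assert abs(zp_roots - zp_coeff) < 1e-12
```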
Therefore, ${\partial a\over \partial u}$ can be written as
\begin{eqnarray}
{\partial a\over \partial u}={\sqrt 2\over 2}
[(e_2-e_1)(e_4-e_3)]^{-1/2} F\left({1\over 4},{1\over 4},1;z'\right).
\label{eq:a2}
\end{eqnarray}
Similarly for ${\partial a_D\over \partial u} $, after using the analytic continuation and the
quadratic transformation, we get
\begin{eqnarray}
{\partial a_D\over \partial u}={\sqrt 2\over 2}
[(e_1-e_2)(e_4-e_3)]^{-1/2} \left[{6\ln 4\over 2\pi}
F\left({1\over 4},{1\over 4},1;z'\right)-{1\over \pi}
F^{*}\left({1\over 4},{1\over 4},1;z'\right)\right],
\label{eq:aD2}
\end{eqnarray}
where
$F^{*}(\alpha,\beta;1;z)$ is
another independent solution around $z=0$ of the differential equation
which $F(\alpha,\beta;1;z)$ obeys, which is expressed as
\begin{eqnarray}
F^{*}\left(\alpha,\beta;1;z\right)=F(\alpha,&\beta&;1;z)\ln z
\nonumber \\
&+&\sum_{n=1}^{\infty}{(\alpha)_n (\beta)_n\over (n!)^2}
z^n\sum_{r=0}^{n-1}\left[
{1\over \alpha+r}+{1\over \beta+r}-{2\over 1+r}\right].
\end{eqnarray}
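This second solution can be spot-checked numerically: for $\alpha=\beta=1/2$, applying the hypergeometric operator $z(1-z)\,d^2/dz^2+(1-2z)\,d/dz-\alpha\beta$ to $F\ln z+\sum_n f_n h_n z^n$, with $h_n=\sum_{r=0}^{n-1}[1/(\alpha+r)+1/(\beta+r)-2/(1+r)]$ (the standard coefficients for a unit lower parameter), must give zero; the $\ln z$ pieces cancel because $F$ itself solves the equation. A minimal Python sketch, not from the paper:

```python
# Check that Fstar(z) = F(z)*ln(z) + S(z) solves the hypergeometric ODE
# z(1-z) w'' + (1-2z) w' - a*b*w = 0 for a = b = 1/2, c = 1 (truncated series).
a = b = 0.5
N = 60          # truncation order; terms ~ z^N are negligible at z = 0.1
z = 0.1

f = [1.0]       # f[n]: Taylor coefficients of F(a,b;1;z)
for n in range(1, N):
    f.append(f[-1] * (a + n - 1) * (b + n - 1) / n**2)

h = [0.0]       # h[n] = sum_{r=0}^{n-1} [1/(a+r) + 1/(b+r) - 2/(1+r)]
for n in range(1, N):
    r = n - 1
    h.append(h[-1] + 1/(a + r) + 1/(b + r) - 2/(1 + r))

F  = sum(f[n] * z**n for n in range(N))
F1 = sum(n * f[n] * z**(n - 1) for n in range(1, N))
S  = sum(f[n] * h[n] * z**n for n in range(1, N))
S1 = sum(n * f[n] * h[n] * z**(n - 1) for n in range(1, N))
S2 = sum(n * (n - 1) * f[n] * h[n] * z**(n - 2) for n in range(2, N))

# The ln(z) terms drop out since F solves the ODE; the remainder must vanish.
residual = (z*(1 - z)*(2*F1/z - F/z**2) + (1 - 2*z)*F/z
            + z*(1 - z)*S2 + (1 - 2*z)*S1 - a*b*S)
assert abs(residual) < 1e-10
```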
Therefore, we obtain the general expression for ${\partial a\over \partial u}$ and
${\partial a_D\over \partial u}$ in the
weak coupling region valid in the case of the elliptic curve
(\ref{eq:curve1}).
Notice that the quadratic transformation (\ref{eq:quad1}) is valid if
$|z'|\le 1$. The region of the $z$-plane
which satisfies this condition consists of
two parts: one around $z=0$ and one around $z=1$.
The region around $z=0$ corresponds to the
weak coupling region, and the region around $z=1$
corresponds to the strong coupling region where
the monopole condenses. So we can
construct the formula valid in the strong coupling region by
continuing the expressions (\ref{eq:a1}) and (\ref{eq:aD1})
analytically to around $z=1$ and by applying the quadratic
transformation (\ref{eq:quad1}).
Similarly, if we consider the latter case, where $e_1$ and $e_4$ are roots of
the first quadratic polynomial of the curve (\ref{eq:curve1}),
we have to perform the transformation which makes the variable symmetric
with respect to $e_1$, $e_4$ and $e_2$, $e_3$. Thus we use another
quadratic transformation\cite{HTF}
\begin{eqnarray}
F\left(a,b;2b;z\right)=(1-z)^{-a/2}F\left({a\over 2},
b-{a\over 2};b+{1\over 2};{z^2\over 4(z-1)}\right),\label{eq:quad2}
\end{eqnarray}
where $a=1/2$. The new variable $\tilde{z}'=z^2/4(z-1)$
is symmetric with respect to $e_1$, $e_4$ and $e_2$, $e_3$
as follows:
\begin{eqnarray}
\tilde{z}'&=&{z^2\over 4(z-1)}={(e_1-e_4)^2(e_3-e_2)^2\over
4(e_2-e_1)(e_4-e_3)(e_1-e_3)(e_4-e_2)}\nonumber \\
&=&-{(a_1^2-4b_1)(a_2^2-4b_2)\over
4[(b_1-b_2)^2-(b_1+b_2)a_1a_2+a_1^2b_2+a_2^2b_1]}.
\end{eqnarray}
By applying this transformation to (\ref{eq:a1}) and (\ref{eq:aD1}),
we get ${\partial a\over \partial u},\ {\partial a_D\over \partial u}$ as
\begin{eqnarray}
{\partial a\over \partial u}&=&{\sqrt 2\over 2}
[(e_3-e_1)(e_4-e_2)(e_2-e_1)(e_4-e_3)]^{-1/4}
F\left({1\over 4},{1\over 4},1;\tilde{z}'\right),
\label{eq:a3}\\
{\partial a_D\over \partial u}&=&i{\sqrt 2\over 2}
[(e_1-e_3)(e_4-e_2)(e_2-e_1)(e_4-e_3)]^{-1/4} \nonumber \\
& & \hspace{1cm}\times \left[{3\ln 4-i\pi\over 2\pi}
F\left({1\over 4},{1\over 4},1;\tilde{z}'\right)-{1\over 2\pi}
F^{*}\left({1\over 4},{1\over 4},1;\tilde{z}'\right)\right].
\label{eq:aD3}
\end{eqnarray}
In both cases we can read the variable directly from the coefficients
of the curve.
In the next subsection, we generalize the formalism of this subsection
to all kinds of $SU(2)$ Yang-Mills theory
with massive $N_f\le 3$ hypermultiplets.
\subsection{Cubic transformation}
We denote the curve as
\begin{eqnarray}
y^2=x^4+ax^3+bx^2+cx+d.
\end{eqnarray}
In general,
the variable of the hypergeometric function is still very complicated even after the quadratic transformation.
So, in addition to the quadratic transformation, we must subsequently use
the following cubic transformation\cite{HTF}
\begin{eqnarray}
F\left(3a,a+{1\over 6};4a+{2\over 3};z'\right)=
\left(1-{z'\over 4}\right)^{-3a}F\left(a,a+{1\over 3};2a+{5\over 6};
-27 {z'^2\over (z'-4)^3}\right),\label{eq:cub}
\end{eqnarray}
or
\begin{eqnarray}
F\left(3a,{1\over 3}-a;2a+{5\over 6};\tilde{z}'\right)=
(1-4\tilde{z}')^{-3a}F\left(a,a+{1\over 3};2a+{5\over 6};
{27\tilde{z}'\over (4\tilde{z}'-1)^3}\right),
\label{eq:cub2}
\end{eqnarray}
where $a=1/12$,
so that the new variable $z''=-27z'^2/(z'-4)^3=27\tilde{z}'/
(4\tilde{z}'-1)^3$
becomes completely symmetric in the $e_i$.
Notice that $z''$ is represented by coefficients of the elliptic curve
\begin{eqnarray}
z''=-27 {z'^2\over (z'-4)^3}=27 {\tilde{z}'\over (
4\tilde{z}'-1)^3}={27 z^2 (1-z)^2\over 4 (z^2-z+1)^3}
=-{27\Delta\over 4D^3},
\end{eqnarray}
where $\Delta$ is the discriminant of the elliptic curve
\begin{eqnarray}
\Delta&=&\prod_{i<j}(e_i-e_j)^2\nonumber \\
&=&-[27 a^4 d^2+a^3c(4 c^2-18bd)+ac(-18bc^2+80 b^2 d+192 d^2)\nonumber \\
& &\ \ +
a^2(-b^2 c^2+4b^3d+6c^2d-144bd^2)+4b^3c^2+27c^4\\
& & \ \ \ \ -16b^4d-144bc^2d+
128b^2d^2-256d^3],\nonumber
\end{eqnarray}
and $D$ is given by
\begin{eqnarray}
D=-{1\over 2}\left[(e_1-e_2)^2(e_3-e_4)^2+(e_1-e_3)^2(e_2-e_4)^2
+(e_1-e_4)^2(e_2-e_3)^2\right]=-b^2+3ac-12d.
\end{eqnarray}
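These coefficient expressions can be verified numerically; the roots below are an arbitrary illustrative choice. The check confirms both that the polynomial formulas for $\Delta$ and $D$ reproduce the symmetric functions of the roots, and that $-27\Delta/(4D^3)$ agrees with $27z^2(1-z)^2/4(z^2-z+1)^3$:

```python
from itertools import combinations

e = [0.0, 1.0, 2.0, 4.0]                       # arbitrary sample roots
a = -sum(e)
b = sum(x * y for x, y in combinations(e, 2))
c = -sum(x * y * w for x, y, w in combinations(e, 3))
d = e[0] * e[1] * e[2] * e[3]

delta_roots = 1.0                              # prod_{i<j} (e_i - e_j)^2
for x, y in combinations(e, 2):
    delta_roots *= (x - y)**2

delta_coeff = -(27*a**4*d**2 + a**3*c*(4*c**2 - 18*b*d)
                + a*c*(-18*b*c**2 + 80*b**2*d + 192*d**2)
                + a**2*(-b**2*c**2 + 4*b**3*d + 6*c**2*d - 144*b*d**2)
                + 4*b**3*c**2 + 27*c**4 - 16*b**4*d - 144*b*c**2*d
                + 128*b**2*d**2 - 256*d**3)
D = -b**2 + 3*a*c - 12*d

z = (e[0] - e[3])*(e[2] - e[1]) / ((e[1] - e[0])*(e[3] - e[2]))
lhs = -27*delta_coeff / (4*D**3)
rhs = 27*z**2*(1 - z)**2 / (4*(z**2 - z + 1)**3)
assert abs(delta_roots - delta_coeff) < 1e-9
assert abs(lhs - rhs) < 1e-12
```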
Applying (\ref{eq:cub}) to (\ref{eq:a2}) or (\ref{eq:cub2}) to (\ref{eq:a3}),
without knowing precise forms of $e_i$
we obtain a general expression for ${\partial a\over \partial u}$
in the weak coupling region
valid even in the theory with massive $N_f\le 3$ hypermultiplets,
\begin{eqnarray}
{\partial a\over \partial u}={\sqrt 2\over 2}(-D)^{-1/4}F\left(
{1\over 12},{5\over 12};1;-{27\Delta\over 4D^3}\right).\label{eq:a4}
\end{eqnarray}
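The consistency of (\ref{eq:a4}) with the root representation (\ref{eq:a1}) can also be spot-checked numerically, up to overall phase conventions, by comparing absolute values for a root configuration with small $z$ (the numbers below are an illustrative assumption, not from the paper):

```python
# |(e2-e1)^{-1/2}(e4-e3)^{-1/2} F(1/2,1/2;1;z)| should equal
# |(-D)^{-1/4} F(1/12,5/12;1; -27*Delta/(4 D^3))| in the weak coupling regime.

def hyp2f1(a, b, c, z, terms=200):
    s = term = 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += term
    return s

e1, e2, e3, e4 = 0.0, 10.0, 10.1, 0.1   # roots chosen so z is small
a = -(e1 + e2 + e3 + e4)
b = e1*e2 + e1*e3 + e1*e4 + e2*e3 + e2*e4 + e3*e4
c = -(e1*e2*e3 + e1*e2*e4 + e1*e3*e4 + e2*e3*e4)
d = e1 * e2 * e3 * e4

delta = 1.0
for x, y in [(e1,e2),(e1,e3),(e1,e4),(e2,e3),(e2,e4),(e3,e4)]:
    delta *= (x - y)**2
D = -b**2 + 3*a*c - 12*d

z = (e1 - e4)*(e3 - e2) / ((e2 - e1)*(e4 - e3))
side1 = abs(complex((e2 - e1)*(e4 - e3))**-0.5 * hyp2f1(0.5, 0.5, 1.0, z))
side2 = abs((-D)**-0.25 * hyp2f1(1/12, 5/12, 1.0, -27*delta/(4*D**3)))
assert abs(side1 - side2) / side1 < 1e-10
```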
Similarly, after the analytic continuation and
quadratic and cubic transformations,
we obtain an expression for ${\partial
a_D\over \partial u}$ as
\begin{eqnarray}
{\partial a_{D}\over \partial u}=i{\sqrt 2\over 2}(-D)^{-1/4}
\left[ {3\over 2\pi}\ln 12 \,\right. &F&\left({1\over 12},{5\over 12},1,
-{27\Delta\over 4D^3}\right) \nonumber \\
&-&\left. {1\over 2\pi}F^{*}\left(
{1\over 12},{5\over 12};1;-{27\Delta\over 4D^3}\right)\right].
\label{eq:aD4}
\end{eqnarray}
As a consistency check, we consider the asymptotic
behavior in the weak coupling region $u\rightarrow \infty$,
\begin{eqnarray}
\Delta&=&(-1)^{N_f}256u^{N_f+2}\Lambda^{2(4-N_f)}+\cdots, \nonumber \\
D&=&-16u^2+\cdots,\\
-{27\over 4}{\Delta\over D^3}&=&{27(-1)^{N_f}\over 64}\left({\Lambda^2\over
u}\right)^{4-N_f}+\cdots.\nonumber
\end{eqnarray}
Thus we have
\begin{eqnarray}
{\partial a\over \partial u}&=&{\sqrt 2\over 4\sqrt u}+\cdots,
\nonumber \\
{\partial a_D\over \partial u}&=&i{\sqrt 2\over 4\sqrt u}{4-N_f\over 2\pi}
\ln\left({\Lambda^2\over u}\right)+\cdots,
\end{eqnarray}
which is compatible with (\ref{eq:asym}).
The formulas (\ref{eq:a4}) and (\ref{eq:aD4}) are useful in the case where
we cannot obtain any simple expression for the roots, whereas
(\ref{eq:a2}) and (\ref{eq:aD2}), or (\ref{eq:a3}) and (\ref{eq:aD3}), can be
used when we have a factorized form of $y^2$ as in (\ref{eq:curve1}).
Next we consider the periods in the strong coupling region.
The quadratic and cubic transformations are valid if
$|z''|\le 1$. The region of the $z$-plane
which satisfies this condition consists of
three parts: one around $z=0$, one around $z=1$, and the last
around $z=\infty$. The region around $z=0$ corresponds to the
weak coupling region, the region around $z=1$
corresponds to the strong coupling region where the monopole condenses, and $z=\infty$ is the dyonic point. So we can
construct the formula valid in the strong coupling region by
analytic continuation to around $z=1$ or $z=\infty$ and by
using the quadratic and cubic transformations successively.
For example, the formula around the strong coupling region $z=1$ is given by
\begin{eqnarray}
{\partial a\over \partial u}&=&{\sqrt 2\over 2}(-D)^{-1/4}
\left[ {3\over 2\pi}\ln 12 \,\right. F\left({1\over 12},{5\over 12},1,
-{27\Delta\over 4D^3}\right) \nonumber \\
& &\hspace{4cm}-\left. {1\over 2\pi}F^{*}\left(
{1\over 12},{5\over 12};1;-{27\Delta\over 4D^3}\right)\right],\label{eq:a5}\\
{\partial a_D\over \partial u}&=&i{\sqrt 2\over 2}(-D)^{-1/4}F\left(
{1\over 12},{5\over 12};1;-{27\Delta\over 4D^3}\right).\label{eq:aD5}
\end{eqnarray}
The expression (\ref{eq:a4}), (\ref{eq:aD4}) and
(\ref{eq:a5}), (\ref{eq:aD5}) show a manifest duality of the periods.
Notice that the ratio of two period integrals is
the coupling constant of the theory,
\begin{eqnarray}
\tau={\partial^2 {\cal F}\over \partial a^2}={\partial a_D\over \partial a}=
\left.{\partial a_D\over \partial u}\right/{\partial a\over \partial u}=
{iF(1/2,1/2,1,1-z)\over F(1/2,1/2,1,z)}.
\end{eqnarray}
Though $z$ is not invariant under the modular transformation of $\tau$,
the argument $z''=-27\Delta/(4D^3)=27z^2(1-z)^2/(4(z^2-z+1)^3)$ is
completely invariant. As a matter of fact, this variable can be written in terms of Klein's absolute invariant $j(\tau)$ as $z''=1/j(\tau)$. Therefore it is quite
natural to represent the periods in terms of $z''$.
\
\sect{Examples}
In this section we calculate the periods $a$, $a_D$ and the
prepotential for a variety of supersymmetric
$SU(2)$ Yang-Mills theories with massive hypermultiplets as
examples of our formula.
As a consistency check we also consider the massless case.
Moreover, we consider cases where the theory has
conformal points.
\subsection{$N_f=1$ theory}
We
consider the theory with a matter hypermultiplet whose curve is given by
\begin{eqnarray}
y^2=(x^2-u)^2-\Lambda^3(x+m),
\end{eqnarray}
from which $\Delta$ and $D$ are obtained as
\begin{eqnarray}
\Delta&=&-\Lambda^6(256u^3-256u^2m^2-288um\Lambda^3+256m^3\Lambda^3+27\Lambda^6),\\
D&=&-16u^2+12m\Lambda^3.
\end{eqnarray}
Substituting these into (\ref{eq:a4}) and (\ref{eq:aD4}), we
can obtain
$a$ and $a_D$ by expanding (\ref{eq:a4}) and (\ref{eq:aD4}) at $u=\infty$ and
integrating with respect to $u$. Expressing $u$ in terms of
$a$ by inversion, substituting $u$ into $a_D$, and finally integrating $a_D$
with respect to $a$, we can get the prepotential in the
weak coupling region as
\begin{eqnarray}
{\cal F}(\tilde{a})&=&i{\tilde{a}^2\over \pi}\left
[{3\over 4}\ln \left({\tilde{a}^2\over \Lambda^2}\right)+{3\over 4}\left(
-3+4\ln 2-i\pi\right)-{\sqrt 2\pi\over 2i\tilde{a}}
(n'm)\right.\nonumber \\
& & \left.-\ln \left({\tilde{a}\over \Lambda}\right){m^2\over 4\tilde{a}^2}
+\sum_{i=2}^{\infty}{\cal F}_i\tilde{a}^{-2i}\right].
\label{eq:pre2}
\end{eqnarray}
where we introduce $\tilde{a}$, obtained by subtracting the mass residues
from $a$. These ${\cal F}_i$
agree with the perturbative result up to the orders cited
in \cite{Ohta}. In principle we can calculate ${\cal F}_i$
to arbitrary order in our formalism. Quite similarly, we can obtain the prepotential in the strong coupling region.
To compare with the periods in the massless case, where the explicit form is known
from solving the Picard-Fuchs equation\cite{IY}, we start from our
expression for the massless theory,
\begin{eqnarray}
\Delta=-\Lambda^6(256u^3+27\Lambda^6),\
D=-16u^2,\ z''=-{27\over 4}{\Lambda^6(256u^3+27\Lambda^6)\over 16^3u^6}.
\end{eqnarray}
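The substitution claimed below can be checked numerically at arbitrary sample values of $\Lambda$ and $u$ (the numbers are illustrative assumptions):

```python
# With w = -27 L^6/(256 u^3), the massless N_f=1 data satisfy
# z'' = -27*Delta/(4*D^3) = 4w(1-w).
L, u = 1.0, 2.0
delta = -L**6 * (256*u**3 + 27*L**6)
D = -16 * u**2
w = -27 * L**6 / (256 * u**3)
zpp = -27 * delta / (4 * D**3)
assert abs(zpp - 4*w*(1 - w)) < 1e-12
```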
If we set $w=-27\Lambda^6/256u^3$ then $z''=4w(1-w)$; thus, using
the quadratic transformation (\ref{eq:quad1}),
we get the expression for the massless case,
\begin{eqnarray}
{\partial a\over \partial u}&=&{\sqrt 2\over 2}{1\over 2\sqrt u}F\left(
{1\over 6},{5\over 6},1;w\right),\nonumber \\
{\partial a_D\over \partial u}&=&i{\sqrt 2\over 2}{1\over 2\sqrt u}
\left[{3\ln 3+2\ln 4\over 2\pi}F\left({1\over 6},{5\over 6}
,1,w\right)-{1\over 2\pi}F^{*}\left({1\over 6},{5\over 6}
,1,w\right) \right].
\end{eqnarray}
Integrating with respect to $u$, we can
get the expression given by Ito and Yang\cite{IY}.
The expression around the strong coupling region can be obtained from these formulas by analytic continuation to $w=1-{27\Lambda^6\over 256u^3}$.
In general, when $m\ne 0$, because of the
singularities coming from additional massless states\cite{SW},
we cannot represent $a$ and $a_D$ as any
special functions by integrating the
expressions for ${\partial a\over \partial u}$ and ${\partial a_D\over \partial u}$.
However, if the masses take critical values for which the number of
singularities goes down to the same number
as in the massless case, that is three,
$a$ and $a_D$ can be expressed by special functions.
The number of singularities is the number of roots of the
equation $\Delta=0$ plus one, for the singularity at $u=\infty$.
Since $\Delta$ is a third-order polynomial in $u$ in the $N_f=1$ theory,
$\Delta=0$ must have one double root when the mass takes its
critical value.
This condition is satisfied if $m={3\over 4}\Lambda$, for which the parameters of the periods are given by
\begin{eqnarray}
\Delta&=&-\Lambda^6(16u+15\Lambda^2)(4u-3\Lambda^2)^2,\ \
D=-(4u+3\Lambda^2)(4u-3\Lambda^2),\\
z''&=&-{27\over 4}{\Lambda^6(16u+15\Lambda^2)\over (4u+3\Lambda^2)^3(4u-3\Lambda^2)}.
\end{eqnarray}
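That the critical mass produces a double root of $\Delta$ can be confirmed numerically from the generic $N_f=1$ discriminant above (sample values are an illustrative assumption):

```python
# At m = 3L/4 the generic N_f=1 discriminant factors as
# -L^6 (16u + 15 L^2)(4u - 3 L^2)^2, exhibiting the double root.
L, m = 1.0, 0.75
for u in (0.3, 1.0, 2.7):
    generic = -L**6 * (256*u**3 - 256*u**2*m**2 - 288*u*m*L**3
                       + 256*m**3*L**3 + 27*L**6)
    factored = -L**6 * (16*u + 15*L**2) * (4*u - 3*L**2)**2
    assert abs(generic - factored) < 1e-9
```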
Such a factorization of $\Delta$ means that
the theory has a conformal point $u=3\Lambda^2/4$ where the curve becomes
\cite{APSW,EHIY}
\begin{eqnarray}
y^2=\left(x+{\Lambda\over 2}\right)^3\left(x-{3\Lambda\over 2}\right).
\end{eqnarray}
If we set
\begin{eqnarray}
w={27\Lambda^2\over 16u+15\Lambda^2},
\end{eqnarray}
then $z''=-64w^3/((w-9)^3(w-1))$.
In order to obtain $a$ and $a_D$
we need a quartic transformation which makes the variable
simple enough. We can prove the following transformation of fourth order:
\begin{eqnarray}
F\left({1\over 12},{5\over 12},1,-{64w^3\over (w-9)^3(w-1)}\right)=
\left(1-{w\over 9}\right)^{1/4}(1-w)^{1/12}F\left({1\over 3},
{1\over 3},1,w\right).
\end{eqnarray}
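This fourth-order transformation can be spot-checked numerically with a truncated Gauss series; the helper below is an illustrative sketch, not a library call:

```python
# Check F(1/12,5/12;1; -64 w^3/((w-9)^3 (w-1)))
#     = (1 - w/9)^{1/4} (1 - w)^{1/12} F(1/3,1/3;1;w) at a sample point.

def hyp2f1(a, b, c, z, terms=200):
    s = term = 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += term
    return s

w = 0.05
v = -64 * w**3 / ((w - 9)**3 * (w - 1))
lhs = hyp2f1(1/12, 5/12, 1.0, v)
rhs = (1 - w/9)**0.25 * (1 - w)**(1/12) * hyp2f1(1/3, 1/3, 1.0, w)
assert abs(lhs - rhs) < 1e-10
```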
Using this identity and the identity
\begin{eqnarray}
F(a,b;c;z)=(1-z)^{-a}F(a,c-b;c;z/(z-1)),\label{eq:ide}
\end{eqnarray}
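The identity (\ref{eq:ide}) is the standard Pfaff transformation, and a quick numerical check (illustrative parameter values) confirms it:

```python
# Pfaff transformation: F(a,b;c;z) = (1-z)^{-a} F(a, c-b; c; z/(z-1)).

def hyp2f1(a, b, c, z, terms=200):
    s = term = 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += term
    return s

a, b, c, z = 1/3, 2/3, 1.0, 0.3
lhs = hyp2f1(a, b, c, z)
rhs = (1 - z)**(-a) * hyp2f1(a, c - b, c, z/(z - 1))
assert abs(lhs - rhs) < 1e-12
```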
we get ${\partial a\over \partial u},\
{\partial a_D\over \partial u}$
\begin{eqnarray}
{\partial a\over \partial u}&=&\sqrt{2}\left(-27\Lambda^2\right)^{-{1\over 2}}
y^{1/2}F\left({1\over 3},
{2\over 3},1,y\right) \label{eq:paa2}\\
{\partial a_D\over \partial u}&=&i\sqrt{2}\left(-27\Lambda^2\right)^{-{1\over 2}}
y^{1/2}\left[{\left(3\ln 3-i\pi\right)\over 2\pi}
F\left({1\over 3},{2\over 3},1,y\right)\right.\nonumber \\
& &\hspace{7cm}-\left.{3\over 2\pi}F^{*}
\left({1\over 3},{2\over 3},1,y\right)\right],\label{eq:paaD2}
\end{eqnarray}
where
\begin{eqnarray}
y={27\Lambda^2\over -16u+12\Lambda^2}.
\end{eqnarray}
Integrating with respect to $u$,
we get $a$ and $a_D$ in the weak coupling region as
\begin{eqnarray}
a&=&-{i\sqrt 2\over 8}3\sqrt{3}\Lambda
{y^{-{1\over 2}}}_3F_2\left({1\over 3},{2\over 3},-{1\over 2}
;1,{1\over 2};y\right),\label{eq:formula1}\\
a_D&=&+{\sqrt 2\over 8}3\sqrt{3}\Lambda
y^{-{1\over 2}}\left[
{3(3\ln 3-i\pi-2)\over 2\pi}\, _3F_2\left({1\over 3},{2\over 3},-{1\over 2}
;1,{1\over 2};y\right)\right.\nonumber \\
& &\hspace{4cm}
\left.-{3\over 2\pi}\, _3F^{*}_2\left({1\over 3},{2\over 3},-{1\over 2}
;1,{1\over 2};y\right)\right],\label{eq:formula1D}
\end{eqnarray}
where $_3F_2(a,b,c;1,d;y)$ is the generalised hypergeometric function\cite{HTF}
\begin{eqnarray}
_3F_2(a,b,c;1,d;y)
=\sum_{n=0}^{\infty}{(a)_n (b)_n
(c)_n\over (d)_n (n!)^2}y^n,
\end{eqnarray}
and
\begin{eqnarray}
_3F_2^{*}(a,b,c;1,d;y)
&=& _3F_2(a,b,c;1,d;y)\ln y\nonumber \\
&+&\sum_{n=0}^{\infty}{(a)_n (b)_n
(c)_n\over (d)_n (n!)^2}y^n\\
& &\times \sum_{r=0}^{n-1}\left[{1\over a+r}
+{1\over b+r}+{1\over c+r}-
{1\over d+r}-{2\over 1+r}\right],\nonumber
\end{eqnarray}
is the other independent solution around $y=0$ of
the generalized hypergeometric equation\cite{HTF}
\begin{eqnarray}
y^2(1-y){d^3 F\over dy^3}
&+&\{(d+2)y-(3+a+b+c)y^2\}{d^2 F\over dy^2}
\nonumber \\
+&\{&d-(1+a+b+c+ab+bc+ca)y\}{dF\over dy}-abc F=0,
\end{eqnarray}
which $_3F_2(a,b,c;1,d;y)$ obeys. Notice that the Picard-Fuchs equation of
the $N_f=1$ theory reduces to this equation when the theory has the conformal
point. This equation has three regular singularities at $y=0,1,\infty$.
This is the reason why $a$ and $a_D$ can be expressed by these special functions.
In order to obtain the expression around the conformal point $u=3\Lambda^2/4$
from the expression (\ref{eq:formula1}) and (\ref{eq:formula1D}),
we have to perform the analytic continuation from
the weak coupling region. After that, the expressions
for $a$ and $a_D$
contain no logarithmic terms,
\begin{eqnarray}
a&=&-{i\sqrt 2\over 8}3\sqrt{3}\Lambda
y^{-1/2}\left[
{6\over 5}{\Gamma({1\over 3})\over \Gamma({2\over 3})\Gamma({2\over 3})}
(-y)^{-1/3}\, _3F_2\left({1\over 3},{1\over 3},{5\over 6};{2\over 3}
,{11\over 6};{1\over y}\right)
\right. \nonumber \\
& &\ \ +\left.{6\over 7}{\Gamma(-{1\over 3})\over \Gamma({1\over 3})
\Gamma({1\over 3})}
(-y)^{-2/3}\, _3F_2\left({2\over 3},{2\over 3},{7\over 6};{4\over 3}
,{13\over 6};{1\over y}\right)
\right],\\
a_D&=&{\sqrt 3\over 2}
{\sqrt 2\over 8}3\sqrt{3}\Lambda
y^{-1/2}\left[
{6\over 5}{\Gamma({1\over 3})\over \Gamma({2\over 3})\Gamma({2\over 3})}
(-y)^{-1/3}\, _3F_2\left({1\over 3},{1\over 3},{5\over 6};{2\over 3}
,{11\over 6};{1\over y}\right)
\right. \nonumber \\
& &\ \ -\left.{6\over 7}{\Gamma(-{1\over 3})\over \Gamma({1\over 3})
\Gamma({1\over 3})}
(-y)^{-2/3}\, _3F_2\left({2\over 3},{2\over 3},{7\over 6};{4\over 3}
,{13\over 6};{1\over y}\right)
\right],\end{eqnarray}
where $1/y= -16u/27\Lambda^{2} + 4/9$. Thus the coupling constant $\tau$ which is the ratio of ${\partial a_D\over \partial u}$
and ${\partial a \over \partial u}$
has no logarithmic term, and the beta function on this conformal point
vanishes.
Here we pause to discuss an interesting relation
between the moduli space of this theory and the moduli space of
the 2-D $N=2$ superconformal field theory with central charge $c=3$.
Consider the
complex projective space ${\bf P}^2$
with homogeneous coordinates $[x_0,x_1,x_2]$ and define the
hypersurface $X$ by the equation
\begin{eqnarray}
f=x_0^3+x_1^3+x_2^3-3\psi x_0x_1x_2=0.
\end{eqnarray}
The moduli space of the theory with $c=3$ is described
by $\tau$, the ratio of two independent period integrals
of the holomorphic one-form $\Omega$ over cycles on $X$,
\begin{eqnarray}
\tau=\left.\int_{\gamma}\Omega\right/\int_{\gamma'}\Omega.
\end{eqnarray}
It is known that these periods satisfy the Picard-Fuchs equation,
which reduces to
\begin{eqnarray}
\left(z{d\over dz}\right)^2f(z)-z\left(z{d\over dz}+{1\over 3}\right)
\left(z{d\over dz}+{2\over 3}\right)f(z)=0,
\end{eqnarray}
where $z=\psi^{-3}$. This is a hypergeometric differential equation and
$f(z)$ is obtained as a linear combination
of $F(1/3,2/3;1;z)$ and $F^{*}(1/3,2/3;1;z)$.
By comparing this solution to (\ref{eq:paa2}) and
(\ref{eq:paaD2}), we deduce
an identification $\psi^3 =
-16u/27\Lambda^2+ 4/9$, and that
the conformal point $(u=3\Lambda^2/4)$ of 4-D $SU(2)$ $N_f=1$ super QCD
corresponds to the Landau-Ginzburg point $(\psi=0)$ of
2-D SCFT with $c=3$. It seems interesting to use this identification to investigate the theory at the conformal fixed point.
\subsection{$N_f=2$ theory}
We consider the theory with $N_f=2,\ m_1=m_2=m$ whose curve and descriminant are given by
\begin{eqnarray}
y^2&=&\left(x^2-u+{\Lambda^2\over 8}\right)-\Lambda^2(x+m)^2\nonumber \\
&=&\left(x^2-\Lambda x-\Lambda m-u+{\Lambda^2\over 8}\right)
\left(x^2+\Lambda x+\Lambda m-u+{\Lambda^2\over 8}\right) \nonumber \\
\Delta&=&{\Lambda^2\over 16}\left(8u-8m^2-\Lambda^2\right)^2\left(8u+8\Lambda m+\Lambda^2\right)
\left(8u-8\Lambda m+\Lambda^2\right).
\end{eqnarray}
In this case, we can use the formula of section 3.1 because of the factorized form of the curve.
Reading $z'$, $e_2-e_1$, $e_4-e_3$ from the coefficients of the curve,
\begin{eqnarray}
(e_2-e_1)^2&=&4u+4\Lambda m+{\Lambda^2\over 2},\nonumber \\
(e_4-e_3)^2&=&4u-4\Lambda m+{\Lambda^2\over 2}, \\
z'&=&{\Lambda^2 (u-m^2-{\Lambda^2\over 8})\over (u+\Lambda m+{\Lambda^2\over 8})(
u-\Lambda m+{\Lambda^2\over 8})}.\nonumber
\end{eqnarray}
Substituting these into (\ref{eq:a2}) and (\ref{eq:aD2}), we
can obtain $a$ and $a_D$ after expansion around
$u=\infty$ and integration with respect to $u$. The prepotential in the
weak coupling region is
\begin{eqnarray}
{\cal F}(\tilde{a})&=&i{\tilde{a}^2\over \pi}\left
[{1\over 2}\ln \left({\tilde{a}^2\over \Lambda^2}\right)+\left(
-1+{i\pi \over 2}+{5\ln 2\over 2}\right)-{\sqrt{2}\pi\over 2i\tilde{a}}
(n'm)\right.\nonumber \\
& & \left.-\ln \left({\tilde{a}\over \Lambda}\right){m^2\over 2\tilde{a}^2}
+\sum_{i=2}^{\infty}{\cal F}_i\tilde{a}^{-2i}\right].
\label{eq:pre1}\end{eqnarray}
These ${\cal F}_i$ agree with the result up to known orders \cite{Ohta}.
We can also calculate the prepotential in the $m_1\ne m_2$ case.
Let us compare the results for $a$ and $a_D$ in the massless case to the previous results\cite{IY}.
The variable $z'$ can be written in the form $z'=4w(1-w)$ if we set $x=\Lambda^2/8u$ and $w=2x/(x+1)$. Therefore, we transform
$z'$ to $w$ by using the identity (\ref{eq:quad1}) and
$w$ to $x^2$ by using the identity
\begin{eqnarray}
F(a,b;2b;w)=\left(1-{w\over 2}\right)^{-a}
F\left({a\over 2},{1\over 2}+{a\over 2};b+{1\over 2};
{w^2\over (w-2)^2}\right),
\end{eqnarray}
where $a=b=1/2$ and $w^2/(w-2)^2=x^2$. Thus we get the expression for
the massless case;
\begin{eqnarray}
{\partial a\over \partial u}&=&{\sqrt 2\over 2}{1\over 2\sqrt u}F\left(
{1\over 4},{3\over 4},1;x^2\right),\nonumber \\
{\partial a_D\over \partial u}&=&i{\sqrt 2\over 2}{1\over 2\sqrt u}
\left[{3\ln 4\over 2\pi}F\left({1\over 4},{3\over 4}
,1,x^2\right)-{1\over 2\pi}F^{*}\left({1\over 4},{3\over 4}
,1,x^2\right) \right].
\end{eqnarray}
Integrating with respect to $u$, we can recover the previous
results for $a$ and $a_D$ \cite{IY}.
Next we consider the case where
the same factor appears in the denominator and the numerator
of $z'$. This is satisfied if
$m=\Lambda/2$. In this case the theory has a conformal point\cite{APSW,EHIY} at
$u=3\Lambda^2/8$, where the elliptic curve factorizes as
\begin{eqnarray}
y^2=\left(x+{\Lambda\over 2}\right)^3\left(x-{3\Lambda\over 2}\right).
\end{eqnarray}
The main difference between the massless theory and the massive theory is
the existence of this conformal point. The usual massive $N_f=2$ theory has
five singular points, of which two additional ones come
from the two bare mass parameters.
In this subsection we set $m_1=m_2$, so the number
of singular points is four. When the theory has a conformal point,
two of the four singular points coincide, and this point becomes a conformal
point\cite{APSW,EHIY}. On the other hand, in our representation, since
the pole and the zero of the variable $z'$
of the hypergeometric function correspond to the singular points,
the theory has
a conformal point when a pole and a zero of $z'$
coincide.
To obtain $a$ and $a_D$ for this theory,
we substitute $m=\Lambda/2$ into (\ref{eq:a2}) and (\ref{eq:aD2}) and
use the identity (\ref{eq:ide});
we obtain ${\partial a\over \partial u}$ and ${\partial a_D\over \partial u}$ as
\begin{eqnarray}
{\partial a\over \partial u}&=&{\sqrt 2\over 4}(-\Lambda^2)^{-1/2}
y^{1/2}F\left({1\over 4},
{3\over 4},1,y\right)\label{eq:paa} \\
{\partial a_D\over \partial u}&=&i{\sqrt 2\over 4}(-\Lambda^2)^{-1/2}
y^{1/2} \left[{\left(6\ln 4-i\pi\right)\over 2\pi}
F\left({1\over 4},{3\over 4},1,y\right)\right.\\
& &\hspace{7cm}\left.-{1\over \pi}F^{*}
\left({1\over 4},{3\over 4},1,y\right)\right],\label{eq:paaD}
\end{eqnarray}
where
\begin{eqnarray}
y={8\Lambda^2\over -8u+3\Lambda^2}.
\end{eqnarray}
Integrating with respect to $u$, we get $a$ and $a_D$ in the weak coupling
region as
\begin{eqnarray}
a&=&-{\sqrt 2\over 2}(-1)^{1\over 2}\Lambda {y^{-{1\over 2}}}
_3F_2\left({1\over 4},{3\over 4},-{1\over 2};
1,{1\over 2};y\right),\label{eq:aw}\\
a_D&=&-i{\sqrt 2\over 2}(-1)^{1\over 2}\Lambda y^{-{1\over 2}}
\left[{(6\ln 4-i\pi-4)\over 2\pi}
\, _3F_2\left({1\over 4},{3\over 4},-{1\over 2};
1,{1\over 2};y\right)\right.\nonumber \\
& &\hspace{5cm} \left.-{1\over \pi}\,
_3F^{*}_2\left({1\over 4},{3\over 4},-{1\over 2};
1,{1\over 2};y\right)\right].\label{eq:aDw}
\end{eqnarray}
As in the $N_f=1$ theory, after analytic continuation from
the weak coupling region, we obtain $a$ and $a_D$
around the conformal point $u=3\Lambda^2/8$ as follows:
\begin{eqnarray}
a&=&-{\sqrt 2\over 2}(-1)^{1\over 2}\Lambda
y^{-1/2}\left[
{4\over 3}{\Gamma({1\over 2})\over \Gamma({3\over 4})\Gamma({3\over 4})}
(-y)^{-1/4}\, _3F_2\left({1\over 4},{1\over 4},{3\over 4};{1\over 2}
,{7\over 4};{1\over y}\right)
\right. \nonumber \\
& &\ \ +\left.{4\over 5}{\Gamma(-{1\over 2})\over \Gamma({1\over 4})
\Gamma({1\over 4})}
(-y)^{-3/4}\, _3F_2\left({3\over 4},{3\over 4},{5\over 4};{3\over 2}
,{9\over 4};{1\over y}\right)
\right],\\
a_D&=&-i{\sqrt 2\over 2}(-1)^{1\over 2}\Lambda
y^{-1/2}\left[{4\over 3}
{\Gamma({1\over 2})\over \Gamma({3\over 4})\Gamma({3\over 4})}
(-y)^{-1/4}\, _3F_2\left({1\over 4},{1\over 4},{3\over 4}
;{1\over 2},{7\over 4};{1\over y}\right)
\right. \nonumber \\
& &\ \ -\left.{4\over 5}{\Gamma(-{1\over 2})\over \Gamma({1\over 4})
\Gamma({1\over 4})}
(-y)^{-3/4}\, _3F_2\left({3\over 4},{3\over 4},{5\over 4};{3\over 2}
,{9\over 4};{1\over y}\right)
\right].\end{eqnarray}
\subsection{$N_f=3$ theory}
As in the $N_f=1$ theory, we read $\Delta$ and $D$ from the curve,
although they are much
more complicated because of the many bare mass parameters $m_i$.
After substituting these into (\ref
{eq:a4}) and (\ref{eq:aD4}) and proceeding in a similar manner as in the $N_f=1,2$ cases, we
get the prepotential in the weak coupling region as
\begin{eqnarray}
{\cal F}(\tilde{a})&=&{i\tilde{a}^2\over \pi}\left[
{1\over 4}\ln \left({\tilde{a}^2\over \Lambda^2}\right)+
{1\over 4}(9\ln 2-2-\pi i)-{\sqrt 2 \pi\over 4i\tilde{a}}
\sum_{i=1}^{3}n'_im_i\right.\nonumber \\
& & \left. -{1\over 4\tilde{a}^2}\ln\left({\tilde{a}\over \Lambda}\right)
\sum_{i=1}^3 m_i^2+\sum_{i=2}^{\infty}{\cal F}_i\tilde{a}^{-2i}\right].
\label{eq:pre3}
\end{eqnarray}
These ${\cal F}_i$ agree with the result up to known orders \cite{Ohta}.
Let us consider the massless case, where $\Delta$ and
$D$ are given by
\begin{eqnarray}
\Delta&=&-\Lambda^2u^4(-\Lambda^2+256u),\ \ D={-\Lambda^4+256\Lambda^2u-4096u^2\over 256},
\nonumber \\
z''&=&{27(256)^3\Lambda^2u^4(\Lambda^2-256u)\over 4(\Lambda^4-256\Lambda^2u+4096u^2)^3}.
\end{eqnarray}
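As with the $N_f=1$ massless case, the substitution used next can be checked at illustrative sample values of $\Lambda$ and $u$:

```python
# Massless N_f=3: with y = L^2/(256 u) and w = 4y(1-y), the invariant
# z'' = -27*Delta/(4*D^3) equals 27 w/(4w - 1)^3.
L, u = 1.0, 1.0
delta = -L**2 * u**4 * (-L**2 + 256*u)
D = (-L**4 + 256*L**2*u - 4096*u**2) / 256
y = L**2 / (256 * u)
w = 4 * y * (1 - y)
zpp = -27 * delta / (4 * D**3)
assert abs(zpp - 27*w/(4*w - 1)**3) < 1e-9
```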
We set $y=\Lambda^2/256u$ and $w=4y(1-y)$; then $z''=27w/(4w-1)^3$.
Using the cubic transformation (\ref{eq:cub2})
and then the quadratic transformation (\ref{eq:quad1}),
we get the expression for the massless case,
\begin{eqnarray}
{\partial a\over \partial u}&=&{\sqrt 2\over 2}{1\over 2\sqrt u}F\left(
{1\over 2},{1\over 2};1;y\right),\nonumber \\
{\partial a_D\over \partial u}&=&i{\sqrt 2\over 2}{1\over 2\sqrt u}
\left[{2\ln 4-i\pi\over 2\pi}F\left({1\over 2},{1\over 2}
;1;y\right)-{1\over 2\pi}F^{*}\left({1\over 2},{1\over 2}
,1,y\right) \right].
\end{eqnarray}
Integrating with respect to $u$, we can recover the previous
results for $a$ and $a_D$ \cite{IY}. The expression in the strong coupling region can be obtained quite similarly.
As examples of theories with conformal points,
we treat two cases where $\Delta$ factorizes with multiple roots:
one is the theory with $m_1=m_2=m_3=\Lambda/8$\cite{APSW,EHIY} and the other is the
$m_1=m_2=0,\ m_3=\Lambda/16$ case. Of course other possibilities
exist, but we will not consider them, for simplicity.
In the $m_1=m_2=m_3=\Lambda/8$ case, $\Delta=0$ has a fourth-order root:
\begin{eqnarray}
\Delta&=&-{\Lambda^2\over 2^{20}}(32u-\Lambda^2)^4(256u+19\Lambda^2),\ \
D=-{(32u-\Lambda^2)^2\over 64},\\
z''&=&-{27\over 16}{\Lambda^2(256u+19\Lambda^2)\over (32u-\Lambda^2)^2}.
\end{eqnarray}
If we take
\begin{eqnarray}
y={-27\Lambda^2\over 256u-8\Lambda^2},
\end{eqnarray}
then $z''=4y(1-y)$. Using the quadratic transformation (\ref{eq:quad1}),
we get ${\partial a\over \partial u}$ and ${\partial a_D\over \partial u}$ as
\begin{eqnarray}
{\partial a\over \partial u}&=&{\sqrt2 \over 4}\left(-{27\Lambda^2\over 256}\right)^
{-{1\over 2}}y^{1/2}
F\left({1\over 6},
{5\over 6},1,y\right)\\
{\partial a_D\over \partial u}&=&i{\sqrt 2\over 4}
\left(-{27\Lambda^2\over 256}\right)^{-{1\over 2}}
y^{1/2} \left[{\left(3\ln 3+2\ln 4-i\pi\right)\over 2\pi}
F\left({1\over 6},{5\over 6},1,y\right)\right.\\
& &\hspace{6cm}\left.-{1\over 2\pi}F^{*}
\left({1\over 6},{5\over 6},1,y\right)\right].\nonumber
\end{eqnarray}
Integrating with respect to $u$, we get $a$ and $a_D$ in the weak coupling
region as
\begin{eqnarray}
a&=&-{\sqrt2 \over 2}\left(-{27\Lambda^2\over 256}\right)^{1\over 2}
{y^{-{1\over 2}}}
_3F_2\left({1\over 6},{5\over 6},-{1\over 2};
1,{1\over 2};y\right),\\
a_D&=&-i{\sqrt2 \over 2}\left(-{27\Lambda^2\over 256}\right)^{1\over 2}
y^{-{1\over 2}}
\left[{(3\ln 3+2\ln 4-i\pi-4)\over 2\pi}
\, _3F_2\left({1\over 6},{5\over 6},-{1\over 2};
1,{1\over 2};y\right)\right.\\
& &\hspace{5cm}\left.-{1\over 2\pi}\,
_3F^{*}_2\left({1\over 6},{5\over 6},-{1\over 2};
1,{1\over 2};y\right)\right].\nonumber
\end{eqnarray}
By using the analytic continuation from the weak coupling region, we obtain the
expression around the conformal point $u=\Lambda^2/32$ as follows;
\begin{eqnarray}
a&=&-{\sqrt2 \over 2}\left(-{27\Lambda^2\over 256}\right)^{1\over 2}
y^{-1/2}\left[{3\over 2}
{\Gamma({2\over 3})\over \Gamma({5\over 6})\Gamma({5\over 6})}
(-y)^{-1/6}\, _3F_2\left({1\over 6},{1\over 6},{2\over 3};{1\over 3}
,{5\over 3};{1\over y}\right)
\right. \nonumber \\
& &\ \ +\left.{3\over 4}{\Gamma(-{2\over 3})\over \Gamma({1\over 6})
\Gamma({1\over 6})}
(-y)^{-5/6}\, _3F_2\left({5\over 6},{5\over 6},{4\over 3};{5\over 3},
{7\over 3};{1\over y}\right)
\right],\\
a_D&=&-i{\sqrt 3\over 2}
{\sqrt2 \over 2}\left(-{27\Lambda^2\over 256}\right)^{1\over 2}
y^{-1/2}\left[{3\over 2}
{\Gamma({2\over 3})\over \Gamma({5\over 6})\Gamma({5\over 6})}
(-y)^{-1/6}\, _3F_2\left({1\over 6},{1\over 6},{2\over 3};{1\over 3}
,{5\over 3};{1\over y}\right)
\right. \nonumber \\
& &\ \ -\left.{3\over 4}{\Gamma(-{2\over 3})\over \Gamma({1\over 6})
\Gamma({1\over 6})}
(-y)^{-5/6}\, _3F_2\left({5\over 6},{5\over 6},{4\over 3};{5\over 3}
,{7\over 3};{1\over y}\right)
\right].\end{eqnarray}
Next we consider the $m_1=m_2=0,\ m_3=\Lambda/16$ case. In this case
$\Delta=0$ has one triple root and one double root:
\begin{eqnarray}
\Delta&=&{\Lambda^2\over 2^{27}}(\Lambda^2-128u)^3(\Lambda^2+128u)^2,\ \
D=-{(7\Lambda^2-128u)(\Lambda^2-128u)\over 1024},\\
z''&=&54{\Lambda^2(\Lambda^2+128u)^2\over (7\Lambda^2-128u)^3}.
\end{eqnarray}
If we take
\begin{eqnarray}
w={2\Lambda^2\over 128u+\Lambda^2},
\end{eqnarray}
then $z''=27w/(4w-1)^3$. So, using the cubic transformation (\ref{eq:cub2})
and the identity (\ref{eq:ide}),
we obtain ${\partial a\over \partial u}$ and ${\partial a_D\over \partial u}$ as
\begin{eqnarray}
{\partial a\over \partial u}&=&{\sqrt2 \over 4}
\left(-{\Lambda^2\over 64}\right)^{-{1\over 2}}y^{1/2}
F\left({1\over 4},
{3\over 4},1,y\right)\nonumber \\
{\partial a_D\over \partial u}&=&i{\sqrt 2\over 4}
\left(-{\Lambda^2\over 64}\right)^{-{1\over 2}}
y^{1/2} \left[{\left(3\ln 4-i\pi\right)\over 2\pi}
F\left({1\over 4},{3\over 4},1,y\right)\right.\\
& &\hspace{6cm}\left.-{1\over 2\pi}F^{*}
\left({1\over 4},{3\over 4},1,y\right)\right],\nonumber
\end{eqnarray}
where
\begin{eqnarray}
y={2\Lambda^2\over -128u+\Lambda^2}.
\end{eqnarray}
Integrating with respect to $u$,
we obtain
$a$ and $a_D$ in the weak coupling region as
\begin{eqnarray}
a&=&-{\sqrt 2\over 2}\left(-{\Lambda^2\over 64}\right)^{1\over 2}{y^{-{1\over 2}}}
_3F_2\left({1\over 4},{3\over 4},-{1\over 2};1,{1\over 2};y\right),\\
a_D&=&-i{\sqrt 2\over 2}\left(-{\Lambda^2\over 64}\right)^{1\over 2}y^{-{1\over 2}}
\left[{(3\ln 4-i\pi-4)\over 2\pi}\, _3F_2\left(
{1\over 4},{3\over 4},-{1\over 2};1,{1\over 2};y\right)\right.\\
& &\hspace{4cm}\left.-{1\over 2\pi}\, _3F^{*}_2\left(
{1\over 4},{3\over 4},-{1\over 2};1,{1\over 2};y\right)
\right].\nonumber
\end{eqnarray}
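That integrating the ${}_2F_1$ in $u$ produces $y^{-1/2}\,{}_3F_2$ rests on the termwise antiderivative relation ${d\over dy}\left[y^{-1/2}\,{}_3F_2(a,b,-{1\over2};c,{1\over2};y)\right]=-{1\over2}\,y^{-3/2}\,{}_2F_1(a,b;c;y)$, which follows from $(-{1\over2})_n/({1\over2})_n=-{1\over2}/(n-{1\over2})$. A minimal numerical sketch (truncated series in pure Python, parameters as in the text) checks it:

```python
# Check d/dy [ y^{-1/2} 3F2(a,b,-1/2; c,1/2; y) ] = -1/2 y^{-3/2} 2F1(a,b;c;y)
# for a=1/4, b=3/4, c=1 via a truncated series and a central difference.

def hyp(ups, lows, y, terms=200):
    # truncated generalized hypergeometric series, adequate for |y| < 1
    s, t = 0.0, 1.0
    for n in range(terms):
        s += t
        num = 1.0
        for a in ups:
            num *= a + n
        den = float(n + 1)
        for b in lows:
            den *= b + n
        t *= num / den * y
    return s

a, b, c = 0.25, 0.75, 1.0
f = lambda y: y**-0.5 * hyp([a, b, -0.5], [c, 0.5], y)

y0, h = 0.2, 1e-6
deriv = (f(y0 + h) - f(y0 - h)) / (2*h)   # numerical derivative of y^{-1/2} 3F2
rhs = -0.5 * y0**-1.5 * hyp([a, b], [c], y0)
assert abs(deriv - rhs) < 1e-6
```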
Since this expression is the same as in the $N_f=2$ case, eqs.~(\ref{eq:aw})
and (\ref{eq:aDw}), except for the argument $y$,
we obtain the expression around the conformal point $u=\Lambda^2/128$
by replacing the
argument of (\ref{eq:paa}) and (\ref{eq:paaD}) with $y=2\Lambda^2/(-128u+\Lambda^2)$.
\sect{Summary}
We have derived a formula for the periods of
$N=2$ supersymmetric $SU(2)$ Yang-Mills theory with
massive hypermultiplets both in the weak coupling region and in the
strong coupling region by using the identities of the hypergeometric
functions. We also show how to deal with the theories with conformal
points by using the formula.
This approach to evaluating the period integrals
is useful when the Picard-Fuchs equations cannot be solved
in terms of known special functions.
A similar situation occurs when we consider theories
with higher-rank gauge groups. In that case, we no longer
expect such transformations to exist, and the dual pair of fields
must be evaluated by another method, which will be
reported in a separate paper \cite{MS}.
\newpage
\section{Introduction \label{sec-intro}}
In order to understand the structure of the {\em QCD}
vacuum \cite{shuryak}
one should analyse possible mechanisms for chiral symmetry breaking
and the formation of fermion condensates.
The existence of such correlators can be
understood as the result of
condensation of pairs of particles and holes
and it can have interesting implications
in particle physics and cosmology. For example, a color nonsinglet
condensate may be related to superfluidity and color superconductivity
of cold quark matter at high fermion densities \cite{bl}.
In this respect the results of Deryagin, Grigoriev and Rubakov \cite{dgr}
are of particular importance. Analysing the large $N_c$ limit
of ${\em QCD}$ these authors have shown that the order parameter for
chiral symmetry, the quark condensate $\langle\bar \psi \psi\rangle$,
is at high quark densities inhomogeneous and anisotropic so that,
regarding the order parameter, the ground state of quark matter has
the structure of a standing wave.
Two-dimensional models like the Schwinger model and ${\em QCD}_2$
provide a natural laboratory to test these phenomena since,
although simplified, the basic aspects (chiral symmetry
features, non-trivial topological sectors, etc) are
still present and exact calculations can be in many cases performed.
An analysis of two-dimensional $QED$ at
finite density was originally presented in \cite{fks}-\cite{ara}.
More recently, studies on this theory \cite{Kao}-\cite{hf}
showed that inhomogeneous chiral condensates do exist as a result
of the contribution of non-trivial topological sectors.
Extending our work on $QED_2$ \cite{hf}
we analyse in the present paper vacuum expectation values
of products of local bilinears $\bar \psi(x) \psi(x)$, at finite
density for two-dimensional Quantum Chromodynamics with flavor.
Using a path-integral approach which is very appropriate
to handle non-Abelian gauge theories, we show that the
multipoint chiral condensates exhibit an oscillatory inhomogeneous
behavior depending on a chemical potential matrix.
Our results are exact and, remarkably, go in the same direction
as those revealed in four dimensions using the $1/N_c$ approximation
to ${\em QCD}$ \cite{dgr}.
To study the effect of finite fermion density in ${\em QCD}_2$
a chemical potential may be introduced. Within the path-integral
approach this amounts to considering a classical background
charge distribution in addition to that produced by
topologically non-trivial gauge configurations.
Concerning this last point, it is well-known that in two
space-time dimensions the role of
instantons is played by vortices. In the Abelian case, these vortices
are identified with the Nielsen-Olesen solutions
of the spontaneously broken Abelian Higgs model \cite{NO}.
Also in the non-Abelian case,
regular solutions with topological charge exist
when symmetry breaking is appropriately achieved via Higgs fields
\cite{dVS}-\cite{LMS}.
In both cases the associated fermion zero modes have been
found \cite{JR}-\cite{CL}.
Properties of the vortex solutions and the corresponding
Dirac equation zero-modes are summarized in section 2. We then
describe in sections 3 and 4 how topological effects can be
taken into account within the path-integral formulation
leading to a compact form for the partition function in the
presence of a chemical potential. Our approach,
following ref.
\cite{bc}, starts by decomposing a given gauge field belonging to
the $n^{th}$ topological sector in the form
\begin{equation}
A_\mu(x) = A_\mu^{(n)} + A_\mu^{ext} + a_\mu
\label{pi}
\end{equation}
Here $A_\mu^{(n)}$ is a (classical) fixed gauge
field configuration belonging
to the $n^{th}$ class, $ A_\mu^{ext}$ is the background
charge field taking account of the chemical potential, and $a_\mu$
is the path-integral variable which represents
quantum fluctuations. Both $ A_\mu^{ext}$ and $a_\mu$ belong
to the trivial topological sector and can be then decoupled
by a chiral rotation with the sole evaluation of a Fujikawa jacobian
\cite{Fuj}. This last calculation can be easily performed
since it is to be done in the trivial topological sector.
The complete calculation leading to the minimal non-trivial
correlation functions of fermion bilinears is first presented
for multiflavour $QED_2$ (Section 3) and then extended to multiflavour
$QCD_2$ (Section 4).
In both cases the oscillatory behavior of
correlators as a function of the chemical
potential is computed, the result showing a striking resemblance
with the $QCD_4$ answer obtained within the large $N_c$ approximation
\cite{dgr}. We summarize our results and conclusions
in section 5.
\section{Zero Modes}
Topological gauge field configurations and
the corresponding zero-modes of the Dirac equation play a central
role in calculations involving
fermion composites. We summarize
in this section the main properties of vortices, the relevant topological
objects in the model we shall consider,
both for the Abelian and non-Abelian cases. We also
present the corresponding Dirac operator zero-modes.
\subsection{The Abelian case}
In two-dimensional Euclidean space-time, topologically non-trivial
configurations are available since Nielsen and
Olesen \cite{NO}
presented their static $z$-independent vortex. In the $U(1)$
case the topological charge for such a configuration,
working in an arbitrary compact surface
(like a sphere or a torus) is defined as
\begin{equation}
\frac{1}{4\pi}\int d^2x \, \epsilon_{\mu \nu} \,
F_{\mu \nu}^{(n)} = n \in Z
\label{q}
\end{equation}
A representative gauge field configuration
carrying topological charge $n$ can be written as
\begin{equation}
A_\mu^{(n)} = n\, \epsilon_{\mu \nu} \frac{x_\nu}{\vert x \vert}
A(\vert x \vert)
\label{v}
\end{equation}
with $A (\vert x \vert)$ a function which can be calculated
numerically
(an exact solution exists under certain conditions on coupling constants,
\cite{dVS}). The appropriate boundary conditions are
\begin{equation}
A(0) = 0 ~~~,~~~
\lim_{\vert x \vert \to \infty} A(\vert x \vert) = -1
\label{c}
\end{equation}
There are $\vert n \vert$ zero-modes associated with the Dirac operator
in the background of an $A_\mu^{(n)}$ configuration in a suitable
compactified space-time \cite{bc}. (For the non-compact case see
\cite{JR}). For $n>0$ ($n<0$) they correspond
to right-handed (left-handed) solutions $\eta_R$ ($\eta_L$)
which in terms of light-cone
variables $ z = x_0 + i x_1$ and $ \bar z = x_0 - i x_1$
can be written in the form
\begin{equation}
\eta_R^m = \left(\begin{array}{c} z^m h(z,\bar z) \\ 0
\end{array} \right)
\end{equation}
\begin{equation}
\eta_L^m = \left(\begin{array}{c} 0 \\{\bar z}^{-m} h^{-1}(z,\bar z)
\end{array} \right)
\end{equation}
where $m = 0,1, \ldots , \vert n \vert -1$,
\begin{equation}
h(z,\bar z) = \exp[\phi^{(n)}(\vert z\vert )]
\label{f}
\end{equation}
and
\begin{equation}
\frac{d}{d\vert z\vert }\phi^{(n)}(\vert z \vert) = n A (\vert z \vert).
\label{ff}
\end{equation}
\subsection{The non-Abelian case}
As in the Abelian case, two-dimensional gauge field configurations
$A_\mu^{(n)}$ carrying a topological charge $n \in Z_N$ can be found
for the $SU(N)$ case. As explained in ref.\cite{dVS2}
the relevant homotopy group is in this case
$Z_N$ and not $Z$ as in the $U(1)$ case.
Calling $\varphi$ the angle characterizing the direction at infinity, a
mapping $g_n(\varphi) \in SU(N)$ belonging to the $n^{th}$ homotopy
class ($n = 0,1, \ldots, N-1$) satisfies, when one turns around a
closed contour,
\begin{equation}
g_n(2\pi) = \exp(\frac{2\pi i n}{N}) g_n(0)
\label{su1}
\end{equation}
Such a behavior can be achieved just by taking $g_n$ in the
Cartan subgroup of the gauge group. For example, in the $SU(2)$ case
one can take
\begin{equation}
g_n(\varphi) = \exp [\frac{i}{2} \sigma^3 \Omega_n(\varphi)]
\label{222}
\end{equation}
with
\begin{equation}
\Omega_n(2\pi) - \Omega_n(0) = 2\pi (2 k + n)
\label{2c}
\end{equation}
Here $n=0,1$ labels the topological charge and $k \in Z$ is a second
integer which connects the topological charge with the vortex
magnetic flux (only for Abelian vortices do both quantities coincide).
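The center-valued boundary behavior (\ref{su1}) can be checked directly for the $SU(2)$ parametrization (\ref{222}). The sketch below takes the linear profile $\Omega_n(\varphi)=(2k+n)\varphi$, an illustrative choice consistent with the winding condition $\Omega_n(2\pi)-\Omega_n(0)=2\pi(2k+n)$ but not the unique one; since $g_n$ is diagonal, its two entries suffice:

```python
import cmath, math

def g_diag(phi, n, k):
    # diagonal entries of g_n(phi) = exp(i*sigma_3*Omega_n(phi)/2), with the
    # illustrative linear profile Omega_n(phi) = (2k + n)*phi
    w = (2*k + n) * phi / 2
    return cmath.exp(1j*w), cmath.exp(-1j*w)

N = 2  # SU(2)
for n in (0, 1):
    for k in (-2, 0, 3):
        z_n = cmath.exp(2j * math.pi * n / N)  # center element exp(2 pi i n/N)
        end = g_diag(2*math.pi, n, k)
        start = g_diag(0.0, n, k)
        # g_n(2 pi) = exp(2 pi i n/N) g_n(0), independently of k
        assert all(abs(a - z_n*b) < 1e-12 for a, b in zip(end, start))
```

As the check makes explicit, only $n$ (not $k$) determines the center element picked up after a full turn.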
We can then write a gauge
field configuration belonging to the $n^{th}$ topological
sector in the form
\begin{equation}
A_\mu^{(n)} = i A(\vert x\vert )\ g_n^{-1} \partial_\mu g_n
\label{gf}
\end{equation}
with the boundary conditions
\begin{equation}
A(0) = 0 ~~~,~~~
\lim_{\vert x \vert \to \infty} A(\vert x \vert) = -1
\label{cna}
\end{equation}
These and more general vortex configurations have been
thoroughly studied in \cite{LMS}-\cite{dVS2}.
Concerning zero-modes of the Dirac operator in the background of
non-Abelian vortices, they have been analysed in refs.\cite{dV}-\cite{CL}.
The outcome is that for topological charge $n>0$ ($n<0$) there are $Nn$
($N\vert n \vert$)
square-integrable zero modes $\eta_L$ ($\eta_R$) analogous to those
arising in the Abelian case. Indeed, one has
\begin{equation}
\eta_R^{(m,i) j} = \left(\begin{array}{c} z^m h_{ij}(z,\bar z) \\ 0
\end{array} \right)
\label{naz}
\end{equation}
\begin{equation}
\eta_L^{(m,i) j} =
\left(\begin{array}{c} 0 \\{\bar z}^{-m} h_{ij}^{-1}(z,\bar z)
\end{array} \right)
\label{naz2}
\end{equation}
with
\begin{equation}
h(z,\bar z) = \exp[\phi^{(n)}(\vert z\vert) M]
\label{cui}
\end{equation}
and
\begin{equation}
M = \frac{1}{N} {\rm diag} (1,1, \ldots, 1-N)
\label{M}
\end{equation}
Here $i,j= 1,2, \ldots, N$ and $m = 0,1, \ldots, \vert n \vert - 1$.
The pair $(m,i)$ labels the $N\vert n \vert$ different zero modes while
$j$ corresponds to a color index.
Due to the ansatz discussed in refs.\cite{LMS}-\cite{dVS2}
for the non-Abelian vortex, the function $\phi^{(n)}(\vert z\vert)$
appearing in eq.(\ref{cui})
coincides with that arising in eqs.(\ref{f})-(\ref{ff}) for the abelian
vortex.
As it happens in the abelian case, the partition function of two
dimensional Quantum Chromodynamics
only picks a contribution from the trivial sector because
$\det(\not\!\! D[A^{(n)}])=0$ for $n\neq 0$ (see eq.(\ref{z1}) below).
In contrast, various correlation functions become non-trivial precisely
for $n\neq 0$ thanks to the ``absorption'' of zero-mode contributions when
the Grassmann integration is performed.
It is our aim to see how these non-trivial correlators are modified
when a fermion finite density constraint is introduced,
comparing the results with those of the unconstrained
(zero chemical potential) case. As explained in the introduction,
we are motivated by the results of Deryagin, Grigoriev
and Rubakov \cite{dgr} in four dimensional $QCD$. They were able
to show,
in the large $N_c$ and high fermion density limits, the existence of
oscillatory condensates (the frequency given by the
chemical potential) which are spatially inhomogeneous.
For $QED_2$ the same oscillatory behavior was found approximately
in \cite{Kao} and confirmed analytically in
\cite{hf}, by examining an arbitrary number of fermion bilinears
for which the exact $\mu$-dependence of
fermionic correlators was computed. In order to improve our
understanding of the
large $N_c$ results found in $QCD_4$, we shall extend in what follows
our two-dimensional approach to the non-Abelian case but before,
we shall consider the case of
flavored $QED_2$ as a clarifying step towards multiflavor $QCD_2$.
\section{Multiflavour $QED_2$}
We developed in ref.\cite{hf} a path-integral method to compute
fermion composites for Abelian gauge theories including
chemical potential effects. In this section we briefly describe
our approach while extending our treatment so as to include flavour.
We then leave for section 4 the analysis of the non-Abelian
multiflavour $QCD_2$ model at finite density.
\subsection*{(i) Handling the chemical potential in the Abelian case}
We start from the Lagrangian
\begin{equation}
L= -\frac{1}{4e^2} F_{\mu\nu} F_{\mu\nu}+
\bar\psi (i\ds +\not\!\! A -i{\cal M}\gamma_0)\psi
\label{abef}
\end{equation}
\noindent
where $\psi$ is the fermion field isospinor. A
chemical potential term has been included by considering
the diagonal matrix ${\cal M}$ defined as
\begin{equation}
{\cal M}= {\rm diag}( \mu_1, \dots, \mu_{N_f})
\label{Mu}
\end{equation}
where $N_f $ is the total number of flavors and $\mu_{k}$ are
Lagrange multipliers carrying a flavour index,
so that each $k$-fermion number is independently conserved.
The corresponding partition function is defined as
\begin{equation}
Z[\mu_1\dots \mu_{N_f}] = \int {\cal{D}}\bar\psi {\cal{D}} \psi {\cal{D}} A_\mu
\exp (- \int d^2x\ L).
\label{par}
\end{equation}
Since our interest is the computation of fermionic correlators,
we have to carefully
treat non-trivial topological configurations of the gauge fields
which have been seen to be crucial in obtaining
non-vanishing condensates; see refs.\cite{mnt}-\cite{cmst}.
Then, following the approach of refs.\cite{bc}-\cite{cmst}, we
decompose gauge field configurations belonging to the $n^{th}$
topological sector in the form
\begin{equation}
A_{\mu}(x) = { A}_{\mu}^{(n)}(x) + a_{\mu}(x)
\end{equation}
where $A_{\mu}^{(n)}$ is a fixed classical configuration carrying all
the topological charge $n$, and $a_{\mu}$, the path integral variable,
accounts for the quantum ``fluctuations'' and belongs to the trivial
sector $n=0$.
As it is well-known \cite{Actor}, the chemical potential term can be
represented by a vector field $A_\mu^{ext}$ describing
an {\em external} charge density acting on the quantum system.
Indeed, taking $A_\mu^{ext}$ as $i$ times the chemical potential
matrix (see eqs.(\ref{Mu}) and (\ref{achem})) it corresponds
to a uniform charge background for each fermionic flavor.
As explained in \cite{hf}, it is convenient
to first consider a finite length ($2 l$)
box and then take the $l\rightarrow\infty$ limit. In this
way the translation symmetry breaking associated with the
chemical potential becomes apparent and simultaneously,
ambiguities in the definition of the finite density theory
are avoided. When necessary, we shall follow this prescription
(see ref.\cite{fks} for a discussion on this issue).
We start by defining
\begin{equation}
A_{\nu}^{ext}=-i{\cal M}\ \delta_{\nu 0},
\label{achem}
\end{equation}
so that the Dirac operator
\begin{equation}
i\ds+\not\!\! A-i{\cal M}\gamma_0
\end{equation}
can be compactly written as
\begin{equation}
i\ds +\not\!\! A'
\label{a'}
\end{equation}
with
\begin{equation}
A'_\mu = A_\mu + A_{\mu}^{ext}
\label{bol}
\end{equation}
We shall now proceed to a decoupling of fermions from the chemical
potential and the $a_\mu$ fluctuations following the steps
described in \cite{hf} for the case of only one flavor.
In that case, we wrote
\begin{equation}
a_\mu = -\epsilon_{\mu \nu} \partial_\nu \phi+\partial_{\mu}\eta
\label{viejo}
\end{equation}
and made a chiral rotation to decouple both the $\phi-\eta$ fields
together with the chemical potential. In order to include $N_f$
flavors in the analysis, one has
to replace $(\phi,\eta) \rightarrow (\phi,\eta){\bf 1}_f$ and
$\mu \rightarrow {\cal M}$ as we shall see below.
Then, we can straightforwardly apply what we have learnt for one
flavor \cite{hf} in the multiflavor case.
The change of variables accounting for the decoupling
of fermions from the $a_{\mu}$ field together with the
chemical potential is given by
\begin{eqnarray}
& & \psi = \exp[\gamma_5\ (\phi(x){\bf 1}_f+i{\cal M} x_1)+i\eta(x) {\bf 1}_f]\
\chi \nonumber\\
\label{change}\\
& & \bar\psi = \bar\chi\ \exp[{ \gamma_5\ (\phi(x){\bf 1}_f+i{\cal M} x_1)
-i\eta(x){\bf 1}_f}]\nonumber
\end{eqnarray}
\begin{equation}
\not\! a=(-i\ds U)\ U^{-1}
\label{decou}
\end{equation}
where
\begin{equation}
U=\exp[{\gamma_5\ (\phi{\bf 1}_f+i{\cal M} x_1)+i\eta{\bf 1}_f}]
\end{equation}
For notational compactness we have included in
$ \not\!\! a $ the external field $A^{ext}_{\mu}$ describing the chemical
potential term. From here on we choose the Lorentz gauge to work in
(which in our notation corresponds to $\eta=0$).
After transformation (\ref{change}) the resulting Dirac operator
takes the form
\begin{equation}
i\not\!\! D=i\ds+\not\!\! A^{(n)}+\not\! a\ \ \rightarrow\ \ i\ds+\not\!\! A^{(n)}.
\end{equation}
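The algebra behind this decoupling can be checked on a plane wave. With the Euclidean conventions $\gamma_0=\sigma_1$, $\gamma_1=\sigma_2$, $\gamma_5=i\gamma_0\gamma_1$ (a convention choice made here only for illustration), the rotation $\psi=e^{i\mu x_1\gamma_5}\chi$ removes the $-i\mu\gamma_0$ term from the Dirac operator; a minimal sketch for one flavor with constant $\mu$ and $\phi=\eta=0$:

```python
import cmath

I = 1j
mu, k0, k1 = 0.7, 0.31, -0.52   # arbitrary test values
x1 = 1.9                        # evaluation point

def mv(A, v):  # 2x2 matrix acting on a 2-component spinor
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

g0 = [[0, 1], [1, 0]]            # gamma_0 = sigma_1
g1 = [[0, -I], [I, 0]]           # gamma_1 = sigma_2
# gamma_5 = i*gamma_0*gamma_1 = diag(-1, 1), so U = exp(i mu x1 gamma_5) is diagonal
U    = [[cmath.exp(-I*mu*x1), 0], [0, cmath.exp(I*mu*x1)]]
Uinv = [[cmath.exp(I*mu*x1), 0], [0, cmath.exp(-I*mu*x1)]]

# plane wave chi(x) = chi0 exp(i k.x); its common phase is set to 1 at the
# chosen point, derivatives still bring down factors of i*k
chi = [1.0 + 0.3*I, -0.8*I]
psi = mv(U, chi)

# LHS: (i dslash - i mu gamma_0) psi, with the derivatives done analytically:
# d0 psi = i k0 psi ;  d1 psi = i mu gamma_5 U chi + i k1 U chi
g5Uchi = [-psi[0], psi[1]]       # gamma_5 (U chi), gamma_5 = diag(-1, 1)
d0psi = [I*k0*p for p in psi]
d1psi = [I*mu*a + I*k1*b for a, b in zip(g5Uchi, psi)]
lhs = [I*a + I*b - I*mu*c
       for a, b, c in zip(mv(g0, d0psi), mv(g1, d1psi), mv(g0, psi))]

# RHS: U^{-1} (i dslash chi) -- the free operator on the decoupled fermion
idschi = [I*(a + b) for a, b in zip(mv(g0, [I*k0*c for c in chi]),
                                    mv(g1, [I*k1*c for c in chi]))]
rhs = mv(Uinv, idschi)

assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

The cancellation uses only $\gamma_1\gamma_5=-i\gamma_0$ and $\gamma_\mu U=U^{-1}\gamma_\mu$, so it holds pointwise for any $k_\mu$.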
The jacobian associated with the chiral rotation of
the fermion variables can be easily seen to be \cite{hf}
\begin{equation}
J = \exp \left(\frac{tr_f}{2\pi}\int d^2x ~(\phi{\bf 1}_f+i{\cal M} x_1)
\Box (\phi + 2 \phi^{(n)})
\right)
\end{equation}
where $\phi^{(n)}$ is defined by
\[
A_\mu^{(n)} = - \epsilon_{\mu \nu} \partial_\nu \phi^{(n)} {\bf 1}_f
\]
Together with eq.(\ref{change}) we consider the change in the
gauge-field variables $a_\mu$ so that
\begin{equation}
{\cal{D}} a_\mu = \Delta_{FP}\delta(\eta) {\cal{D}}\phi {\cal{D}}\eta
\label{ca}
\end{equation}
with $ \Delta_{FP} = \det \Box$.
As thoroughly analysed by Actor \cite{Actor},
$A_\mu^{ext}$ does not correspond to a pure gauge. Were it a pure gauge,
the introduction of a chemical potential would have no physical
consequences, and this would be the case in any number of
space-time dimensions. In fact, one cannot gauge away $L_{chem}$
by means of a bounded gauge
transformation. As explained in \cite{hf}, the chiral rotation
which decouples the chemical potential,
although unbounded can be properly handled by
putting the system in a spatial
box, then introducing adequate counterterms and
finally taking the infinite volume limit.
After the decoupling transformation, the partition function
can be written in the form
\begin{equation}
Z = {\cal N}\sum_n
\int {\cal{D}} \bar\chi {\cal{D}} \chi {\cal{D}}\phi ~ \exp (- S_{eff}^{(n)})
\label{ef}
\end{equation}
where $ S_{eff}^{(n)}$ is the effective action in the $n^{th}$
topological sector,
\begin{eqnarray}
& & S_{eff}^{(n)} = \int d^2x ~ \bar\chi (i\!\!\not\!\partial +
{\not\!\! A}^{(n)})\chi -
\frac{N_f}{2e^2}\int d^2x \left(
(\Box\phi)^2 + \epsilon_{\mu\nu} { F}_{\mu\nu}^{(n)} \Box
\phi \right ) \nonumber \\*[2 mm]
& & -\frac{N_f}{4e^2} \int d^2x {(F_{\mu \nu}^{(n)})}^2
-\frac{tr_f}{2\pi}\int d^2x ~ (\phi{\bf 1}_f+i{\cal M} x_1) \Box
(\phi + 2 \phi^{(n)}) + S_c
\label{lar}
\end{eqnarray}
The usual divergence associated with the electromagnetic energy
carried by fermions has to be eliminated by a
counterterm $S_c$ \cite{fks}.
In our approach the divergence manifests itself through the term
$i{\cal M} x_1 \Box \phi^{(n)}$ in eq.(\ref{lar}). Putting the
model in a finite length box and appropriately adjusting
$S_c$ yields a finite answer.
The counterterm is the Lagrangian counterpart of the one
usually employed in the Hamiltonian approach to handle this problem
\cite{fks}.
In the canonical formulation of QFT this is equivalent to
a redefinition of creation and annihilation operators which
amounts to a shift in the scale used to measure excitations.
As we have mentioned, a fermionic chemical potential amounts to
introducing a finite {\em external} charge (i.e. at the spatial
boundaries) into the theory. Under these conditions,
it can be proved that massless $QED_2$ (and $QCD_2$) at finite density
remains in the Higgs phase. To show this one may compute the string
tension following for instance the procedure described in ref.\cite{Gsm}:
one starts by integrating out the fermion fields
(or, equivalently, the bosons in the bosonized version)
in order to derive the effective action for the gauge fields. One
can then compute the Wilson loop to calculate the energy of a couple
of (static) external charges for a theory containing also dynamical
`quarks'.
Now, since zero modes kill the contributions of non-trivial
topological sectors to the partition function,
screening can be tested using the effective action in
the trivial topological sector. In fact, one can see
that for vanishing fermion masses the string tension vanishes.
In order to discuss these issues in multiflavour $QED_2$
at finite density, let us note that
after integration of fermions in eq.(\ref{lar}), the
resulting effective action for
the gauge field can be written as \cite{RS}
\begin{equation}
S_{eff} = \int d^2x
(\frac{1}{4e^2} F_{\mu \nu}^2 + \frac{1}{2\pi} a_\mu^2
- tr_f\frac{i{\cal M}}{2 \pi} \int d^2x\ x^1 F_{01}),
\label{13}
\end{equation}
where $S_c$ has cancelled the divergent
term, as explained above. Then, at this stage there is no
divergence to deal with and we can perform our calculation in the whole
Euclidean space.
Choosing the Coulomb gauge $a_1 = 0$, appropriate to derive
the static potential between external charges, we obtain from (\ref{13}),
(after integration by parts using zero boundary conditions) the following
effective Lagrangian
\begin{equation}
{\cal L}_{eff} = \frac{1}{2e^2}(\partial_1 a_0)^2 + \frac{1}{2\pi} a_0^2
- tr_f\frac{i{\cal M}}{2\pi} a_0
\label{15}
\end{equation}
In order to analyze the force between charges, let us pass to
Minkowski space-time, making $a_0 \to i a_0$ so that
the corresponding effective Lagrangian in Minkowski space reads
\begin{equation}
{\cal L}_{M} = -\frac{1}{2e^2}(\partial_1 a_0)^2 - \frac{1}{2\pi} a_0^2
+ tr_f\frac{{\cal M}}{2\pi} a_0
\label{16}
\end{equation}
To determine the electrostatic potential between two external charges
$\pm e'$, we may couple to the gauge field the proper charge
density
\begin{equation}
\rho(x^1) = e'(\delta (x^1+ l) - \delta (x^1 - l))
\label{5}
\end{equation}
so that the complete effective Lagrangian becomes
\begin{equation}
{\cal L} = {\cal L}_M - \rho a_0
\label{6}
\end{equation}
The resulting equation of motion takes the form
\begin{equation}
\partial_1^2 a_0 - \frac{e^2}{\pi} a_0 + \frac{e^2}{2\pi} tr_f\cal{M} =
\rho
\label{17}
\end{equation}
and its solution reads
\begin{equation}
a_0(x^1) = \frac{e'}{2m} ( \exp(-m \vert x^1+l \vert ) -
\exp(-m \vert x^1 - l \vert ) ) +tr_f\cal{M}.
\label{18}
\end{equation}
where $m=e\sqrt{N_f/\pi}$.
The energy of the two test charges, separated by
a distance $2l$, is given by
\begin{equation}
V(l) = \frac{1}{2} \int dx^1 \rho(x^1) a_0(x^1)
\label{9}
\end{equation}
and we obtain, at finite fermion density, the usual screening potential
\begin{equation}
V(l) = \frac{{e'}^2}{2m} (1 - \exp(- 2 m l))
\label{10}
\end{equation}
with no modification due to the presence of the chemical potential
whose contribution trivially cancels.
We then conclude that in massless multiflavour $QED_2$
at finite density, any fractional charge $e'$ is screened
by integer massless charges.
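The cancellation of the chemical-potential constant in $V(l)$ is easy to verify directly from the closed-form solution (\ref{18}) and the definition (\ref{9}). The sketch below (arbitrary test values) evaluates $V(l)$ for two different constant backgrounds and checks both eq.(\ref{10}) and the saturation $V(\infty)={e'}^2/2m$:

```python
import math

def a0(x, m, ep, l, C):
    # eq.(18)-type solution plus a constant background C (standing in for tr_f M)
    return ep/(2*m) * (math.exp(-m*abs(x + l)) - math.exp(-m*abs(x - l))) + C

def V(m, ep, l, C):
    # eq.(9): V = (1/2) int dx rho(x) a0(x), rho = ep*(delta(x+l) - delta(x-l))
    return 0.5 * ep * (a0(-l, m, ep, l, C) - a0(l, m, ep, l, C))

m, ep, l = 1.4, 0.6, 2.0
V_exact = ep**2/(2*m) * (1 - math.exp(-2*m*l))

assert abs(V(m, ep, l, 0.0) - V_exact) < 1e-12          # reproduces eq.(10)
assert abs(V(m, ep, l, 0.0) - V(m, ep, l, 5.7)) < 1e-12  # background C drops out
assert abs(V(m, ep, 50.0, 0.0) - ep**2/(2*m)) < 1e-12    # screening saturates
```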
To get a deeper insight into these results,
let us note that, as is well known, only the trivial topological sector
contributes to the partition function for massless fermions
(the contribution of non-trivial sectors being killed by zero-modes).
Then, deriving the potential between external charges
as we did above or computing the Wilson loop ${\cal W}$ as well,
one finds that screening
is not affected by the presence of the chemical potential term
(the Wilson loop calculation yields ${\cal W} = 1$ \cite{Gsm}).
One could argue that, as it happened with the two test
charges, the external charge background associated
to the chemical potential term is itself also screened by massless
dynamical fermions.
After all it is only in the properties of fermion condensates
that topological sectors enter into play and it is through
its contributions that the chemical potential manifests itself.
One can understand this issue as follows:
the topological structure of the theory is determined at the boundaries
(recall that in order to calculate the
topological charge one just uses $\oint A^{n}_{\mu} dx_{\mu}=2\pi n$),
and the corresponding $n$-charge configurations
are responsible for the non-triviality of the
correlators while not affecting the partition function. It is then precisely
when computing condensates that the charges at the boundaries
associated with the chemical potential manifest themselves.
To rephrase this analysis, note that
with the choice of the counterterm discussed above,
the effective action written in terms of the decoupled fermions
does not depend on the chemical potentials $\mu_k$. This does not mean
that this term has no physical consequences. In fact,
${\cal M}$ reappears when computing correlation functions of fermion
fields, once $\bar \psi$ and $\psi$ are written in terms of
the decoupled fields $\bar \chi$ and $\chi$ through eq.(\ref{trafo}).
We shall see in the following sections how fermionic correlators are
changed, exhibiting oscillatory inhomogeneities in the spatial axes which
depend on ${\cal M}$.
The fact that zero modes make certain v.e.v.'s non-vanishing
leads to a highly non-trivial dependence on the chemical potentials.
\subsection*{(ii) The Correlation Functions}
The introduction of a flavor index implies additional degrees of freedom
which result in $N_f$ independent fermionic field variables. Consequently,
the growing number of Grassmann differentials calls for additional
Fourier coefficients in the integrand.
It is well known that each coefficient
is related to the quantum numbers of the chosen basis, which is normally
built from the eigenfunctions of the Dirac operator. As in
the one-flavor case, $n$ of
these eigenfunctions are zero-modes, implying a vanishing
fermionic exponential. Hence, in order to make the Grassmann integrals
non-trivial, one has to insert several bilinears, their number depending
on the number of zero modes. When the path-integral measure
contains $N_f$ independent fermionic fields instead of one, the
number of composite insertions is multiplied by $N_f$ in order to saturate
the Grassmann integration algebra, with some selection rules which will
become apparent below.
For the sake of brevity let us readily give the result for
general correlation functions of $p$ points with
arbitrary right and left insertions
\begin{equation}
C(w_1,w_2,\ldots) = \langle\prod_{k=1}^{N_f}\prod_{i=1}^{r_k}s_+^k(w^i)
\prod_{j=1'}^{s_k}s_-^k(w^j)\rangle
\label{arbab}
\end{equation}
where
\begin{equation}
s_{\pm}^k(w^i)\equiv \bar\psi^k_{\pm}(w^i) \psi^k_{\pm}(w^i) ~,
\end{equation}
\[ p=\sum_{k=1}^{N_f}p_k \]
and
\[ r_k+s_k=p_k \]
is the total number of insertions in the flavor sector $k$.
After the abelian decoupling, eq.(\ref{arbab}) results in
\begin{eqnarray}
& & C(w_1,w_2,\ldots) = \frac{1}{Z}
\sum_{n=1}^{\infty} \! \int {\cal{D}}\phi\
\exp[\frac{N_f}{2\pi}
\int d^2x\ \phi\Box (\phi+\phi^{(n)})] \times \nonumber\\
& & \exp[-\frac{1}{e^2} \int d^2x\ (\phi+\phi^{(n)})\Box\Box
(\phi+\phi^{(n)})\ ] \times \nonumber\\
& & \exp[ 2\sum_{k=1}^{N_f} (\sum_{i=1}^{r_k}\phi(w^i)
-\sum_{j=1'}^{s_k}\phi(w^j))] \times \nonumber \\
& &
\prod_{k=1}^{N_f} \exp[ 2 i\mu_k( \sum_{i=1}^{r_k} w^i_1
-\sum_{j=1'}^{s_k} w^j_1)] \int {\cal{D}}\bar\chi^k {\cal{D}}\chi^k \prod_{i=1}^{r_k}
\bar\chi^k_+(w^i) \chi^k_+(w^i) \times
\nonumber\\
& &
\prod_{j=1'}^{s_k}\bar\chi^k_-(w^j) \chi^k_-(w^j)
\exp[-\int d^2x\ \bar\chi^k (i\ds+\not\!\! A^{(n)})\chi^k ]
\label{corte}
\end{eqnarray}
where
$w^i_1$ is the space component of $w^i$.
We see from eq.(\ref{corte}) that the chemical potential contribution
is, as expected, completely factorized. Concerning
the bosonic integral, it can be written as
\begin{eqnarray}
& & B= \! \exp[{N_f/2\pi \int d^2x\ \phi^{(n)}\Box \phi^{(n)} }] \
\exp[{ -2\sum_{k=1}^{N_f} (\sum_{i=1}^{r_k}\phi^{(n)}(w^i)
-\sum_{j=1'}^{s_k}\phi^{(n)}(w^j)) }] \nonumber \\
& & \times \exp[{ -2\sum_{k,k'=1}^{N_f} \sum_{i=1}^{p_k}\sum_{j=1}^{p_{k'}}
e_ie_j O^{-1}(w^i,w^j)}]
\label{B}
\end{eqnarray}
with
\[ O^{-1}(w^i,w^j)=K_0(m|w^i-w^j|)+\ln(c|w^i-w^j|). \]
The fermionic path-integral determines the topological sectors
contributing to equation (\ref{arbab}).
More precisely, once the correlator to be computed has
been chosen, Grassman
integration leads to a non-zero answer only when the number of right
insertions minus the number of left insertions is the same
in every flavor sector. This means that
$r_k-s_k=t\ \ \forall k$, where $t$ is the only topological
flux number surviving the sum over sectors in eq.(\ref{corte}).
(Notice that mixed flavor indices in the elementary bilinear
are avoided, i.e.
we are not including flavor-violating vertices,
in accordance with $QED_4$ interactions.)
It is important to stress that each term
explicitly including the classical configuration of the flux sector
cancels out. Consequently, classical configurations
only appear by means of their
global (topological) properties, namely, through the
difference in the number of
right- and left-handed bilinears \cite{bc}.
To conclude, we give the final result for the general correlator
defined in eq.(\ref{arbab}) making use of the explicit form of
Abelian zero modes
\begin{eqnarray}
& & \langle\prod_{k=1}^{N_f}\prod_{i=1}^{r_k}s_+^k(w^i)
\prod_{j=1'}^{s_k}s_-^k(w^j)\rangle=
(-\frac{m e^{\gamma}}{4\pi})^p
\nonumber\\
& &
\exp[ 2 i\sum_{k=1}^{N_f}\mu_k(\sum_{i=1}^{r_k} w^i_1-
\sum_{j=1'}^{s_k} w^j_1)] \prod_{k>k'=1}^{N_f}
\exp [ -4\sum_{i=1}^{p_k}\sum_{j=1}^{p_{k'}} e_i e_j \ln(c|w^i-w^j|)]
\nonumber\\
& &
\exp [ -\sum_{k,k'}^{N_f}\sum_{i=1}^{p_k}\sum_{j=1}^{p_{k'}}
e_i e_j K_0(m|w^i-w^j|)]
\label{arbab2}
\end{eqnarray}
(see Refs.\cite{hf} and \cite{steele} for details).
In order to clearly see the meaning of this expression,
let us show the result for
the simplest non-trivial flavored correlation functions
including mixed right and left handed insertions
\begin{eqnarray}
& & \sum_n\langle\bar\psi^1\psi^1(x)\bar\psi^1\psi^1(y)
\bar\psi^1\psi^1(z)
\bar\psi^2\psi^2(w)\rangle{}_n=\nonumber\\
& & 2\cos[\mu_1(z_1-x_1-y_1)-\mu_2 w_1] \langle s_+^1(x)s_+^1(y)
s_-^1(z) s_+^2(w) \rangle{}_1+
\nonumber\\
& & 2\cos[\mu_1(y_1-x_1-z_1)-\mu_2 w_1] \langle s_+^1(x)s_-^1(y)
s_+^1(z)s_+^2(w) \rangle{}_1+
\nonumber\\
& & 2\cos[\mu_1(x_1-z_1-y_1)-\mu_2 w_1] \langle s_-^1(x)s_+^1(y)
s_+^1(z)s_+^2(w) \rangle{}_1,
\end{eqnarray}
\begin{eqnarray}
& & \sum_n\langle\bar\psi^1\psi^1(x)\bar\psi^1
\psi^1(y)\bar\psi^2\psi^2(z)
\bar\psi^2\psi^2(w)\rangle{}_n=\nonumber\\
& & 2\cos[\mu_1(x_1-y_1)-\mu_2 (z_1-w_1)] \langle s_+^1(x)s_-^1(y)s_-^2(z)
s_+^2(w) \rangle{}_0+
\nonumber\\
& & 2\cos[\mu_1(x_1-y_1)+\mu_2 (z_1-w_1)] \langle s_+^1(x)s_-^1(y)s_+^2(z)
s_-^2(w) \rangle{}_0+
\nonumber\\
& & 2\cos[\mu_1(x_1+y_1)+\mu_2 (z_1+w_1)] \langle s_+^1(x)s_-^1(y)s_-^2(z)
s_+^2(w) \rangle{}_2
\end{eqnarray}
These expressions make apparent: (i) How the topological
structure of the theory exhibits itself through
the existence of non-trivial vacuum
expectation values of fermionic bilinears. (Notice that those on the
right-hand side are the only surviving terms of the whole sum.)
(ii) In the multiflavor case, the path-integrals are non-zero
only when the number of right insertions minus the number of
left insertions is the same in every flavor sector.
(iii) The sum over spatial coordinates dramatically exhibits
the translation symmetry breaking discussed above.
(iv) The fixing of various fermion densities
implies a somewhat richer spatial inhomogeneity of the results
with respect to the one-flavor case analyzed in \cite{hf},
in the sense that now the ``angles'' depend on various chemical
potentials.
(v) Another difference with respect
to the one-flavor case concerns the cancellation of logarithms
coming from bosonic and fermionic integration, respectively.
This cancellation, which occurs for one flavor, no longer
takes place, see eq.(\ref{arbab2}).
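As a purely numerical aside (our own sketch, not part of the derivation), the short-distance interplay between the logarithmic and Bessel terms in eq.(\ref{arbab2}) can be checked against the small-argument expansion $K_0(x)\approx -\ln(x/2)-\gamma$, evaluating $K_0$ through its integral representation $K_0(x)=\int_0^\infty e^{-x\cosh t}\,dt$:

```python
import math

def K0(x, t_max=30.0, n=60000):
    # Modified Bessel function of the second kind, order zero, computed
    # from K0(x) = \int_0^infty exp(-x cosh t) dt by simple trapezoidal
    # quadrature (adequate for moderate x; the integrand decays fast).
    h = t_max / n
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(t_max)))
    for i in range(1, n):
        s += math.exp(-x * math.cosh(i * h))
    return s * h

x = 0.1
gamma = 0.5772156649015329  # Euler-Mascheroni constant
print(K0(x))                     # ~ 2.427
print(-math.log(x / 2) - gamma)  # ~ 2.419, small-x asymptotics of K0
```

For $x=0.1$ the two values agree at the percent level, which is the regime in which the $\ln$ and $K_0$ terms of eq.(\ref{arbab2}) can compensate each other.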
\section{Multiflavour $QCD_2$}
In the present section we consider two dimensional
$SU(N_c)$ Yang-Mills gauge fields coupled to
massless Dirac fermions in the fundamental representation.
Due to the non-Abelian character of the gauge
symmetry, gluons are charged fields that preserve color flux at each
vertex. Since a colored
quark density is not a quantity to be kept constant,
no chemical potential related to color should be considered but
only that associated with the global symmetry
that yields fermion number conservation. Hence, we
first include one chemical potential
term and then consider a different lagrange multiplier
for each fermionic flavor.
Let us stress that once the topological effects
arising from vortices are taken into account
and the chemical potential behavior of fermion correlators
is identified, we do not pursue calculations in the bosonic
sector (neither we consider the inclusion of Higgs scalars,
necessary at the classical level for the existence of regular
vortex solutions).
As we shall see, the boson contribution to the
fermion condensate just factorizes and all the
chemical potential effects can be controlled by
calculations just performed within the fermionic sector.
\subsection*{(i) Handling the Chemical Potential in $QCD_2$}
We start from the massless $QCD_2$ (Euclidean) Lagrangian
\begin{equation}
L=\bar\psi^{q}(i\partial_{\mu} \gamma_{\mu} \delta^{qq'}+A_{\mu,a}
t_a^{qq'}\gamma_{\mu}-i\mu\gamma_0\delta^{qq'})\psi^{q'}+
\frac{1}{4g^2} F_{\mu\nu}^a F_{\mu\nu}^a.
\label{lag}
\end{equation}
where we have included a chemical potential term in the form
\begin{equation}
L_{chem}=-i\mu\psi^{\dagger}\psi
\label{lchem}
\end{equation}
in order to take care of the fermion density constraint.
Here $a=1\dots N_c^2-1,\ $ and $q=1\dots N_c$.
The partition function reads
\begin{equation}
Z[\mu] = \int {\cal{D}} \bar\psi {\cal{D}} \psi{\cal{D}} A_\mu \exp[-\int d^2x\, L].
\label{Z}
\end{equation}
Again, one can decouple the chemical potential
by performing an appropriate chiral rotation for the
fermion variables. Indeed, under the transformation
\begin{eqnarray}
& & \psi = \exp(i\mu \gamma_5 x_1) \chi
\nonumber\\
& & \bar\psi = \bar\chi\ \exp(i \mu \gamma_5 x_1)
\label{chang}
\end{eqnarray}
the fermion Lagrangian becomes
\begin{equation}
L = \bar \psi \not\!\! D [A, \mu] \psi \to \bar \chi \not\!\! D[A] \chi
\label{trafo}
\end{equation}
so that the chemical potential completely disappears from the
fermion Lagrangian.
As we have seen, chiral transformations
may generate a Fujikawa jacobian which has to be computed
using some regularization procedure.
For example, using the heat-kernel regularization one introduces
a resolution of the identity of the form
\begin{equation}
1 = \lim_{M \to \infty} \exp(- \not\!\! D(\alpha)^2 /M^2).
\label{yi}
\end{equation}
where $D_\mu(\alpha)$ ($\alpha \in (0,1)$)
is an interpolating Dirac operator such
that $D_\mu(\alpha = 0) = D_\mu[A, \mu]$ and
$D_\mu(\alpha = 1) = D_\mu[A]$.
After some standard calculation \cite{gmss} one ends with a
Jacobian of the form
\begin{equation}
J=\exp \left(\frac{i\epsilon_{\mu\nu}}{4\pi} \int d^2x \int_0^1 d\alpha\,
tr^c[\mu x_1 F_{\mu\nu}(\alpha)]\right)
\label{jac3}
\end{equation}
where tr$^c$ is the trace with respect to color indices and
\begin{equation}
F_{\mu\nu}(\alpha) = F_{\mu\nu}^a(\alpha) t^a, \ \ \ a=1,2,\ldots,N_c^2 - 1
\label{FF}
\end{equation}
Now, the color trace in eq.(\ref{jac3}) vanishes and then
the chiral Jacobian is in fact trivial,
\begin{equation}
J = 1
\label{J}
\end{equation}
We can then write the partition function (\ref{Z}) after the
fermion rotation defined in eq.(\ref{chang}) in the form
\begin{equation}
Z[\mu] = \int {\cal{D}} A_\mu {\cal{D}} \bar \chi {\cal{D}} \chi \exp(-\int d^2x L)
\label{ZZ}
\end{equation}
As we have seen in the Abelian case, although $\mu$ is
absent from the r.h.s. of eq.(\ref{ZZ})
one should not conclude that physics is independent of the chemical
potential. For correlation
functions of composite operators which are not chiral invariant,
the chemical potential will reappear when rotating the fermion
variables in the fermionic bilinears. As in the Abelian case,
this happens when computing v.e.v.'s of products $\bar \psi(x)
\psi(x)$.
\subsection*{(ii)
Correlation functions in $QCD_2$ with chemical potential}
Our main interest is the computation of fermionic correlators
containing products of local bilinears $\bar \psi \psi(x)$
for which
non-trivial topological gauge field configurations,
and the associated Dirac operator zero-modes, will be
crucial for obtaining non-vanishing results, as explained in
refs.\cite{hf},\cite{bc}-\cite{cmst}.
As in section 3, we start by writing a gauge field belonging to
the $n^{th}$ topological sector, in the form
\begin{equation}
A^a_\mu(x) = A^{a (n)}_\mu(x) + a^{a}_\mu(x)
\label{form}
\end{equation}
where $A^{a (n)}_{\mu}$ is a fixed classical configuration (as
described in section 2.2)
carrying all the topological charge $n$, and $a_{\mu}^a$, will be
the actual integration variable which belongs to the trivial
sector $n=0$.
Then, we decouple
the $a_{\mu}$ field from the fermions
through an appropriate rotation
(the calculation of the
Fujikawa Jacobian being standard since the decoupling
corresponds to the topologically trivial sector).
Now, it will be convenient to choose the background so that
\begin{equation}
A^{a (n)}_+ = 0
\label{ba}
\end{equation}
In this way, the Dirac operator takes the form\footnote{
We are using $\gamma_0=\sigma_1$ and $\gamma_1=-\sigma_2.$}
\begin{equation}
\not\!\! D[A^{(n)} + a] = \left( \begin{array}{cc} 0 & \partial_+ + a_+ \\
\partial_- + A^{(n)}_- + a_- & 0 \end{array} \right)
\label{vierbein}
\end{equation}
and we are left with the determinant of this operator once
fermions are integrated out
\begin{equation}
Z[\mu] = \sum_n\int {\cal{D}} a_\mu
\exp[-\frac{1}{4g^2}\int d^2x\, F_{\mu\nu}^2[A^{(n)}+a]]
\det \not\!\! D[A^{(n)} + a].
\label{ZZss}
\end{equation}
As before, we have introduced a sum over different topological sectors.
Now,
we shall factor out the determinant in the classical background so as
to control the zero mode problem. Let us start by introducing
group valued fields to represent $A^{(n)}$ and $a_\mu$
\begin{equation}
a_+ = i u^{-1} \partial_+ u
\label{u}
\end{equation}
\begin{equation}
a_- = i d(v \partial_- v^{-1}) d^{-1}
\label{vv}
\end{equation}
\begin{equation}
A^{(n)}_- = i d \partial_- d^{-1}.
\label{a}
\end{equation}
Consider first the light-cone like gauge choice
\cite{pol}
\begin{equation}
A_- = A^{(n)}_-
\label{Pol}
\end{equation}
implying
\begin{equation}
v = I .
\label{aa}
\end{equation}
In this gauge the Dirac operator (\ref{vierbein})
reads
\begin{equation}
\not\!\! D[A^{(n)} + a]\vert_{lc} =
\left( \begin{array}{cc} 0 & \partial_+ +iu^{-1} \partial_+ u\\
\partial_- + A^{(n)}_- & 0 \end{array} \right)
\label{vier}
\end{equation}
where subscript $lc$ means that we have used the gauge condition
(\ref{Pol}).
One can easily see (for example by rotating the $+$ sector with
$u^{-1}$ while leaving the $-$ sector unchanged) that
\begin{equation}
\det \not\!\! D[A^{(n)} + a]\vert_{lc} =
{\cal N} \det
\not\!\! D[A^{(n)}] \times
\exp(W[u, A^{(n)}]).
\label{pri}
\end{equation}
Here $W[u,A^{(n)}]$ is the gauged Wess-Zumino-Witten action which
in this case takes the form
\begin{equation}
W[u, A^{(n)}] = W[u] + \frac{1}{4\pi}tr_c\int d^2x (u^{-1}
\partial_+ u) (d \partial_- d^{-1})
\label{sa}
\end{equation}
and $W[u]$ is the Wess-Zumino-Witten action
\begin{equation}
W[u] = \frac{1}{2\pi} tr_c
\int d^2 x \partial_{\mu}u^{-1}\partial_{\mu}u+
\frac{\epsilon^{ijk}}{4\pi}
tr_c \int_B\! d^3y\,
(u^{-1}\partial_{i}u)(u^{-1}\partial_{j}u) (u^{-1}\partial_{k}u).
\label{ww}
\end{equation}
Note that in writing the fermion
determinant in the form (\ref{pri}),
the zero-mode problem has been circumscribed
to the classical background fermion determinant.
One can easily extend the result
(\ref{pri}) to an arbitrary gauge, in terms
of the group-valued fields $u$ and $v$ defined by
eqs.(\ref{u})-(\ref{vv}),
by repeated use of the Polyakov-Wiegmann identity
\cite{polw}
\begin{equation}
W[pq] = W[p] + W[q] + \frac{1}{4\pi} tr_c \int d^2x
(p^{-1}\partial_+p) \, (q \partial_- q^{-1})
\label{PW}
\end{equation}
The answer is
\begin{equation}
\det \not\!\! D[A^{(n)} + a] =
{\cal N} \det \not\!\! D[A^{(n)}] \times \exp(S_{eff}[u,v; A^{(n)}])
\label{sui}
\end{equation}
\begin{eqnarray}
S_{eff}[u,v; A^{(n)}] & = &
W[u, A^{(n)}]+ W[v] + \frac{1}{4\pi}tr_c\!\int\! d^2x\,
(u^{-1} \partial_+ u) d (v \partial_- v^{-1}) d^{-1}
\nonumber \\
& & +\frac{1}{4\pi}tr_c\!\int\! d^2x\, (d^{-1} \partial_+ d)
(v \partial_- v^{-1}).
\label{pris}
\end{eqnarray}
Once one has the determinant in the form (\ref{sui}), one can work
with any gauge fixing condition. The gauge choice (\ref{Pol}) is
in principle not safe since the corresponding Faddeev-Popov
determinant is $\Delta = \det D_-^{adj}[A^{(n)}]$
implying the possibility of new zero-modes. A more appropriate
choice would be for example $A_+ = 0$, having
a trivial FP determinant. In any case one ends with a partition
function showing the following structure
\begin{eqnarray}
Z &=& \sum_n \det(\not\!\! D[A^{(n)}]) \int {\cal{D}} a_\mu\,
\Delta\, \delta(F[a])\nonumber\\
& & \exp \left( -S_{eff}[A^{(n)}, a_\mu] - \frac{1}{4g^2}
\int d^2x F^2_{\mu\nu}[A^{(n)}, a_\mu] \right)
\label{z1}
\end{eqnarray}
Concerning the divergence associated with the external charge distribution,
we have learnt from the Abelian case that one has to carefully handle
this term in order to define excitations with respect to the external
background. In section 3 we have seen that it came from
the interaction of $A^{ext}$ with $F_{\mu\nu}^{(n)}$, appearing in
the fermionic jacobian. Performing a similar calculation in the
present case we would find the non-Abelian analogue of this term
with $tr^c$ acting on it. As we have mentioned above,
this color trace operation implies the vanishing of the corresponding
divergence, so that no counterterm needs to be added in $QCD_2$, meaning that
the relevant vacuum is properly defined.
As we have seen, the Lagrangian for $QCD_2$ at finite density
can be written in terms of $\mu$-rotated fields
which hide the
chemical potential from the partition function. This result, however,
does not exhaust the physics of the theory, in the sense that
correlation functions {\it do} depend on $\mu$. Actually, it will
be shown that the chemical potential dependence appears
as a factor multiplying
the result for correlators of the unconstrained theory.
For this reason, we shall first describe the computation
of vacuum expectation values of fermion bilinears in the
$\mu = 0$ case and then
consider how this result is modified at finite fermion density.
Hence, we proceed with the analysis of v.e.v's of
products of bilinears like $\bar\chi\chi$. Let us start by
noting that with the choice (\ref{ba}) for the classical
field configuration, the Dirac equation takes the form
\begin{equation}
\not\!\! D[A^{(n)} + a]\left( \begin{array}{c}
\chi_+ \\
\chi_-
\end{array} \right) =
\left( \begin{array}{cc} 0 & u^{-1}i\partial_+ \\
dvd^{-1}D_-[A^{(n)}] & 0 \end{array} \right)
\left( \begin{array}{c}
\zeta_+ \\
\zeta_-
\end{array} \right)
\label{matrix2}
\end{equation}
where $\zeta$ is defined as
\begin{eqnarray}
& & \chi_+=dvd^{-1}\zeta_+\nonumber\\
& & \chi_-=u^{-1}\zeta_-
\label{lasttrafo}
\end{eqnarray}
so that the Lagrangian in the $n^{th}$ flux sector
can be written as
\begin{eqnarray}
L & & =\bar\chi\not\!\! D[a+A^{(n)}]\chi=\zeta_-^*i\partial_+\zeta_- +
\zeta_+^*\not\!\! D_-[A^{(n)}]\zeta_+\nonumber\\
& & \equiv\bar\zeta\ \widetilde D[A^{(n)}]\zeta.
\label{Lzeta}
\end{eqnarray}
In terms of these new fields, the bilinears $\bar\chi\chi$
take the form
\begin{equation}
\bar\chi\chi=\zeta_-^*u dvd^{-1}\zeta_+ +\zeta_+^*
dv^{-1}d^{-1}u^{-1}\zeta_- .
\end{equation}
We observe that the jacobian associated to (\ref{lasttrafo})
is nothing else but
the effective action defined in the previous section
by eq.(\ref{pris}).
Hence, an explicit expression for the non-Abelian correlators
reads
\begin{eqnarray}
& & \langle \bar\chi\chi(x^1)\dots \bar\chi\chi(x^l)\rangle=
\sum_n\int {\cal{D}} a_{\mu}\ \Delta\ \delta (F[a_{\mu}])\,
\exp[ -S_{eff}(A^{(n)}, a) ]
\nonumber\\
& &
\int {\cal{D}}\bar\zeta {\cal{D}}\zeta\ \exp( \bar\zeta
\left( \begin{array}{cc} 0 & i\partial_+ \\
D_-[A^{(n)}] & 0 \end{array} \right)\zeta )\nonumber\\
& &
B^{q_1p_1}(x^1)\dots B^{q_lp_l}(x^l)\
\zeta_-^{*q_1}\zeta_+^{p_1}(x^1)\dots\zeta_-^{*q_l}\zeta_+^{p_l}(x^l)
+B^{q_1p_1}(x^1)\dots \nonumber\\
& &
B^{-1 q_lp_l}(x^l)\ \zeta_-^{*q_1}\zeta_+^{p_1}(x^1)\dots
\zeta_+^{*q_l}\zeta_-^{p_l}(x^l)
+B^{q_1p_1}(x^1)\dots \nonumber\\
& &
B^{-1 q_{l-1}p_{l-1}}(x^{l-1})B^{-1 q_lp_l}(x^l)\
\zeta_-^{*q_1}\zeta_+^{p_1}(x^1)\dots
\zeta_+^{*q_{l-1}}\zeta_-^{p_{l-1}}(x^{l-1})
\zeta_+^{*q_l}\zeta_-^{p_l}(x^l)
\nonumber\\
& &
+\dots
\end{eqnarray}
where the group-valued field $B$ is given by
\[ B=u dvd^{-1}. \]
For brevity we have written the gauge field measure in terms
of the original fields $a_\mu$ although for actual calculations
in the bosonic sector one has to work using $u$ and $v$ variables
and proceed to a definite gauge fixing. That is, the measure should
be written according to
\[ {\cal D}a_{\mu}\rightarrow {\cal{D}} u {\cal{D}} v J_B(u,v,d)\]
and then the gauge condition and Faddeev-Popov determinant
should be included (for example, in the light-cone
gauge $a_+ = 0$, $u = 1$ and the FP determinant is trivial).
Finally, notice that we have obtained a general and completely
decoupled result, from which one sees that due to color degrees
of freedom, the simple product that one
finds in the Abelian case becomes here an involved sum.
Now that we have an expression for correlators in the unconstrained case,
let us include the chemical potential in our results. Recall that in
this theory the partition function is (see eq.(\ref{ZZ}))
\begin{equation}
Z = \int {\cal{D}} A_\mu {\cal{D}} \bar \chi {\cal{D}} \chi
\exp \left (-\int d^2x\, \bar\chi (i\ds +\not\!\! A) \chi
+\frac{1}{4g^2} F_{\mu\nu} F_{\mu\nu} \right)
\label{ZZs}
\end{equation}
where $ \bar\chi, \chi$ represent the fermion fields after
the chiral rotation (\ref{chang}) which eliminated the
chemical potential from the Lagrangian.
Since fermionic bilinears can be written as
\[ \bar\psi\psi =\bar\psi_+\psi_+ +\bar\psi_-\psi_-,\]
one has
\begin{equation}
\langle\bar\psi\psi\rangle =\exp(2i\mu x_1)\langle\bar\chi_+
\chi_+\rangle +
\exp(-2i\mu x_1)\langle\bar\chi_-\chi_-\rangle.
\end{equation}
It can be easily seen that the same factorization occurs
when flavor is introduced.
The corresponding transformation for the fermion field isospinor
is now
\begin{eqnarray}
& & \psi = \exp(i{\cal M}{\bf 1}_c \gamma_5 x_1) \chi
\nonumber\\
& & \bar\psi = \bar\chi\ \exp(i {\cal M}{\bf 1}_c \gamma_5 x_1)
\label{changes}
\end{eqnarray}
and the bilinear v.e.v takes in this case the form
\begin{equation}
\langle\bar\psi\psi\rangle =\exp(2i{\cal M}{\bf 1}_c x_1)\langle\bar\chi_+
\chi_+\rangle +
\exp(-2i{\cal M}{\bf 1}_c x_1)\langle\bar\chi_-\chi_-\rangle.
\end{equation}
We shall then include from here on flavor degrees of freedom
with the corresponding constraint on each fermion density.
Since in this case one deals with
$N_f$ fermions coupled to the gauge field,
we can use the fermionic jacobian we have computed for one
flavor to the power $N_f$ while
the bosonic measure remains untouched.
In the light-cone gauge it can be easily seen that
the effective bosonic sector now involves $N_c-1$ massive scalars,
their mass depending on flavor and color numbers by means of a
factor $(2N_c+N_f)^{1/2}$ with respect to the Abelian counterpart
(there is also the same number of unphysical massless
particles \cite{lws}).
As we have previously explained, the Dirac operator
has $|n|N_c$ zero modes in the
$n^{th}$ topological sector, this implying that
more fermion bilinears are needed in order to obtain a non-zero
fermionic path-integral. Moreover,
since the flavor index implies a factor $N_f$
on the number of Grassmann coefficients, the minimal non-zero
product of fermion bilinears
in the $n^{th}$ sector requires $|n|N_cN_f$ insertions.
Since the properties of the topological
configurations are dictated by those of the torus
of $SU(N_c)$, one can easily extend the results already
obtained for $QED_2$. In particular,
the chirality of the zero modes is dictated by the same index
theorem found in the Abelian theory, this implying that in sector $n>0$
($n<0$) every zero mode has positive (negative) chirality. In this way,
the right (left) chiral projections of the minimal non-zero
fermionic correlators can be easily computed. One gets
\begin{eqnarray}
& & \langle \prod_k^{N_f}\prod_q^{N_c}\prod_i^{|n|}
\bar\psi^{q,k}_+\psi^{q,k}_+(x^{q,k}_{i})
\rangle{}_n= \frac{1}{Z^{(0)}}
\!\! \int_{GF} {\cal{D}} u {\cal{D}} v J_B\;
e^{-S_{Beff}^{(n)}(u,v,d)}\nonumber\\
& & \prod_k^{N_f}\prod_q^{N_c}\prod_i^{|n|}
\sum_{p_i,l_i}^{N_c}B'{}_k^{q, p_i l_i}(x^{q,k}_{i})\left(
\int {\cal{D}}\bar\zeta
{\cal{D}}\zeta\; e^{\int\bar\zeta\; \!\!\not \widetilde D[A^{(n)}]\zeta}\;
\bar\zeta_+^{p_i}\zeta_+^{l_i} (x^{q}_{i}) \right)_k
\label{gennoab}
\end{eqnarray}
where
\begin{equation}
B'{}_k^{q, p_i l_i}(x)=\exp(2i\mu^k x_1)\, u^{p_i q}(x)
(dvd^{-1})^{q l_i}(x),
\end{equation}
$\bar\zeta_+=\zeta^*_-$ and $\widetilde D[A^{(n)}]$ stands
for the Dirac operator in the r.h.s of eq.(\ref{Lzeta}).
We have used the notation $Z^{(0)}$ for the partition function since
it is completely determined within the $n=0$ sector, see
eq.(\ref{z1}). We have shown every color and flavor index
explicitly, indicating sum and product operations. The $GF$
label stands for the gauge fixing.
The action
$S^{(n)}_{Beff}(u,v,d)=N_f S_{WZW}(u,v,d)+S_{Maxwell}(u,v,d)$
is built from the
full gluon field $A^{(n)}(d)+a(u,v)$, and yields a higher-order
Skyrme-type Lagrangian \cite{fns}.
Let us consider $N_c=2$ and $N_f=2$ in order to present the simplest
illustration of the last expression. The minimal fermionic correlator
then reads
\begin{eqnarray}
& & \sum_n\langle\bar\psi^{1,1}_+\psi^{1,1}_+(x^1)
\bar\psi^{1,2}_+\psi^{1,2}_+(x^2)
\bar\psi^{2,1}_+\psi^{2,1}_+(y^1)
\bar\psi^{2,2}_+\psi^{2,2}_+(y^2)\rangle{}_n=\nonumber\\
& & \frac{1}{Z^{(0)}}\sum_{p,q,r,s}^{N_c=2}
\prod_{k=1}^2\exp[2i\mu^k (x_1+y_1)^k]
\! \int_{GF}\!\!\!\! {\cal{D}} u {\cal{D}} v J_B\; e^{-S_{Beff}^{(1)}(u,v,d)}\,
\times \nonumber\\
& &
B^{1, p q}_k(x^k) B^{2, r s}_k(y^k) \int {\cal{D}}\bar\zeta_k {\cal{D}}\zeta_k\,
e^{\int \bar\zeta_k \widetilde D[A^{(1)}]\zeta_k}\
\bar\zeta_+^{p,k}\zeta_+^{q,k}(x^k)
\ \bar\zeta_+^{r,k}\zeta_+^{s,k}(y^k).
\label{nabex}
\end{eqnarray}
The fermionic path-integral can be easily done, resulting
in the product of eigenfunctions discussed in the sections above,
as follows
\begin{eqnarray}
& & \int {\cal{D}}\bar\zeta_k {\cal{D}}\zeta_k\
e^{\int \bar\zeta_k \widetilde D[A^{(1)}]\zeta_k}\
\bar\zeta_+^{p,k}\zeta_+^{q,k}(x^k)
\ \bar\zeta_+^{r,k}\zeta_+^{s,k}(y^k)=
\det\prime(\widetilde D[A^{(1)}])\times\nonumber\\
& & \left( -\bar\eta_+^{(0,1)p,k}\eta_+^{(0,1)q,k}(x^k)
\bar\eta_+^{(0,2)r,k}\eta_+^{(0,2)s,k}(y^k)
+\bar\eta_+^{(0,1)p,k}\eta_+^{(0,2)q,k}(x^k)\right.
\nonumber\\
& &
\bar\eta_+^{(0,2)r,k}\eta_+^{(0,1)s,k}(y^k)
-\bar\eta_+^{(0,2)p,k}\eta_+^{(0,1)q,k}(x^k)
\bar\eta_+^{(0,1)r,k}\eta_+^{(0,2)s,k}(y^k)\nonumber\\
& & \left.
+\bar\eta_+^{(0,2)p,k}\eta_+^{(0,2)q,k}(x^k)
\bar\eta_+^{(0,1)r,k}\eta_+^{(0,1)s,k}(y^k)\right).
\label{ultima}
\end{eqnarray}
Here $\det\prime(\widetilde D[A^{(1)}])$ is the determinant of the
Dirac operator defined in eq.(\ref{Lzeta}) omitting zero-modes and (e.g.)
$\eta^{(0,1)q,k}(x^k)$ is a non-Abelian zero-mode as defined
in section 2, with an additional flavor index $k$.
Concerning the bosonic sector, the presence of the $F_{\mu\nu}^2$
(Maxwell) term crucially changes the effective dynamics with respect
to that of a pure Wess-Zumino model. One then has to perform
approximate calculations to compute the bosonic factor,
for example, linearizing the $U$ transformation, see \cite{fns}.
In any case,
once this task is achieved for the $\mu=0$ model,
the modified (finite density) result can be obtained in an exact way.
\section{Summary}
We have presented the correlation functions of
fermion bilinears in multiflavour $QED_2$ and $QCD_2$
at finite fermion density, using a path-integral approach
which is particularly appropriate
to identify the contributions arising from different topological
sectors. Analysing correlation functions
for an arbitrary number of fermionic bilinears, we have been
able to determine exactly their dependence
on the chemical potentials associated with the different flavor indices.
As stressed in
the introduction, our work was
prompted by recent results by Deryagin,
Grigoriev and Rubakov \cite{dgr}
showing that in the large $N_c$ limit, condensates of $QCD$ in four
dimensions are inhomogeneous and anisotropic at high
fermion density.
Two-dimensional models are a favorite laboratory to test
phenomena which are expected to happen in $QCD_4$.
In fact, an oscillatory inhomogeneous behavior in
$\langle\bar \psi \psi\rangle$ was found in the Schwinger model
\cite{Kao} using operator bosonization and then the
analysis was completed by finding the exact behavior of
fermion bilinear correlators in \cite{hf}.
Here we have extended this analysis in order
to include flavor and color degrees of freedom within a
path-integral scheme
which makes apparent how topological effects
give rise to the non-triviality of
correlators.
Remarkably, the oscillatory behavior
related to the chemical potential that we
have found with no approximation, coincides exactly
with that described in \cite{dgr} for $QCD_4$ within the large
$N_c$ approximation (apart from the anisotropy that of course
cannot be tested in one spatial dimension).
In particular, the
structure of the multipoint correlation functions, given by
eqs.(\ref{arbab2}) and (\ref{gennoab}), shows a non-trivial
dependence on spatial coordinates. This makes apparent that
the ground state has, at finite density, an
involved structure which is a superposition of standing
waves with respect to the order parameter.
Since our model is two-dimensional, we were able to control the
behavior of the chemical potential matrix in an {\it exact} way, so that
we can discard the possibility that the formation of
the standing wave is a byproduct of some approximation. This
should be considered when analysing the results
of ref.\cite{dgr} in $d=4$ dimensions, where one could argue that
the use of a ladder approximation, as well as the neglect
of effects subleading in $1/N_c$,
plays an important role in obtaining such a behavior.
Several interesting issues are open for further investigation
using our approach.
One can in particular study in a very simple way the
behavior of condensates at finite temperature. The chiral anomaly
is independent of temperature and plays a central role
in the behavior of condensates
through its connection with the index theorem. Therefore,
one should expect that formulae like (\ref{arbab2}) or (\ref{gennoab})
are valid also for $T > 0$. Of
course, the v.e.v.'s at $\mu = 0$ on the r.h.s.
of these equations should be replaced
by those computed at finite temperature, and hence the issue of
zero-modes on a toroidal manifold should be
carefully examined (see e.g. \cite{steele}). In the
light of recent results concerning $QCD_2$
with adjoint fermions \cite{Gsm,Sm}-\cite{Sm2} it should be of interest
to extend our calculation so as to consider
adjoint multiplets of fermions.
Finally, it would be worthwhile to consider massive fermions and
compute fermion correlation functions at finite density,
via a perturbative
expansion in the fermion mass following the approach of
\cite{naon}. We hope to report on these problems in a future work.
\section*{Acknowledgements} The authors would like to thank
Centro Brasileiro de Pesquisas Fisicas of Rio de Janeiro
(CBPF) and CLAF-CNPq,
Brazil, for warm hospitality and financial support.
H.R.C. wishes to acknowledge J. Stephany for helpful discussions.
F.A.S. is partially supported
by Fundacion Antorchas, Argentina and a
Commission of the European Communities
contract No. C11*-CT93-0315.
\section{Introduction}
\label{sec:intro}
\setcounter{equation}{0}
The fine tuning required to accommodate the
observed CP invariance of the strong interactions, known as the
strong CP problem \cite{cheng},
suggests that the strong CP parameter $\bar{\theta}$ is
a dynamical field.
If some colored fields are charged
under a spontaneously broken Peccei-Quinn (PQ) symmetry \cite{pq},
then $\bar{\theta}$ is replaced by a shifted axion field.
The PQ symmetry is explicitly broken by QCD instantons, so that
a potential for the axion is generated with a minimum at $\bar{\theta}
= 0$. In the low energy theory, besides solving the strong CP problem,
this mechanism predicts nonderivative couplings
of the axion to the gauge bosons and model dependent derivative
couplings of the axion to hadrons and leptons \cite{eff,hadronic}.
There are two important issues that have to be addressed
by axion models. First, Planck scale effects may break explicitly the
PQ symmetry, shifting $\bar{\theta}$ from the origin
\cite{planck,ten,effect}.
Since only the gauge symmetries are expected to be preserved by Planck
scale physics \cite{grav},
the PQ symmetry should be a consequence of a gauge symmetry.
Second, an axion model should produce naturally the PQ symmetry
breaking scale, \pq.
Astrophysics and cosmology \cite{astro} constrain the axion mass
to lie between $10^{-5}$ and $10^{-3}$ eV \cite{data},
which translates into the range $10^{10}-10^{12}$ GeV for \pq.
The small ratio between the PQ scale and the Planck scale, $M_{\rm P}
\sim 10^{19}$ GeV,
can be naturally explained if the PQ symmetry is broken dynamically
in a theory with only fermions and gauge fields \cite{comp, model}.
Alternatively, if the PQ symmetry is broken by the vacuum expectation
value (vev) of a fundamental scalar, then supersymmetry (susy)
is required to protect \pq\ against quadratic divergences.
In this paper we study phenomenological constraints on axion models
and point out potential problems of the models
constructed so far.
In section \ref{sec:prot}
we discuss under what conditions a gauge symmetry can protect
$\bar{\theta}$ from Planck scale effects. We also list
theoretical and phenomenological requirements that should be imposed
on axion models.
These conditions are illustrated in the case of
non-supersymmetric composite axion models
in section \ref{sec:comp}. In section \ref{sec:susy}
it is shown that previous attempts at preventing harmful
PQ breaking operators in supersymmetric theories have failed.
A discussion of the PQ scale in supersymmetric models is also included.
Conclusions and a summary of results are presented
in section \ref{sec:conc}.
\vfil
\newpage
\section{Constraints on axion models}
\label{sec:prot}
\setcounter{equation}{0}
\subsection{Protecting the axion against Planck scale effects}
Gravitational interactions are expected to break any continuous
or discrete global symmetries \cite{grav},
so that gauge invariant nonrenormalizable operators suppressed by
powers of $M_{\rm P}$ are likely to have coefficients of order
one.
In refs.~\cite{ten,effect} it is argued that, under these
circumstances, a solution to the strong CP problem requires any gauge
invariant operator of dimension less than 10 to preserve the PQ
symmetry.
The reason is that the PQ-breaking operators change the potential
for the axion such that the minimum moves
away from $\bar{\theta} = 0$.
However, this condition can be relaxed. If a PQ-breaking operator
involves fields which do not have vevs, then its effect is an
interaction of the axion with these fields.
The exchange of these fields will lead to a potential for the
axion which is suppressed by at least as many powers of $M_{\rm P}$
as the lowest dimensional PQ-breaking operator formed by fields
which have vevs. Therefore, a natural solution to the strong CP
problem requires that {\it gauge symmetries forbid any PQ-breaking
operator of dimension less than 10 involving only fields which
acquire vevs of order \pq.}
This relaxed form is still strong enough to raise the question
of whether we should worry that much about Planck scale effects
which are mostly unknown.
Furthermore, in ref.~\cite{wormhole} it is argued that although
the idea of wormhole-induced global symmetry breaking is robust,
some modifications of the theory of gravity at a scale of $10^{-1}
M_{\rm P}$ or topological effects in string theory
could lead to exponentially suppressed coefficients
of the dangerous operators. There are also arguments that strongly
coupled heterotic string theory may contain PQ symmetries which are
adequately preserved \cite{schstring}.
Nevertheless, since the theory of quantum gravity still eludes us,
assigning exponentially small coefficients to all the gauge invariant
PQ-breaking operators in the low energy theory can be seen as a
worse fine-tuning than setting $\bar{\theta} < 10^{-9}$.
To show this consider a scalar $\Phi$, charged under a global
U(1)$_{\rm PQ}$ which has a QCD anomaly, with a vev equal to \pq,
and a dimension-$k$ gauge invariant operator
\begin{equation}
\frac{c}{k!}
\frac{1}{M_{\rm P}^{k - 4}} \Phi^k ~,
\end{equation}
where $c$ is a dimensionless coefficient.
Solving the strong CP problem requires
\begin{equation}
\frac{c}{k!}
\frac{f^k_{\rm PQ}}{M_{\rm P}^{k - 4}} < \bar{\theta} \,
M^2_{\pi} f^2_{\pi} ~.
\label{first}
\end{equation}
Here $M_{\pi}$ is the pion mass, and $f_{\pi} \approx 93$ MeV is
the pion decay constant.
Therefore, the condition on $c$ is
\begin{equation}
|c| \begin{array}{c}\,\sim\vspace{-21pt}\\< \end{array} \bar{\theta} \, k!\, 10^{8 (k - 10)}
\left(\frac{10^{11} \, {\rm GeV}}{\pq}\right)^{\!\! k}~,
\end{equation}
which means that
$c$ is less finely tuned than $\bar{\theta}$ only if $k \ge 9$
($k \ge 11$)
for $\pq = 10^{10}$ GeV ($\pq = 10^{12}$ GeV).
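As a numerical illustration (a sketch of our own; the approximate bound and the reference scales are taken from the text), scanning the inequality above over $k$ recovers the quoted thresholds $k \ge 9$ and $k \ge 11$:

```python
import math

def c_bound_over_theta(k, f_pq):
    # Maximum allowed |c| / theta_bar for a dimension-k operator,
    # using the approximate bound |c| <~ theta * k! * 10^{8(k-10)} * (1e11/f_pq)^k
    return math.factorial(k) * 10.0 ** (8 * (k - 10)) * (1e11 / f_pq) ** k

def k_min(f_pq):
    # Smallest operator dimension for which c is less finely tuned
    # than theta_bar itself, i.e. the allowed |c| reaches order theta_bar.
    return next(k for k in range(4, 20) if c_bound_over_theta(k, f_pq) >= 1.0)

print(k_min(1e10), k_min(1e12))  # -> 9 11
```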
\subsection{General conditions}
Even in its relaxed form, the condition of avoiding Planck scale
effects is hard to satisfy simultaneously with the other
requirements of particle physics, cosmology and astrophysics.
In the remainder of this section we list some of the important
issues in axion model building.
\noindent
i) Gauge anomaly cancellation.
\noindent
ii) The colored fields carrying PQ charges should not acquire
vevs which break SU(3)$_C$ color.
\noindent
iii) The stability of \pq\ requires either susy or
the absence of fundamental scalars at this scale.
Furthermore, any mass parameter except $M_{\rm P}$ should arise
from the dynamics. Otherwise, fine-tuning
the ratio $\pq/M_{\rm P}$ is as troublesome as imposing
$\bar{\theta} < 10^{-9}$, and the motivation for axion models
is lost. Note that the usual DFSZ \cite{dfsz} and KSVZ \cite{ksvz}
models do not satisfy this condition.
\noindent
iv) The strong coupling constant should remain small above \pq,
until $M_{\rm P}$ or some grand unification scale.
The one-loop renormalization group evolution for the strong
coupling constant, starting with 5 flavors from
$\alpha_s(M_Z) = 0.115$ and including the top quark at 175 GeV,
gives
\begin{equation}
\frac{1}{\alpha_s(\pq)} \approx
32.0 + \frac{7}{2 \pi} \log\left(\frac{\pq}
{10^{11} \, {\rm GeV}}\right) ~.
\label{rge}
\end{equation}
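Eq.~(\ref{rge}) follows from standard one-loop running with $b_0 = 11 - 2 n_f/3$. A short numerical sketch (assuming $M_Z = 91.19$ GeV for the starting scale) reproduces the quoted value:

```python
from math import log, pi

# One-loop running of 1/alpha_s with flavor thresholds, as in eq. (rge).
# Assumed inputs: M_Z = 91.19 GeV; m_t = 175 GeV and alpha_s(M_Z) = 0.115
# are taken from the text.
def b0(nf):
    return 11.0 - 2.0 * nf / 3.0

inv_alpha = 1.0 / 0.115                                # 1/alpha_s(M_Z)
inv_alpha += b0(5) / (2 * pi) * log(175.0 / 91.19)     # 5 flavors up to m_t
inv_alpha += b0(6) / (2 * pi) * log(1e11 / 175.0)      # 6 flavors up to f_PQ

assert abs(inv_alpha - 32.0) < 0.5   # 1/alpha_s(10^11 GeV) ~ 32.0
```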
Then running $\alpha_s$ from \pq\ to $M_{\rm P}$ gives
$\alpha_s(M_{\rm P}) < 1$ if the coefficient of the $\beta$ function
is $b_0 \lesssim 10.6$. This corresponds to a maximum of 26 new flavors.
In supersymmetric theories the above computation gives
$\alpha_s(\pq) \approx 1/19$, and $\alpha_s(M_{\rm P})
< 1$ if $b_0 \lesssim
6$, i.e. there can be at most 10 new flavors with masses of order
\pq. If there are additional flavors below \pq, the total number of
flavors allowed is reduced. In the case of composite axion models
there are non-perturbative effects, due to the fields carrying
the confining gauge interactions and the usual color,
which change the running of $\alpha_s$ at scales close to \pq\ and
can be only roughly estimated.
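The count of 26 new flavors can be cross-checked by running eq.~(\ref{rge}) up to the Planck scale; $M_{\rm P} = 10^{19}$ GeV is an assumed input in this sketch:

```python
from math import log, pi

# 1/alpha_s(M_P) with n_new extra flavors entering at f_PQ = 10^11 GeV.
# Assumed input: M_P = 1e19 GeV.
def inv_alpha_planck(n_new, m_planck=1e19):
    b0 = 11.0 - 2.0 * (6 + n_new) / 3.0    # one-loop coefficient above f_PQ
    return 32.0 + b0 / (2 * pi) * log(m_planck / 1e11)

# alpha_s(M_P) < 1, i.e. 1/alpha_s(M_P) > 1, holds for 26 new flavors
# but fails for 27, as stated in item iv).
assert inv_alpha_planck(26) > 1.0
assert inv_alpha_planck(27) < 1.0
```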
\noindent
v) Composite stable particles with masses $M_{\rm comp}$
larger than about $10^5$ GeV
lead to a matter-dominated universe too early \cite{bbf}.
It is then necessary that all stable
particles with masses of order \pq\ be short-lived.
Their energy density remains smaller than the critical one
provided their lifetime is shorter than about $10^{-8}$
seconds \cite{decay2}.
However, if there is inflation any unwanted relic is
wiped out, and if the reheating temperature is lower than \pq,
then the heavy stable particles are not produced again and
the above condition is no longer necessary.
\noindent
vi) Domain walls may arise in many axion models \cite{wall},
and they should
disappear before dominating the matter density of the universe.
Inflation takes care of this requirement too, but there are
also other mechanisms for allowing the domain walls to evaporate
\cite{cheng,effect,anomaly}.
\noindent
vii) Any new colored particle should be heavier than roughly the
electroweak scale \cite{data}. This list is not exhaustive.
\section{Composite axion}
\label{sec:comp}
\setcounter{equation}{0}
The PQ scale is about 9 orders of magnitude smaller than the Planck
scale,
which is unnatural unless the spontaneous breaking of the PQ symmetry
is a consequence of non-perturbative effects of some non-Abelian gauge
symmetry. In this section we concentrate on non-supersymmetric
theories, and therefore we do not allow light (compared to $M_{\rm
P}$) fundamental scalars.
We have to consider then theories with fermions transforming
non-trivially under a gauge group. From QCD it is known that
the strong dynamics break the chiral symmetry of the quarks.
Thus, if the PQ symmetry is a subgroup of a chiral symmetry
in a QCD-like theory, then \pq\ will be of the order of the
scale where the gauge interactions become strong.
As a result the axion will be a composite state, formed of the
fermions charged under the confining gauge interactions.
\subsection{Kim's model}
The idea of a composite axion is explicitly realized in
the model presented in ref.~\cite{comp},
which contains fermions carrying color and the charges of an SU(N)
gauge interaction,
called axicolor. The left-handed fermions are in the
following representations of the SU(N)$\times$SU(3)$_C$ gauge group:
\begin{equation}
\psi: (N,3) \, , \; \phi: (N,1) \, , \; \chi:
(\overline{N},\bar{3}) \, , \; \omega: (\overline{N},1) ~.
\end{equation}
SU(N) becomes strong at a scale $\Lambda_{\rm a}$ of order \pq\
and the fermions condense.
This is a QCD-like theory with $N$ axicolors and 4 flavors,
and from QCD we know that the condensates will preserve SU(N).
In the limit where the SU(3)$_C$ coupling constant, $\alpha_s$,
is zero, the channels of condensation which preserve color are
equally attractive as the ones which break color.
Thus, although $\alpha_s$ is small at the scale $\Lambda_{\rm a}$,
its non-zero value will force the condensates to preserve color,
which implies that only the $\langle \psi\chi \rangle$ and
$\langle \phi\omega \rangle$ condensates will form.
In the limit $\alpha_s \rightarrow 0$, Kim's model has an
SU(4)$_L\times$SU(4)$_{\overline{R}}\times$U(1)$_{\rm V}$
global symmetry which is spontaneously broken down to
SU(4)$_{L-\overline{R}}\times$U(1)$_{\rm V}$ by the condensates.
The resulting 15 Goldstone bosons transform as $1 + 3 + \bar{3} + 8$
under SU(3)$_C$.
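As a quick consistency check of the Goldstone counting (a bookkeeping sketch, not new material):

```python
# SU(4)_L x SU(4)_Rbar broken to the diagonal SU(4): the number of
# broken generators equals dim SU(4) = 15, matching the stated
# SU(3)_C decomposition 1 + 3 + 3bar + 8.
dim_su4 = 4**2 - 1
broken = 2 * dim_su4 - dim_su4       # chiral generators minus diagonal ones
assert broken == 15
assert 1 + 3 + 3 + 8 == broken
```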
The color singlet is the composite axion, with a ($\psi\chi - 3
\phi\omega$) content, and U(1)$_{\rm PQ}$ corresponds to the
$[(Q_{\rm PQ})_L \times 1_{\overline{R}} + 1_L \times (Q_{\rm
PQ})_{\overline{R}}]/\sqrt{2}$
broken generator of
SU(4)$_L\times$SU(4)$_{\overline{R}}$, where
\begin{equation}
Q_{\rm PQ} = \frac{1}{2\sqrt{6}} {\rm diag} (1,1,1,-3) ~.
\end{equation}
When $\alpha_s$ is turned on, the SU(4)$_{L-\overline{R}}$
global symmetry is explicitly broken down to the gauged SU(3)$_C$
and the global U(1)$_{{\rm axi-} B-L}$ generated by
$(Q_{\rm PQ})_{L-\overline{R}}$.
The axion gets a tiny mass from QCD
instantons, while the other (pseudo) Goldstone bosons get masses
from gluon exchange.
Although the normalization of the PQ symmetry breaking scale, \pq, is
ambiguous,
the axion mass, $m_{\rm a}$, is unambiguously related to the
axicolor scale, $\Lambda_{\rm a}$,
because U(1)$_{\rm PQ}$ is a subgroup of the chiral symmetry
of axicolor. To find this relation note first that
the axion mass is determined by the ``axi-pion'' decay constant,
$f_a$ (the analog of $f_{\pi}$ from QCD), by \cite{hadronic}
\begin{equation}
m_{\rm a} = \frac{4 A_{\rm PQ}^C}{f_{\rm a}} M_{\pi} f_{\pi}
\frac{Z^{1/2}}{1 + Z} ~,
\end{equation}
where $Z \approx 0.5$ is the up-to-down quark mass ratio,
and $A_{\rm PQ}^C$ is the color anomaly of U(1)$_{\rm PQ}$:
\begin{equation}
\delta_{ab} A_{\rm PQ}^C = N \, {\rm Tr} (T_a T_b Q_{\rm PQ}) ~.
\end{equation}
The normalization of the SU(3)$_C$ generators [embedded in
SU(4)$_{L-\overline{R}}$] is ${\rm Tr} (T_a T_b)
= \delta_{ab}/2$, and we find
\begin{equation}
f_{\rm a} = 2.4 \times 10^9 \,{\rm GeV}
\left(\frac{10^{-3} \,{\rm eV}}{m_{\rm a}}\right) N ~.
\end{equation}
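This relation can be reproduced numerically. In the sketch below, $M_\pi = 135$ MeV is an assumed input, while $f_\pi = 93$ MeV and $Z \approx 0.5$ are taken from the text; with the stated normalization the anomaly evaluates to $A_{\rm PQ}^C = N/(4\sqrt{6})$.

```python
from math import sqrt

# Invert m_a = (4 A / f_a) M_pi f_pi sqrt(Z)/(1+Z) for f_a / N.
# Assumed input: M_pi = 0.135 GeV; f_pi = 0.093 GeV and Z = 0.5 as in the text.
M_pi, f_pi, Z = 0.135, 0.093, 0.5
m_a = 1e-12                           # GeV, i.e. 10^-3 eV
A_over_N = 1.0 / (4.0 * sqrt(6.0))    # color anomaly per axicolor, N Tr(T T Q)

fa_over_N = 4.0 * A_over_N * M_pi * f_pi * sqrt(Z) / (1.0 + Z) / m_a

assert 2.3e9 < fa_over_N < 2.5e9      # f_a ~ 2.4e9 GeV * N for m_a = 10^-3 eV
```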
In the large-$N$ limit the relation between $f_{\rm a}$ and
$\Lambda_{\rm a}$ is
\begin{equation}
\frac{\Lambda_{\rm a}}{\Lambda_{\rm QCD}} = \frac{f_{\rm a}}{f_{\pi}}
\sqrt{\frac{3}{N}} ~,
\label{largeN}
\end{equation}
where $\Lambda_{\rm QCD} \sim 200$ MeV.
This model suffers from the energy density problem of
stable composite particles \cite{decay1} [see point v) in section 2].
The reason is that the global U(1)$_{\rm V}$
(the analog of the baryon number symmetry in QCD)
is an exact symmetry such that the lightest axibaryon is stable.
Its mass is larger than $f_{\rm a}$ and
can be evaluated as in ref.~\cite{tcbar} by scaling from QCD:
\begin{equation}
M_{\rm aB} =
m_p \left(\frac{f_{\rm a}}{f_{\pi}}\right) \sqrt{\frac{N}{3}} ~,
\label{abar}
\end{equation}
where $m_p$ is the proton mass.
If axicolor could be unified with a standard model gauge group, then
the heavy gauge bosons would mediate the decay of the axibaryons
into standard model fermions and the model would be cosmologically safe
\cite{decay2}. However, it would be highly non-trivial to
achieve such a unification. The only attempt so far at avoiding the
axibaryon cosmological problem
involves scalars \cite{decay1},
so it is unsatisfactory unless one shows that
these scalars can be composite states.
We point out that the axibaryons are not the only heavy stable
particles: the color triplet
pseudo Goldstone bosons (PGB's) also have too large an energy density.
Their masses can be estimated by scaling
the contribution from electromagnetic interactions to the pion mass,
which is related to the difference between the squared masses of
$\pi^{\pm}$ and $\pi^0$. Since $\alpha_s(\Lambda_{\rm a})$
is small, the bulk of the colored PGB's masses comes from
one gluon exchange \cite{tc}:
\begin{equation}
M^2_{(R)} \approx C^2(R)
\frac{\alpha_s(\Lambda_{\rm a})}{\alpha(\Lambda_{\rm QCD})}
\frac{\Lambda_{\rm a}^2}{ \Lambda_{\rm QCD}^2}
\left(M^2_{\pi^{\pm}} - M^2_{\pi^0}\right) ~.
\label{pgb}
\end{equation}
Here\footnote{Eq.~(\ref{pgb}) improves the estimate given
in \cite{decay1,tcs} by eliminating the dependence on $N$
shown in eq.~(\ref{largeN}).}
$R$ is the SU(3)$_C$ representation,
$C^2(R)$ is the quadratic Casimir, equal to 3 for the color
octet and 4/3 for the triplet, and
$\alpha$ is the electromagnetic coupling constant.
Therefore, the color triplet PGB's, which are
$\psi\omega$ and $\phi\chi$ bound states, have a mass
\begin{equation}
M_{(3, \bar{3})} \approx 0.9 f_{\rm a} \sqrt{\frac{3}{N}}
\label{triplet}
\end{equation}
and, except for the axion, are the lightest ``axihadrons''.
These are
absolutely stable due to the exact global U(1)$_{{\rm axi-}(B-L)}$
symmetry.
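The coefficient 0.9 in eq.~(\ref{triplet}) can be checked against eq.~(\ref{pgb}). The sketch assumes $\alpha_s(\Lambda_{\rm a}) \approx 1/32$ [the value implied by eq.~(\ref{rge}) at $10^{11}$ GeV] and $\alpha \approx 1/137$; the large-$N$ relation (\ref{largeN}) trades the $\Lambda$ ratio for $f_{\rm a}/f_\pi$.

```python
from math import sqrt

# Coefficient of f_a sqrt(3/N) in the color triplet PGB mass, from
# eq. (pgb) with C2(3) = 4/3 and Lambda_a/Lambda_QCD = (f_a/f_pi) sqrt(3/N).
# Assumed inputs: alpha_s(Lambda_a) = 1/32, alpha = 1/137.
alpha_s, alpha_em = 1.0 / 32.0, 1.0 / 137.0
C2_triplet = 4.0 / 3.0
f_pi = 0.093                           # GeV
dm2 = 0.13957**2 - 0.13498**2          # M_pi+^2 - M_pi0^2 in GeV^2

coeff = sqrt(C2_triplet * (alpha_s / alpha_em) * dm2 / f_pi**2)

assert 0.85 < coeff < 0.95             # M_(3,3bar) ~ 0.9 f_a sqrt(3/N)
```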
One may choose, though, not to worry about stable axihadrons
by assuming a period of inflation with reheating temperature
below the PGB mass.
The model discussed so far does not attempt to avoid the
Planck scale induced operators which violate the PQ symmetry.
In fact, Kim's model is vector-like: the $\psi$ and $\chi$,
as well as the $\phi$ and $\omega$, will pair to form Dirac
fermions. Their mass is likely to be of order $M_{\rm P}$
and then fermion condensation does not take place and
the model becomes useless.
Even if Planck scale masses for the fermions are not generated,
there are dimension 6 operators which violate U(1)$_{\rm PQ}$:
\begin{equation}
\frac{c_1}{M_{\rm P}^2}(\psi\chi)^2 \;\; , \;\;
\frac{c_2}{M_{\rm P}^2}(\phi\omega)^2 ~,
\end{equation}
where $c_j$, $j = 1, 2$, are dimensionless coefficients.
These operators will shift the vev of the axion such that
$\bar{\theta}$ will remain within the experimental bound
only if
\begin{equation}
\frac{9|c_1| + |c_2|}{M_{\rm P}^2}
\left(4 \pi f_{\rm a}^3 \sqrt{\frac{3}{N}}\, \right)^{\! 2}
< 10^{-9} M^2_{\pi} f^2_{\pi} ~,
\end{equation}
implying $|c_j| < {\cal O}(10^{-47})$.
It is hard to accept this tiny number given that
the motivation for studying axion models is
to explain the small value $\bar{\theta} < 10^{-9}$.
\subsection{Randall's model}
There is only one axion model in the literature which does not involve
scalars and avoids large Planck scale effects \cite{model}.
To achieve this, Randall's model includes
another gauge interaction, which is weak, in addition to
the confining axicolor.
The left-handed fermions transform under
the SU(N)$\times$SU(m)$\times$SU(3)$_C$ gauge group as:
\begin{equation}
\psi: (N,m,3) \, , \; \phi_i: (N,\overline{m},1) \, , \; \chi_j:
(\overline{N},1,\bar{3}) \, , \; \omega_k: (\overline{N},1,1) ~,
\end{equation}
where $i=1,2,3$, $j=1,...,m$, and $k=1,...,3 m\,$ are flavor indices.
Axicolor SU(N) becomes strong at the $\Lambda_{\rm a}$
scale and the fermions condense.
If the SU(m) gauge coupling, $g_m$, is turned off, the vacuum will
align as in Kim's model and will preserve color.
When $g_m$ is non-zero, the SU(m) gauge interaction will
tend to change the vacuum alignment and break the SU(N) gauge
symmetry.
However, since $g_m$ is small, this will not happen,
as we know from QCD where the weak interactions of the quarks do not
affect the quark condensates. Therefore, the
\begin{equation}
\frac{1}{3} \langle \psi \chi_j \rangle =
\langle \phi_i \omega_k \rangle \approx
4 \pi f_{\rm a}^3 \sqrt{\frac{3}{N}} ~.
\end{equation}
condensates are produced, breaking the SU(m) gauge group and
preserving color.
A global U(1)$_{\rm PQ}$, under which $\psi$ and $\chi_j$ have
charge +1 while $\phi_i$ and $\omega_k$ have charge $-1$, is
spontaneously broken by the condensates, so that an axion arises.
The lowest dimensional gauge invariant and PQ-breaking
operators involving only fields that acquire vevs are
\begin{equation}
c^{ijk}_{m^{\prime}}
\frac{1}{M_{\rm P}^{3 m - 4}}
\left(\psi \chi_j\right)^{m - m^{\prime}}
(\overline{\phi_i} \overline{\omega_k})^{m^{\prime}}~,
\end{equation}
with $m^{\prime} = 1,...,m$ ($m^{\prime} \neq m/2$). The
$c^{ijk}_{m^{\prime}}$ coefficients are assumed to be of order one.
The solution to the strong CP problem requires
\begin{equation}
\frac{C(m)}{M_{\rm P}^{3 m - 4}}
\left(4 \pi f_{\rm a}^3 \sqrt{\frac{3}{N}}\, \right)^{\!\! m}
< 10^{-9} M^2_{\pi} f^2_{\pi} ~,
\label{ineq}
\end{equation}
where
\begin{equation}
C(m) \equiv \sum\limits_{ijkm^{\prime}} 3^{m - m^{\prime}}
\left| c^{ijk}_{m^{\prime}} \right| ~.
\end{equation}
A necessary condition that follows from inequality (\ref{ineq}) is
$3 m \geq 10$.
Note that the window $\pq \sim 10^7$ GeV discussed in \cite{model}
has been closed \cite{ressell}.
This constraint on $m$,
combined with the condition of asymptotic freedom for SU(N)
gives a lower bound for $N$,
\begin{equation}
\frac{11}{12} N > m \ge 4 ~.
\label{integ}
\end{equation}
We will see shortly that $m = 4$, $N = 5$ are the only values that may
not lead to a Landau pole for QCD much below $M_{\rm P}$.
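The integer constraints behind inequality (\ref{integ}) are easy to verify: SU(N) axicolor sees $6m$ Dirac flavors, so asymptotic freedom requires $m < 11N/12$, while $3m \geq 10$ forces $m \geq 4$. A minimal sketch:

```python
# Asymptotic freedom of SU(N) with 6m Dirac flavors: 11N - 12m > 0,
# i.e. m < 11N/12, together with the constraint 3m >= 10.
def af_ok(m, N):
    return 11 * N / 12.0 > m

assert min(m for m in range(1, 20) if 3 * m >= 10) == 4   # m >= 4
assert af_ok(4, 5)          # m = 4, N = 5 is allowed
assert not af_ok(4, 4)      # N = 4 fails asymptotic freedom for m = 4
```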
For these values of $m$, inequality (\ref{ineq}) yields an upper limit
for the axi-pion decay constant:
\begin{equation}
f_{\rm a} < \frac{1.9 \times 10^{11} \, {\rm GeV}}{C(4)^{1/12}} ~.
\label{falim}
\end{equation}
For random order-one values of the $c^{ijk}_{m^{\prime}}$
coefficients, we expect $C(4)^{1/12}$ to be between 1.5 and 2.5.
For example, if $c^{ijk}_{m^{\prime}} = 1$,
then $C(m) = 9 m^2 (3^{m + 1} - 1)/2$,
which gives $C(4)^{1/12} \approx 2.25$.
Therefore, $f_{\rm a} \lesssim 10^{11}$ GeV is necessary for
avoiding fine-tuning of the higher dimensional operators
in the low energy effective Lagrangian.
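Both the estimate $C(4)^{1/12} \approx 2.25$ and the limit (\ref{falim}) can be reproduced numerically; $M_{\rm P} = 10^{19}$ GeV and $M_\pi = 135$ MeV are assumed inputs in this sketch:

```python
from math import pi, sqrt

# C(4) with all c^{ijk}_{m'} = 1, and the limit on f_a from inequality
# (ineq) with vev = 4 pi f_a^3 sqrt(3/N).
# Assumed inputs: M_P = 1e19 GeV, M_pi = 0.135 GeV, f_pi = 0.093 GeV.
m, N = 4, 5
C4 = 9 * m**2 * (3**(m + 1) - 1) // 2
assert C4 == 17424
assert 2.2 < C4 ** (1 / 12) < 2.3            # C(4)^{1/12} ~ 2.25

M_P, M_pi, f_pi = 1e19, 0.135, 0.093
fa_max = (1e-9 * M_pi**2 * f_pi**2 * M_P**(3 * m - 4)
          / (4 * pi * sqrt(3 / N)) ** m) ** (1 / 12)
assert 1.8e11 < fa_max < 2.0e11              # ~ 1.9e11 GeV, before the C^{1/12} factor
```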
We note that $m = 4$
allows dimension-9 gauge invariant operators which
break U(1)$_{\rm PQ}$:
\begin{equation}
(\overline{\phi_i}\psi)^2 (\overline{\chi_j} \omega_k) \, , \;
(\psi\omega_k)^2 (\overline{\phi_i}\psi) \, , \;
(\overline{\psi}\phi_i) (\phi_l\chi_j)^2 ~.
\end{equation}
However, these are not harmful because they are not
formed of the fields
which acquire vevs, i.e. $(\psi \chi_j)$ and $(\phi_i \omega_k)$.
They will just induce suppressed interactions of the axion with the
fermions. Hence, this model is an example where the redundant
condition of avoiding {\it all} the operators of dimension less than
10 is not satisfied.
Randall's model has a non-anomalous
SU(3m)$\times$SU(m)$\times$SU(3)$\times$U(1)$_{{\rm axi-}(B-L)}$
global symmetry under which the fermions transform as:
\begin{equation}
\psi: (1,1,1)_{+1} \, , \; \phi: (1,1,3)_{-1} \, , \; \chi:
(1,m,1)_{-1} \, , \; \omega: (3 m,1,1)_{+1} ~.
\end{equation}
This global symmetry, combined
with the SU(m) gauge symmetry, is
spontaneously broken down to an [SU(m)$\times$SU(3)$]_{\rm global}
\times$U(1)$_{{\rm axi-}(B-L)}$ global symmetry by the condensates.
Thus, there are $10 m^2 - 2 = 158$ Goldstone bosons: $m^2 - 1$
of them are eaten by the SU(m) gauge bosons
which acquire a mass $g_m \pq/2$, while the other $9 m^2 - 1$
get very small masses from higher dimensional
operators. These are color singlets,
very weakly coupled to the standard model
particles, and, as pointed out in \cite{model},
their energy density might not pose cosmological problems.
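The Goldstone counting for $m = 4$ can be verified directly (a bookkeeping sketch):

```python
# Broken generators in Randall's model: the global
# SU(3m) x SU(m) x SU(3) x U(1) plus the gauged SU(m), both broken to
# [SU(m) x SU(3)]_global x U(1) by the condensates.
m = 4
global_sym = (9 * m**2 - 1) + (m**2 - 1) + (3**2 - 1) + 1
gauged_su_m = m**2 - 1                        # also spontaneously broken
unbroken = (m**2 - 1) + (3**2 - 1) + 1

goldstones = global_sym + gauged_su_m - unbroken
eaten = m**2 - 1                              # eaten by SU(m) gauge bosons
assert goldstones == 10 * m**2 - 2 == 158
assert eaten == 15 and goldstones - eaten == 9 * m**2 - 1 == 143
```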
The Goldstone bosons have $\psi\chi$ and $\phi\omega$ content
and transform in the $(m,1)$ and $(m,3)$ representations
of the unbroken [SU(m)$\times$SU(3)$]_{\rm global}$,
respectively. Therefore,
these symmetries do not prevent the heavy resonances from decaying
into Goldstone bosons, so the resonances are cosmologically safe. However,
as in Kim's model, the lightest particles carrying
axi-$(B-L)$ number are the color triplet PGB's,
which have $\psi\omega_k$ and
$\phi_i\chi_j$ content and are heavy due to gluon exchange.
Hence, there are $18 m^2$ stable ``aximesons''
with masses given by eq.~(\ref{triplet}), which
pose cosmological problems.
Besides heavy stable particles, there are meta-stable
states, with very long lifetimes, incompatible with the thermal
evolution of the early universe. To show this we observe
that there is an axibaryon
number symmetry, U(1)$_{\rm V}$, broken only by the SU(m) anomaly.
The $\psi$ and $\chi$ fermions have U(1)$_{\rm V}$ charge +1
while $\phi$ and $\omega$ have charge $-1$.
The lightest axibaryons are the color singlet
$\, \psi^{3 p_1}\phi^{3 p_2}\chi^{p_3}\omega^{p_4}\, $
states \cite{tcbar},
with $p_l \ge 0$ ($l = 1,...,4$) integers satisfying
$\sum p_l = N$.
These can decay at low temperature
only via SU(m) instantons, with a rate proportional to $\exp(- 16
\pi^2/g_m^2)$, which is extremely small given that the SU(m) gauge
coupling is small.
At temperatures of order \pq, sphaleron transitions between vacua
with different axibaryon number will affect
the axibaryon energy density. Nonetheless, this thermal effect
is exponentially suppressed as the universe cools down such that
the order of magnitude of the axibaryon energy density is unlikely
to have time to change significantly.
As in Kim's model, unification of axicolor with other gauge groups
will allow axihadron decays if the axicolored fermions belong to the
same multiplet as some light fermions.
In this model though it seems even more difficult
to unify axicolor with other groups.
Inflation with reheating temperature below the axibaryon mass $M_{\rm
aB}$ [see eq.~(\ref{abar})]
then appears to be a necessary ingredient.
Another problem may be the existence of a large number of
colored particles. Not only does QCD lose asymptotic freedom,
but the strong coupling constant may in fact hit the Landau pole
below $M_{\rm P}$. To study this issue we need to evaluate
$\alpha_s(M_{\rm P})^{-1}$. Below the scale set by the
mass of the PGB's, the effects of the ``axihadrons'' on the
running of $\alpha_s$ are negligible and we can use eq.~(\ref{rge}).
Above some scale $\Lambda_{\rm pert}$ larger than
$4 \pi f_a/\sqrt{N}$ the perturbative renormalization group evolution
can again be used, with $m N + 6$ flavors.
However, at scales between the
mass of the PGB's and $\Lambda_{\rm pert}$, besides the perturbative
contributions from the gluons and the six quark flavors,
there are large
non-perturbative effects of the axicolor interaction
which are hard to estimate. We can write
\begin{equation}
\frac{1}{\alpha_s(M_{\rm P})} = 32.0 + \frac{7}{2 \pi}
\log\left(\frac{M_{\rm P}} {10^{11} \, {\rm GeV}}\right)
- \frac{m N}{3 \pi}
\log\left(\frac{M_{\rm P}}{\Lambda_{\rm pert}}\right)
- \delta_{\rm PGB} - \delta_{\rm axicolor} ~.
\label{evol}
\end{equation}
Here $\delta_{\rm PGB}$ is the contribution from colored PGB's,
and $\delta_{\rm axicolor}$ is the non-perturbative contribution
of the axicolored fermions, which
can be interpreted as the effect of the
axihadrons on the running of $\alpha_s$. Axicolor interactions
have an effect on the size of
these two non-perturbative contributions, but it is unlikely that
they change the signs of the one-loop contributions from PGB's
and axihadrons. Therefore, we expect $\delta_{\rm PGB}$ and
$\delta_{\rm axicolor}$ to be positive.
This is confirmed by the estimate of the hadronic contributions
to the photon vacuum polarization \cite{hlm} within relativistic
constituent quark models, and by the study of the running of
$\alpha_s$ in technicolor theories \cite{hl}, which indicate
\begin{equation}
\delta_{\rm axicolor} > \delta_{\rm PGB} > 0 ~.
\label{nonpert}
\end{equation}
From eq.~(\ref{evol}) one then can see that $\alpha_s^{-1}(M_{\rm P})$
is negative for any $m$ and $N$ larger than the smallest values
allowed by eq.~(\ref{integ}): $m = 4,\, N = 5$.
With these values eq.~(\ref{evol}) becomes
\begin{equation}
\frac{1}{\alpha_s(M_{\rm P})} = 13.4 + \frac{20}{3\pi}
\log\left(\frac{\Lambda_{\rm pert}}{10^{11} \, {\rm GeV}}\right)
- \delta_{\rm PGB} - \delta_{\rm axicolor} ~.
\label{evol1}
\end{equation}
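The constant 13.4 in eq.~(\ref{evol1}) follows from eq.~(\ref{evol}) with $m = 4$, $N = 5$ after splitting the logarithm; $M_{\rm P} = 10^{19}$ GeV is an assumed input in this sketch:

```python
from math import log, pi

# Write log(M_P/Lambda_pert) = log(M_P/10^11 GeV) - log(Lambda_pert/10^11 GeV)
# in eq. (evol); the Lambda-independent piece should give the 13.4 of (evol1).
# Assumed input: M_P = 1e19 GeV.
M_P = 1e19
mN = 4 * 5
const = 32.0 + 7 / (2 * pi) * log(M_P / 1e11) - mN / (3 * pi) * log(M_P / 1e11)

assert abs(const - 13.4) < 0.1
```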
At low energies compared to $\Lambda_{\rm a}$,
$\delta_{\rm PGB}$ can be evaluated
using chiral perturbation theory \cite{hlm}. Furthermore, as discussed
in ref.~\cite{hl} for the case of technicolor theories, the result
can be estimated up to a factor of 2 by computing the
one-loop PGB graphs.
At energies larger than $\Lambda_{\rm a}$
chiral perturbation theory is not
useful and the contribution to $\delta_{\rm PGB}$ is unknown. Keeping
this important caveat in mind we will evaluate the one-loop
PGB contributions. The leading log term from the
$3 m^2$ color triplet PGB's
with mass $M_{(3,\bar{3})}$ [see eq.~(\ref{triplet})]
and the $m^2$ color octet PGB's with mass $(9/4)
M_{(3,\bar{3})}$ is given by
\begin{equation}
\delta_{\rm PGB} \approx K \frac{m^2}{\pi}
\log\left(\sqrt{\frac{2}{3}}
\frac{\Lambda_{\rm pert}}{M_{(3,\bar{3})} }\right) ~.
\label{pgbev}
\end{equation}
$K$ is a constant between 1 and 2 which accounts for higher
order corrections. Using eqs.~(\ref{nonpert})-(\ref{pgbev})
we can write
\begin{equation}
\frac{1}{\alpha_s(M_{\rm P})} < 11.8 - \frac{76 }{3\pi}
\log\left(\frac{\Lambda_{\rm pert}}{f_{\rm a}}\right)
- \frac{20}{3\pi}
\log\left(\frac{10^{11} \, {\rm GeV}}{f_{\rm a}}\right) ~,
\label{con}
\end{equation}
where we used $K = 1$.
The right-hand side of this inequality is negative
because $f_a \lesssim 10^{11}$ GeV [see eq.~(\ref{falim})]
and $ \Lambda_{\rm pert}/f_a
> 4\pi/\sqrt{5} $,
which means that the strong coupling constant hits the
Landau pole below $M_{\rm P}$.
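The negativity of the right-hand side of (\ref{con}) can be checked in the most favorable case, $f_{\rm a} = 10^{11}$ GeV (so the last logarithm vanishes) and $\Lambda_{\rm pert}$ at its minimum $4\pi f_{\rm a}/\sqrt{5}$:

```python
from math import log, pi, sqrt

# RHS of inequality (con) at f_a = 10^11 GeV and
# Lambda_pert = 4 pi f_a / sqrt(N) with N = 5.
rhs = 11.8 - 76 / (3 * pi) * log(4 * pi / sqrt(5))

assert rhs < 0   # alpha_s hits a Landau pole below M_P even in this case
```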
Although the estimate of the non-perturbative effects
on the RGE is debatable, this conclusion seems quite robust.
A possible resolution would be to embed SU(3)$_C$ in a larger gauge
group. In doing so, axicolor will lose asymptotic freedom
unless it is also embedded in the larger group.
Such a unification of color and axicolor would solve
both problems discussed here: heavy stable particles and
the Landau pole of QCD. However, it remains to be proved
that this unification is feasible, given
the large groups already involved.
\section{Supersymmetric axion models}
\label{sec:susy}
\setcounter{equation}{0}
\subsection{Planck scale effects in supersymmetric models}
Apparently it is easier to build supersymmetric models in which
the axion is protected against Planck scale effects because
the holomorphy of the superpotential eliminates many of the higher
dimensional operators. In practice, susy is broken so that
the holomorphy does not ensure $\bar{\theta} < 10^{-9}$.
For example, consider the model presented in \cite{ten}. This is
a GUT model with E$_6\times$U(1)$_{\rm X}$ gauge symmetry under which
the chiral superfields transform as
\begin{equation}
\Phi : \, \overline{351}_0 \; , \;\; \Psi_+ : \, 27_{+1} \; , \;\;
\Psi_- : \; 27_{-1} ~.
\end{equation}
The renormalizable superpotential,
\begin{equation}
W = \kappa\Phi\Psi_+\Psi_- ~,
\label{sup}
\end{equation}
has a U(1)$_{\rm PQ}$ under which $\Phi$ has charge $-2$ and
$\Psi_+$, $\Psi_-$ have charge +1.
This is broken by dimension-6 and higher operators in the
superpotential:
\begin{equation}
W_{\rm nr} = \frac{1}{M_{\rm P}^3}
\left(\frac{\kappa_1}{6}\Phi^6 +
\frac{\kappa_2}{3}
(\Psi_+ \Psi_-)^3 + \frac{\kappa_3}{4}\Phi^4\Psi_+\Psi_-\right)
+ ...
\end{equation}
where the coefficients $\kappa_j$ are expected to be
of order one.
As observed in ref.~\cite{dudas}, their interference with the
renormalizable superpotential gives dimension-7 operators
in the Lagrangian:
\begin{equation}
\frac{1}{M_{\rm P}^3} \Phi^5 \Psi_+^{\dagger} \Psi_-^{\dagger} \; , \;\;
\frac{1}{M_{\rm P}^3} \Psi_{\pm}^5 \Psi_{\mp}^{\dagger} \Phi^{\dagger}
\; , \;\; \frac{1}{M_{\rm P}^3} \Phi^4 \Phi^{\dagger}
\left| \Psi_{\pm} \right|^2
~,
\label{dim7}
\end{equation}
where we use the same notation for the
scalar components as for the corresponding chiral superfields.
The only fields which acquire vevs are the
scalar components of $\Phi$ (the Higgs). Therefore, according to the
arguments of section 2.1, the operators (\ref{dim7})
do not affect the solution to the strong CP
problem because they involve the scalar components of $\Psi_+$ and
$\Psi_-$, which have no vevs.
The lowest dimensional operator in the
supersymmetric Lagrangian formed only of the $\Phi$ scalars
is $\Phi^{11}\Phi^{\dagger 5}$, and is given by the interference
of the $\Phi^6$ and $\Phi^{12}$ terms in the superpotential.
However, the situation changes when soft susy breaking terms
are introduced.
Consider the
\begin{equation}
\kappa^{\prime} m_s \Phi\Psi_+\Psi_-
\label{tril}
\end{equation}
trilinear scalar term, where $\kappa^{\prime}$ is a dimensionless
coupling constant, and $m_s$ is the mass scale of
susy breaking in the supersymmetric standard model.
The exchange of a $\Psi_+$ and a $\Psi_-$ scalar between this operator
and the first operator in (\ref{dim7}) leads at one loop
to a six-scalar effective term in the Lagrangian:
\begin{equation}
- \frac{2\kappa^*\kappa_1\kappa^{\prime}}{(4\pi)^2}
\frac{m_s}{M_{\rm P}^3} \log \left(\frac{M_{\rm P}}{m_s}\right) \Phi^6 ~.
\label{danger}
\end{equation}
The constraint from $\bar{\theta}$ given by eq.~(\ref{first})
yields
\begin{equation}
|\kappa\kappa_1\kappa^{\prime}| < {\cal O}(10^{-18}) ~,
\end{equation}
where we have used $m_s \sim {\cal O}(250~{\rm GeV})$.
Note that there are also one-loop $\Phi^{6}\left|\Phi\right|^{2 n}$
terms which can be summed up.
In addition, once soft susy breaking masses are introduced,
the unwanted five-scalar term
$ \Phi^4 \Phi^{\dagger}$ is induced at one-loop by contracting
the $\Psi$ legs of the
third operator in (\ref{dim7}). This term is independent
of the trilinear soft term (\ref{tril}).
Thus, the coupling constants that appear
in the renormalizable superpotential,
in the soft terms, or in the non-renormalizable terms from the
superpotential have to be very small, contrary to the goal
of this model.
In ref.~\cite{dudas} it is suggested that an additional chiral
superfield, $\Upsilon$, which transforms non-trivially under both
E$_6$ and U(1)$_{\rm X}$, may allow different $X$ charges for
the $\Phi$, $\Psi_1$ and $\Psi_2$ superfields while satisfying
the gauge anomaly cancellation and
avoiding the dangerous PQ breaking operators.
We point out that $\Upsilon$ should transform in a real representation
of E$_6$ to preserve the (E$_6$)$^3$ anomaly cancellation.
The smallest real representation is the adjoint 78, which has index 2
in the normalization where the fundamental 27 and the antisymmetric
351 have indices 1/2 and 25/2, respectively.
The gauge invariance of the renormalizable superpotential
(\ref{sup}) requires $X_{\Psi_1} + X_{\Psi_2} = - X_{\Phi}$,
which, together with the (E$_6$)$^2 \times $U(1)$_{\rm X}$
anomaly cancellation gives $X_{\Upsilon} = - 6 X_{\Phi}$.
Using these equations, we can write the (U(1)$_{\rm X})^3$
anomaly cancellation condition as a relation between the
U(1)$_{\rm X}$ charges of $\Psi_1$ and $\Psi_2$:
\begin{equation}
(X_{\Psi_1} + X_{\Psi_2})
\left(X_{\Psi_1}^2 + \frac{407}{204} X_{\Psi_1}
X_{\Psi_2} + X_{\Psi_2}^2 \right) = 0 ~.
\end{equation}
The only real solution of this equation is $X_{\Psi_1} = - X_{\Psi_2}$
which does not prevent the dangerous operator (\ref{danger}).
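These charge-assignment statements can be verified with exact rational arithmetic, using the indices and dimensions quoted in the text (a sketch, not part of the original derivation):

```python
from fractions import Fraction

# (E6)^2 x U(1)_X anomaly: T(351bar) X_Phi + T(27)(X_Psi1 + X_Psi2)
# + T(78) X_Ups = 0, with X_Psi1 + X_Psi2 = -X_Phi and the indices
# 25/2, 1/2, 2 quoted in the text.
T_phi, T_psi, T_ups = Fraction(25, 2), Fraction(1, 2), Fraction(2)
X_phi = Fraction(1)
X_ups = -(T_phi * X_phi + T_psi * (-X_phi)) / T_ups
assert X_ups == -6 * X_phi

# (U(1)_X)^3 anomaly: dim(351) X_Phi^3 + dim(27)(X1^3 + X2^3)
# + dim(78) X_Ups^3 factors as stated; spot-check at X1 = 2, X2 = 3.
X1, X2 = Fraction(2), Fraction(3)
X_phi2, X_ups2 = -(X1 + X2), 6 * (X1 + X2)
cubic = 351 * X_phi2**3 + 27 * (X1**3 + X2**3) + 78 * X_ups2**3
factored = 16524 * (X1 + X2) * (X1**2 + Fraction(407, 204) * X1 * X2 + X2**2)
assert cubic == factored

# For X1 + X2 != 0, r = X1/X2 would satisfy r^2 + (407/204) r + 1 = 0,
# whose discriminant is negative: no real solution, so X1 = -X2.
assert Fraction(407, 204) ** 2 - 4 < 0
```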
The next real representation, the 650, already looks too large
to allow a viable phenomenology.
Another proposal suggested in \cite{dudas} assumes fermion
condensation, which is now known not to occur in supersymmetric theories
\cite{susynp}.
\subsection{The problem of PQ symmetry breaking scale}
If susy is relevant at the electroweak scale, and
susy breaking is transmitted to the fields of the standard model
by non-renormalizable interactions suppressed by powers of $M_{\rm
P}$,
such as supergravity, then susy should be broken dynamically
at a scale $M_S \sim 10^{11}$ GeV.
This will give scalar masses of order $M_W$.
A gauge singlet in the dynamical susy breaking
(DSB) sector with a vev for the $F$-term of order $M_S^2$
would produce gaugino masses of order $M_W$ \cite{ads}.
However, any gauge singlet is likely to have a mass of order
$M_{\rm P}$, so that its vev would need to be highly fine-tuned.
Nonetheless, gluino, neutralino
and chargino masses of order $M_W$ can be produced without need
for gauge singlets if there are new non-Abelian
gauge interactions which
become strong at $\sim$ 1 TeV \cite{gluino}.
The success of this scheme makes physics at the $M_S$ scale an
important candidate for spontaneously breaking a PQ-symmetry.
More importantly, the existence of $M_S$ in the rather narrow window
allowed for $f_{\rm PQ}$ is worth further exploration.
Nevertheless, models which break both susy and the PQ symmetry
face serious challenges, which were not addressed in the past
\cite{link}.
One obstacle is that the inclusion of colored fields in a model
of dynamical susy breaking
typically results in a strongly coupled
QCD right above $M_S$ \cite{ads}. This problem can be solved
by constructing a PQ sector that communicates with the DSB sector
only through a weak gauge interaction in a manner analogous
to the gauge mediated susy breaking models \cite{dns}.
A more serious problem is the following:
if colored superfields could be included in the
DSB sector, they would have masses of order $10^{11}$ GeV
and a non-supersymmetric spectrum. This would lead to large masses
for the squarks, which in turn will destabilize the electroweak scale.
The troublesome colored fields from the DSB sector
can be avoided if the fields of both the DSB sector and
the visible sector transform under the same global U(1)$_{\rm PQ}$,
which is
spontaneously broken in the DSB sector and explicitly broken in the
visible sector by the color anomaly.
As pointed out in ref.~\cite{relax}, this may be possible because
the axion can be identified with one of the complex phases from
the soft susy breaking terms of the supersymmetric standard model.
However,
it appears very difficult to protect the axion against
Planck scale effects. For example, the $\mu$ and $B$ terms break this
U(1)$_{\rm PQ}$ which means that they should be generated by the vevs
of PQ-breaking products of fields from the DSB sector.
These products of fields are gauge invariant and therefore
Planck scale induced PQ-breaking operators may be induced.
The naturalness of the axion solution is preserved provided
these operators are suppressed by many powers of $M_{\rm P}$,
which in turn requires the vevs from the DSB sector to be much above
$M_S$ in order to generate large enough $\mu$ and $B$ terms.
Thus, this situation seems in contradiction with the cosmological
bounds on the PQ scale.
It should be mentioned though that a larger
\pq\ might be allowed in certain unconventional cosmological
scenarios suggested by string theory \cite{string}.
Note, however, that for larger \pq\ the constraints on PQ-breaking
operators are significantly stronger [see eq.~(\ref{first})].
Another possibility, discussed in refs.~\cite{cla}, is to
relate \pq\ to the susy breaking scale from the visible sector.
The idea is to induce negative squared masses at one-loop for some
scalars, and to balance these soft susy breaking mass terms
against some terms
in the scalar potential coming from the superpotential which are
suppressed by powers of $M_{\rm P}$.
By choosing the dimensionality of these terms, one can ensure
that the minimum of the potential is in the range allowed for \pq.
This mechanism is also very sensitive to Planck scale effects
because it assumes the absence of certain gauge invariant
PQ breaking operators of dimension one, two and three from the
superpotential.
Given these difficulties in
relating \pq\ to the susy breaking scale while avoiding
the harmful low-dimensional operators,
one may consider producing the PQ scale naturally by introducing
some gauge interactions which become strong at about $10^{11}$ GeV
and break the PQ symmetry without breaking susy.
Because this scenario is less constrained, it may be easier to avoid
the PQ-breaking operators.
\section{Conclusions}
\label{sec:conc}
We have argued that an axion model has to satisfy two naturalness
conditions in order to solve the strong CP problem:
\noindent
a) the absence of low-dimensional
Planck-scale induced PQ-breaking operators
formed of fields which acquire vevs;
\noindent
b) the absence of fundamental mass parameters much smaller than
the Planck scale.
If these conditions are not satisfied, the models may
not be ruled out
given that the Planck scale physics is unknown, but
the motivation for the axion models (i.e. avoiding fine-tuning) is
lost.
Non-supersymmetric composite axion models satisfy condition b)
easily. The only phenomenological problem is that they predict heavy
stable particles which are ruled out by the thermal evolution of the
early universe. However, this problem disappears if there is inflation
with reheating temperature below the PQ scale.
Condition a) is more troublesome. It is satisfied by
only one composite axion model \cite{model}, and our estimate shows
that it leads to a Landau pole for QCD. One may hope though that
the uncertainty in the value of $M_{\rm P}$, i.e. the possibility
of quantum gravitational effects somewhat below $10^{19}$ GeV,
combined with unknown non-perturbative effects of axicolor
on the running of the
strong coupling constants, might push the Landau pole just above
$M_{\rm P}$, where it is irrelevant for field theory.
But because this does not seem to be a probable scenario,
it would be useful to study in detail the
possibility of unifying color with axicolor.
By contrast, the existing supersymmetric models do not satisfy condition
a).
The models which attempt to eliminate the PQ-breaking operators
rely on the holomorphy of the superpotential. We have shown that
once susy breaking is taken into account, the PQ-breaking operators
are reintroduced with sufficiently large coefficients (in the absence
of fine-tuning) to spoil the solution to the strong CP problem.
Also, the models that satisfy condition b) by relating the PQ scale
to the susy breaking scale are particularly sensitive
to gauge invariant PQ-breaking operators.
These results suggest the need for further model building efforts.
\section*{Acknowledgements}
I am grateful to Sekhar Chivukula for many helpful discussions
about strongly coupled theories and axions.
I would like to thank Lisa Randall for very useful observations
on the manuscript. I also thank Indranil Dasgupta,
Ken Lane, Martin Schmalz and John Terning for useful discussions,
and Emil Dudas, Tony Gherghetta and Scott Thomas for
valuable correspondence.
{\em This work was supported in part by the National Science
Foundation under grant PHY-9057173, and by the Department of Energy
under grant DE-FG02-91ER40676.}
\section{Introduction}
Without the cancellations induced by a Higgs resonance, the scattering
amplitudes of massive vector bosons grow with rising energy,
saturating the unitarity bounds in the TeV region~\cite{Uni}. Thus
there is a strongly interacting domain which lies within the reach of
the next generation of collider experiments. One usually expects new
resonances which manifest themselves as peaks in the invariant mass
distribution of massive vector boson pairs $VV$ in reactions which
contain the nearly on-shell scattering $V'V'\to VV$ as a subprocess.
As Barger et al.\ have shown~\cite{BCHP}, with a suitable set of
kinematical cuts, different resonance models (in particular, a $1\ {\rm
TeV}$ scalar and a $1\ {\rm TeV}$ vector) can clearly be distinguished
by analyzing the two modes $e^+e^-\to W^+W^-\bar\nu\nu$ and $e^+e^-\to
ZZ\bar\nu\nu$ at a Linear Collider with $1.5\ {\rm TeV}$ CMS energy.
A number of similar analyses for hadron, $e^-e^-$, and muon collisions
have also been performed~\cite{LHC,eminus,muon}.
This result encourages one to consider the same processes in the more
difficult case when resonances do not exist or are out of reach of the
particular experiment. In the following we present results for the
sensitivity on details of the strong interactions as a two-parameter
analysis, carried out in the framework of a complete tree-level
calculation.
\section{Chiral Lagrangian}
Below a suspected resonance region the electroweak theory is properly
parameterized in terms of a gauged chiral Lagrangian which
incorporates the spontaneous breaking of the electroweak symmetry.
This Lagrangian induces a low-energy approximation of scattering
amplitudes organized in powers of the energy~\cite{ChPT}.
Models for strong vector boson scattering are usually embedded in
Standard Model calculations via the Equivalence Theorem~\cite{ET}
and/or the Effective $W$ Approximation~\cite{EWA}. However, they are
not needed for our purpose, since the very nature of chiral Lagrangians
as effective low-energy theories allows a complete calculation without
approximations. For an accurate estimate of the sensitivity the
correct treatment of transversally polarized vector bosons and
interference effects is essential, and the full kinematics of the
process must be known in order to sensibly apply cuts necessary to
isolate the signal.
In our study we made use of the automated calculation package
CompHEP~\cite{CompHEP}. For technical reasons, the chiral Lagrangian
has been implemented in 't Hooft-Feynman gauge:
\begin{equation}
{\cal L} = {\cal L}_{\rm G} + {\cal L}_{\rm GF} + {\cal L}_{\rm FP}
+ {\cal L}_e
+ {\cal L}_0 + {\cal L}_4 + {\cal L}_5
\end{equation}
where
\begin{eqnarray}
{\cal L}_{\rm G} &=& -\textstyle\frac18{\rm tr}[W_{\mu\nu}^2]
- \textstyle\frac14 B_{\mu\nu}^2\\
{\cal L}_{\rm GF} &=&
- \textstyle\frac12 \left(\partial^\mu W_\mu^a
+ i\frac{gv^2}{4}{\rm tr}[U\tau^a]\right)^2\nonumber\\
&&
-{} \textstyle\frac12 \left(\partial^\mu B_\mu
- i\frac{g'v^2}{4}{\rm tr}[U\tau^3]\right)^2\\
{\cal L}_e &=& \bar e_{\rm L}iD\!\!\!\!/\,\,e_{\rm L}
+ \bar\nu_{\rm L}iD\!\!\!\!/\,\,\nu_{\rm L}
+ \bar e_{\rm R}iD\!\!\!\!/\,\,e_{\rm R}\\
{\cal L}_0 &=& \textstyle\frac{v^2}{4}{\rm tr}
[D_\mu U^\dagger D^\mu U]\\
{\cal L}_4 &=& \alpha_4\,{\rm tr}[V_\mu V_\nu]^2 \\
{\cal L}_5 &=& \alpha_5\,{\rm tr}[V_\mu V^\mu]^2
\end{eqnarray}
with the definitions
\begin{eqnarray}
U &=& \exp(-iw^a\tau^a/v) \\
V_\mu &=& U^\dagger D_\mu U
\end{eqnarray}
\section{Parameters}
To leading order the chiral expansion contains two independent
parameters which give rise to $W$ and $Z$ masses. The fact that they
are related, \emph{i.e.}, the $\Delta\rho$ (or $\Delta T$) parameter
is close to zero, suggests that the new strong interactions respect a
custodial $SU_2^L\times SU_2^R$ symmetry~\cite{SU2c}, spontaneously
broken to the diagonal $SU_2$.
In next-to-leading order there are eleven CP-even chiral parameters.
Two of them correspond to the $S$ and $U$ parameters~\cite{STU}. Four
additional parameters describe the couplings of three gauge bosons.
They can be determined, e.g., at $e^+e^-$ colliders by analyzing $W$
boson pair production~\cite{TGV}. In our study we assume that these
parameters are known with sufficient accuracy. For simplicity, we set
them to zero.
The remaining five parameters are visible only in vector boson
scattering. If we assume manifest custodial symmetry, only two
independent parameters $\alpha_4$ and $\alpha_5$ remain. They can be
determined by measuring the total cross section of vector boson
scattering in two different channels.
In the present study we consider the two channels $W^+W^-\to W^+W^-$
and $W^+W^-\to ZZ$ which are realized at a Linear Collider in the
processes $e^+e^-\to W^+W^-\bar\nu\nu$ and $e^+e^-\to ZZ\bar\nu\nu$.
In the limit of vanishing gauge couplings the amplitudes for the two
subprocesses are related:
\begin{eqnarray}
a(W^+_L W^-_L\to Z_LZ_L) &=& A(s,t,u)\\
a(W^+_L W^-_L\to W^+_LW^-_L) &=& A(s,t,u) + A(t,s,u)
\end{eqnarray}
where
\begin{equation}
A(s,t,u) = \frac{s}{v^2} + \alpha_4\frac{4(t^2+u^2)}{v^4}
+ \alpha_5\frac{8s^2}{v^4}
\end{equation}
with $v=246\ {\rm GeV}$. These relations hold only for the
longitudinal polarization modes. Although all polarization
modes are included in the present study, these relations lead us to
expect an increase in the rate for both processes with positive
$\alpha_4$ and $\alpha_5$.
Negative values tend to reduce the rate as long as the leading term is
not compensated.
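As a rough numerical illustration (ours, not part of the original analysis), the amplitude $A(s,t,u)$ above can be evaluated in the massless limit, where $t=-s(1-\cos\theta)/2$ and $u=-s(1+\cos\theta)/2$; it shows directly how positive $\alpha_4$, $\alpha_5$ enhance the rate and negative values reduce it:

```python
# Illustrative sketch (ours): evaluate the low-energy amplitude
# A(s,t,u) above for W+W- -> ZZ in the massless limit,
# with t = -s(1-cos th)/2 and u = -s(1+cos th)/2.
v = 246.0  # GeV

def A(s, t, u, a4=0.0, a5=0.0):
    """Leading chiral amplitude plus alpha_4, alpha_5 corrections."""
    return s/v**2 + a4*4.0*(t**2 + u**2)/v**4 + a5*8.0*s**2/v**4

s = 1000.0**2                  # M_inv(ZZ) = 1 TeV
t, u = -s/2.0, -s/2.0          # 90-degree scattering
print(A(s, t, u))              # leading term only, ~16.5
print(A(s, t, u, a5=0.005))    # positive alpha_5 enhances the amplitude
print(A(s, t, u, a5=-0.005))   # negative alpha_5 reduces it
```

At $M_{\rm inv}=1\ {\rm TeV}$ the $\alpha_5=0.005$ contribution is already comparable to the leading term.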
\section{Calculation}
Using the above Lagrangian, the full squared matrix elements for the
processes $e^+e^-\to W^+W^-\bar\nu\nu$ and $e^+e^-\to ZZ\bar\nu\nu$
have been analytically calculated and numerically integrated at
$\sqrt{s}=1600\ {\rm GeV}$ (omitting $Z$ decay diagrams, see below).
The backgrounds $e^+e^-\to W^+W^- e^+e^-$ and $e^+e^-\to W^\pm Z
e^\mp\nu$ are relevant if the electrons escape undetected through the
beampipe. In that region they receive their dominant contribution
through $\gamma\gamma$, $\gamma Z$ and $\gamma W$ fusion which has
been calculated within the Weizs\"acker-Williams
approximation~\cite{EPA}.
A set of optimized cuts to isolate various strongly interacting $W$
signals has been derived in~\cite{BCHP}. It turns out that similar
cuts are appropriate in our case:
\begin{center}
$|\cos\theta(W)|<0.8$ \\
$150\ {\rm GeV}<p_T(W)$ \\
$50\ {\rm GeV} < p_T(WW) < 300\ {\rm GeV}$ \\
$200\ {\rm GeV} < M_{\rm inv}(\bar\nu\nu)$ \\
$700\ {\rm GeV} < M_{\rm inv}(WW) < 1200\ {\rm GeV}$
\end{center}
The lower bound on $p_T(WW)$ is necessary because of the large $W^+W^-
e^+e^-$ background which is concentrated at low $p_T$ if both
electrons disappear into the beampipe. We have assumed an effective
opening angle of $10$ degrees. The cut on the $\bar\nu\nu$ invariant
mass removes events where the neutrinos originate from $Z$ decay,
together with other backgrounds~\cite{BCHP}. For the $ZZ$ final state
the same cuts are applied, except for $p_T^{\rm min}(ZZ)$ which can be
reduced to $30\ {\rm GeV}$.
The restriction to a window in $M_{\rm inv}(WW)$ between $700$ and
$1200\ {\rm GeV}$ keeps us below the region where (apparent) unitarity
violation becomes an issue. Furthermore, it fixes the scale of the
measured $\alpha$ values, which in reality are running parameters, at
about $1\ {\rm TeV}$. In any case, including lower or higher
invariant mass values does not significantly improve the results.
For the analysis we use hadronic decays of the $W^+W^-$ pair and
hadronic as well as $e^+e^-$ and $\mu^+\mu^-$ decays of the $ZZ$ pair.
In addition, we have considered $WW\to jj\ell\nu$ decay modes which
are more difficult because of the additional neutrino in the final
state. We find that with appropriately modified cuts the backgrounds
can be dealt with also in that case, although the resulting
sensitivity is lower than for hadronic decays. In the following
results the leptonic $W$ decay modes are not included.
We adopt the dijet reconstruction efficiencies and misidentification
probabilities that have been estimated in~\cite{BCHP}. Thus we assume
that a true $W$ ($Z$) dijet will be identified as follows:
\begin{eqnarray}
W &\to& 85\%\;W,\ 10\%\;Z,\ 5\%\;\mbox{reject}\\
Z &\to& 22\%\;W,\ 74\%\;Z,\ 4\%\;\mbox{reject}
\end{eqnarray}
With $b$ tagging the $Z\to W$ misidentification probability could be
further reduced, improving the efficiency in the $ZZ$ channel.
Including the branching ratios and a factor $2$ for the $WZ$
background, we have the overall efficiencies
\begin{eqnarray}\label{eps}
\epsilon(WW) &=& 34\% \nonumber\\
\epsilon(ZZ) &=& 34\% \\
\epsilon(WZ) &=& 18\%\;\mbox{id.~as $WW$},\ 8\%\;\mbox{as $ZZ$}\nonumber
\end{eqnarray}
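The overall efficiencies above can be roughly reconstructed from the identification matrix and the relevant branching ratios. The sketch below is ours; the branching-ratio values are approximate PDG numbers that we supply, not values quoted in the text:

```python
# Sketch (ours): roughly reconstruct the overall efficiencies above
# from branching ratios (assumed approximate PDG values, not given in
# the text) and the dijet identification matrix.
BR_W_had = 0.68              # W -> jj
BR_Z_had = 0.70              # Z -> jj
BR_Z_ll  = 0.067             # Z -> e+e- or mu+mu-
id_WW, id_WZ = 0.85, 0.10    # true W dijet tagged as W / as Z
id_ZW, id_ZZ = 0.22, 0.74    # true Z dijet tagged as W / as Z

eps_WW = (BR_W_had*id_WW)**2                    # both W's hadronic and tagged
eps_ZZ = (BR_Z_had*id_ZZ + BR_Z_ll)**2          # leptonic Z assumed fully tagged
eps_WZ_as_WW = 2*BR_W_had*BR_Z_had*id_WW*id_ZW  # factor 2 for W+Z and W-Z
eps_WZ_as_ZZ = 2*BR_W_had*BR_Z_had*id_WZ*id_ZZ
print(eps_WW, eps_ZZ, eps_WZ_as_WW, eps_WZ_as_ZZ)  # ~0.33, 0.34, 0.18, 0.07
```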
\begin{figure}[htb]
\unitlength1mm
\leavevmode
\begin{center}
\begin{picture}(80,80)
\put(10,50){\includegraphics{ptWW.1}}
\put(10,5){\includegraphics{mWW.1}}
\end{picture}
\end{center}
\caption{Differential distributions in $p_T$ and $M_{\rm inv}$ of the
$W$ pair (after cuts). The dark area shows the background from $WWee$
and $WZe\nu$ final states; the light area is the rate after the signal
process $e^+e^-\to W^+W^-\bar\nu\nu$ with $\alpha_4=\alpha_5=0$ has
been added; the upper curve denotes the corresponding distribution for
$\alpha_4=0$, $\alpha_5=0.005$. The $WW$ reconstruction efficiency
has not been included.}
\label{WWplots}
\end{figure}
\section{Results}
The simulations have been carried out for a number of different values
of the two parameters $\alpha_4$ and $\alpha_5$, such that a
two-parameter analysis was possible for all observables.
Fig.~\ref{WWplots} shows the differential distributions in the
transverse momentum and invariant mass of the $WW$ pair for $e^+e^-\to
W^+W^-\bar\nu\nu$ including backgrounds after all cuts have been
applied. The shown signal distribution is similar in shape to a
broad scalar (Higgs) resonance; however, the total signal rate is
smaller.
Both channels are enhanced by positive values of the two parameters,
the $ZZ$ channel being less sensitive to $\alpha_4$ than the $WW$
channel. With actual data at hand one would perform a
maximum-likelihood fit to the various differential distributions. In
our analysis, however, we only use the total cross sections after
cuts. For $\alpha_4=\alpha_5=0$ we find $80$ $WW$ and $67$ $ZZ$
events if $200\ {\rm fb}^{-1}$ of integrated luminosity with
unpolarized beams and the efficiencies~(\ref{eps}) are assumed.
In Fig.~\ref{contour} we show the $\pm 1\sigma$ bands resulting from
the individual channels as well as the two-parameter confidence region
centered at $(0,0)$ in the $\alpha_4$-$\alpha_5$ plane. The total
event rate allows for a second solution centered roughly at
$(-0.017,0.005)$ which corresponds to the case where the
next-to-leading contributions in the chiral expansions are of opposite
sign and cancel the leading-order term. This might be considered as
unphysical; in any case, this part of parameter space could be ruled
out by performing a fit to the differential distributions or by
considering other channels such as $WZ$ [possibly including results
from the LHC].
\begin{figure}[hbt]
\unitlength 1mm
\leavevmode
\begin{center}
\begin{picture}(80,74)
\put(10,3){\includegraphics{contour.1}}
\end{picture}
\end{center}
\caption{Exclusion limits for unpolarized beams. The shaded bands
display the $\pm 1\sigma$ limits resulting from either one of the two
channels; the lines show the combined limits at the $\chi^2=1,3,5$ level.
[For Gaussian distributions, this corresponds to a $39\%$, $78\%$,
$92\%$ confidence level, respectively.]}
\label{contour}
\end{figure}
Since in both channels the signal part is generated only by the
combination of left-handed electrons and right-handed positrons,
polarizing the incident beams enhances the sensitivity of the
experiments. Assuming $90\%$ electron and $60\%$ positron
polarization, the signal rate increases by a factor $3$. For
the $WZ$ background the enhancement is $1.75$, whereas the
$W^+W^-e^+e^-$ background remains unchanged. We now find $182$ $WW$
and $193$ $ZZ$ events.
\begin{figure}[hbt]
\unitlength 1mm
\leavevmode
\begin{center}
\begin{picture}(80,74)
\put(10,3){\includegraphics{contour.2}}
\end{picture}
\end{center}
\caption{Exclusion limits for polarized beams.}
\label{contour-pol}
\end{figure}
Here we have not taken into account that part of the intrinsic
background to $e^+e^-\to W^+W^-(ZZ)\bar\nu\nu$ is not due to $WW$
fusion diagrams and will therefore not be enhanced, and that the cuts
could be further relaxed in the polarized case. Thus the actual
sensitivity will be improved even more.
\section{Summary}
As our analysis shows, a Linear Collider is able to probe the chiral
parameters $\alpha_4$ and $\alpha_5$ down to a level of $10^{-3}$
which is well in the region where the actual values are expected by
dimensional analysis. Full energy ($\sqrt{s}=1.6\ {\rm TeV}$) and
full luminosity ($200\ {\rm fb}^{-1}$) are needed to achieve this goal.
Electron and positron beam polarization both improve the sensitivity.
With several years of running time a precision measurement of chiral
parameters seems to be a realistic perspective, rendering a meaningful
test of strongly interacting models even in the pessimistic case where
no resonances can be observed directly.
\section{Introduction}
Over 50 blazars have been detected as {$\gamma$-ray\ } sources in
the GeV energy range by the {\it EGRET} detector on the {\it Compton Gamma-Ray
Observatory} (Fichtel, {\it et al.\ } 1994; Thompson, {\it et al.\ } 1995, 1996).
In contrast,
only two or three blazars have been detected at TeV energies, only one of
which is a detected GeV source. There are many {\it EGRET} blazars with
differential photon spectra which are $E^{-2}$ power-laws or flatter. These
sources would be detectable by telescopes such as the Whipple telescope
in the TeV energy range, assuming that their spectra extrapolate to TeV
energies. In this paper, we address the questions: (1) Why has only one of the
{\it EGRET} sources has been detected at TeV energies?, and (2) Which blazars
are likely to be TeV sources?
We have already addressed part of this problem by pointing out
the critical effect of absorption of high energy {$\gamma$-rays\ } between the source
and the Earth by pair-production interactions with the intergalactic infrared
background (Stecker, De Jager \& Salamon 1992)
In a series of papers (Stecker \& De Jager 1997 and references therein),
we have shown that {$\gamma$-rays\ } with
energies greater than $\sim 1$ TeV will be removed from the spectra of sources
with redshifts $>$ 0.1.
Absorption
effectively eliminates flat spectrum radio quasars (FSRQs) as TeV sources.
The nearest {\it EGRET}
quasar, 3C273, lies at a redshift of 0.16. This source is also a
``mini-blazar'' which, in any case, has a steep spectrum at GeV energies.
The next closest {\it EGRET} quasar, 1510-089, has a redshift of 0.361.
At this redshift, we estimate that more than $\sim$ 99\% of the original flux
from the source will be absorbed at TeV energies (Stecker \& De Jager 1997).
Although the source spectra of FSRQs may not extend to TeV energies,
their distance alone makes them unlikely candidates as TeV sources. Therefore,
we consider here the more nearby blazars, which are all BL Lacertae
objects.
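For orientation (our sketch, not a calculation from the paper), the quoted $\sim$99\% absorption translates directly into a pair-production optical depth of several:

```python
import math
# Sketch (ours): ">99% of the flux absorbed" corresponds to an
# optical depth tau via F_obs/F_emitted = exp(-tau).
surviving = 0.01
tau = -math.log(surviving)
print(tau)  # ~4.6, i.e. well into the optically thick regime
```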
\section{Synchrotron and Compton Spectra of XBLs and RBLs}
An extensive exposition of blazar spectra has recently been given by
Sambruna, Maraschi \& Urry (1996). The spectral energy distributions
(SEDs) of blazars were considered by type. With the sequence FSRQs,
RBLs, XBLs, they found a decreasing bolometric luminosity in the radio to
X-ray region and an increasing frequency for the peak in the SED of the
source. Two alternative explanations have been proposed to explain this.
There is the ``orientation hypothesis'', which states that these sources
(or at least the BL Lacs) have no significant physical differences
between them; rather the differences in luminosity and spectra result from
relativistic beaming effects, with XBL jets being observed with larger
angles to the line-of-sight than RBLs (Maraschi, {\it et al.\ } 1986;
Ghisellini \& Maraschi 1989; Urry, Padovani \& Stickel 1991;
Celotti, {\it et al.\ } 1993). In the alternative interpretation, the differences
between RBLs and XBLs must be attributed, at least in part, to real physical
differences (Giommi \& Padovani 1994; Padovani and Giommi 1995; Kollgaard,
Gabuzda \& Feigelson 1996; Sambruna, {\it et al.\ } 1996).
To understand the spectra of blazars, their SEDs are
broken into two parts. The lower
frequency part, which can be roughly described by a convex parabolic
$\nu F_{\nu}$ spectrum, is generally considered to be produced by synchrotron
radiation of relativistic electrons in the jet. The higher energy part, which
includes the {$\gamma$-ray\ } spectrum, is usually considered to
be produced by Compton radiation
from these same electrons. In the SEDs of XBLs, the X-ray emission comes from
the high energy end of the synchrotron emission, whereas in RBLs the X-ray
emission is from Compton scattering. This situation produces a bimodal
distribution in the broad-range radio to X-ray spectral index, $\alpha_{rx}$,
which can be used to classify BL Lac objects as XBL-like or RBL-like, or
alternatively HBLs (high frequency peaked BL Lacs) and LBLs (low frequency
peaked BL Lacs) (Padovani \& Giommi 1995, 1996; Sambruna, {\it et al.\ } 1996;
Lamer, Brunner \& Staubert 1996).
If real differences exist between RBLs and XBLs, one might suspect that XBLs
are more likely to be TeV sources than RBLs.
This is because in XBLs (HBLs), there is evidence from the synchrotron
SEDs that relativistic electrons are accelerated to higher energies than in
RBLs (LBLs) ({\it e.g.}, Sambruna, {\it et al.\ } 1996). These electrons,
in turn, should Compton scatter to produce higher energy {$\gamma$-rays\ } in XBLs
than in RBLs.
In fact, of the over 50 blazars seen by {\it EGRET} in the GeV range,
including 14 BL Lacs (based on the observations given by Thompson,
{\it et al.\ } 1995, 1996; Vestrand, Stacy \& Sreekumar 1995 and Fichtel, {\it et al.\ } 1996),
only two, {\it viz.} Mrk 421 and PKS 2155-304, are
XBLs.\footnote{It is not clear whether the physics of the sources favors
RBLs as GeV sources or whether this is a demographic effect. Observed
RBLs may be an order of magnitude more abundant than XBLs (Padovani \&
Giommi 1995); however this may be due to selection effects
(Urry \& Padovani 1995; see also Maraschi, Ghisellini, Tanzi \& Treves 1986).}
In contrast, {\it only} XBLs have been seen at TeV energies.
Thus, the {$\gamma$-ray\ } observations lend further support to the LBL-HBL
spectral difference hypothesis. We will consider this point quantitatively
below.
\section{BL Lacertae Objects as TeV Gamma-Ray Sources}
In accord with our estimates of intergalactic absorption,
the only extragalactic TeV
{$\gamma$-ray\ } sources which have been reported are nearby BL Lac objects.
The GeV {$\gamma$-ray\ } source Mrk 421,
whose redshift is 0.031, was the first blazar detected at TeV energies
(Punch, {\it et al.\ } 1992). A similar BL Lac object, Mrk 501, whose redshift is
0.034, was detected more
recently (Quinn, {\it et al.\ } 1996), although it was too weak at GeV energies to
be detected by {\it EGRET}. Another BL Lac object, 1ES2344+514, whose
redshift is 0.044, was recently
reported by the Whipple group as a tentative detection (Schubnell 1996). This
could be the third BL Lac object at a redshift less than 0.05 detected at
TeV energies.
These observations are suggestive when considered in the
context of radio and X-ray observations of BL Lac objects.
If $\log (F_{X}/F_{r}) < -5.5$ for a BL Lac object,
the source falls in the observational category of a radio-selected
BL Lac object (RBL), whereas if $\log (F_{X}/F_{r}) > -5.5$, the object
is classified as an X-ray selected BL Lac (XBL) (Giommi \& Padovani
1994). Using this criterion,
{\it only XBLs have been detected
at TeV energies, whereas the RBL ON231 (z=0.1), with the hardest observed GeV
spectrum (Sreekumar, {\it et al.\ } 1996), was not seen at TeV energies.}
We will show below that this result may be easily
understood in the context of simple SSC models. We further predict that
only nearby XBLs will be extragalactic TeV sources.
\section{SSC Models of BL Lacs}
The most popular mechanisms proposed
for explaining blazar {$\gamma$-ray\ } emission have involved either (1)
the SSC mechanism, {\it viz.}, Compton
scattering of synchrotron radiation in the jet with the same electrons
producing both radiation components (Bloom \& Marscher 1993 and references
therein), or (2) Compton scattering
from soft photons produced external to the jet in a hot accretion disk around
a black hole at the AGN core (Dermer \& Schlickheiser 1993), possibly
scattered into the jet by surrounding clouds (Sikora, Begelman
\& Rees 1994).
During the simultaneous X-ray and TeV flaring of the XBL Mrk421 in May
of 1994, it was observed
that the flare/quiescent flux ratios were similar for both X-rays and
TeV $\gamma$-rays, whereas the flux at and below UV frequencies
and that at GeV energies remained constant. This observation can be understood in the context of an SSC model with the high energy tail of the radiating
electrons being enhanced during the flare and the low energy electron spectrum
remaining roughly constant (Macomb, {\it et al.\ } 1995, Takahashi, {\it et al.\ } 1996).
It is plausible to assume that the SSC mechanism operates generally in BL
Lac objects, since these objects (by definition) usually do not show evidence
of emission-line clouds to scatter external seed photons.
The fact that the TeV photons did not flare much more dramatically than the
X-rays implies that the enhanced high-energy electrons were scattering off
a part of the synchrotron SED which remained constant (Takahashi, {\it et al.\ } 1996).
This leads to the important conclusion that
the TeV $\gamma$-rays are not the result of the inverse Compton scattering off
the X-rays, even though the synchrotron-produced luminosity peaked in the
X-ray range.
This observation can be understood if the TeV
$\gamma$-rays were produced by Compton scattering off
photons in the UV and optical regions of the SED in which the luminosity
remained constant during the flare. This situation could have occurred during
the flare if scatterings off optical and UV photons
occurred in the Thomson regime whereas scatterings off the more
dominant X-rays would have been suppressed by being in the
Klein-Nishina (KN) range.
We therefore deduce that during the flare the transition between
the Thomson and KN regimes occurred at a soft photon
energy of $\sim$ 10 eV.
Thus, scatterings off X-ray photons would have occurred in the extreme
KN limit.
The boundary between Compton scattering in the Thomson and KN
limits is given by the condition
$\epsilon E_{\gamma}/\delta ^2m^2c^4 \sim 1$, where $\epsilon$ is the energy
of the soft photon being upscattered and $E_{\gamma}$ is the energy of the
high-energy {$\gamma$-ray\ } produced and $\delta = [\Gamma(1-\beta\cos\theta)]^{-1}$
is the Doppler factor of the blazar jet.
(We denote quantities in the rest system of the
source with a prime. Unprimed quantities refer to the observer's frame.)
The factor of $\delta ^2$ results from the
Doppler boosting of both photons from the rest frame of the emitting
region in the jet.
According to the above condition, the Doppler factor which
produces a Thomson-KN transition
for soft photons near 10 eV is given by
\begin{equation}
\label{doppler}
\delta \approx 6\epsilon_{10}^{1/2}E_{\rm TeV}^{1/2}
\end{equation}
where $\epsilon_{10}=(\epsilon/10\: {\rm eV})$ and $E_{\rm TeV}=
(E_{\gamma}/1\: {\rm TeV}) $.\footnote{This value of $\delta$ is
consistent with the condition that the jet be transparent to {$\gamma$-rays\ }
(see, {\it e.g.}, Mattox, {\it et al.\ } 1996).}
From this condition, it follows that the Lorentz factor of the scattering
electron in the source frame $\gamma_e'$, and the magnetic field
strength $B^{\prime}$, obtained from
the expression for the characteristic synchrotron frequency
$\nu_{\rm s}' \simeq 0.19 (eB'/m_{e}c)\gamma_{e}'^2$ of the soft photon,
are given by
\begin{equation}
\label{gamma}
\gamma_{e}'\simeq 3\times 10^5\epsilon_{10}^{-1/2}E_{\rm TeV}^{1/2}
\qquad\mbox{from}\qquad E_{\gamma}\sim\frac{4}{3}\gamma_{e}'^{2}\epsilon
\end{equation}
and
\begin{equation}
\label{B}
B^{\prime}
\simeq 0.2\epsilon_{\rm keV}\epsilon_{10}^{1/2}E_{\rm TeV}^{-3/2}\;\;{\rm G}
\end{equation}
where $\epsilon_x = \epsilon_{\rm keV}$ keV is the characteristic
X-ray synchrotron photon energy $h\nu_{s}$, resulting
from electrons with energy
$\gamma_{e}'mc^2$ in a B-field of strength $B^{\prime}$.
Taking $\epsilon_{10}$, $\epsilon_{\rm keV}$ and $E_{\rm TeV}$
equal to unity in
eq.(\ref{B}),
we obtain a value of
$B^{\prime}\sim 0.2$ G, which is
consistent with other estimates (Takahashi, {\it et al.\ } 1996).
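As a numerical cross-check (ours), the Doppler factor, electron Lorentz factor, and magnetic field estimates above can be reproduced for the fiducial values $\epsilon=10$ eV, $E_\gamma=1$ TeV, and $\epsilon_x=1$ keV:

```python
import math
# Sketch (ours): verify the delta, gamma_e' and B' estimates above
# for eps = 10 eV seed photons, E_gamma = 1 TeV, eps_x = 1 keV.
ME   = 0.511e6           # electron rest energy, eV
H    = 4.136e-15         # Planck constant, eV s
eps  = 10.0              # seed photon energy, eV   (epsilon_10 = 1)
Egam = 1.0e12            # gamma-ray energy, eV     (E_TeV = 1)
epsx = 1.0e3             # X-ray photon energy, eV  (epsilon_keV = 1)

delta = math.sqrt(eps*Egam)/ME            # Thomson/KN boundary condition
gam   = math.sqrt(3.0*Egam/(4.0*eps))     # from E_gamma ~ (4/3) gamma'^2 eps
nu_s  = epsx/(H*delta)                    # comoving synchrotron frequency, Hz
B     = nu_s/(0.19*1.76e7*gam**2)         # Gauss; eB'/(m_e c) = 1.76e7 B' rad/s
print(delta, gam, B)   # ~6.2, ~2.7e5, ~0.16 G
```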
For Mrk 421 we find that the ratio of bolometric Compton to synchrotron
luminosities $L_{\rm C}/L_{\rm syn}=U_o'/U_B'\sim 1$,
where $U_o'$ is the rest
frame energy density in the IR to UV range (that of the seed photons),
and $U_B'=B'^2/8\pi$ is the magnetic energy density.
From this analysis we can also obtain an estimate for the
size of the optical emitting region, $r^{\prime}$, by noting that
\begin{equation}
\label{uo}
U_o'=\delta^{-4}L_o/4\pi r'^2c
\end{equation}
({\it e.g.}, Pearson \& Zensus 1987), where $L_{o}$ is
the luminosity
of the source in the optical-UV range $\sim 2\times 10^{44}$ erg s$^{-1}$.
From this, one obtains
\begin{equation}
\label{r'}
r' \sim 2\times 10^{16}
\epsilon_{10}^{-3/2}E_{\rm TeV}^{1/2}
\epsilon_{\rm keV}^{-1}\;\;{\rm cm}.
\end{equation}
The optical variability
timescale, given by $\tau_o\sim r'/c\delta$,
is much longer than the X-ray and TeV flare timescales. This implies that
during the flare, impulsive acceleration of the high-energy tail of the
relativistic electron
distribution occurred over a much smaller region than that occupied by the
bulk of the relativistic electron population.
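The size estimate above can likewise be checked numerically (our sketch, using the rounded values $\delta\approx 6$ and $B'\approx 0.2$ G quoted in the preceding estimates):

```python
import math
# Sketch (ours): size of the optical emitting region, setting
# U_o' = U_B' (since L_C/L_syn ~ 1) in U_o' = L_o/(4 pi r'^2 c delta^4).
C     = 3.0e10           # speed of light, cm/s
L_o   = 2.0e44           # optical-UV luminosity, erg/s
delta = 6.0              # rounded Doppler factor from the text
B     = 0.2              # G, rounded field estimate from the text
U_B   = B**2/(8.0*math.pi)                          # erg/cm^3
r = math.sqrt(L_o/(delta**4 * 4.0*math.pi*C*U_B))   # solve for r'
print(r)   # ~1.6e16 cm, consistent with the ~2e16 cm quoted above
```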
\section{XBL TeV Source Candidates}
Within the SSC scenario justified above for Mrk 421,
we have used simple scaling arguments to predict the $\gamma$-ray fluxes in
different energy bands.
A general property of the SSC mechanism is that the Compton component has
a spectrum which is similar to the synchrotron component, but upshifted
by $\sim\gamma_{e,max}'^2$ (up to the KN limit),
where $\gamma_{e,max}'$ is the maximum electron Lorentz factor.
Thus, by comparing the synchrotron and Compton spectral components of Mrk 421,
which are both roughly parabolic on a logarithmic
$\nu F_{\nu}$ plot (Macomb, {\it et al.\ } 1995),
we find an upshifting factor $\sim 10^9$ is required.
The implied value of $\gamma_{e,max} \sim 10^{4.5}$ is consistent with
that given in eq.(\ref{gamma}).
We note that the radio to optical and 0.1 to 1 GeV photon spectral indices
of the {\it EGRET} source XBLs are flatter than $E^{-2}$ (Vestrand, {\it et al.\ } 1995;
Sreekumar, {\it et al.\ } 1996) and the X-ray and Mrk 421 TeV
spectra are steeper than $E^{-2}$ (Mohanty, {\it et al.\ } 1993; Petry, {\it et al.\ } 1996), as
expected for the parabolic spectral shapes.
We assume for simplicity that all XBLs have the same
properties as those found for Mrk 421.
Both XBLs which have been detected by {\it EGRET}, Mrk421 and
PKS2155-304, have $L_{\rm C}/L_{\rm syn}\sim 1$. We will assume that this
ratio is the same for all XBLs.
The similarity between the synchrotron and Compton components,
with the upshifting factor of $\sim 10^9$ discussed
above, allows us to derive the following scaling law:
\begin{equation}
\label{scale}
\frac{\nu_oF_o}{L_{\rm syn}}\simeq\frac{\nu_{\rm GeV}F_{\rm GeV}}{L_{\rm C}}
\;\;{\rm and}\;\;
\frac{\nu_xF_x}{L_{\rm syn}}\simeq\frac{\nu_{\rm TeV}F_{\rm TeV}}{L_{\rm C}},
\end{equation}
From this equation, and assuming that $L_{\rm C}/L_{\rm syn}\sim 1$,
we obtain the energy fluxes for the GeV and TeV
ranges,
\begin{equation}
\label{ef}
\nu_{\rm GeV}F_{\rm GeV} \sim \nu_oF_o \;\;{\rm and}\;\;
\nu_{\rm TeV}F_{\rm TeV} \sim \nu_xF_x
\end{equation}
In order to select good candidate TeV sources, we have used the {\it
EINSTEIN} slew survey sample given by Perlman, {\it et al.\ } (1996) to choose
low-redshift XBLs.
Using Eq.(\ref{ef}), we then calculated fluxes above 0.1
GeV for these sources.
We have normalized our calculations to the observed {\it EGRET}
flux for Mrk 421.
The energy fluxes $F_o$ and $F_x$ which we used in
the calculation are from Perlman, {\it et al.\ } (1996).
The primary uncertainties in our calculations stem
from our assumption that $(L_{C}/L_{syn}) \sim 1$ for all XBLs, from
the non-simultaneity of the data in different energy bands, and from the
fact that the synchrotron and Compton SEDs are not identical.
In order to calculate integral fluxes for these sources, we have assumed that
they have $E^{-1.8}$ photon spectra at energies between 0.1 and
10 GeV, the average spectral index for BL Lacs in this energy range.
We have also assumed an $E^{-2.2}$ photon source spectrum above 0.3 TeV
for all of these sources, based on preliminary data on Mrk 421 from
the Whipple collaboration (Mohanty, {\it et al.\ } 1993).
We have taken account of
intergalactic absorption by using an optical depth which is an average
between Models 1 and 2 of Stecker \& de Jager (1997).
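Schematically, the procedure amounts to a linear rescaling of the Mrk 421 normalization by each candidate's X-ray flux, followed by a spectral extrapolation and an absorption correction. In the sketch below (ours), the X-ray energy fluxes are placeholders, not the Perlman et al.\ (1996) values:

```python
# Sketch (ours): apply the scaling law above, normalized to Mrk 421.
# Both X-ray energy fluxes below are PLACEHOLDER values, not taken
# from Perlman et al. (1996).
F_mrk421_tev = 2.3e-11     # ph cm^-2 s^-1 above 0.3 TeV (normalization)
nuFx_mrk421  = 4.0e-11     # erg cm^-2 s^-1 (placeholder)
nuFx_cand    = 2.0e-11     # erg cm^-2 s^-1 (placeholder candidate)

# nu_TeV F_TeV ~ nu_x F_x implies the TeV flux scales linearly with F_x:
F_cand_tev = F_mrk421_tev * (nuFx_cand / nuFx_mrk421)

# extrapolate an E^-2.2 photon spectrum from >0.3 TeV to >1 TeV
# (before intergalactic absorption, which suppresses it further):
F_cand_1tev = F_cand_tev * (1.0/0.3)**(-1.2)
print(F_cand_tev, F_cand_1tev)
```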
Table 1 lists 23 XBLs at redshifts less than 0.2, giving our calculated
fluxes for these sources for energies above 0.1 GeV, 0.3 TeV and 1 TeV.
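The normalization and extrapolation steps described above amount to simple power-law scalings, sketched below. Only the Mrk 421 flux above 0.1 GeV is taken from the text; the example flux ratio is invented for illustration.

```python
# Hedged sketch of the flux normalization described above.  Only the Mrk 421
# integral flux above 0.1 GeV (1.43e-7 cm^-2 s^-1) is from the text; the
# example ratio of optical energy fluxes below is invented.

def predicted_gev_flux(nu_o_F_o_source, nu_o_F_o_mrk421, f_mrk421=1.43e-7):
    """Integral flux above 0.1 GeV (cm^-2 s^-1) for an XBL, scaled from the
    observed Mrk 421 flux via nu_GeV F_GeV ~ nu_o F_o (Eq. 2)."""
    return f_mrk421 * nu_o_F_o_source / nu_o_F_o_mrk421

def integral_flux_above(E, E_ref, f_ref, photon_index):
    """Rescale an integral flux F(>E_ref) along an E^-gamma photon spectrum:
    F(>E) = F(>E_ref) * (E/E_ref)**(1 - gamma)."""
    return f_ref * (E / E_ref) ** (1.0 - photon_index)

# A hypothetical source whose optical energy flux is 30% of Mrk 421's:
f01 = predicted_gev_flux(0.3, 1.0)
print(f01)                                        # ~4.3e-8 above 0.1 GeV
print(integral_flux_above(10.0, 0.1, f01, 1.8))   # extrapolated above 10 GeV
```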
\section{Conclusions}
Within the context of a simple physical model,
we have chosen 23 candidate TeV sources which are all nearby XBLs and have
predicted fluxes for these sources for energies above 0.1 GeV,
0.3 TeV and 1 TeV.
Our calculations give fluxes which agree with all of the existing GeV and
TeV {$\gamma$-ray\ } observations, including {\it EGRET} upper limits, to within
a factor of 2 to 3.
Having normalized the Mrk 421 flux to a value of $1.43 \times 10^{-7}$
cm$^{-2}$s$^{-1}$
for $E_{\gamma} > 0.1$ GeV (Sreekumar, {\it et al.\ } 1996),
we predict a flux of $2.3\times10^{-11}$cm$^{-2}$s$^{-1}$ above 0.3 TeV.
This prediction is within 20\% of the average flux observed by the
Whipple collaboration over a four year time period (Schubnell, {\it et al.\ } 1996).
For Mrk 501, we predict a flux above 0.3 TeV which
should be observable with the Whipple telescope (as is indeed the case),
whereas the corresponding
0.1 GeV flux is predicted to be on the threshold of detection by {\it EGRET}.
(Mrk 501, as of this writing, has not been detected by {\it EGRET}.)
We predict a flux for PKS 2155-304 of $3.9\times10^{-7}$cm$^{-2}$s$^{-1}$
above 0.1 GeV.
For this source, a flux of $(2.7\pm0.7)\times10^{-7}$cm$^{-2}$s$^{-1}$
above 0.1 GeV
was detected during a single {\it EGRET} viewing period (Vestrand, {\it et al.\ } 1995),
close to our predicted value.
The tentative Whipple source 1ES2344+514 is one of our stronger source
predictions.
According to our calculations, PKS 2155-304, a southern hemisphere source
which has not yet been observed at TeV energies, should be relatively
bright above 0.3 TeV, but not above 1 TeV, owing to intergalactic absorption.
Thus, TeV observations of this particular source may provide evidence for
the presence of intergalactic infrared radiation.
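The absorption step behind this prediction is a simple exponential attenuation; the sketch below uses placeholder optical depths, not the Stecker \& de Jager (1997) model values.

```python
# Sketch of the intergalactic-absorption step: the observed integral flux is
# the intrinsic one attenuated by exp(-tau).  The optical depths below are
# placeholders, NOT the Stecker & de Jager (1997) model values.
import math

def observed_flux(intrinsic_flux, tau):
    """Integral flux after attenuation by intergalactic pair production."""
    return intrinsic_flux * math.exp(-tau)

# For a source at z ~ 0.1 the optical depth grows steeply with energy, so a
# source can stay bright above 0.3 TeV while being suppressed above 1 TeV:
print(observed_flux(1.0e-11, tau=0.5))  # above 0.3 TeV: mildly absorbed
print(observed_flux(1.0e-11, tau=3.0))  # above 1 TeV: suppressed by e^-3
```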
As Sambruna, {\it et al.\ } (1996) have pointed out, it is difficult to explain
the large differences in peak synchrotron frequencies between XBLs and
RBLs on the basis of jet orientation alone. The recent {$\gamma$-ray\ } evidence
discussed here suggests that similar large differences in peak Compton
energies
carry over into the {$\gamma$-ray\ } region of the spectrum via the SSC mechanism,
supporting the hypothesis that real physical differences exist between
XBLs (HBLs) and RBLs (LBLs).
\acknowledgments
We wish to acknowledge very helpful discussions with Carl Fichtel and Rita
Sambruna.
\section{Introduction}
The present article is strongly related to the papers \cite{gihemorpar} and
\cite{gihemopar}. Whereas the algorithms developed in these references are
related to the algebraically closed case, here we are concerned with the real
case. Finding a real solution of a polynomial equation $f(x)=0$ where $f$ is a
polynomial of degree $d\ge 2$ with rational coefficients in $n$ variables is,
for practical applications, more important than the algebraically closed case. Best
known complexity bounds for the problem we deal with are of the form $d^{O(n)}$
due to \cite{hroy}, \cite{rene}, \cite{basu}, \cite{sole}. Related complexity
results can be found in \cite{canny}, \cite{grigo1}. \par Solution methods for
the algebraically closed case are normally not applicable to real equation
solving. The aim of this paper is to show that certain {\em polar varieties\/}
associated to an affine hypersurface possess a geometric invariant, {\em the
real degree\/}, which permits an adaptation of the algorithms designed in the
papers mentioned at the beginning. The algorithms there are of "intrinsic
type", which means that they are able to distinguish between the semantical and
the syntactical character of the input system and to profit from both in the
complexity estimates. Both papers \cite{gihemorpar} and
\cite{gihemopar} show that the {\em affine degree\/} of an input system is
associated with the complexity when measured in terms of the number of
arithmetic operations. Whereas the algorithms in \cite{gihemorpar} still need
algebraic parameters, those proposed in \cite{gihemopar} are completely
rational. \par We will show that, under smoothness assumptions for the case of
finding a real zero of a polynomial equation of degree $d$ with rational
coefficients and $n$ variables, it is possible to design an algorithm of
intrinsic type using the same data structure, namely straight-line programs
without essential divisions and rational parameters for codifying the input
system, intermediate results and the output, and replacing the affine degree by
the real degree of the associated polar varieties to the input equation. \par
The computation model we use will be an arithmetical network (compare to
\cite{gihemorpar}). Our main result then consists in the following. {\em There
is an arithmetical network of size $(nd\delta^*L)^{O(1)}$ with parameters in the
field of rational numbers which finds a representative real point in every
connected component of an affine variety given by a non-constant square-free
$n$-variate polynomial $f$ with rational coefficients and degree $d\ge 2$
$($supposing that the affine variety is smooth in all real points that are
contained in it$)$. $L$
denotes the size of the straight-line program codifying the input and $\delta^*$
is the real degree associated to $f$.} \\
Close complexity results are the ones
following the approach initiated in \cite{ShSm93a}, and further developed in
\cite{ShSm93b}, \cite{ShSm93c}, \cite{ShSm93d}, \cite{ShSm1}, see also
\cite{Dedieu1}, \cite{Dedieu2}. \par For more details we refer the reader to
\cite{gihemorpar} and \cite{gihemopar} and the references cited there.
\newpage
\section{Polar Varieties and Algorithms }
As usual, let $\mbox{}\; l\!\!\!Q, \; I\!\!R$ and $l\!\!\!C$ denote the fields of rational, real and
complex numbers, respectively. The affine n--spaces over these fields are denoted by
$\mbox{}\; l\!\!\!Q^n, \; I\!\!R^n$ and $l\!\!\!C^n$, respectively. Further, let $l\!\!\!C^n$ be endowed
with the Zariski--topology, where a closed set consists of all common zeros of
a finite number of polynomials with coefficients in $\mbox{}\; l\!\!\!Q$.
Let $W \subset {l\!\!\!C}^n$ be a closed subset
with respect to this topology and let $W= C_1\cup\cdots \cup C_s$ be its
decomposition into irreducible components with respect to the same topology.
Thus $W, \; C_1,\ldots,C_s$ are algebraic subsets of ${l\!\!\!C}^n$.
Let $1\le j \le s$, be arbitrarily fixed and consider the irreducible component
$C_j$ of $W$.
In the following we need the notion of degree of an affine algebraic variety.
Let $W \subset l\!\!\!C^n$ be an algebraic subset given by a regular sequence
$f_1, \cdots, f_i \in \mbox{}\; l\!\!\!Q[x_1, \cdots, x_n]$ of degree at most $d$. If
$W \subset l\!\!\!C^n$ is zero--dimensional, the {\it degree} of $W, \; deg W$, is
defined to be the number of points in $W$ (neither multiplicities nor points at
infinity are counted). If $W \subset l\!\!\!C^n$ is of dimension greater than zero
(i.e.
$dim W = n-i \ge 1$), then we consider the collection ${\cal M}$ of all affine varieties
of dimension $i$ given as the solution set in $l\!\!\!C^n$ of a linear equation
system $L_1 = 0, \; \cdots, L_{n-i} = 0$ with $L_{k} = \sum_{j=1}^{n} a_{kj} x_j
+ a_{k0}, \; a_{kj} \in \mbox{}\; l\!\!\!Q, \; 0 \le j \le n, \; 1 \le k \le n-i$. Let ${\cal M}_{W}$ be the subcollection of
${\cal M}$ formed by all varieties $H \in {\cal M}$ such that the affine variety $H \cap W$
satisfies $H \cap W \not= \emptyset$ and $dim(H \cap W) = 0$. Then the affine
degree of $W$ is defined as $max\{ \delta | \delta = deg(H \cap W), \;
H \in {\cal M}_{W} \}$.
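A small example, added here for illustration, may help to fix this notion of degree:

```latex
% Example (ours): W is the complex unit circle, an irreducible curve of
% dimension 1.  A generic affine line H meets W in exactly two points, and no
% generic line meets it in more, so the affine degree of W equals deg f.
\[
  W=\{(X_1,X_2)\in l\!\!\!C^2 \mid X_1^2+X_2^2-1=0\}, \qquad
  deg\, W=\max\{deg(H\cap W)\mid H\in {\cal M}_W\}=2 .
\]
```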
\begin{definition}\label{def1}
The component $C_j$ is called a {\rm real component} of $W$ if the real variety
$C_j\cap I\!\!R^n$ contains a smooth point of $C_j$. \par
\noindent If we denote
\[
I = \{ j \in I\!\!N | 1 \le j \le s, \hskip 3pt C_j \hskip
3pt{\hbox {\rm is a real component of $W$}} \},
\]
then the affine variety $W^\ast := \bigcup \limits_{j \in I} C_j \; \subset
l\!\!\!C^n$
{\it is called the real part} of $W$.
By $deg^{\ast} W := degW^{\ast} = \sum\limits_{j \in I}deg C_j$ we define
the {\it real degree} of the set $W$.
\end{definition}
\begin{remark}
{\rm Observe that $deg^{\ast} W= 0$ holds if and only if the real part $W^{\ast}$ of $W$
is empty.}
\end{remark}
\begin{proposition}\label{prop3}
Let $f \in \mbox{}\; l\!\!\!Q[X_1,\cdots,X_n]$ be a non-constant
and square-free polynomial and let $\widetilde{V}(f)$
be the set of real zeros of the equation $f(x) =0$.
Assume $\widetilde{V}(f)$ to be bounded.
Furthermore,
let for every fixed $i, \; 0 \le i <n$, the real variety
$$\widetilde{V}_i := \{ x \in I\!\!R^n | \; f(x) = {{\partial f(x)} \over {\partial X_1}} =
\ldots = {{\partial f(x)} \over {\partial X_i}} = 0 \}$$
be non-empty (and $\widetilde{V}_0$ is understood to be $\widetilde{V}(f)$).
Suppose the variables to be in generic position. Then any point of $\widetilde{V}_i$
that is a smooth point of $\widetilde{V}(f)$ is also a smooth point of
$\widetilde{V}_i$. Moreover, for every such point the Jacobian of the equation
system $f=\frac{\partial f}{\partial X_1} = \cdots = \frac{\partial f}{\partial
X_i} =0$ has maximal rank.
\end{proposition}
\bigskip
{\bf Proof}
Consider the linear transformation $x \longleftarrow A^{(i)} y$, where
the new variables are $y = (Y_1, \cdots, Y_n)$. Suppose that
$A^{(i)}$ is given in the form
\[
\left( \begin{array}{ll}
I_{i,i} & 0_{i,n-i} \\
(a_{kl})_{n-i,i} & I_{n-i,n-i} \end{array} \right) ,
\]
where $I$ and 0 define a unit and a zero matrix, respectively, and\\
$a_{kl} \in I\!\! R $ arbitrary if $k,l$ satisfy $i+1 \le k \le n, \;\;
1 \le l\le i$.\\
The transformation $x \longleftarrow A^{(i)} y$ defines a linear change of coordinates, since the
square matrix $A^{(i)}$ has full rank.
In the new coordinates, the variety $\widetilde{V}_i$ takes the form
$$\widetilde{V}_i := \{ y \in I\!\!R^n | \; f(y) = {{\partial f(y)} \over {\partial Y_1}} +
\sum_{j = i+1}^n a_{j1} {{\partial f(y)} \over {\partial Y_j}} = \ldots =
{{\partial f(y)} \over {\partial Y_i}}
+\sum_{j = i+1}^n a_{ji} {{\partial f(y)} \over {\partial Y_j}}= 0 \}$$
This transformation defines a map
$\Phi_i \; : I\!\!R^n \times I\!\!R^{(n-i) i} \longrightarrow I\!\!R^{i+1}$ given by
$$\Phi_i \left ( Y_1, \cdots, Y_i, \cdots, Y_n, a_{i+1, 1}, \cdots
a_{n 1}, \cdots a_{i+1,i}, \cdots, a_{n, i} \right ) = $$
$$\left ( f,\; {{\partial f} \over {\partial Y_1}} +
\sum_{j = i+1}^n a_{j 1} {{\partial f} \over {\partial Y_j}}, \ldots,\;
{{\partial f} \over {\partial Y_i}}
+\sum_{j = i+1}^n a_{j i} {{\partial f} \over {\partial Y_j}} \right )$$
For the moment let
$$\alpha := (\alpha_1, \cdots, \alpha_{(n-i) i} ) :=
(Y_1, \cdots, Y_n, a_{i+1, \; 1}, \cdots a_{n,i}) \in I\!\!R^n \times I\!\!R^{(n-i) i}$$
Then the Jacobian matrix of $\Phi_i ( \alpha )$ is given by\newline
$J \left (\Phi_i (\alpha ) \right ) = \left (
{{\partial \Phi_i ( \alpha )} \over {\partial \alpha_j}}
\right )_{(i+1)\times (n + (n-i) i) } = $
$$\left ( \begin{array}{ccllclccl}
{{\partial f}\over{\partial Y_1}}& \cdots &
{{\partial f} \over {\partial Y_n}} & 0 & \cdots & 0 & \cdots & \cdots & 0 \\
\ast & \cdots& \ast & {{\partial f} \over {\partial Y_{i+1}}} & \cdots & {{\partial f} \over {\partial Y_n}}
& 0 \cdots & \vdots & 0\\
\vdots & & \vdots & \ddots & \ddots & 0 & \cdots & \ddots & 0 \\
\ast & \cdots& \ast & 0 \cdots & 0 \cdots & \cdots &{{\partial f} \over {\partial Y_{i+1}}}
& \cdots & {{\partial f} \over {\partial Y_n}}
\end{array} \right ) $$
If $\alpha^0 = (Y_1^0, \cdots, Y_n^0, a_{i+1, \; 1}^0, \cdots a_{n, \; i}^0)$
belongs to the fibre $\Phi_i^{-1} (0)$, where $(Y_1^0, \cdots, Y_n^0)$ is a
point of the hypersurface $\widetilde{V} (f)$ and if there is an index
$j \in \{ i+1, \cdots, n \}$ such that
${{\partial f} \over {\partial Y_j}} \not= 0$ at this point, then the Jacobian matrix
$J \left (\Phi_i (\alpha^0) \right )$ has the maximal rank $i + 1$.\\
Suppose now that for all points of $\widetilde{V} (f)$
$${{\partial f(y)} \over {\partial Y_{i+1}}} = \cdots =
{{\partial f(y)} \over {\partial Y_n}} = 0$$
and let $C := I\!\!R^n \setminus \{ {{\partial f(y)} \over {\partial Y_1}}
= \cdots =
{{\partial f(y)} \over {\partial Y_n}} = 0 \}$, which is an open set.
Then the restricted map
\[
\Phi_i : C \times I\!\! R^{(n-i)i} \longrightarrow I\!\! R^{i+1}
\]
is transversal to the subvariety $\{ 0\}$ in $I\!\! R^{i+1}$.\\
By weak transversality due to {\sl Thom/Sard} (see e.g. \cite{golub})
applied to
the diagram
$$ \begin{array}{lcc}
\Phi_i^{-1} (0) & \hookrightarrow & I\!\!R^n \times I\!\!R^{(n-i) i} \\
& \searrow & \downarrow \\
& & I\!\!R^{(n-i) i}
\end{array} $$
\noindent one concludes that the set of all $A \in I\!\!R^{(n-i) i}$ for which transversality
holds is dense in $I\!\!R^{(n-i) i}$.
Since the hypersurface $\widetilde{V}(f)$ is bounded by assumption, there is an open and
dense set of matrices $A$ such that the corresponding coordinate transformation
leads to the desired smoothness. \hfill $\Box$\\
Let $f \in \mbox{}\; l\!\!\!Q[X_1,\cdots,X_n]$ be a non--constant squarefree polynomial and let
$W := \{ x \in l\!\!\!C^n | \; f(x) = 0 \}$ be the hypersurface defined by $f$.
Consider the real variety $V := W \cap I\!\!R^n$ and suppose:
\begin{itemize}
\item $V$ is non-empty and bounded,
\item the gradient of $f$ is different from zero in all points of $V$\\
(i.e. $V$ is a compact smooth hypersurface in $I\!\!R^n$ and $f = 0$ is its
regular equation)
\item the variables are in generic position.
\end{itemize}
\begin{definition}[Polar variety corresponding to a linear space]
Let $i, \; 0\leq i<n$, be arbitrarily fixed. Further, let
$X^i := \{ x \in l\!\!\!C^n | \; X_{i+1} = \cdots = X_n = 0 \}$ be the corresponding
linear subspace of $l\!\!\!C^n$. Then, $W_i$ defined to be the Zariski closure of
$$ \{ x \in l\!\!\!C^n | \; f(x) = {{\partial f(x)} \over{\partial X_1}} =
\cdots = {{\partial f(x)} \over {\partial X_i}} = 0, \; \Delta (x) :=
\sum_{j = 1}^{n} \left ( {{\partial f(x)} \over {\partial X_j}} \right )^2
\not= 0 \} $$
is called the {\it polar variety} of $W$ associated to the linear
subspace $X^i$.
The corresponding real variety of $W_i$ is denoted by $V_i
:= W_i \cap I\!\!R^n$.
\end{definition}
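A concrete instance of this definition, added here for illustration:

```latex
% Illustration (ours): f = X_1^2+X_2^2+X_3^2-1 in n=3 variables, i=1.
% Here \partial f/\partial X_1 = 2X_1 and \Delta = 4(X_1^2+X_2^2+X_3^2),
% which equals 4 on W, so the non-degeneracy condition is automatic and
\[
  W_1=\{x\in l\!\!\!C^3 \mid X_1^2+X_2^2+X_3^2-1=0,\; X_1=0\},
\]
% the great circle cut out by X_1=0: an equidimensional variety of dimension
% n-(i+1)=1 and degree 2.
```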
\bigskip
\begin{remark}
{\rm
Because of the hypotheses that $V \not= \emptyset$ is a smooth hypersurface and
that $W_i \not= \emptyset$ by the assumptions above, the real variety $V_i
:= W_i \cap I\!\!R^n, \;
0 \le i <n$, is not empty and by smoothness of $V$, it has the description
$$V_i = \{ x \in I\!\!R^n | f(x) = {{\partial f(x)} \over {\partial X_1}} =
\ldots = {{\partial f(x)} \over {\partial X_i}} = 0 \} .$$
($V_0$ is understood to be $V$.)\\
According to Proposition 3, $V_i$ is smooth if the coordinates are chosen to
be in generic position. Definition 4 of a polar variety is slightly different from the
one introduced by L\^{e}/Teissier \cite{le}. }
\end{remark}
\bigskip
\begin{theorem}
Let $f \in \mbox{}\; l\!\!\!Q[X_1,\cdots,X_n]$ be a non--constant squarefree polynomial and let
$W := \{ x \in l\!\!\!C^n | \; f(x) = 0 \}$ be the corresponding hypersurface.
Further, let $V := W \cap I\!\!R^n$ be a non--empty, smooth, and bounded
hypersurface in $I\!\!R^n$ whose regular equation is given by $f = 0$. Assume the
variables $X_1, \cdots, X_n$ to be generic. Finally, for
every $i, \; 0 \le i < n$, let the polar varieties $W_i$ of $W$ corresponding
to the subspace $X^i$ be defined as above. Then it holds~:
\begin{itemize}
\item $V \subset W_0$, with $W_0 = W$ if and only if $f$ and $\Delta :=
\sum_{j=1}^n \left( \frac{\partial f}{\partial X_j} \right)^2$ are coprime,
\item $W_i$ is a non--empty equidimensional affine variety of dimension
$n-(i+1)$ that is smooth in all its points that are smooth points of $W$,
\item the real part $W_i^\ast $ of the polar variety $W_i$ coincides with
the Zariski closure in $l\!\!\!C^n$ of
$$V_i = \left\{ x \in I\!\!R^n | f(x) =
{{\partial f(x)} \over {\partial X_1}} = \ldots =
{{\partial f(x)} \over {\partial X_i}} = 0 \right\} ,$$
\item for any $j$, $i<j \le n$ the ideal
$$\left(f, {{\partial f} \over {\partial X_1}}, \ldots,{{\partial f}
\over {\partial X_i}}\right)_{{{\partial f} \over {\partial X_j}}}$$ is
radical.
\end{itemize}
\end{theorem}
{\bf Proof:}
Let $i, 0\le i < n$, be arbitrarily fixed. The first item is obvious
since $W_0$ is the union of all irreducible components of $W$ on which
$\Delta$ does not vanish identically. \\
Then
$W_i$ is non-empty by the assumptions. The sequence $f,
\frac{\partial f}{\partial X_1} , \ldots , \frac{\partial f}{\partial X_i}$ of
polynomials of $\mbox{}\; l\!\!\!Q[X_1,\ldots , X_n]$ forms a local regular sequence
with respect to the smooth points of $W_i$ since
the affine varieties $\big\{ x \in l\!\!\!C^n | f(x) = \frac{\partial f(x)}{\partial X_1}
= \cdots = \frac{\partial f(x)}{\partial X_k} = 0 \big\}$ and
$\big\{ x \in l\!\!\!C^n | \frac{\partial f(x)}{\partial X_{k+1}} = 0\big\} $ are transversal
for any $k, 0\le k\le i-1$, by the generic choice of the coordinates, and hence
the sequence $f, \frac{\partial f}{\partial X_1} ,\cdots , \frac{\partial f}{\partial X_i}$
yields a local complete intersection with respect to the same points.
This implies that $W_i$ is equidimensional
and $dim_{l\!\!\!C} W_i = n-(i+1)$ holds. We observe that every smooth point of $W_i$
is a smooth point of $W$, which completes the proof of the second item.\\
The Zariski closure of $V_i$ is contained in $W_i^\ast$, which is a simple
consequence of the smoothness of $V_i$. One obtains the reverse inclusion
as follows. Let $x^\ast \in W_i^\ast$ be an arbitrary point and let $C_{j\ast}$
be an irreducible component of $W_i^\ast$ containing this point; by the
definition of the real part, $C_{j\ast} \cap V_i \not= \emptyset$. Then
\[
\begin{array}{rl}
n-i-1 &= dim_{I\!\! R}(C_{j\ast}\cap V_i)= dim_{I\!\! R}
R(C_{j\ast}\cap V_i)=\\
&=dim_{l\!\!\!C} R((C_{j\ast}\cap V_i)')\le dim_{l\!\!\!C} C_{j\ast} = n-i-1,
\end{array}
\]
where $R(\cdot)$ and $(\,\, )'$ denote the corresponding sets of smooth points
contained in $( \cdot )$ and the associated complexification, respectively. Therefore,
$dim_{l\!\!\!C} (C_{j\ast} \cap V_i)' = dim_{l\!\!\!C} C_{j\ast} = n-i-1$ and, hence,
$C_{j\ast} =
(C_{j\ast}\cap V_i)'$, and the latter set is contained in the Zariski closure of $V_i$.\\
We define the non-empty affine algebraic set
\[
\widetilde{W}_i := \left\{ x\in l\!\!\!C^n | f(x) =
\frac{\partial f(x)}{\partial X_1} = \cdots = \frac{\partial f(x)}{\partial
X_i}
= 0 \right\} .
\]
Let $j, i<j\le n,$ be arbitrarily fixed. Then one finds a smooth
point $x^\ast$ in $\widetilde{W}_i$ such that $ \frac{\partial f(x^\ast)}{\partial X_j}
\not= 0$; let $x^\ast$ be fixed in that way. The hypersurface $W = \{ x \in l\!\!\!C^n |
f(x) =0 \}$ contains $x^\ast$ as a smooth point, too. Consider the local ring
${\cal O}_{W,x^\ast} $ of $x^\ast$ on the hypersurface $W$. (This is the ring
of germs of functions on $W$ that are regular at $x^\ast$. The local ring
${\cal O}_{W,x^\ast}$ is obtained by dividing the ring $\mbox{} \; l\!\!\!C [X_1, \ldots , X_n ]$ of
polynomials by the principal ideal $(f)$, which defines $W$ as an affine variety,
and then by localizing at the maximal ideal $(X_1 - X_1^\ast,\ldots , X_n-X_n^\ast)$,
of the point $x^\ast = (X_1^\ast, \ldots , X_n^\ast)$ considered as a single
point affine variety.) Using now arguments from Commutative Algebra and Algebraic Geometry,
see e.g. Brodmann \cite{brod}, one arrives at the fact that ${\cal O}_{W,x^\ast}$ is an integral
regular local ring.\\
The integrality of ${\cal O}_{W,x^\ast}$ implies that there is a uniquely determined irreducible
component $Y$ of $W$ containing the smooth point $x^\ast$ and locally this component
corresponds to the zero ideal of ${\cal O}_{W,x^\ast}$, which is radical. Since
the two varieties $\widetilde{W}_i \cap Y$ and $W\cap Y$ coincide locally, the
variety $\widetilde{W}_i \cap Y$ corresponds locally to the same ideal.
Thus, the desired radicality
is shown. This completes the proof. \linebreak
$\mbox{} \hfill \Box$\\
\begin{remark}
{\rm If one localizes with respect to the function $ \Delta(x) = \sum\limits^n_{j=1}
\big( \frac{\partial f(x)}{\partial X_j}\big)^2$, then one obtains, in the same way
as shown in the proof above, that the ideal
\[
\big( f, \frac{\partial f}{\partial X_1} , \ldots , \frac{\partial f}{\partial X_i}
\big)_\Delta
\]
is also radical.}
\end{remark}
\begin{remark}\label{rem8}
{\rm Under the assumptions of Theorem 6, for any $i,\;\; 0\le i<n$, we observe the
following relations between the different non-empty varieties introduced up to now.
\[
V_i \subset V,\quad V_i \subset W^\ast_i \subset W_i \subset \widetilde{W}_i ,
\]
where $V$ is the considered real hypersurface, $V_i$ defined as in Remark 5,
$W_i$ the polar variety due to Definition 4, $W^\ast_i$ its real part according
to Definition 1, and $\widetilde{W}_i$ the affine variety introduced in the proof of
Theorem 6. With respect to Theorem 6 our settings and assumptions imply that
$n-i-1 = dim_{l\!\!\!C} \widetilde{W}_i = dim_{l\!\!\!C} W_i = dim_{l\!\!\!C} W^\ast_i =
dim_{I\!\! R} V_i $ holds. By our smoothness assumption and the generic choice of the
variables we have for the respective sets of smooth points (denoted as before
by $R(\cdot ))$
\[
V_i = R(V_i)\subset R(W_i) \subset R(\widetilde{W}_i) \subset R(W) ,
\]
where $W$ is the affine hypersurface.\\
For the following we use the notations as before, fix an $i$ arbitrarily,$ \;\; 0\le i<n$,
denote by $\delta^\ast_i$ the real degree of the polar variety $W_i$
(compare with Definition 1, by smoothness one has that the real degree of the
polar variety $W_i$ is equal to the real degree of the affine variety $\widetilde{W}_i$),
put $\delta^\ast := \max \{ \delta^\ast_k | 0 \le k \le i \}$ and let $ d :=
\deg f$. Finally, we write for shortness $r := n-i-1$.\\
We say that the variables $X_1,\ldots , X_n$ are in Noether position with respect
to a variety $\{ f_1= \cdots = f_s =0\}$ in $l\!\!\!C^n, \; f_1, \ldots , f_s \in
\mbox{}\; l\!\!\!Q [ X_1, \ldots , X_n],$ if, for each $r<k \le n$, there exists a polynomial of
$\mbox{}\; l\!\!\!Q [X_1, \ldots , X_r, X_k]$ that is monic in $X_k$ and vanishes on
$\{ f_1=\cdots =f_s=0\}.$ \\
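For illustration (our example, not from the cited references):

```latex
% For f = X_1^2+X_2^2-1 and r = 1, the variables X_1, X_2 are already in
% Noether position with respect to {f=0}: the polynomial
\[
  p(X_1,X_2)\;=\;X_2^2+(X_1^2-1)\;\in\;\mbox{}\; l\!\!\!Q[X_1,X_2]
\]
% is monic in X_2 and vanishes on {f=0}, so X_1 is the free variable and X_2
% is integral over Q[X_1].
```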
Then one can state the next, technical lemma according to \cite{gihemorpar},
\cite{gihemopar},
where the second reference is important in order to ensure that the occurring
straight-line programs use parameters in $\mbox{}\; l\!\!\!Q$ only.}
\end{remark}
\begin{lemma}\label{lem9}
Let the assumptions of Theorem 6 be satisfied. Further, suppose that the polynomials
$f, \frac{\partial f}{\partial X_1}, \ldots , \frac{\partial f}{\partial X_i}
\in \mbox{}\; l\!\!\!Q [ X_1,\ldots , X_n] $ are given by a straight-line program $\beta$ in
$\mbox{}\; l\!\!\!Q [X_1, \ldots , X_n]$ without essential divisions, and let $ L$ be the
size of $\beta$. Then there is an arithmetical network with parameters in $\mbox{}\; l\!\!\!Q$
that constructs the following items from the input $\beta$
\begin{itemize}
\item a regular matrix of $\mbox{} \mbox{}\; l\!\!\!Q^{n\times n}$ given by its elements that transforms the
variables $X_1, \ldots , X_n$ into new ones $Y_1, \ldots , Y_n$
\item a non-zero linear form $U \in \mbox{}\; l\!\!\!Q [Y_{r+1}, \ldots , Y_n]$
\item a division-free straight-line program $\gamma$ in $\mbox{}\; l\!\!\!Q[Y_1, \ldots ,Y_r, U]$
that represents non-zero polynomials $\varrho \in \mbox{}\; l\!\!\!Q[Y_1, \ldots , Y_r] $ and
$q,p_1,\ldots , p_n \in \mbox{}\; l\!\!\!Q [Y_1,\ldots , Y_r, U]$.
\end{itemize}
These items have the following properties:
\begin{itemize}
\item[(i)] The variables $Y_1,\ldots ,Y_n$ are in Noether position with respect to
the variety $W^\ast_{n-r}$, the variables $Y_1, \ldots , Y_r$ being free
\item[(ii)] The non-empty open part $(W^\ast_{n-r})_\varrho$ is defined by the
ideal $(q, \varrho X_1-p_1, \ldots ,$
$\varrho X_n-p_n)_\varrho$ in the localization
$\mbox{}\; l\!\!\!Q[X_1,\ldots ,X_n]_\varrho$ .
\item[(iii)] The polynomial $q$ is monic in $U$ and its degree is equal to\linebreak
$\delta^\ast_{n-r}= \deg^\ast W_{n-r}= \deg W^\ast_{n-r} \le \delta^\ast$.
\item[(iv)] $\max \{ \deg_Up_k | 1\le k \le n \} < \delta^\ast_{n-r},\quad
\max \{ \deg p_k | 1\le k \le n \} = (d \delta^\ast)^{O(1)},$\linebreak
$\deg \varrho = (d \delta^\ast)^{O(1)}$.
\item[(v)] The nonscalar size of the straight-line program $\gamma$ is given
by $(nd\delta^\ast L)^{O(1)}$.
\end{itemize}
\end{lemma}
The proof of Lemma 9 can be performed in a similar way as in
\cite{gihemorpar}, \cite{gihemopar} for establishing
the algorithm. For the case handled here, in the $i$-th
step one has to apply the algorithm to the localized sequence
$\left( f, \frac{\partial f}{\partial X_1}, \ldots , \frac{\partial f}{\partial X_i}
\right)_{\Delta}$ as input.
The only point we have to take care of is the process of cleaning out
extraneous $\mbox{}\; l\!\!\!Q$-irreducible components. Whereas in the proofs of the
algorithms we refer to it suffices to clean out components lying in a
prefixed hypersurface (e.g. components at infinity), the cleaning process we
need here is more subtle.
We have to clean out all non-real $\mbox{}\; l\!\!\!Q$-irreducible components that appear
during our algorithmic process. The idea of doing this is roughly as follows.
Due to the generic position of the variables $X_1,\ldots,X_n$ all
$\mbox{}\; l\!\!\!Q$-irreducible components of the variety $\widetilde{W}_{n-r}$ can be
visualized as $\mbox{}\; l\!\!\!Q$-irreducible factors of the polynomial $q(X_1,
\ldots,X_r,U)$. If we specialize {\em generically} the variables
$X_1,\ldots,X_r$ to {\em rational} values $\eta_1,\ldots,\eta_r$, then
by Hilbert's Irreducibility Theorem (in the version of \cite{lang})
the $\mbox{}\; l\!\!\!Q$-irreducible factors of the {\em multivariate} polynomial
$q(X_1,\ldots,X_r,U)$ correspond to the $\mbox{}\; l\!\!\!Q$-irreducible factors
of the {\em one--variate} polynomial $q(\eta_1,\ldots,\eta_r,U) \in \mbox{}\; l\!\!\!Q[U]$.
To simplify the explanation of our idea, we assume that we are able to
choose our specialization of $X_1,\ldots,X_r$ into $\eta_1,\ldots,\eta_r$
in such a way that the hyperplanes $X_1 - \eta_1 = 0,\ldots,X_r - \eta_r = 0$
cut every {\em real} component of $\widetilde{W}_{n-r}$ (this condition is
open in the strong topology and does not represent a fundamental restriction
on the correctness of our algorithm; moreover, it does not
affect the complexity). Under these assumptions the $\mbox{}\; l\!\!\!Q$-irreducible factors
of $q(X_1,\ldots,X_r,U)$, which correspond to the real components
of $\widetilde{W}_{n-r}$, reappear as $\mbox{}\; l\!\!\!Q$-irreducible factors of
$q(\eta_1,\ldots,\eta_r,U)$ which contain a real zero. These
$\mbox{}\; l\!\!\!Q$-irreducible factors of $q(\eta_1,\ldots,\eta_r,U)$ can be found by a
factorization procedure and by a real zero test of standard features of
polynomial complexity character. Multiplying these factors
and applying to the result the lifting-fibre process of \cite{gihemorpar},
\cite{gihemopar} we find the product $q^*$ of the $\mbox{}\; l\!\!\!Q$-irreducible factors of $q(
X_1,\ldots,X_r,U)$, which correspond to the union of the real
components of the variety $\widetilde{W}_{n-r}$, i.e. to the real part of
$\widetilde{W}_{n-r}$. The ideal $(q^*,\varrho X_1-p_1,\ldots,\varrho
X_n-p_n)_\varrho$ describes the localization of the real part of
$\widetilde{W}_{n-r}$ at $\varrho$. All we have pointed out is executable in
polynomial time, provided that a factorization of univariate polynomials over
$\mbox{}\; l\!\!\!Q$ in polynomial time is available and that our geometric assumptions on
the choice of the specialization are satisfied.
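The real-zero test invoked here can be realized, for squarefree $q$, by classical Sturm sequences. The following sketch (ours, not the authors' implementation) counts the distinct real roots of a univariate polynomial with rational coefficients:

```python
# Sketch (ours): a "real zero test" for a univariate polynomial q in Q[U],
# as needed to decide which Q-irreducible factors of q(eta_1,...,eta_r,U)
# contain a real zero.  Classical Sturm sequences with exact rational
# arithmetic; polynomials are coefficient lists, highest degree first.
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of the polynomial division a / b."""
    a = list(a)
    while len(a) >= len(b) and any(a):
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)                      # the leading term cancels exactly
    while a and a[0] == 0:
        a.pop(0)
    return a

def sturm_chain(p):
    """p, p', and then negated remainders until the chain terminates."""
    dp = [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]
    chain = [p, dp]
    while chain[-1]:
        r = poly_rem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])
    return chain

def sign_changes(chain, x):
    signs = []
    for p in chain:
        v = sum(c * x ** (len(p) - 1 - i) for i, c in enumerate(p))
        if v != 0:
            signs.append(1 if v > 0 else -1)
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def count_real_roots(p, lo=Fraction(-10**6), hi=Fraction(10**6)):
    """Distinct real roots of p in (lo, hi); the default bounds assume all
    real roots lie inside them."""
    chain = sturm_chain([Fraction(c) for c in p])
    return sign_changes(chain, lo) - sign_changes(chain, hi)

# U^2 - 2 has two real zeros, U^2 + 1 none: only the first kind of factor
# would be kept when assembling the real product q*.
print(count_real_roots([1, 0, -2]))   # -> 2
print(count_real_roots([1, 0, 1]))    # -> 0
```

In practice one would combine such a test with a univariate factorization over the rationals, keeping exactly those irreducible factors on which the count is positive.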
\begin{theorem}
Let the notations and assumptions be as in Theorem 6. Suppose
that the polynomial $f$ is given by a straight-line program
$\beta$ without essential divisions in $\mbox{}\; l\!\!\!Q [X_1,\ldots ,X_n]$,
and let $L$ be the nonscalar size of $\beta$. Further,
let $\delta^\ast_i := \deg^\ast W_i, \;
\delta^\ast := \max \{ \delta^\ast_i | 0
\le i < n \}$ be the corresponding real degrees of the polar
varieties in question, and let $d :=
\deg f$. Then there is an arithmetical network of size $(n d
\delta^\ast L)^{O(1)}$ with parameters in $\mbox{}\; l\!\!\!Q$ which
produces, from the input $\beta$, the coefficients of a non-zero
linear form $u \in \mbox{}\; l\!\!\!Q [X_1, \ldots ,X_n]$ and non-zero
polynomials $q,p_1, \ldots , p_n \in \mbox{}\; l\!\!\!Q[U]$ showing the following properties:
\begin{enumerate}
\item For any connected component $C$ of $V$ there is a point $\xi \in C$ and
an element $ \tau \in I\!\! R$ such that $q (\tau)=0$ and $\xi = (p_1(\tau), \ldots, p_n(\tau))$
\item $\deg (q) = \delta^\ast_{n-1} \le \delta^\ast$
\item $\max \{ \deg (p_i) | 1 \le i \le n \} < \delta^\ast_{n-1}$.
\end{enumerate}
\end{theorem}
\newpage
\section{Introduction}
The internal structure of the colour flux tube (CFT) joining a quark pair in
the confining phase of any gauge model provides an
important test of the dual superconductivity (DS) conjecture \cite{tmp},
because it should show, as the dual of an Abrikosov vortex,
a core of normal, hot vacuum as contrasted with the surrounding
medium, which is in the dual superconducting phase. A general way to study
the internal structure of the flux tube is to test it with suitable gauge
invariant probes. More specifically, the vacuum state of a lattice gauge model
is modified by the insertion in the action of a quark source (for instance a
Wilson loop). In this modified vacuum (called W-vacuum) one can evaluate the
expectation value of various probes as a function of their position with respect
the quark sources. Some general results of such an analysis has been already
reported in Ref.2. Here I will describe some new results which are
specific of the $3D$ $\hbox{{\rm Z{\hbox to 3pt{\hss\rm Z}}}}_2$ gauge model.
\section {The Disorder Parameter around Quark Sources}
The location of the core of the CFT is given in DS conjecture by the
vanishing of the disorder parameter
$\langle\Phi_M(x)\rangle$, where $\Phi_M$ is some effective magnetic Higgs field.
In a pure gauge theory, the formulation of this property poses some problems,
because in general no local, gauge invariant, disorder field $\Phi_M(x)$ is
known. In the special case of $3D$ $\hbox{{\rm Z{\hbox to 3pt{\hss\rm Z}}}}_2$ gauge model there is an exact
duality, namely the Kramers-Wannier transformation, which maps the gauge
theory onto the Ising model. The spontaneous magnetization $\mu=\langle\sigma\rangle$ is
precisely the wanted disorder parameter: it vanishes in the deconfined phase,
while it is different from zero in the confining phase.
As an example, in Fig.1 the spontaneous magnetization in a W-vacuum generated
by a pair of parallel Polyakov loops is reported.
One can clearly see the formation of a flux tube with a core where the disorder
parameter vanishes, as required by the DS conjecture.
\vskip -2.3cm
\hskip 1cm\epsfig{file=cfig1.ps,height=10.5cm}
\vskip -1.9cm
{\hskip1.4cm\footnotesize Figure 1. Spontaneous magnetization around a
quark pair.}
\vskip0.2cm
\vskip -2.3 cm
\hskip 1cm\epsfig{file=cfig2.ps,height=10.5cm}
\vskip -1.6cm
\noindent
{\footnotesize Figure 2. Total magnetization as a function of the loop area.
The black dots are square Wilson loops, the open symbols are pairs of Polyakov
loops.}
\vskip0.2cm
The total thickness of the flux tube is the sum of two different contributions:
one is due to quantum fluctuations of string-like modes of the CFT, which
produce an effective squared width growing logarithmically with the interquark
distance \cite{lmw,width}; the other is the intrinsic thickness of the flux
tube, which according to the DS conjecture is non-vanishing.
The total magnetization of the W-vacuum provides us with a method to evaluate
such an intrinsic thickness: describing the CFT approximately as a cylinder of
vanishing magnetization immersed in a medium of
magnetization $\mu\not=0$, we find that the total
magnetization of the W-vacuum in a finite volume decreases linearly with the
volume $V$ spanned by the CFT, as shown in Fig.2, with $V= A L_c$, where
$L_c$ is the intrinsic thickness of the tube and $ A$ is the area
of the minimal surface bounded by the Wilson loop (black dots) or by a
Polyakov pair (open squares).
The slope of such a linear behaviour yields an intrinsic thickness
$L_c\sqrt{\sigma}=0.98(2)$ ($\sigma$ is the string tension) in reasonable
agreement with the theoretical value of $\sqrt{\pi/3}$ suggested by a
conformal field theory argument \cite{cg}.
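The extraction of $L_c$ from Fig.2 amounts to a linear fit. A toy numerical illustration, using synthetic data with assumed values of $\mu$, $L_c$ and the box volume (not the actual measurements):

```python
import numpy as np

# Synthetic illustration of the fit behind Fig.2: the total magnetization
# M_tot = mu * (V_box - A * L_c) decreases linearly with the minimal area A.
mu, L_c, V_box = 0.6, 1.0, 500.0            # assumed (not measured) values
A = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
M_tot = mu * (V_box - A * L_c)

slope, intercept = np.polyfit(A, M_tot, 1)
L_c_fit = -slope / mu                        # intrinsic thickness from the slope
print(L_c_fit)                               # recovers L_c = 1.0
```

In the actual analysis, the slope expressed in units of the string tension yields the quoted value $L_c\sqrt{\sigma}=0.98(2)$.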
\section{The Colour Flux Tube at Criticality}
According to the widely tested Svetitsky-Yaffe conjecture \cite{sy}, any
gauge theory in $d+1$ dimensions with a continuous deconfining transition
belongs to the same universality class as a $d$-dimensional $C(G)$-symmetric
spin model, where $C(G)$ is the center of the gauge group.
It follows that at the critical
point all the critical indices describing the two transitions and all the
dimensionless ratios of correlation functions of corresponding observables
in the two theories should coincide.
In particular, since the order parameter of the gauge theory is
mapped into the corresponding one of the spin model, the correlation functions
among Polyakov loops should be proportional to the corresponding correlators
of spin operators:
\begin{equation}
\langle P_1\dots P_{n}\rangle_{T=T_c}\propto \langle \sigma_1\dots \sigma_{n}\rangle~~.
\end{equation}
The crucial point is that for $d=2$ the form of these universal functions is
exactly known. One can then use these analytic results to obtain useful
information on the internal structure \cite{gv} of the colour flux tube at
$T=T_c$. For instance, the correlator
\begin{equation}
\langle P_1\dots P_{n+2}\rangle=\langle P(x_1,y _1)\dots P(x,y)P(x+\epsilon,y)\rangle~~,
\end{equation}
viewed as a function of the spatial coordinates $x,y$ of the last two
Polyakov loops (used as probes), describes, when $\epsilon$ is chosen small
with respect to all the other distances involved, the distribution
of the flux around $n$ Polyakov loops with spatial coordinates
$x_i,y_i$ $(i=1,\dots,n)$.
Fig.3 shows the contour lines of the flux distribution
$\rho(x,y)=\langle P_1\dots P_6\rangle/\langle P_1\dots P_4\rangle-\langle P_5 P_6\rangle$
in a critical gauge system with $C(G)=\hbox{{\rm Z{\hbox to 3pt{\hss\rm Z}}}}_2$. The Polyakov lines
are located at the corners of a rectangle $d\times r$ with $d>r$. Denoting by
$r_i$ the distance of the probe $(P_5 P_6)$ from the source $P_i$ one has
simply
\begin{equation}
\rho(x,y)\propto\epsilon^{\frac34}\left\{\frac{rd}{\sqrt{r^2+d^2}}
\frac{\sum_i r_i^2}{\prod_ir_i}+O(\epsilon)\right\}~~~.
\end{equation}
One clearly sees the formation of two flux tubes connecting the two pairs of
nearest sources.
Comparison with the distribution obtained by the sum of the fluxes generated by
two non-interacting (i.e.\ $d=\infty$) flux tubes (dotted contours) indicates an
attractive interaction between them, as expected.
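The leading-order expression above is easy to evaluate numerically. The sketch below (geometry chosen for illustration, overall $\epsilon^{3/4}$ normalization dropped) reproduces the qualitative features of Fig.3:

```python
import numpy as np

def flux_density(x, y, r=1.0, d=2.0):
    """Leading-order critical flux distribution around four Polyakov
    loops at the corners of a d x r rectangle (the overall epsilon^(3/4)
    normalization is dropped)."""
    corners = [(0.0, 0.0), (d, 0.0), (0.0, r), (d, r)]
    ri = np.array([np.hypot(x - cx, y - cy) for cx, cy in corners])
    prefac = r * d / np.sqrt(r ** 2 + d ** 2)
    return prefac * np.sum(ri ** 2, axis=0) / np.prod(ri, axis=0)
```

The density peaks near the sources; comparing the finite-$d$ result with the $d\to\infty$ superposition of two isolated tubes exhibits the attraction discussed above.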
\vskip -2.3 cm
\hskip1cm\epsfig{file=cfig3.ps,height=10.5cm}
\vskip -1.6cm
\noindent
{\footnotesize Figure 3. Contours of flux density around two pairs of
parallel Polyakov loops at criticality. The dot\-ted lines correspond to the
contours in the non-interacting case.}
\section*{References}
\section{Introduction}\label{sec:intro}
\indent\indent
The dynamical evolution of globular clusters has been extensively investigated
[see Spitzer (1987) for a review].
In many of those investigations, Cohn's (1980) direct integration scheme for the Fokker-Planck (hereafter FP) equation was used as a main numerical tool.
Recently, many studies have been made particularly to reveal the {\it realistic} evolution of globular clusters and
to compare the theoretical models with observations.
In such studies various effects were incorporated in numerical simulations: the stellar-mass spectrum, binaries, the galactic tidal field, stellar evolution, etc.
(e.g., Chernoff, Weinberg 1990; Drukier 1995).
On the other hand,
the anisotropy of the velocity distribution was almost always neglected,
although it is obvious that the anisotropy develops at least in the halo
(it is expected that the radial velocity dispersion exceeds the tangential one).
This neglect was mainly due to a numerical difficulty involving anisotropic FP models.
The evolution of anisotropic clusters can be described by a two-dimensional (2D) orbit-averaged FP equation in energy--angular momentum space (Cohn 1979),
while the evolution of isotropic ones can be described by a one-dimensional (1D) orbit-averaged FP equation in energy space (Cohn 1980).
The direct integration code for the 2D FP equation had a numerical problem
in that the energy conservation was insufficient to continue the run
beyond a factor of $10^3$ increase in the central density (Cohn 1979; Cohn 1985).
On the other hand, an integration of the 1D FP equation can be performed with much higher numerical accuracy (Cohn 1980),
largely due to the adoption of the Chang-Cooper finite differencing scheme (Chang, Cooper 1970).
Recently, Takahashi (1995, hereafter Paper I) has developed a numerical method for solving the 2D FP equation.
The method is essentially the same as Cohn's (1979) method.
A main difference between the two methods lies in the discretization schemes of the FP equation.
Cohn (1979) used a finite-difference scheme in which
simple centered-differencing was adopted for the spatial discretization.
Cohn (1985) reported that he investigated several heuristic
generalizations of the Chang-Cooper scheme, and that all of these improved
the energy conservation,
though the details of these schemes were not explained.
In Paper I, two different discretization schemes were employed:
one was a finite-difference scheme where the Chang-Cooper scheme is simply applied for only the energy direction;
the other was the finite-element scheme, where the test and weight functions
implied by the generalized variational principle (Inagaki, Lynden-Bell 1990; Takahashi 1993) are used.
Using those schemes, the gravothermal core collapse was followed until the central density increased by a factor of $10^{14}$ with a 1\% numerical accuracy
concerning the total-energy conservation.
This was a big advance compared with previous calculations;
the central density growth factors in the calculations of Cohn (1979) and Cohn (1985) were $10^3$ and $10^6$, respectively.
It should be noted that
a numerical error originates not only in the integration of the FP equation,
but also in other calculation procedures, e.g., the calculation of the diffusion coefficients and the potential-recalculation steps.
It should also be noted that
2D FP calculations require a rather large computational time (see section \ref{sec:calc}).
Thus, ten years ago it was not easy to perform such calculations
as those which we present here.
Besides the FP models,
anisotropic gaseous models of star clusters have recently been successfully
applied (e.g., Giersz, Spurzem 1994; Spurzem 1996).
In Paper I, the pre-collapse evolution of single-mass clusters was studied.
In particular, Paper I revealed the evolution during self-similar phases of core collapse in anisotropic clusters.
The density profile left outside the collapsing core is the same as that
in isotropic clusters; i.e. $\rho \propto r^{-2.23}$.
In the self-similar regions, a slight velocity anisotropy exists:
i.e. $\sigma_{\rm t}^2/\sigma_{\rm r}^2 = 0.92$, where $\sigma_{\rm r}$ and $\sigma_{\rm t}$ are the one-dimensional radial and tangential velocity dispersions, respectively.
The core collapse rate, $\xi \equiv t_{\rm r}(0)d\ln\rho(0)/dt$, where $t_{\rm r}(0)$ and $\rho(0)$ are the central relaxation time and density, is $\xi = 2.9\times10^{-3}$, which is 19\% smaller than the value of $\xi = 3.6\times10^{-3}$ for an isotropic model.
That is, the core collapse proceeds slightly more slowly in the anisotropic model than in the isotropic model.
When the initial model is Plummer's model,
the collapse occurs at time $17.6\,t_{\rm rh,i}$ in the anisotropic model and
$15.6\,t_{\rm rh,i}$ in the isotropic model, where $t_{\rm rh,i}$ is the initial half-mass relaxation time.
The halo soon becomes dominated by radial orbits, even if the velocity distribution is initially isotropic everywhere.
The ratio of the radial velocity dispersion to the tangential one
increases monotonically as the radius increases.
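The collapse rate $\xi = t_{\rm r}(0)\,d\ln\rho(0)/dt$ quoted above is dimensionless and stays constant during self-similar collapse. A synthetic check of the definition (toy density history, not simulation output):

```python
import numpy as np

# Toy check of the definition xi = t_r(0) * dln(rho(0))/dt: for an
# assumed self-similar history rho(0) ~ (1 - t/t0)^-2 with a central
# relaxation time t_rc ~ (1 - t/t0), xi is constant and equals 2*t_r0/t0.
t0, t_r0 = 1.0, 1.0e-3                      # assumed scales
t = np.linspace(0.0, 0.9 * t0, 200)
rho0 = (1.0 - t / t0) ** -2                 # toy central density
t_rc = t_r0 * (1.0 - t / t0)                # toy central relaxation time
xi = t_rc * np.gradient(np.log(rho0), t)    # constant, = 2e-3 here
```

With simulation data one would insert the measured $\rho(0)(t)$ and $t_{\rm r}(0)(t)$ in place of the toy histories.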
Following Paper I, this paper examines the post-collapse evolution of single-mass clusters.
The effect of three-body binaries is incorporated into FP models as a heat source.
We are particularly interested in the development of the anisotropy in the halo after the core collapse.
Does the anisotropy continue to increase even after the collapse,
or does it settle to a constant value?
We are also interested in whether there are any differences concerning the nature of gravothermal oscillations between isotropic and anisotropic models.
In section 2, the models and numerical methods are described.
In section 3, the calculation details are described and the numerical accuracy is discussed.
Section 4 presents the results of the calculations.
The conclusions and discussion are given in section 5.
\section{The Models and Methods}\label{sec:model}
\subsection{\normalsize \it Fokker-Planck Models}
\indent
We consider the collisional evolution of spherical single-mass star clusters in dynamical equilibrium.
In such clusters the distribution function of stars, $f$, is a function of the energy per unit mass $E$, the modulus of the angular momentum per unit mass $J$, and time $t$; i.e. $f=f(E,J,t)$.
The evolution of $f$ due to the two-body relaxation can be described by an orbit-averaged FP equation in $(E,J)$-space (Cohn 1979).
Numerical integration of the FP equation is performed in the same manner as in Paper I,
but a binary heating term is included in the equation.
For problems concerning post-collapse evolution,
we must specify the number of stars in the cluster, $N$, and the numerical constant, $\mu$, in the Coulomb logarithm $\ln (\mu N)$ (e.g. Spitzer 1987, p30).
In all of the calculations we adopted $\mu=0.11$,
which was obtained by Giersz and Heggie (1994a) for the pre-collapse evolution of single-mass clusters.
We note that Giersz and Heggie (1994b) found a smaller value of $\mu$
for the post-collapse evolution (their best value was $\mu=0.035$).
However, we fixed the value of $\mu$ throughout all evolutionary phases.
A small difference in $\mu$ does not seriously affect the nature of cluster evolution.
Although the determination of an appropriate value of $\mu$ is an interesting subject in collisional stellar dynamics,
it is beyond the scope of this study.
A future careful comparison between $N$-body, gaseous, and FP models may give further information concerning the Coulomb logarithm.
\subsection{\normalsize \it Three-Body Binary Heating}
\indent
The three-body binary heating rate per unit mass is given by
\begin{equation}
\dot{E}_{\rm b}=C_{\rm b} G^5 m^3 \rho^2 \sigma^{-7} \label{eq:lhr}
\end{equation}
(Hut 1985), where $m$ is the stellar mass, $\rho$ the mass density, $\sigma$ the one-dimensional velocity dispersion, and $C_{\rm b}$ a numerical coefficient.
In this paper we choose the standard value of $C_{\rm b}=90$.
The local heating rate (\ref{eq:lhr}) is orbit-averaged as
\begin{equation}
\langle \dot{E}_{\rm b} \rangle_{\rm orb}
= \left. \int_{r_{\rm p}}^{r_{\rm a}} \frac{dr}{v_{\rm r}} \dot{E}_{\rm b}
\right/ \int_{r_{\rm p}}^{r_{\rm a}} \frac{dr}{v_{\rm r}} \,, \label{eq:oahr}
\end{equation}
where $v_{\rm r}=\left\{2[\phi(r)-E]-J^2/r^2\right\}^{1/2}$ is the radial velocity of a star of energy $E$ and angular momentum $J$ at radius $r$,
and $r_{\rm p}$ and $r_{\rm a}$ are the pericenter and apocenter radii of the star, respectively.
The orbit-averaged heating rate (\ref{eq:oahr}) is added to the usual
first-order diffusion coefficient
$\langle \Delta E \rangle_{\rm orb}$ (cf. Cohn et al. 1989).
Furthermore, we assume that the scattering by binaries produces no net change in the scaled angular momentum $R$,
i.e. $\langle\dot{R}_{\rm b}\rangle_{\rm orb}=0$.
Here, $R$ is defined as
$R=J^2/J_{\rm c}^2(E)$, where $J_{\rm c}(E)$ is the angular momentum of a circular orbit of energy $E$.
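As a concrete, simplified illustration of the orbit average in equation (\ref{eq:oahr}), the sketch below evaluates $\langle f\rangle_{\rm orb}$ for a star in a Plummer potential. The turning points are located on a crude grid, and the integrable $1/v_{\rm r}$ singularities are absorbed by the substitution $r=\bar r+\Delta r\,\sin\theta$. This is illustrative only, not the production scheme of the code.

```python
import numpy as np

def plummer_psi(r, GM=1.0, a=1.0):
    """Relative potential of a Plummer sphere."""
    return GM / np.sqrt(r * r + a * a)

def orbit_average(fun, E, J, GM=1.0, a=1.0, n=400):
    """<f>_orb = (int_rp^ra f dr/v_r) / (int_rp^ra dr/v_r) for a star
    of binding energy E and angular momentum J in a Plummer potential."""
    vr2 = lambda r: 2.0 * (plummer_psi(r, GM, a) - E) - (J / r) ** 2
    rs = np.logspace(-4, 3, 4000)             # crude turning-point search
    inside = rs[vr2(rs) > 0.0]
    rp, ra = inside[0], inside[-1]
    rbar, dr = 0.5 * (ra + rp), 0.5 * (ra - rp)
    # substitution r = rbar + dr*sin(theta): the cos(theta) Jacobian
    # tames the 1/sqrt singularities of 1/v_r at the turning points
    theta = (np.arange(n) + 0.5) * np.pi / n - 0.5 * np.pi
    r = rbar + dr * np.sin(theta)
    w = dr * np.cos(theta) / np.sqrt(vr2(r))  # midpoint weights for dr/v_r
    return np.sum(w * fun(r)) / np.sum(w)
```

The orbit-averaged heating rate follows by taking ${\rm fun}(r)=C_{\rm b}G^5m^3\rho(r)^2\sigma(r)^{-7}$, with $\rho$ and $\sigma$ supplied by the cluster model.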
\section{Numerical Calculations}\label{sec:calc}
\indent\indent
Plummer's model (e.g. Spitzer 1987, p13) was chosen as the initial cluster model,
where the velocity distribution is isotropic everywhere.
Test calculations were carried out using both the finite-difference and
finite-element codes described in Paper I\@.
In calculations of the pre-collapse evolution,
the two codes achieved similar numerical accuracy concerning the total energy and mass conservation,
and the results obtained using them were generally in good agreement (Paper I).
In the present calculations of the post-collapse evolution,
the results obtained by the two codes were also generally in good agreement.
However, the numerical error in the energy conservation was considerably larger in the case of the finite-element code.
We note that the total energy of the cluster cannot be conserved in these calculations, but should increase,
because the binary heating is taken into account.
The total energy should increase by the amount of energy input.
To check the numerical error, the energy input was recorded during the calculations.
The energy input at each time step may be calculated by integrating
the product of the binary heating rate [equation (\ref{eq:oahr})]
and the distribution function over energy--angular momentum space.
The cumulative energy input is summed up over the time steps of the run.
Some degree of inaccuracy is inherent in this estimate of the energy input.
However, the estimated energy input and the actual energy increase resulting from the FP integration should agree to within some numerical accuracy;
the agreement improves as the mesh sizes and the time-step size decrease.
When we estimated the numerical error in the energy conservation,
we assumed that the energy input estimated as above was exact.
For example,
at the end of the calculation for $N=5000$ with the finite-difference code (see figure 1a),
the relative energy error, which is defined as the ratio of the amount of the energy change due to numerical error to the initial total energy, was about 2\%;
at the end of a corresponding calculation with the finite-element code, the relative energy error was about 12\%.
There was a systematic energy drift during calculations of the post-collapse evolution in both the finite-difference and finite-element codes.
This error arose mainly from the FP-calculation steps.
In fact, in one FP step, the actual energy increase was always slightly smaller than the expected increase due to binary heating.
The degree of this disagreement was much larger in the finite-element code than in the finite-difference code.
An energy error arose also from the Poisson-calculation steps.
However, the sign of the energy change in one Poisson step was nearly random,
and the sum of the changes was small.
Therefore, the energy error stemming from the Poisson steps does not
contribute very much to a cumulative error.
The reason why the accuracy of the finite-element code for the energy conservation is not very good for the post-collapse calculations is not yet clear.
One way to improve the accuracy is to increase the mesh numbers, especially for the energy.
In fact, the energy error decreased as the energy-mesh number increased,
although the accuracy of the finite-difference code was better with the same mesh number.
Another promising way to improve the accuracy of the finite-element scheme is to use higher-order basis functions (see appendix 2 of Paper I).
In the present code two-dimensional piecewise bilinear polynomials are used as the basis functions.
The use of higher-order basis functions, however, introduces rather complicated computational procedures, and, as a result, a larger computational time.
In addition, we can obtain reasonably good accuracy with the finite-difference code.
Thus, we did not try higher-order basis functions in the present work.
We therefore adopted the finite-difference code for the calculations presented in this paper because of its higher numerical accuracy in energy conservation.
The results of calculations for $N=$ 5000, 10000, 20000, and 40000 are shown in section \ref{sec:result}.
We denote the number of grid points in $X$, $Y$, and $r$ by $N_X$, $N_Y$,
and $N_r$, respectively.
[Variables $X$ and $Y$ are used instead of $E$ and $R$ in the code (Paper I).]
In these calculations, we set $N_X=151$, $N_Y=35$, and $N_r=91$.
The radial grid was constructed between $10^{-7}~r_0$ and $10^2~r_0$,
where $r_0$ is a length scale parameter of Plummer's model.
We carried out several test calculations with other sets of grid numbers,
and confirmed that the results converged.
The relative energy errors at the ends of the calculations shown in figure 1 were
2.3\% ($N=5000$), 2.5\% ($N=10000$), 1.1\% ($N=20000$), and 0.1\% ($N=40000$).
In the calculation of $N=40000$, the energy error reached its maximum (0.5\%) at the first collapse time.
However, since the sign of the energy error changed with the core oscillations,
there was some cancellation in the cumulative error.
The relative mass errors were
0.85\% ($N=5000$), 0.87\% ($N=10000$), 0.51\% ($N=20000$), and 0.57\% ($N=40000$).
One inevitable disadvantage of the 2D FP model relative to the 1D FP model is that 2D calculations take a much larger computational time than do 1D calculations.
One may expect that 2D calculations require about $N_Y$ times as large a
computational time as do 1D calculations.
In fact, however, the computational time of 2D calculations can increase faster than linearly in $N_Y$.
The computational time required to solve the linear matrix equation for the discretized FP equation is not negligible; rather, it amounts to a few tens of percent of the
total computational time in 2D calculations.
In 1D calculations, in contrast, it is almost negligible compared with the
total computational time, because the matrix is tridiagonal and can be inverted very easily.
In 2D calculations, the matrix is a band matrix whose half-bandwidth is about $\min (N_X,N_Y)$, or $N_Y$ in our cases.
We can choose various direct or iterative schemes for solving the matrix equation.
In some direct schemes for band matrices, the number of required operations varies as $N_X N_Y^3$ for large $N_X$ and $N_Y$.
A kind of conjugate gradient method (iterative method) was actually used in our 2D FP code.
The number of operations varies as $N_X N_Y$ for this method.
We found by experience that the computational time required by a 2D calculation with our code is about $2N_Y$-times larger than that required by a corresponding 1D calculation (with the same $N_X$).
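The cost difference described above comes from the matrix structure. In 1D the system is tridiagonal and is solved in $O(N_X)$ operations by forward elimination and back substitution (the Thomas algorithm); a generic sketch, not taken from the FP code:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b and
    super-diagonal c in O(n) operations -- why the matrix step is almost
    free in 1D FP calculations.  a[0] and c[-1] are ignored."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In 2D, by contrast, the matrix is banded with half-bandwidth $\sim N_Y$, so a direct band solve costs $O(N_X N_Y^3)$, while the conjugate-gradient iteration used in the code scales as $N_X N_Y$.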
Most of the numerical calculations were performed on a HP 9000/715 workstation
(at 50~MHz clock cycle).
For example,
2D FP calculations for $N=$ 5000 and 40000 (cf. figure 1)
required about 29 and 140 hours of CPU time on this machine, respectively.
The total numbers of potential-recalculation time steps
(the FP time-step size was 1/10 of the potential-recalculation time-step size)
in these runs were 3000 and 15000, respectively,
and thus the 2D FP code required about 34 CPU sec per step.
\section{Results}\label{sec:result}
\indent\indent
The results are presented in standard units such that
$G=M=1$ and ${\cal E}_{\rm i}=1/4$, where $G$ is the gravitational
constant, $M$ is the total mass,
and ${\cal E}_{\rm i}$ is the initial total binding energy.
The time is usually measured in units of the initial half-mass relaxation time $t_{\rm rh,i}$ (Spitzer 1987, p40).
Figure 1 shows the evolution of the central density $\rho(0)$ for the cases of $N=$ 5000, 10000, 20000, and 40000 for the 1D and 2D models.
For each $N$, the features of the evolution in the 1D and 2D models are very similar.
An apparent difference between the two models lies in the core collapse times.
For every $N$, the core collapse (or bounce) occurs slightly earlier in the 1D model than in the 2D model, as found in Paper I\@.
Although an intuitive explanation for a slower collapse in the 2D (i.e. anisotropic) models is given in Paper I (see also Louis 1990),
a more convincing proof for it is desirable.
There is
a possibility that the slower collapse is due to numerical inaccuracy.
We may be able to test this possibility simply by repeating the calculations with finer grids.
We found that the collapse time was not affected by increasing the grid numbers.
This fact supports the conclusion that the slower collapse rate in the anisotropic models is real.
We also note that it is uncertain whether adopting the isotropic distribution function to calculate the diffusion coefficients (Paper I) has any noticeable effects on the collapse rate.
The core expansion after the core collapse is stable for $N=5000$,
and overstable for $N=10000$.
(For $N=10000$, the core expansion in the 1D model is indeed overstable, though the growth of the instability is slower than in the 2D model.)
For $N=20000$, the core expansion is unstable;
the central density oscillates chaotically with a large amplitude; that is,
gravothermal oscillations (Bettwieser, Sugimoto 1984) occur.
The gravothermal oscillations also occur for $N=40000$ with a larger amplitude than for $N=20000$.
Such a change in the nature of the post-collapse core evolution from monotonic expansion to chaotic oscillations with increasing $N$ was discussed in detail by Goodman (1987), Heggie and Ramamani (1989), Breeden et al. (1994), as well as Breeden and Cohn (1995), where isotropic models were used.
Spurzem (1994) presented long-lasting gravothermal oscillations
in his anisotropic gaseous model.
We see no qualitative difference concerning the features of the gravothermal oscillations between the 1D and 2D models.
The amplitudes and periods of the oscillations,
and the appearance of multiple-peaks in the two models are similar.
This is a reasonable result,
because the stability of the core expansion is determined by the degree of central concentration (Goodman 1987).
Furthermore, the velocity distribution is isotropic in the core, even in anisotropic models.
It is interesting that Spurzem (1994) suggested that gravothermal oscillations are more regular in the anisotropic model than in the isotropic one.
Figure 2a shows the evolution of the Lagrangian radii containing
1, 2, 5, 10, 20, 30, 40, 50, 75, and 90\% of the total mass for $N=$5000.
The core radius $r_{\rm c}$ is also plotted;
it is defined as
\begin{equation}
r_{\rm c} \equiv \left[\frac{3v_{\rm m}(0)^2}{4\pi G\rho(0)}\right]^{1/2}
\end{equation}
(Spitzer 1987, p16),
where $v_{\rm m}(0)$ is the total velocity dispersion at the center.
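For reference, the core-radius definition above is a one-line function (sketch; $G$ is in whatever units are in use):

```python
import numpy as np

def core_radius(vm0, rho0, G=1.0):
    """Core radius r_c = [3 v_m(0)^2 / (4 pi G rho(0))]^(1/2)
    (Spitzer 1987), from the central total velocity dispersion
    v_m(0) and the central density rho(0)."""
    return np.sqrt(3.0 * vm0 ** 2 / (4.0 * np.pi * G * rho0))
```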
Because of the difference in the collapse time,
a comparison between the 1D and 2D models concerning the evolution of the spatial structure is somewhat complicated.
Thus, in figure 2b the time of the 1D calculation is scaled so that the collapse time in the 1D model coincides with that in the 2D model.
Concerning the evolution before the core bounce, we see little difference
between the two models in figure 2b,
except for the 90\% radius.
After the core bounce, the core-halo structure is more developed in the 2D model;
that is, the 2D model has more concentrated inner Lagrangian radii and more extended outer radii.
We note that there is no big difference between the two models in the evolution of the half-mass radius.
The evolution of the half-mass radius is, roughly speaking, determined by the change in the total energy when the total mass is conserved.
In fact, the histories of the total energy changes in the two models are similar, if the time is scaled as in figure 2b.
In this respect, the coincidence of the evolution of the half-mass radius is reasonable.
The effects of the development of the anisotropy on the density are apparent in the outer half-mass region.
The fact that the outer Lagrangian radii are more extended in the 2D model is a consequence of the development of radial orbits.
The more concentrated inner Lagrangian radii are a necessary reaction to this.
However, the evolution of the core radius in the two models is
again almost identical (if the time of the 1D models is scaled).
Figure 2b shows that
the post-collapse evolution well after the core bounce seems to be self-similar in both the 1D and 2D models;
all of the Lagrangian radii as well as the core radius expand nearly self-similarly.
A simple argument gives a self-similar expansion law, $r \propto t^{2/3}$, for isolated clusters with no mass-loss (H\'enon 1965; Goodman 1984).
Our 2D model as well as our 1D model is consistent with this law.
In the cases of other $N$'s, the evolution of the outer Lagrangian radii is similar to that in the case of $N=5000$.
When gravothermal oscillations occur, the inner Lagrangian radii oscillate,
while the mean trend of these radii is also an expansion (cf. figure 5).
Figures 3a and 3b show the evolution of the anisotropy parameter $A$,
\begin{equation}
A \equiv 2-2\frac{\sigma_{\rm t}^2}{\sigma_{\rm r}^2}
\end{equation}
at the 1, 2,..., and 90\% Lagrangian radii, for the case of $N=5000$.
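The anisotropy parameter is straightforward to evaluate; for example, the self-similar value $\sigma_{\rm t}^2/\sigma_{\rm r}^2=0.92$ quoted in the introduction corresponds to $A=0.16$:

```python
def anisotropy(sigma_r2, sigma_t2):
    """A = 2 - 2*sigma_t^2/sigma_r^2: zero for an isotropic velocity
    distribution, 2 for purely radial orbits, and negative when
    tangential motions dominate."""
    return 2.0 - 2.0 * sigma_t2 / sigma_r2
```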
During the core collapse, $A$ increases at every Lagrangian radius.
Even in the very inner regions (e.g. at the 1 and 2\% radii) the anisotropy increases at advanced stages of the collapse.
Just after the core bounce, the anisotropy at each of the inner (1--20\%) Lagrangian radii decreases rapidly.
This is due to a rapid core expansion.
The core expansion is faster than the expansion of the Lagrangian radii located outside the core at that time.
Then, the radial velocity dispersion decreases faster than the tangential one outside of the core,
because the former is influenced more by the core condition.
This is an exactly opposite process to that occurring during the core collapse.
After the rapid core expansion phase,
the cluster expands nearly self-similarly (as mentioned above),
and the anisotropy at each inner Lagrangian radius settles to roughly a
constant value.
The anisotropy at the outer Lagrangian radii continues to slowly increase after the core bounce.
In figure 3b we can see that the curve of the anisotropy at the 90\% radius flattens at late times.
This is partly because $A$ cannot exceed two, by definition.
In any case, it is true that the rate of the anisotropy increase at the outer radii slows down.
The development of the anisotropy in the outer regions is a consequence of the emergence of radial-orbit stars which have gained energy as the result of relaxation in the inner regions (Paper I).
Therefore, we expect that the rate of the anisotropy increase is related to the relaxation time in the inner regions.
Figure 4 shows the evolution of the anisotropy at the Lagrangian radii for
$N=5000$ as a function of the elapsed number of actual central relaxation times,
\begin{equation}
\tau (t) \equiv \int_0^t \frac{dt'}{t_{\rm rc}(t')} \label{eq:tau}
\end{equation}
(Cohn et al. 1989), where $t_{\rm rc}$ is the central relaxation time.
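In practice $\tau(t)$ is accumulated numerically along the run; a trapezoidal sketch (illustrative, not the code's actual bookkeeping):

```python
import numpy as np

def elapsed_relaxation_times(t, t_rc):
    """tau(t) = int_0^t dt'/t_rc(t'), accumulated with the trapezoidal
    rule over the sampled history (t_i, t_rc(t_i))."""
    g = 1.0 / np.asarray(t_rc, dtype=float)
    dt = np.diff(np.asarray(t, dtype=float))
    return np.concatenate(([0.0], np.cumsum(0.5 * dt * (g[:-1] + g[1:]))))
```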
The core bounce occurs at $\tau \approx 2400$.
This figure indicates that the anisotropy at the outer Lagrangian radii increases roughly linearly with $\tau$ after a rapid increase at the very initial stages ($\tau {\;{\ds\lower.7ex\hbox{$<$}\atop\ds\sim}\;} 1000$).
While we see in figure 3b that the rate of increase of the anisotropy at the outer radii changes sharply at the time of the core bounce,
we do not see any such sharp changes in figure 4b.
These facts tell us that the slowing down of the increase rate of the anisotropy after the core bounce appearing in figure 3a is mainly due to
the fact that the central relaxation time becomes longer and longer as the cluster expands.
In the statistical data from $N$-body simulations for $N=1000$ by Giersz and Spurzem (1994, figure 11),
we can see that the anisotropy at outer Lagrangian radii reaches its maximum around the core bounce time, and then decreases.
Such a decrease does not occur in our 2D FP models.
Giersz and Spurzem (1994) as well as Giersz and Heggie (1994b) argued that the anisotropy in the outer regions is determined (at least partially) by binary activity:
interactions of binaries with single stars, and the expulsion of stars and binaries from the core to the outer parts of the system.
Such effects are not completely included in our models, but binaries only play the role of a continuous heat source.
The effects of binaries on the anisotropy may be important for small-$N$ systems
and responsible for the fact that the anisotropy reaches its maximum around the core bounce in the 1000-body model.
For $N=10000$ clusters, there is a good agreement in the evolution of the anisotropy in the outer regions
between the 2D FP and $N$-body models (Spurzem 1996; Takahashi 1996).
It is not clear whether or not the anisotropy at the outer radii decreases after the core bounce in a 10000-body simulation (by Spurzem),
because the simulation has not yet been continued enough beyond the core bounce.
Figure 5 shows the evolution of the Lagrangian radii and the core radius for
$N=20000$.
In this case, the post-collapse evolution of the inner Lagrangian radii is not self-similar, but gravothermal oscillations occur.
However, the outer Lagrangian radii expand nearly self-similarly after the first collapse, just as in the case of $N=5000$.
The evolution of the anisotropy $A$ for $N=20000$ is shown in figure 6.
The anisotropy at each of the inner Lagrangian radii reaches a higher value at the first collapse time for $N=20000$ than for $N=5000$.
This is because the core collapse proceeds to more advanced stages
and the anisotropy penetrates into more inner regions (cf. figure 3 of Paper I) for $N=20000$.
After the first core collapse the anisotropy at the inner Lagrangian radii oscillates with the core oscillations.
The anisotropy increases as the core contracts and decreases as the core expands.
One may be interested in the fact that
the anisotropy even at 30 and 40\% radii shows a sign of oscillations,
while these radii, themselves, show no clear sign of oscillations in figure 5.
If we magnify figure 5, however, we can see that the radii actually oscillate with very small amplitudes.
That is, we can hardly see the oscillations of the 30\% and 40\% radii in figure 5, simply because their amplitudes are too small.
Next, we consider a gravothermal expansion phase.
The mechanism of gravothermal oscillations was already clearly explained
in a seminal work by Bettwieser and Sugimoto (1984).
The key feature of the gravothermal oscillations is the appearance of a temperature inversion (i.e., an outward increase of the temperature) which causes a gravothermal expansion.
Very recently, it has been clearly shown that a temperature inversion actually appears in a real $N$-body system of $N=32000$ (Makino 1996).
Figures 7 and 8 show the evolution of the velocity dispersion (or temperature) and density profiles
when a gravothermal expansion occurs in the case of $N=40000$.
At $t=18.74~t_{\rm rh,i}$, a temperature inversion has just appeared, and the gravothermal expansion has started.
At $t=18.84~t_{\rm rh,i}$ the amount of the temperature inversion is nearly maximum.
At this time, $\sigma_{\rm t}^2$ slightly exceeds $\sigma_{\rm r}^2$ in the region of the temperature hump.
This is because the radial velocity dispersion at the hump reflects a lower central temperature more than the tangential one.
The temperature inversion almost disappears at $t=19.04~t_{\rm rh,i}$.
This indicates the end of the gravothermal expansion,
and a normal isothermal core appears again.
Paper I showed that
the density profile in the outer halo is approximated by a power law,
$\rho \propto r^{-3.5}$ (cf. Spitzer, Shapiro 1972),
after the rapid development of anisotropy in the halo from the isotropic initial conditions.
Figure 9 shows the density profiles at three epochs after the core collapse for $N=5000$.
It seems that the halo density profile further approaches the power law $\rho \propto r^{-3.5}$ after the collapse.
As we can see in figure 2,
the density profile evolves self-similarly long after the core collapse.
(Even when gravothermal oscillations occur, the halo expands nearly self-similarly.)
Therefore, in well-relaxed isolated clusters, the halo density profile is always approximated by a $r^{-3.5}$ power law.
Paper I also showed that
the tangential velocity dispersion profile in the outer halo is reasonably
approximated by a power law, $\sigma_{\rm t}^2 \propto r^{-2}$,
though this approximation is not as good as the approximation for the density (see figures 7 and 8 of Paper I).
This power law can be applied for post-collapse clusters as well.
Actually, the velocity dispersion profile in the halo changes little after the collapse.
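These halo power laws admit a simple heuristic interpretation; the following sketch, in the spirit of Spitzer and Shapiro (1972), is added here for illustration and is not a derivation given in Paper I\@. The halo is populated by stars scattered out of the core onto nearly radial, marginally bound orbits ($E \approx 0$). Such a star conserves its small angular momentum $J$, so at radius $r$ in a cluster of mass $M$
\[
v_{\rm t} = \frac{J}{r} \quad\Longrightarrow\quad \sigma_{\rm t}^2 \propto r^{-2},
\qquad
\frac{1}{2}v_{\rm r}^2 \approx \frac{GM}{r} \quad\Longrightarrow\quad \sigma_{\rm r}^2 \propto r^{-1},
\]
where the second relation, not quoted in the text, merely indicates why the tangential dispersion falls off more steeply than the radial one in the halo.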
\section{Conclusions and Discussion}\label{sec:conclusion}
\indent\indent
In the previous paper (Paper I) an improved numerical code for solving the orbit-averaged FP equation in energy--angular momentum space was developed
in order to study the evolution of star clusters which have {\it anisotropic} velocity distributions.
Numerical simulations were performed by using the code for the pre-collapse evolution of single-mass globular clusters in Paper I\@.
In this paper, following Paper I, the post-collapse evolution of single-mass clusters was considered.
The effect of three-body binaries was incorporated in the code as a heat source.
In fact, two codes were developed in Paper I\@.
They differ in the scheme used for solving the FP equation itself:
one uses the finite-difference scheme and the other uses the finite-element scheme.
The two codes have similar numerical accuracy for pre-collapse calculations.
For post-collapse calculations, however, the finite-element code conserves energy less accurately than the finite-difference code.
Although this difficulty of the finite-element code may be removed by using higher-order basis functions,
such efforts were not made in this study.
By using the finite-difference code we can perform post-collapse calculations with reasonable numerical accuracy (within a few percent error in energy conservation).
There is no significant difference in the evolution of the central density between the 1D and 2D FP models in any of the cases we studied, $N=$ 5000, 10000, 20000, and 40000.
(However, there is a difference in the core collapse time, as described in Paper I;
the collapse time is a little longer in the 2D model.)
In particular, the qualitative features of gravothermal oscillations are common to the 1D and 2D models.
The appearance of a temperature inversion in the 2D model is similar to that in the 1D model.
However, a slight anisotropy appears in the region of the temperature hump:
the tangential velocity dispersion exceeds the radial one.
This anisotropy is opposite to the usual one,
and is a consequence of the lower central temperature.
The opposite anisotropy disappears along with the disappearance of the temperature inversion.
Clusters expand nearly self-similarly as a whole well after a core collapse.
In fact, the expansion is consistent with the self-similar expansion law,
$r \propto t^{2/3}$.
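The exponent $2/3$ can be recovered from a standard order-of-magnitude argument (cf. H\'enon 1965; Spitzer 1987); the following sketch is added for illustration and is not taken from the text. Writing the cluster energy as $E \sim -GM^2/r_{\rm h}$ and assuming that post-collapse heating changes it on the half-mass relaxation timescale, $t_{\rm rh} \propto r_{\rm h}^{3/2}$ at fixed $N$ and $M$, one has
\[
\frac{dE}{dt} \sim -\frac{E}{t_{\rm rh}}
\quad\Longrightarrow\quad
\frac{dr_{\rm h}}{dt} \propto r_{\rm h}^{-1/2}
\quad\Longrightarrow\quad
r_{\rm h}^{3/2} \propto t
\quad\Longrightarrow\quad
r_{\rm h} \propto t^{2/3}.
\]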
The core-halo structure is more developed in the 2D model than in the 1D model.
However, the evolution of the half-mass radius in the two models is almost identical if the time of one model is scaled so that the collapse times in the two models coincide.
The density profile in the outer halo is approximated by a power law,
$\rho \propto r^{-3.5}$, after the core collapse as well as before it.
The anisotropy at the inner Lagrangian radii decreases during a rapid core-expansion phase just after the core bounce.
When the core expansion is stable (e.g. for $N=5000$), the anisotropy at each of the inner radii settles to a roughly constant value, because the inner radii expand self-similarly, as the half-mass and outer radii do.
When gravothermal oscillations occur (e.g. for $N=$ 20000, 40000), the anisotropy at the inner radii oscillates with the core oscillations.
The anisotropy at the outer Lagrangian radii continues to increase slowly after the core bounce, whether the core oscillations occur or not.
The rate of anisotropy increase at the outer radii slows down as the cluster expands.
This is mainly because the central relaxation time gets longer.
If we measure time in units of the central relaxation time [see equation (\ref{eq:tau})],
the anisotropy increase rate at the outer radii is almost constant, except for the initial epochs of the calculations when the anisotropy increases very rapidly.
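For concreteness, the elapsed number of central relaxation times has the standard form below; this is written from the usual definition, as an assumed paraphrase of equation (\ref{eq:tau}) of the text:
\[
\tau(t) = \int_0^t \frac{dt'}{t_{\rm rc}(t')},
\]
where $t_{\rm rc}$ is the instantaneous central relaxation time.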
There are other numerical codes currently in use which can deal with
velocity anisotropy:
one of them is Spurzem's code, which is based on the anisotropic gaseous model (e.g. Spurzem 1996);
the other is Giersz's code, which solves the FP equation by a Monte-Carlo technique (Giersz 1996).
On the other hand, Giersz and Heggie (1994a, b) showed that
combining a large number of $N$-body simulations leads to results of high statistical quality
which can give valuable information concerning the theory of stellar dynamics.
Some comparisons between $N$-body, isotropic/anisotropic gaseous, and isotropic FP models were made for isolated one- or two-component clusters
(Giersz, Heggie 1994a, b; Giersz, Spurzem 1994; Spurzem, Takahashi 1995).
Those comparisons showed that the results of the FP and gaseous models are generally in good agreement with the statistical data of $N$-body simulations.
However, differences between the statistical models and the $N$-body models remain in some other respects.
2D FP models may give a better agreement with $N$-body models.
Comparisons of the 2D FP models with the anisotropic gaseous and $N$-body models are now in progress.
A preliminary result of such comparisons for $N=10000$ models is presented by Spurzem (1996) and Takahashi (1996).
For example, concerning the evolution of anisotropy in the halo,
the agreement between the 2D FP and $N$-body models is very good.
This fact supports the reliability of our 2D FP models.
Through Paper I and this paper, we have investigated the evolution of {\it realistic} anisotropic models of globular clusters.
However, these models are {\it unrealistic} in some respects.
They include neither the stellar-mass spectrum, nor the galactic tidal field, nor the effects of tidal and primordial binaries.
Stellar mass loss may also have an important effect on the initial evolutionary stages of clusters (Chernoff, Weinberg 1990).
More realistic models incorporating some or all of these various effects will be studied in the future.
\bigskip\bigskip
I would like to thank Professor S. Inagaki for valuable comments.
I also wish to thank the referee, Professor H. Cohn, for useful comments which greatly helped to improve the presentation of the paper.
This work was supported in part by the Grant-in-Aid for Encouragement of Young
Scientists by the Ministry of Education, Science, Sports and Culture of Japan
(No. 1338).
\begin{reference}
\item Bettwieser E., Sugimoto D. 1984, MNRAS\ 208, 493
\item Breeden J.L., Cohn H.N., Hut P. 1994, ApJ\ 421, 195
\item Breeden J.L., Cohn H.N. 1995, ApJ\ 448, 672
\item Chang J.S., Cooper G. 1970, J. Comp. Phys. 6, 1
\item Chernoff D.F., Weinberg M.D. 1990, ApJ\ 351, 121
\item Cohn H. 1979, ApJ\ 234, 1036
\item Cohn H. 1980, ApJ\ 242, 765
\item Cohn H. 1985, in Dynamics of Star Clusters, IAU Symp No.113, ed
J.~Goodman, P.~Hut (D.~Reidel Publishing Company, Dordrecht) p161
\item Cohn H., Hut P., Wise M. 1989, ApJ\ 342, 814
\item Drukier G.A. 1995, ApJS\ 100, 347
\item Giersz M. 1996, in Dynamical Evolution of Star Clusters, IAU Symp No.174, ed P.~Hut, J.~Makino (Kluwer, Dordrecht) in press
\item Giersz M., Heggie D.C. 1994a, MNRAS\ 268, 257
\item Giersz M., Heggie D.C. 1994b, MNRAS\ 270, 298
\item Giersz M., Spurzem R. 1994, MNRAS\ 269, 241
\item Goodman J. 1984, ApJ\ 280, 298
\item Goodman J. 1987, ApJ\ 313, 576
\item Heggie D.C., Ramamani N. 1989, MNRAS\ 237, 757
\item H\'enon M. 1965, Ann. Astrophys. 28, 62
\item Hut P. 1985, in Dynamics of Star Clusters, IAU Symp No.113, ed
J.~Goodman, P.~Hut (D.~Reidel Publishing Company, Dordrecht) p231
\item Inagaki S., Lynden-Bell D. 1990, MNRAS\ 244, 254
\item Louis P.D. 1990, MNRAS\ 244, 478
\item Makino J. 1996, in Dynamical Evolution of Star Clusters, IAU Symp No.174, ed P.~Hut, J.~Makino (Kluwer, Dordrecht) in press
\item Spitzer L.Jr 1987, Dynamical Evolution of Globular Clusters (Princeton
University Press, Princeton)
\item Spitzer L.Jr, Shapiro S.L. 1972, ApJ\ 173, 529
\item Spurzem R. 1994, in Ergodic Concepts in Stellar Dynamics,
ed D. Pfenniger, V.A. Gurzadyan (Springer, Berlin) p170
\item Spurzem R. 1996, in Dynamical Evolution of Star Clusters, IAU Symp No.174, ed P.~Hut, J.~Makino (Kluwer, Dordrecht) in press
\item Spurzem R., Takahashi K. 1995, MNRAS\ 272, 772
\item Takahashi K. 1993, PASJ\ 45, 789
\item Takahashi K. 1995, PASJ\ 47, 561 (Paper I)
\item Takahashi K. 1996, in Dynamical Evolution of Star Clusters, IAU Symp No.174, ed P.~Hut, J.~Makino (Kluwer, Dordrecht) in press
\end{reference}
\clearpage
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfverbosetrue
\epsfxsize=15cm \epsfbox{fig1ab.ps}
\end{center}
\caption{
Evolution of the central density for (a) $N=5000$, (b) $N=10000$,
(c) $N=20000$, and (d) $N=40000$.
The solid curves are the results of the 2D FP calculations, and the
dotted curves are the results of the 1D FP calculations.
The time is measured in units of the initial half-mass relaxation time
$t_{\rm rh,i}$.
}
\end{figure}
\clearpage
\setcounter{figure}{0}
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfverbosetrue
\epsfxsize=15cm \epsfbox{fig1cd.ps}
\end{center}
\caption{
{\it continued}
}
\end{figure}
\clearpage
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfverbosetrue
\epsfxsize=15cm \epsfbox{fig2.ps}
\end{center}
\caption{
(a) Evolution of Lagrangian radii containing
1, 2, 5, 10, 20, 30, 40, 50, 75, and 90\% of the total mass, for
$N=5000$.
The solid curves are the result of the 2D FP calculation, and the dotted
curves are the result of the 1D FP calculation.
The core radii are also plotted by the dashed curve (2D) and the
dash-dotted curve (1D).
(b) Same as (a), but
the time of the 1D calculation is scaled
so that the collapse time in the 1D calculation coincides with
that in the 2D calculation.
}
\end{figure}
\clearpage
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfverbosetrue
\epsfxsize=15cm \epsfbox{fig3.ps}
\end{center}
\caption{
Evolution of the anisotropy parameter,
$A \equiv 2-2\sigma_{\rm t}^2/\sigma_{\rm r}^2$,
at the (a) inner (1--20\%) and (b) outer (30--90\%) Lagrangian radii,
for $N=5000$.
}
\end{figure}
\clearpage
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfverbosetrue
\epsfxsize=15cm \epsfbox{fig4.ps}
\end{center}
\caption{
Same as figure 3, but the abscissa is the elapsed number of actual
central relaxation times, $\tau$ [see equation (\ref{eq:tau})].
The core bounce occurs at $\tau \approx 2400$.
}
\end{figure}
\clearpage
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfverbosetrue
\epsfxsize=15cm \epsfbox{fig5.ps}
\end{center}
\caption{
Evolution of the Lagrangian radii in the 2D model for $N=20000$.
The dashed curve represents the core radius.
}
\end{figure}
\clearpage
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfverbosetrue
\epsfxsize=15cm \epsfbox{fig6.ps}
\end{center}
\caption{
Same as figure 3, but for $N=20000$.
}
\end{figure}
\clearpage
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfverbosetrue
\epsfxsize=15cm \epsfbox{fig7.ps}
\end{center}
\caption{
Evolution of the velocity dispersion (or temperature) profile when a
temperature inversion appears (the times are indicated in the figure
in units of the initial half-mass relaxation time),
in the 2D FP model for $N=40000$.
The solid and dotted curves are the radial and tangential velocity
dispersions, respectively.
}
\end{figure}
\clearpage
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfverbosetrue
\epsfxsize=15cm \epsfbox{fig8.ps}
\end{center}
\caption{
Evolution of the density profile which corresponds to the velocity
dispersion profile shown in figure 7.
}
\end{figure}
\clearpage
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfverbosetrue
\epsfxsize=15cm \epsfbox{fig9.ps}
\end{center}
\caption{
Density profiles at three epochs after the core collapse in the 2D FP
model for $N=5000$.
The dotted, dashed, and solid curves represent the profiles
at $t/t_{\rm rh,i}=$ 17.9 (just after the collapse time), 28.4, and 42.8,
respectively.
The asymptotic line $\rho \propto r^{-3.5}$ is shown for comparison.
}
\end{figure}
\end{document}